Dive into Showtime, according to the little button I just clicked. Welcome to the Home Lab Show. We're going to be diving into Proxmox. This is Episode 13 with Tom Lawrence and Jay LaCroix, and we're excited. We got through hypervisors as a broad topic and talked about the popular ones, and now we want to dive into Proxmox. But of course, this also leaves the door open for us to dive into other ones in future episodes because, you know, hey, why not? And diving into singular topics like this will hopefully give you the information you need, because these are such foundational things: you have a pile of hardware with no operating system, and you want to put many operating systems on there. That's where hypervisors come in. It's exciting. Watch or listen to our previous show on hypervisors if you want to dive into the topic as a whole, talking about all the different ones. But today we're going to focus on Proxmox, and Jay's kind of the expert on that. So I'll be playing the noob who asks questions, because I still haven't — I mean, I have used it briefly and a little bit, but I haven't really dived deep into it. But before we dive into all that, we do want to thank the sponsor that is literally bringing you this show, especially if you listen to it as a podcast, which is Linode. How long have you been using Linode for, Jay? I've lost count. I want to say two plus years, maybe pushing three at this point. They were the first sponsor of Learn Linux TV ever. I had a lot of requests, believe me, but it's like, I don't really want this sponsor, that sponsor. Linode made sense because they're a Linux company and I'm a Linux YouTube channel. So they became the official back-end provider for pretty much everything that has a web presence when it comes to Learn Linux TV — the website, the forums, things like that. So if you are on our site for this podcast, you're using Linode.
If you're on my website for the YouTube channel, LearnLinux.tv, that's hosted on Linode — the community too, as I just mentioned. And I started using them when I met them at Penguicon. So what was that, three years ago? Probably around then, yeah. So that's about how long I've been using them. And I like their features a lot; there's just way too many to go over in one spot. But one of my favorite things is you could literally use dd to back up your Linode instance to your local computer, just like you would use dd on anything else. You could just boot your Linode in rescue mode, use dd, pull it down, and you have the hard drive local on your machine. And you could also dd something up there. I often — not often, but at least once — I've run a distribution that they don't support. They have a bunch of distros like Arch, Debian, Ubuntu, Fedora, CentOS, you name it. They have AlmaLinux now too, which is brand new, but I was running AlmaLinux on there before they even offered it, because you could just upload your own image, which is pretty cool. So lots of great features. And if you use the URL that'll be in the show notes and/or description, that gives you $100 in credit, good for three months. So the credit will last three months, and it's for a new account. So if you want to just play around with some stuff — you set up a Nextcloud server, for example, anything that you want to just play around with — it's a great fit for that. Yeah, well, we hopefully will inspire lots of ideas that you can host there. All right, one of the things you can run yourself is going to be Proxmox. And it's based on Debian. We'll start with that, because that's one of the important factors: what base operating system it starts with, because that offers a lot of compatibility for the hardware you want to run it on.
So whether you have a custom self-built server, or you go with something off the shelf, like a Dell server or a Supermicro that you can find on eBay used, or you're lucky enough to be there when a company is going, you know, we just want to make all this old hardware go away, but it still has some life in it — Debian is great for compatibility, getting all the network cards up and running and attached. So it's not a bad base operating system. And it's funny, because a lot of people ask me why I don't run Proxmox, being that I'm a big fan of Debian. It's just because I like XCP-ng. But I'll let Jay take it from there, because that's as much as I know: it's based on Debian. After that, it's a hypervisor, and Jay knows the rest. Yeah, there are a few things I haven't really done as much, and I'll talk about some of the areas I haven't really explored yet. But let's talk about the Debian thing real quick, because I think that's important. In my case, I run Nagios — I know, right? Nagios of all things. Come on, that's boring — but it works, right? So I use Nagios for monitoring all the things, just letting me know when something's down. And with Proxmox, it's not hard to get Nagios working with the NRPE plugin. It's just apt install whatever the package name is; I think it's like nagios-something-nrpe, I don't remember. I automate everything, then I forget everything afterwards, which is kind of sad. But anyway, there's no magic or science: it's Debian. So the same way you would install that plugin on Debian, you do that on Proxmox. And then all of a sudden my Nagios server is able to reach it and check it — check the memory — because all my plugins are installed. So that gives you some benefit there, too. Obviously, you don't want to go crazy with that and start removing packages — I don't think I need this package or that package — then you break everything.
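Since it's stock Debian underneath, the NRPE setup Jay describes is just apt. A minimal sketch, run as root on the Proxmox node — the package names are the current Debian ones, and the monitoring server's address is a hypothetical placeholder:

```shell
# On the Proxmox node (Debian underneath), as root.
apt update
apt install -y nagios-nrpe-server monitoring-plugins

# Let the Nagios server (hypothetical address) query this node:
sed -i 's/^allowed_hosts=.*/allowed_hosts=127.0.0.1,192.168.1.5/' /etc/nagios/nrpe.cfg
systemctl restart nagios-nrpe-server
```

As the hosts point out just after this, treat the node like an appliance otherwise: install the agent and stop there, rather than adding and removing packages freely.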
You still got to be careful and consider Proxmox an appliance, because that's what it is. But you still have that flexibility if you need to run, like, a Zabbix plugin or Nagios or whatever you're using. You could do that. I'm assuming — I haven't tried it, but I don't see why not — things like Graylog and Grafana would work there. Yeah. Well, so you have all the Debian things. I mean, you could just run whatever. So that's awesome. Now, I'll talk a little bit about my decision to go with Proxmox, because before I actually did, Tom and I were talking for probably months back and forth about which one I'm going to go with, because he already had XCP-ng set up and ready to go. And I had a lot of experience with XCP-ng, because I used Citrix XenServer at a previous job, which is what it's built on. So I think anyone probably would have assumed I would go the same direction, because I have all this experience with it; I'm already familiar with it. Why not go that direction? What I found was that XCP-ng and Proxmox were just neck and neck, very close to each other. The one thing that kind of pushed me over to choose Proxmox was the fact that they have containers built in. That doesn't mean you can't do containers in XCP-ng — I know you can. There's at least one plug-in, if not other methods as well. And I know, correct me if I'm wrong, that XCP-ng supports an API, so you could have something spin up Kubernetes instances if you wanted to. Yeah, we'll just say it's not as well developed, because it's not as popular a use case. You're completely capable of doing it, but the container system is not as integrated as it seems to be in Proxmox. So if you're leaning towards containers, you're probably going to lean towards Proxmox. I think you're making the right decision there. Yeah.
So as an aside, you could actually just set up a Kubernetes server, or maybe just some Debian instances, for example, to run Docker containers without Kubernetes. But yeah, like Tom said, it's built into Proxmox. I really like to be able to have the conversation with myself: I want to set up this server, this service, this app, whatever it is — do I want it to be a VM, or do I want it to be a container? It's always important to understand this mindset of containerize all the things; I don't feel that's always valid, because some things aren't really a good fit there. It's often a good fit, but you can make that decision per app: what should be a VM and what should be a container. And there is one situation, I think, where you should containerize all the things, which I'll get to later. But I really like having that option to intelligently design the layout the way I want it to be. Am I going to have mostly containers, mostly VMs, an even mix of both? You know, it's fully up to me. Yeah, I don't use many containers. And actually, the scale we do with XCP-ng — we did some consulting with a client just a couple weeks ago that had 2,100 VMs in XCP-ng. So most of the stuff we've been dealing with has just been large-scale instances, or larger. I mean, that's not the hugest scale I've seen, but some large-scale instances with lots of individual VMs, and even myself. And we had this discussion on the hypervisor episode where, yes, I lean more towards running a lot of things all in independent VMs. And there's good reasons to do that. But I think the first thing we should probably talk about is: what do you run it on? Because that's a very important thing, and it also leads right into another aspect of containers that I like. So what should you run it on? Well, anything, honestly. If it has virtualization extensions in the BIOS that you can enable, you can run it. Within reason, obviously — if you're running something super ancient, yeah, no.
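Checking for those virtualization extensions takes one command on any Linux live session. A quick sketch — `vmx` is Intel VT-x, `svm` is AMD-V:

```shell
# Count CPU flags advertising hardware virtualization support.
# A number greater than 0 means the CPU has the extensions (they may
# still need to be enabled in the BIOS/UEFI before Proxmox will use them).
grep -Ec 'vmx|svm' /proc/cpuinfo || true
```

If this prints 0, look for a BIOS toggle (often named VT-x, AMD-V, or SVM) before writing the machine off.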
But it doesn't have to be a really good machine. It could be an old desktop — maybe you had one and you replaced it — or an older laptop. And if you think about it, as I've said a few times before: what does a laptop have? A battery. So it has a UPS built in. It has a keyboard and a display, so it has a built-in KVM. So essentially, a laptop, if you have an extra one, is essentially a data center right there. And I've gone on Facebook Marketplace — which, by the way, I don't like Facebook, but if I see a good deal on a laptop, I might consider it. And I did a few videos on this where I bought some really good laptops for like 200 or 250, I can't remember, that had maybe four or eight gigs of RAM. It was a while ago. But it's perfect for Proxmox because, again, you have that self-contained server. You could go on eBay and buy an off-lease PowerEdge. That's not a bad idea. Whatever you want to run it on, you can. But one thing you can run it on — and people have done this, I think it's smart — is an Intel NUC, believe it or not. And automatically, when I think of a NUC, I think of something that's somewhat memory-starved when it comes to a server. You're probably not going to fit 128 gigs of RAM in a NUC. But you don't have to. You don't even need 128 gigs anyway in a home lab unless you're going completely crazy hosting all the things. But when it comes to a NUC, that's when containers can really help out. Because if you have a NUC with four gigs or eight gigs, that's the value of containers. You stretch your hardware further, and — I mean, we'll get into memory ballooning in a moment — for the most part, you don't really have to dedicate memory for the task. You have a memory ceiling. So you don't want this particular container to consume more than, let's just say, 512 megabytes of the host memory? Then great. That's awesome. Because now with that Intel NUC, or that old laptop with like two gigs of RAM — actually, probably should be four.
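That memory ceiling is just a flag at container-creation time. A sketch with hypothetical ID, template, and storage names — the container ID 101, the Debian template, and the `local-lvm` storage are placeholders, and the template would need to be downloaded first (e.g. with `pveam`):

```shell
# On a Proxmox node. Cap the container at 512 MB of host memory so a
# small NUC or old laptop can comfortably run several of them.
pct create 101 local:vztmpl/debian-11-standard_11.0-1_amd64.tar.gz \
  --hostname web01 \
  --memory 512 --swap 0 \
  --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 101
```

Unlike a VM's allocated RAM, this is an upper bound: the container only consumes what its processes actually use, which is the hardware-stretching effect described above.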
I don't know if I'd be comfortable with two anymore. But either way, you could really stretch it a lot further. I think it comes back down to architecture design and whether or not that's a fit for you. I'll admit — someone is probably already cringing at me saying to run a lot of VMs: but Tom, you're wasting memory. But memory is cheap. I mean, for the VMs we run, it doesn't take too much RAM for each one of them. It is true. But I think that there's this fan base that technologies often have where people are excited about it. And I know how that feels — I get excited about it too, just like everybody else does. And they're like, yeah, I want to learn it. And then someone learns it, and it feels like a superpower, because this thing that was really hard to learn, after all the attempts, you now know it. And now you've got to use it everywhere, because it just makes sense. And I get that. But I've seen with containers, though, some apps just don't play well. And for no reason other than something the developer did that didn't make it as portable as it needed to be. I've run into this — not extremely common. Another thing too is, if you are running Proxmox in a business or for business purposes, there are some companies out there that have a stigma: oh, you're running it in a container, we don't support that. For no other reason than they're scared of it. Let's be honest: they haven't taken the time to learn it, so they just slap the unsupported sticker on it. Well, you don't know how that works, so we'll just call that unsupported, right? So we're not going to deal with that. That's often the case. My previous job would do a lot of Atlassian support — so Confluence and Jira and the like. And for the longest time, Atlassian didn't support containers. If they found out you were running it in a container, then they wouldn't support you.
Then all of a sudden, seemingly out of nowhere, they're like, yeah, we have an official container now for all our apps. Oh, now they support it. Great. So you do have to at least know that that's a thing you have to keep in mind every now and then. So what do you run it on? Well, whatever you have. I would love to say, let's go buy an expensive server. It'd be great, but we don't have money for that — unless you do, then go for it. But if you have something lying around, that's the cheapest way to go. And a few people have asked about passthrough; I've seen it coming up in the comments. I have mixed feelings all the time on passthrough. So in the enterprise market, you usually want redundancy and high availability. Passthrough really breaks that, because I can't pass through this device for this VM and then expect it to start up on another machine — I'd have to have the same card there and pass it through in a similar way. So passthrough is great for really optimizing some of the hardware, but it's not something I use a lot. But as far as I know, it's well supported in Proxmox. Am I correct on that? So if you wanted to pass through certain devices — you don't really do that much either? I don't do passthrough at all. It's possible that it's fine now. And I'm a little biased against it, to be honest. The reason being, it's a lot of work. When I think of passthrough, it's: okay, work is over, it's time to play games. And that's the main reason why most people do that — there are reasons other than gaming to do it — they have Windows games, they want to run a Windows VM, pass the GPU through to the Windows VM so they can play their games. I'm just not a fan of that. I've just never been a fan of that. It seems like it's convoluted. And if anything goes wrong or it's not set up right, I'm spending more time tweaking it to get it working. I know once you do get it working, it's fine. It should work as long as you make no changes from that point forward.
But I just don't like to add complexity to leisure time. So that said, my solution, you could probably justly argue, is also complex. But I like to do the Steam streaming. And they make these HDMI dummy plugs — I think I've mentioned this before. They're like a flash drive; they go in the HDMI port, and they tell the computer that there's a monitor attached, which allows you to have a Windows gaming PC that's headless on the network with a really fast network connection, especially if you can get 10 gig. You can stream your Windows game straight to your Linux PC or Linux laptop. You don't need to pass through anything, and you have a dedicated, actual Windows PC. It's not a VM. You're not lying to it, other than telling it that it has a monitor when it really doesn't, because you need a monitor. That works for me. It works very, very well for me. Again, you shouldn't do that over Wi-Fi, though — keep that in mind. But that's my go-to. So unfortunately, I haven't actually looked into PCI passthrough lately. I think it's better now. I think it's way easier now than it was before; it's had a lot more time to bake, so to speak. So I think it's probably fine. But my solution works, so I guess I just go that way. Yeah. And not to get too far off topic, but if you want to do a little further reading, you can look up SR-IOV. Single-root I/O virtualization is a popular methodology that is used in the commercial market. So before someone calls me out and says, Tom, it is used commercially — because, you know, some devices support SR-IOV and you can buy multiple servers with it on there. Yes. And that will drive us off topic. So we understand there are edge cases and things like that. Generally, when people ask about passthrough, they just want to set up a hypervisor and pass through their video card, which, like Jay said, for gaming can be a very valid reason.
In the commercial space, there is the ability for devices to use a special protocol that is then supported through the hypervisors to pass through devices, so they can be passed through in a similar way across physical servers. But like I said, it'll steer us too far off topic. We're going to keep it narrowed to Proxmox here. Yeah. And I have a habit of going off topic anyway. So yeah. All right. So we know what to run it on. And I think already people are getting an impression: okay, I know what I could run it on — why would I want to run it? One of the things I like about it, like I mentioned, is the container thing. It uses LXC containers — which, people get upset with me: LXC, "Lexy," I don't make their decisions. Like, a long time ago I was interviewing with Canonical, six or seven years ago, and I'm saying L-X-D and L-X-C. LXD is a container management system, which I now have a video on my channel for, and LXC being for containers — it's "Lex-D" and "Lexy." Again, talk to the developers. People will, even when I say it right, get upset. But anyway, with Proxmox, that's the type of container that it runs. So it's not, you know, what most people think about when they think about containers — they think of Docker containers — but that's not what it's running. It's running LXC containers, or Linux containers is probably a better way to put it. And there are some challenges with that, which I'll get to in a moment. But I love that flexibility. I also love the fact that the interface is built in. So with — excuse me — XCP-ng, you have to run, blanking on the name, help me out: Xen Orchestra. Because Xen Orchestra is a separate VM. And we won't get to it in this episode, but we will dive into why that is and why that's a good thing. But of course, it bothers people. And if this is your dividing point — which for some people it is — Proxmox integrates the web UI right into the system completely. Right.
But one good thing I'll say about XCP-ng's way of doing that is you don't have to waste CPU cycles running a UI layer that you may or may not be using at any one moment. You could run it on your local laptop and manage your Xen server or your XCP-ng server with that. So I did feel that way — like, that divide — but after I thought about it like that, I'm like, you know what, that actually does kind of make sense. I do like that. So the UI in Proxmox is pretty easy to use. I wouldn't say it's the easiest, but it's not difficult at all. You have a data center view, and then you have your individual hypervisors, and you obviously can have more than one. So you can have a cluster. It's usually recommended to have, like, you know, three. If you're going to have a cluster, having two can present some problems with any virtualization solution, where they get into what's called fencing. With three, you have an odd number of votes, basically — there are other reasons why, and I think it's gotten better — but it's better, if you have a cluster, to have three servers. Or you can just have a single server. Right now I'm down to a single server. I had a cluster before and decided to simplify it; I think I'm going to complicate it again and get a cluster going. I'll probably make a video about it. But you could decide later on to do a cluster — you don't have to do that right now. You basically just set up a Proxmox server, and later on down the road, if you want to do live migration — move a VM from one server to another — you can do that. And that might be a benefit. Or if you want failover, or you want to basically upgrade or, you know, install the updates on one after moving the VMs off of it, then you could basically keep your VMs running. But then I could also argue most people in a home lab probably don't care so much about uptime as much as others might.
So shutting down or rebooting the entire Proxmox server to install updates — that might not be a problem for some people. And it's especially not a problem if none of those VMs are publicly exposed. You should still update them anyway, but the threat surface is a lot lower. So there's that. So there's a lot of great options around how to set that up. You have the data center view at the top layer or top level; you have your individual VMs or hypervisors underneath that. You could have settings that apply to basically the data center, which means all the servers underneath it, or certain settings for each one. It has a built-in backup system, so you could choose the schedule that you want your disks to be backed up on. They have a dedicated backup server that you can install on another machine, which I'm not using, but I do plan on diving into that. I'm currently using TrueNAS for my backups, so that's where they go. You choose the schedule, whether you want to try to back up a VM while it's running — probably not a great idea — or you could have it paused, or shut down. So you have options for how to handle the backups. I mean, I've seen earlier in the comments people were asking about it. The backup system is, like Jay said, something he hasn't dove into. I know it's a newer thing that has been added onto Proxmox, because I did do a comparison between Proxmox and XCP-ng, and at the time I did the video, they had not released their backup system. So you still have to have a solid backup strategy. Also, the live migration, Jay — does that work if the two Proxmox servers aren't attached in, like, what I think you refer to as data center mode? I've never tried that. So they call it a cluster.
So you create your cluster, you name it, and then you have a way to join, which basically comes down to: it gives you some key that you copy and then paste into the other, you type the password for root from the previous server, then they talk. I've not tried to do any kind of migration without it. So I don't think you can, but I don't want to say no, because anytime I say you can't do something in tech, someone's like, you know what, I know how to do it, I know how to make it work, and I'll show you. But I'm going to say no in general, okay? So now, migration is an interesting thing, because there are several ways that you can do this. Now, you could argue the best way is probably to have shared storage, which means you could have something like TrueNAS, Synology, QNAP, whatever you use, and have, like, an NFS or maybe an iSCSI mount that's exposed, so that all of your VMs have their disks on that storage rather than on the actual hypervisor itself. And the way live migration works, it's very quick. You don't lose a ping — at least I never have. It just seems like witchcraft. Like, the very first time, very early in my career — we're probably talking, I don't know, 12, 13 years ago or more — somebody said, you can live migrate now. I'm like, what? That's not possible. You can't do that. Yeah, you can do that. That was when VMware first came out with it, I think. Anyway, you could do that in Proxmox, and you could just ping the server while it moves to the other host and you don't skip a ping, which is great, because the storage is in the same place. The storage didn't move; where the VM is running, that moved. But now you can actually live migrate without shared storage, which takes a lot longer, because you are moving the disk from one server to another. It does work. I think in my test it took me five minutes, so it wasn't that bad.
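For reference, the cluster join and the two migration flavors described here map to a few CLI commands. A sketch — the cluster name, node name "pve2", VM ID 100, and the IP address are all hypothetical:

```shell
# On the first node: create the cluster (the name is arbitrary).
pvecm create homelab

# On the second node: join, pointing at the first node's IP.
# This prompts for that node's root password, matching the UI flow.
pvecm add 192.168.1.20

# Live-migrate VM 100 to node "pve2" when its disk is on shared storage:
qm migrate 100 pve2 --online

# Without shared storage, copy the local disk along too (much slower,
# as described above):
qm migrate 100 pve2 --online --with-local-disks
```

The same operations are available from the web UI; the commands just make the moving parts explicit.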
But I didn't have a ton of data on that server either. So it does work, and I did play around with it. It works pretty well. So you do have those options. Now, shared storage is going to be more expensive, obviously. It does represent a single point of failure for all the servers that you're running. So if that link is severed to that shared storage, then it's like the SATA cable in the server — whatever cable — being pulled right out of the disk. That's not a good thing to have happen. So it needs to be stable. Yeah, setting up the shared storage is always tricky, because you have to really make sure it's reliable. And sometimes people think about a lot of redundancy between the servers, but you still end up with somewhat of a single point of failure. What I'm doing now — I'm down to a single server currently. I am going to be adding another. I just like the cluster thing. It'd be a good topic for a video — an excuse to buy something, let's be honest. But no, in actuality, I'm thinking about doing a Proxmox series again. I've done one before; it was a little while ago, so I do want to refresh that. And when I built the recent server — I think I've had it over a year now — I went with an M.2 SSD, and I felt like maybe shared storage doesn't make sense for me, because I don't really care that it's slower to migrate than if I had shared storage. And it's SSD — and it's not even a huge SSD either; I think it might be 500 gig at most. But if I set up another server the same way, it's going to be a lot faster to live migrate server to server with a fast SSD. Spinning rust, of course, is going to add a lot of time to that. So if you're setting up Proxmox, I think it's a question of: how important is it to you? Is it just something that you're going to run Plex on, or a number of other VMs?
You don't really care if it's down every now and then to reboot or to patch it or whatever? Then you probably don't need a cluster, honestly. You probably don't need shared storage either. You should still back up the disks. But then again, if you're practicing IT and you're learning and you're getting into enterprise IT, or you already are in enterprise IT and you're trying to broaden your horizons a bit, then yeah, you probably do want to do shared storage, because you want to get used to that terminology — and SFP, SFP+ if you're going 10 gig — and Tom has a bunch of videos on that if that's something that you want. One thing I would caution, though, that I've experienced — and I'm sure someone has already asked in the chat — is if you have one gig to your storage device, like a TrueNAS or Synology or whatever, a one-gig link, it's going to crawl. Like, one VM — I mean, I had, I think, five or six VMs on there, I tested it, maybe even ten, and they all ran fast. They were fine, totally fine. But as soon as Ansible hit those servers, and if Ansible hit more than one at a time and had more than one generating a lot of I/O — patching, installing applications or whatever — to the central storage, the VMs became almost unusable. So I don't recommend a shared storage solution with a single one-gig connection. You can do it if you're okay with massive slowdown, but I just don't recommend that. It's very cheap to go 10 gig with the right hardware. Tom has videos and links for all of that. So if you're going to do shared storage, definitely check out his videos for that information — highly recommend that. But just make a decision: do I need clustering? Is it going to benefit me at all? Is it just going to end up being a headache? That's a discussion you have to have with yourself, basically. Yeah.
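If you do go the shared-storage route, pointing Proxmox at an NFS export from a TrueNAS or Synology box is a single command. A sketch with a hypothetical server address, export path, and storage ID:

```shell
# Register an NFS export as shared storage for VM disks and
# container root filesystems (addresses and paths are placeholders).
pvesm add nfs nas-vms \
  --server 192.168.1.50 \
  --export /mnt/tank/proxmox \
  --content images,rootdir

pvesm status   # confirm the new storage shows up as active
```

Once every node in the cluster can see the same storage ID, the fast no-disk-copy live migration discussed above becomes possible — with the single-point-of-failure and one-gig-link caveats from this section still applying.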
And 10 gig is becoming an easier discussion to have with yourself, because there's no doubt that, especially with used markets, it's absolutely affordable now to throw a few 10-gig cards in there. I've always recommended a lot of the Intel dual-SFP cards, because you can find them on eBay for like 60 bucks. And really, it's such an easy, low-cost thing to throw in there and have some connectivity between the servers. Matter of fact, I'm actually looking into it, because I know even some of the faster stuff is starting to get cheaper. And I may buy a few pieces and components, and find some deals on things, for people to see how to build your network as fast as possible between the servers so you can migrate faster. Yeah. Now, one thing I'm not going to be able to talk about as well is ZFS and Proxmox. Unfortunately, I do know that this is going to disappoint a lot of people, because that's one of the main things that people love about it. When I do the Proxmox series, I'm going to definitely show that off so people can still get that information. The reason why — I mean, I'm not using ZFS currently. I had some issues, but I'm not going to really talk about what those issues were with Proxmox, because that was more than two years ago, and I don't really feel like it's a good thing to hold a grudge. And I think they've probably figured it out by now. I think it's more than likely fine; I'll give them the benefit of the doubt. ZFS gives you some amazing features, as we've talked about before. Scrubbing and things like that is a great benefit to have. The server that I'm running right now only has a single drive, so I lose some of the benefits of ZFS as I understand it. But I might throw another SSD in there and convert it over to ZFS — it's something I'm really thinking about doing, because I want to dive back into that, especially considering I'm going to be doing a series about it when I get the time to film it.
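The scrubbing mentioned here is ZFS reading every block back and verifying it against its checksums. On a Proxmox node installed on ZFS, the installer's default pool name is `rpool`, so a quick sketch:

```shell
# On a Proxmox node installed on ZFS ("rpool" is the installer default).
zpool status rpool      # pool health, plus the result of the last scrub
zpool scrub rpool       # start a scrub; it runs in the background
zfs list -t snapshot    # list ZFS-level snapshots on the pool
```

With a single drive, a scrub can detect corruption but not repair it; with a mirror (the two-disk minimum mentioned just below), ZFS can rewrite bad blocks from the good copy, which is where the real benefit comes in.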
So I need to definitely re-expose myself to all the extra features. But one thing I noticed is that when Proxmox switched to ZFS — I mean, you can still not use it, right? — they took away some of the RAID options. They used to have, I believe, a RAID 1 that was just MD RAID in Linux, and then a ZFS option when they first introduced it. Now it's like single drive or ZFS. The last time I installed it, which was probably a little less than a year ago, those were the only options. So, good options to have. Just, with ZFS you're going to have a little bit more overhead, and you need at least two disks for that. But I do feel that's a benefit of Proxmox for sure. You could leverage ZFS, and I think a lot of people would love to be able to do that. Yeah, and I think it's cool that they built it in like that, especially because they did an integration, as I understand it. So the snapshots work on ZFS too — the ZFS snapshots are part of the VM snapshots. Is that correct, Jay? I've heard that. I haven't personally tested it yet, but that's very likely the case. So basically, at some point I'm going to be mentioning in the show: by the way, guys, I have a Proxmox series I just refreshed. Now, it could take me a few months to even get it done, to be honest, so don't expect this, like, next Friday or anything. But at some point, yeah, when I have that, when I mention that, it's going to cover that stuff. So yeah, if you don't know it now, you'll know it then. Because you have a Proxmox series now, but it's going to be a little dated, because it's from about a year ago. More than that, actually — I believe it was filmed in your studio. Oh, okay. So that's been two years. Yeah. People would sometimes joke — or not joke, but say — Tom, he's stealing your studio idea, he set it up the exact same way. Dude, it's the same studio. The same studio.
Me and Jay have known each other for a while, and when he used to live relatively close, I let Jay use my studio until he had his own. Simple as that. Okay. Bottom line, Tom really helped me out, and in that studio I filmed that series. That was when I first started putting a very serious effort into the YouTube channel. It had just been a passive hobby, but it was at that point where I was like, yeah, I should probably do more with this. So Tom helped me launch what LearnLinuxTV has become today. Anyway, ZFS is good. If you can use it and you get a benefit out of it, definitely consider it as part of your system. Maybe do some tests: install Proxmox one way, trash it, install it another, until you find the magic incantation that works best for you. That's the great thing about HomeLab. We can destroy it. We can rebuild it. At work, we don't get that benefit, right? If we have a day job in enterprise IT, we can't just wipe all the servers and decide to redo all the things. But in the HomeLab, we can redesign everything five times in a day if we have enough energy, and the only thing that's going to complain is maybe family members when the Plex server's down. Yeah. My staff gets upset when I decide to change the entire stack of hypervisors that runs the business from day to day. Well, they get upset, or they get to go home, because there's nothing to do if I break it all down. Yeah. So it's just an amazing benefit to be able to play around with these things. And then if your employer ever says, hey, do you know about X, and it's one of those things you've been running in your HomeLab, it's a great feeling. Like when they said, let's switch to Proxmox, which happened to me at a previous job. I was using Proxmox and they were like, yeah, we could consider using that. They were actually trying to use OpenStack on one server in production. Oh my God, do not do that ever. Do not run OpenStack on one server for everything.
And this was a huge server. It was really beefy. I said, well, you know, I've been testing out Proxmox; I think you'll like it. And that company, even though I don't work there anymore, is still using it now. So it worked out well for them, and I feel justified, because something I like ended up being a good choice for a company. Yep. Now, what's the next thing? What about building the VMs? I think this is where the confusion comes in. It is, because with a lot of these virtualization technologies, especially KVM and QEMU, which is what Proxmox uses, there is no template feature, unless they've added one recently. One of the things I loved about XCP-ng was that it had a template feature. The VM templates are great. I loved it. I had a blueprint for whatever distro we wanted to use. It was awesome. And then I picked up a book all about QEMU and KVM, before I even got into Proxmox, and it said there's no template feature, but you can just create a VM, name it template, and clone it every time, effectively doing it anyway, even though it's not a native feature, which I guess you can argue it doesn't need to be. So in Proxmox, you can create a VM and call it template-Ubuntu or template-Debian, whatever you're running, and just sanitize the VM a little bit. You can use cloud-init. I have a video about that coming up on my channel whenever I can get it edited. Anyway, cloud-init is built in, and it can help you do things like regenerate the SSH host keys. If you don't do that, your SSH client is going to get confused: every clone ends up with the same host key, so every time it hits a server, it thinks it's the same server, and you get that message saying something like, this doesn't look right.
The server is different than the last time I connected to it, even though it's actually a different server. Cloud-init can handle that, and there are other ways of doing the same thing without cloud-init. But you create that template, and what you do, basically, is right-click the VM, convert to template. It's that easy. It becomes a template. And then anytime you want to spin up a VM, you right-click on that template, clone, and then you have a VM. It's great. I always have a template handy, ready to go, and I make sure that template is backed up, because let's be honest, I don't want to do all the work of creating that template again. You can also use Terraform and Packer and all these other automation and DevOps tools against Proxmox, because it has an API. That's one of my favorite things. Is there even a virtualization solution nowadays that doesn't have API access at this point? I think they kind of all do, don't they? Yeah, I imagine they do, for the most part, at least the popular, more well-developed ones. But Proxmox and XCP-ng are both well known for their APIs, offering you a lot of flexibility for controlling things. I know, because I use it, that the one in the Xen world is extremely extensive, because it's all based on the core XenServer project. Proxmox, I know it has an API, but is it as extensive? Can you pretty much do everything and create with it? I've done everything I wanted or needed to do. I did point Terraform at it, with the provider they have for that. It was about a year ago, but yeah, I could create a VM and set all the things. And that's great, because I can script everything. I'm going to get back into that, because I got kind of lazy. I have the template; I just right-click and clone it, and there's a VM. So there's that. But I could automate all the things.
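That right-click workflow maps directly onto Proxmox's `qm` CLI. Here is a hedged sketch of building a cloud-init template and cloning from it; the VM IDs, cloud image filename, and storage name (`local-lvm`) are placeholders, not anything from the episode:

```shell
# Build a template VM from a downloaded cloud image (hypothetical IDs/paths).
qm create 9000 --name ubuntu-template --memory 2048 --net0 virtio,bridge=vmbr0
qm importdisk 9000 jammy-server-cloudimg-amd64.img local-lvm
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
qm set 9000 --ide2 local-lvm:cloudinit --boot c --bootdisk scsi0

# Same as right-click "Convert to template" in the web UI.
qm template 9000

# Same as right-click "Clone": a fresh VM, with cloud-init regenerating
# SSH host keys and injecting per-clone settings on first boot.
qm clone 9000 101 --name web01 --full
qm set 101 --sshkeys ~/.ssh/id_rsa.pub --ipconfig0 ip=dhcp
qm start 101
```

The `--full` flag makes an independent copy of the disk; drop it for a linked clone when the storage backend supports it.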
And I never felt like I was limited in any way for anything I was doing personally. So I think it's probably fine, and maybe some other systems go further than others, but at least you have these tools you can use to get your stuff going. There are going to be some differences between VMs and containers that we should probably talk about, though, that go beyond why you should use one or the other. So when you have a container, you have limits. You can set CPU and memory limits, which is important, because you don't want a stuck thread or some kind of runaway process, or whatever you want to call it, going crazy and taking up the host completely, so all your other instances can't really do anything. So you can set some logical limits there. With VMs, traditionally, you're saying this VM has two gigs, so two gigs of the host are being used for this VM all the time. But with a container, you're setting a ceiling: instead of saying I'm always going to assign this memory to you, there's a limit. Now, nowadays there's memory ballooning, so you're not actually always dedicating memory to the virtual machine. Memory ballooning allows you, and I'm not as knowledgeable on this, to be honest, but my understanding is that if a VM is not using all of its memory, the host can share that across other VMs that might need it to do their thing. Is that your understanding? Because I think XCP-ng has ballooning as well, doesn't it? Yeah, you have that ability to basically over-provision memory. So we only have 32 gigs, but we can allocate 48, because each VM may not be using all of it. And provided the hypervisor is aware of the actual memory usage, not the, I guess you could say, fake allocated memory over-provisioned to it, you're able to do that.
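The 32-versus-48 example works out like this as back-of-the-envelope math; the individual VM sizes below are made up just to match the totals in the conversation:

```python
# Back-of-the-envelope overcommit math for memory ballooning.
def overcommit_ratio(physical_gib: float, allocations_gib: list) -> float:
    """Ratio of memory promised to VMs versus what the host actually has."""
    return sum(allocations_gib) / physical_gib

# A 32 GiB host with three VMs allocated 16 GiB each: 48 GiB promised.
ratio = overcommit_ratio(32, [16, 16, 16])
print(ratio)  # 1.5 -- only safe if the VMs rarely peak at the same time
```

Anything over 1.0 means you are betting the VMs won't all demand their full allocation at once, which is exactly the planning caveat discussed next.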
So that way, if there's peak usage, like a SQL system that doesn't always use a lot of memory but does under heavy workload, and as long as that workload isn't heavy across the other VMs at the same time, this is kind of a neat way hypervisors let you optimize the hardware better. We allocate it, but it's not needed until it is, and then it's available as long as all the other VMs aren't using it, so we don't end up with a bad event where everything wants all the RAM at once. It takes some careful planning, because it's one of those things you can run into trouble with: if you're over-provisioned and the worst-case scenario happens, that can cause the machines to lock up, and everything goes south on you really quick. Yeah, and it's important to understand the limitations of one solution versus another. So when it comes to live migration, VMs are great because you can ping them while they move over and no one even knows it happened. But containers on Proxmox stop and start up again on the other node, so they will lose pings. That is a thing when it comes to containers on that system. If you have a workload that can't ever lose pings, even during a migration, then you do not want to run it as a container; you want to run it as a VM. Now, I'm not completely sure, but I thought what it's basically doing is copying the entire disk from one server to another every single time I go to migrate a container. I don't know if that's still a limitation with shared storage, I think it might be, but either way, you're going to lose pings when you migrate a container, whereas a VM you can actually keep up the whole time. Yeah. And you've got to remember, too, that whatever you're running has to support containers in the first place; some companies just slap a "we don't support containers" on their software.
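From the Proxmox CLI, that VM-versus-container migration difference shows up as two separate commands. A sketch, with the target node name and the IDs as placeholders:

```shell
# KVM virtual machine: live migration, the guest stays up, no lost pings.
qm migrate 101 pve2 --online

# LXC container: there is no live migration here -- it is stopped,
# moved to the target node, and started again, so pings are lost.
pct migrate 200 pve2 --restart
```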
There are also people who run Windows machines and things like that, because they have a need for it. You're not going to throw Windows into a Linux container; it doesn't work like that. Or many of the appliance-based distributions you may be setting up or importing. pfSense is a great example: firewall distributions don't run as containers. Maybe there's some project out there that does, but for the most part, pfSense and OPNsense and Untangle, the popular ones we've talked about in previous episodes, don't run as containers. Therefore, they need to be their own VM. So there are a lot of times when you'll be doing it like that. That's right. And there are going to be some use cases that work better for one than the other, like Tom mentioned. On the flip side, I can't think of a reason why a website you're hosting would ever need to be a virtual machine. There's probably an edge case, but if all you're doing is exposing something via Nginx or Apache or whatever, honestly, it's probably better as a container, because I can't think of any reason for it not to be. Some other apps have these installers, or they just overcomplicate things, or it's a Windows machine like Tom mentioned. Yeah, that's not going to happen. I'm assuming maybe Microsoft will get around to it someday. I mean, I didn't even think they would make Windows 11, but here we are. They said Windows 10 was going to be the last one, and now we have Windows 11. I'm not even going to get into that, but basically, we don't know what Microsoft is going to do. They could have a containerized Windows thing next month, or they might have it in 10 years when nobody's using it anymore. They just need to move to the Linux kernel, bring all those things with them, and have one happy Linux universe. I could totally see that happening, but that's a debate we can get into in another dedicated episode.
That's off topic. That's the most off topic of all today. So the thing is, Proxmox just gives you these tools, and you can implement them however you'd like. You can containerize everything if all your apps work out that way, or virtualize everything because you have the memory, why not, or because you just need the migration to work better. And there are other things that work pretty well, too. Disclaimer: TuxCare is a sponsor of my YouTube channel, but it fits here because Proxmox is Debian. So if you want to do live patching, you can do that. And since the containers use the host kernel, you could run TuxCare or something similar, where it live-patches your running kernel without having to reboot. You still have to restart services, though. But that's valuable, because if you really, really, really don't want to reboot anything, you could consider something like that. For HomeLab, honestly, I'm curious how many people actually care about uptime. I'll see engineers bragging that their laptop has been up for four years; well, not really, but they'll brag about uptime, and nowadays I kind of wonder if that matters as much as it used to. Yes, we can get the uptime, but is it the most practical way to do things? Maybe not. That's also another debate for another day. Yeah, something that's probably worth mentioning, because a lot of people have been bouncing around a few different things: there's ZFS, which we did talk about, but also GlusterFS, NFS, and Ceph, CephFS. Yes, Ceph. I'm looking at the list on Proxmox's site, and they say CephFS. Yeah. And GlusterFS. And of course iSCSI. They have all the really popular storage support for shared storage on there, the shared ones being NFS and Gluster, and they support shared iSCSI and ZFS over iSCSI.
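Those shared-storage backends can be attached from the web UI or from Proxmox's `pvesm` storage manager on the CLI. A hedged sketch of adding an NFS store; the storage name, server address, and export path here are placeholders:

```shell
# Attach an NFS export as shared storage for VM disks and container roots.
pvesm add nfs vm-store --server 192.168.1.50 \
    --export /mnt/tank/vms --content images,rootdir

# List configured storages and confirm the new one is active.
pvesm status
```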
That's actually interesting. I know they did ZFS over iSCSI at some point. Yeah, that's interesting. But nonetheless, there's a broad range of storage support, so if you're using it in a shared storage environment, they have quite a few options. Interesting. I didn't know they supported CIFS. Have you ever set that up? No, I haven't actually, as of yet. But I think that's a good benefit to talk about. They support a lot of things, probably more than a lot of the other competitors would. And it's cool to say, yeah, I want to check out Ceph or something else, and oh, they support it, cool, then I have a way to do that. I'm sure other providers do the same thing, but it's great to have those options to figure out how you want to take it. But one downside, in my opinion, of Proxmox compared to XCP-ng, and I'm not going to spend too much time on this because I think we've talked about it before, but as a quick aside: I feel like there's more excitement around XCP-ng. The development is accelerated. Now, sometimes that's because a thing is new, and it's not exactly new, because XenServer has been around for a while, but XCP-ng itself is still kind of young when we talk about the years it's been around. It's been out long enough that we can trust it, though. It's like they're still excited about it, still crazy about the development, passionate about it. That's not to say that the Proxmox developers aren't passionate; I'm sure they are. But I feel like with Proxmox it's more status quo. They come out with new stuff, they support all these things, but they're a lot less likely, in my opinion, to do something crazy, like, oh my God, they just overhauled the whole UI, or they redid the container engine. I'm sure they might someday. But I would say Proxmox is a good fit for people who have the Debian mentality: leave it alone, it works.
It's fine. Let's not overcomplicate it. I think that's a good thing. Now, the elephant in the room that I probably should talk about is that it is free, but I pay for Proxmox. I pay for the cheapest option. It gives you an extra repository and more updates and things. You don't have to use it, though; you can perfectly well get away with never paying money. But that's how they basically keep afloat, because you've got to understand it costs money to host this, provide it for download, and pay the developers who work on it. I love supporting projects that resonate with me, so I had no problem buying it, because I figured I'm using it and they're working hard on it. And you do get some extra features. I've always paid for it, to the point where I've actually forgotten what it's like not to. But what I remember is that you'll get errors when you do an apt update, because it's trying to hit repositories you're not authorized to hit. You still get updates; you just don't get all of them, and there are extra features. So you can look at the feature list on their site and make a determination about whether you even care to pay for it. If you find out, yeah, everything I need works, then I say why not. I can't remember what currency theirs is in, but I think it converted to like 70 U.S. dollars, maybe. I thought it was less than 100. Yeah, it's 90 euros a year per CPU socket. That's how it's priced; I'm looking at it on their site. So is it 90 per CPU socket per year? I don't know. I've always had two CPU sockets until recently, so I've never had to... I don't remember. Well, maybe I did; that was so long ago. It might be per CPU socket, come to think of it. I know I had two CPUs back then, when I first started. Now I'm down to one, because my needs aren't what they used to be, and also I want to save power, because you've got to save energy.
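For what it's worth, the per-socket math from that exchange works out like this. The 90 EUR per socket per year figure is the one read off the site during the episode and may well be outdated, so treat it as illustrative only:

```python
# Proxmox subscription cost, priced per CPU socket per year
# (90 EUR/socket/year as quoted in the episode; check proxmox.com today).
def annual_cost_eur(eur_per_socket: int, sockets: int) -> int:
    return eur_per_socket * sockets

print(annual_cost_eur(90, 2))  # old dual-socket server: 180 EUR/year
print(annual_cost_eur(90, 1))  # current single-socket box: 90 EUR/year
```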
Always look into energy costs when you're trying to buy a server. Oh, yeah. So yeah, just keep the cost in mind. I mean, they technically want you to pay for a subscription, but there's a one-liner you can run, even though I don't always like one-liners, and it just sets the repositories up for you. You don't have to pay to do that. So you can still use it without paying; there are just some quirks if you don't. As someone pointed out, there's a nag screen if you don't. Yeah. And I just hate errors, so doing an apt update in an SSH session, I'm just like, oh God, I hate seeing this error. Because if you try to chain commands together, the apt update will fail. Well, it didn't really fail; it just can't hit some repositories. You could comment them out if you really wanted to. It's not hard to bypass at all. I really like it. I guess that's my main downside, though: other than the cost, there's just more excitement around XCP-ng. But I still love Proxmox because it's status quo. It's stable. There are no surprises. It just works. And a lot of people, not just the ones commenting here for home labs, we've seen it used in businesses, just like one of your previous employers is using it. It's completely practical, solid, stable, which is, of course, one of the things you really want when you're running your business production systems on it. So even though I don't run it, when people ask me whether I think it's a good platform: absolutely. Just don't ask me a lot of detailed questions about it, because I don't use it on the daily, so sometimes I miss things. And we will leave links in the show notes to Jay's full series on Proxmox. Granted, the videos are older, but the concepts, the base of the system, are going to be the same. Eventually, Jay will get around to doing a new series on Proxmox, unless I convince him to use XCP-ng. But if he doesn't convince me, I'll do a new series on Proxmox.
And dive deeper into it. A few people had mentioned Proxmox Backup Server. It's newer, it's got some cool features, and it sounds like a great system, but it's not something either one of us is extremely well versed in yet. Another thing I'll mention is not Proxmox specific, but I think it makes sense here: if you're going to buy a server, and I recommend you do this regardless of what you want to run, always run Memtest86 on it before you install anything. I don't care if you buy a used laptop or a desktop or a server, run Memtest86 on it before you actually start using it, and I would even argue to run it once a year. I can't count how many times I get comments like, Proxmox sucks, it doesn't work, or Linux doesn't work on my computer. Honestly, different operating systems handle bad memory differently. They'll often still run; you'll just have these weird, quirky side effects that don't really make sense, and then you find out your memory was bad the whole time, especially on a server. Just download Memtest86 or whatever memory tester you have and run it for at least 15 minutes, and make sure there are no errors before you start. That way you know you're starting out in a very good place, without any hardware problems, at least as far as memory is concerned. Because the main thing is, Proxmox is as stable as the hardware you install it on. If you have a mostly bad hard drive with half the sectors bad, or bad memory, or some motherboard issue, or capacitors that are leaking, and I've seen all these things, it's not going to run well. Just check the physical layer. Yeah, it will save you some real fun troubleshooting when you have a quirky system. Let it run for 24 hours prior to setting things up if you can, and that definitely saves you some of that trouble. Another thing I should mention is that we are not going to be recording next week, so we're taking that particular week off.
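One footnote on that memory-testing advice: Memtest86 runs from boot media, before any OS is installed. As a quick supplementary check on a system that's already up, the userspace `memtester` package (a substitute tool named here for illustration, not one mentioned in the episode) can lock and exercise a chunk of RAM on a Debian-based host like Proxmox:

```shell
# Install and run a userspace memory test on a running Debian-based host.
# It can't test RAM the kernel itself is using, so it complements rather
# than replaces a full bootable Memtest86 pass.
apt install memtester
memtester 2048M 1   # lock and test 2 GiB for one pass; errors go to stdout
```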
I am taking the week off, actually, and this is a real vacation, because anytime I take a vacation, I'm usually just taking a vacation from one thing while still doing all the other things. This time I'm actually going to try to be somewhere cell phone coverage isn't much of a thing, just like Tom did recently. That's the best thing, right? You just have to be old school and not even have a working cell phone. That's awesome. Truly AFK. But on my channel, we'll slow down a bit, because I always record ahead. So the number of videos will slow down and then pick right back up again, probably the first or second week of July. And this particular podcast will take a break next week, and then we'll be back with a topic. We don't know what yet, but it's going to be amazing, because this show is amazing. Let's be honest. Yeah, we're pretty excited about this. We love diving into each of these different services, and we have like a cloud of things we're just grabbing from, going, which one's next, which one's next, and trying to put it on there. So plenty more content to come. Yeah. But our cloud isn't someone else's computer, though. These are our computers. Yeah. This is our cloud. We're pulling from the things we have in our cloud and sharing them with you, hopefully giving you some ideas to get started. We will be diving, of course, into some specific and individual applications as well. I know Nextcloud's on the list, and we're going to dive deeply into that one. The challenge we're going to have with that episode is keeping it to about an hour. Yeah. And Nextcloud changes a lot; every now and then they just overhaul things, which I've noticed with the last two versions of my book, Mastering Ubuntu Server. I'm pretty sure both times I had to make a massive change.
One of the versions of my book, I think it was the second edition, is when the whole ownCloud versus Nextcloud thing happened. That book actually made it out to press right before a new version of Nextcloud came out, and I had to redo a chapter. Fun times, but when you cover tech topics, that's going to be the case. When it comes to Proxmox, I know there are things we missed. We will cover it again at some point. We're going to have episodes like, check out the new features of X, and let's talk about this new feature. So this is not the last time you'll hear about Proxmox. We'll cover XCP-ng again, too. We'll go back to all these topics as they change and mature; there will be things to talk about. Yep, for sure. And we'll leave all the details in the show notes, where you can find the series on Proxmox and several videos that you did on that topic. And thank you. See you guys in two weeks. See you later.