Welcome to the Homelab Show, episode 108: self-hosted questions and answers. But I actually go a little further, because there's another topic that came up that Jay and I are going to add to this list: someone actually self-hosting everything and making a long list for all of you. It'll be linked in the show notes, and we'll talk about running a whole business on self-hosted software. My friends over at Vates just did a great write-up. So if you were looking for some projects for your homelab, that'll also be included in here. Now, how are you doing, Jay? I'm doing awesome. How are you? Man, it is a wonderful day indeed. The sun is not quite shining, but it's been nice and cool here in Michigan, which is unlike some of the other areas, so it's been rather pleasant. I just got back from Myrtle Beach, so it was quite warm there, very, very warm compared to here. Very refreshing. Very cool. It was very refreshing. All right, before we dive into today's episode, we want to thank a sponsor. That sponsor is Linode. They have been sponsoring the show since the beginning. We do like Linode. If you're looking for a place to host your homelab projects and you don't want them running at your house, run them at Linode. They have a lot of templates to help you get started. They've been a great sponsor and host of the show. We run a lot of our own stuff on there, including a lot of the infrastructure; the Homelab Show itself is hosted on Linode. So if you downloaded this, this is where you're downloading it from; it's right from a Linode server. Check them out. We have an offer code down below for the Homelab Show, and we thank them for being a sponsor. Yep. All right. Let's see. The first thing I want to start with is the very top of this: "Our self-hosting journey with open source" is what it's titled.
As I said, you're going to find this over in the show notes, but I want to read through some of this and just talk about it, because it's kind of a wow thing; it's such a great self-hosted journey that they have over at Vates. Vates is the company behind XCP-ng, so a little background: they are huge open source advocates. All the source code is available for XCP-ng and the Xen Orchestra project that they support, and they also, you know, do what you'd call living their brand. It's not just like, hey, we do open source, but we have a bunch of proprietary stuff that we build all this with. This is all about them giving back to the community, working with and using open source, and being really better than me when it comes to doing this, because unfortunately I use a lot of proprietary tools doing things. And I wish there was an open source tool that would solve all my problems, but I manage Microsoft and Windows servers, so there's not. But there is in the world that they're in. So the whole write-up is predicated on some philosophies of why self-host. They have a whole list here, but the ones that matter are no vendor lock-in, predictable costs, and data sovereignty. And I think all of this just right away aligns with homelab. So this is a whole list of software that all of you in the homelab can absolutely use. And for those that always argue, well, is this used in the enterprise? Well, Vates is not just some small company. They're a pretty big outfit; I think they have over 40 employees right now and growing fast. And this project is huge. This is massive, used-by-enterprise-companies software that is being run on open source. And I just think it's such a great list they have. Everything from video, which is going to be Jitsi, their blogging, their forums, code repository, infrastructure management with NetBox and Snipe-IT. I haven't done a video yet on NetBox.
It's something I'm looking to implement, but we do use Snipe-IT. It's really cool for inventory management. I mean, the list of fun things just goes on and on. Prometheus, Grafana, pretty much stuff that, much of which, we've talked about on the Homelab Show, but even more for those of you that would like to just dive into it. I think it's so cool. Maybe I'll even do a video about it, but I like it. And of course, their choice of hypervisor was XCP-ng. So they do use their own product for the base servers that it all runs on too. No shock there. Yeah, it would be quite a shock if they didn't, but I'm glad they did. Yeah, so I left it in the show notes and I've tweeted it out. I've shared it on LinkedIn, so it's easy to find. You can just go to the Vates blog if you're just googling around and looking for it. But I think this was just a really cool list. I mean, they took the time to give you the detail of what they use, how they use it, and why they use it. It's a good write-up that gives you the whole philosophy. So I was just excited to see that. I don't know if it deserved a whole show topic to go through each one of these, but I know a lot of you are interested in it. The cool thing is this is all accessible for you to run. Yep. All right, now we can get to the Q&A part that we have. All right, what is our first Q that we're going to A? I would say let's start with Joe, who has, it looks like, a three-part question. Hopefully I'm not missing anything. So essentially, the first part of the question is whether or not large networks are using physical firewalls or software firewalls in containers and such. Now, of course, each data center is different and every implementation is different. What I've seen the most is physical networks, physical switches, and then software-defined networking in cloud environments. So with a lot of companies using a combination of both, and we also have another firewall question as well, it's usually some combination of that.
I personally like to use the tools within the cloud environment if I'm using a cloud environment, because they're just integrated so well. There's nothing stopping you from using UFW and just doing it yourself if you want. But with cloud environments, firewalls have an API that you can hook into and programmatically define what can talk to what, essentially, which is exactly what a firewall is for. But generally speaking, that's what I see: physical networks, physical firewalls; cloud, software-defined firewalls. Does that resonate with you as well on what you've seen so far? Yeah. I'll also mention, in the chat and in the comments, Olivier Lambert, the person who published that, is now in the comments, if you guys want to throw questions in there. Back to the topic of the firewalls, I think it's really important to use both when it comes to your configuration, because we talk a lot about, in security, the lateral movement: what happens when they get in. And I'm actually working on a video on that, because I think this is kind of fun, and if I can get one of my friends, John Hammond, if he has some time to participate in it: for fun, let someone in my network and see where they go. So the fear is always, where do they go once they are in? They move, what they refer to as, laterally. So you have a server, let's say your Plex server, and something happens, a flaw is found, and now they got inside your Plex. Now they're going to look to the left and right of them and go, what else is in here? What other fun things can I get to? This is where firewalling things with an implicit deny comes in. For example, you may build a network that has several servers on one segment of the network. Those servers may or may not need to talk to each other, and you can define the firewall rules accordingly. And I recommend that. For example, we run a wiki with a bunch of documentation in it, but that only needs to talk to the reverse proxy.
So if you were to scan the IP of the wiki: first, it doesn't answer pings, because why would I need that? Second, it doesn't have ports 443 and 80 open to you. They're only open to the reverse proxy. So it's not just building firewalls so they don't talk to each other; it's, why not? Because if it's absolutely predictable that this server only talks to these things, that's a good lockdown method, to go that far where you create isolation between the hosts. That way, if someone's on the network or one of those individual servers is compromised, if there's no need for communication between the two servers, then have a firewall rule that blocks communication between those two servers. You still have your main firewall; that's still the first step. This is an extra layer. And it's not a hard layer to do, just defining the rules, because you're defining them explicitly when you set a server up for production use. And don't turn the firewall on until the last step, to save yourself some troubleshooting; you always turn it on last, just as a step in the process of the thought here. But it's good to set these rules up, configure them, and then have the implicit deny; it stops any extra traffic from flowing. You can go a little bit wilder if you want and do egress filtering, where only certain things can reach out. Be very careful with that, because I've seen people set up egress filtering and then break the ability for the server to do updates, because if it can't get to the update server, it may assume there are no updates needed. So now you have a new security problem you created by trying to be overly secure. So do be careful on the egress filtering, whether or not you need it. But for ingress, in terms of ports open, yeah, filter them down to the minimum: a principle-of-least-privilege type of security model, where what does it need to talk to, what needs to talk to it, and build those rules. Yep, I totally agree. Excuse me, Michigan allergies never go away.
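As a rough sketch of that wiki example, here's what host-level rules could look like with UFW on the wiki server itself. The addresses and tool choice are hypothetical illustrations, not what the hosts discussed in the show actually run:

```shell
# Hypothetical host firewall on the wiki server: default-deny inbound,
# then only allow the reverse proxy to reach the web ports.
sudo ufw default deny incoming
sudo ufw default allow outgoing

# 10.0.5.10 is a made-up address standing in for the reverse proxy.
sudo ufw allow from 10.0.5.10 to any port 80 proto tcp
sudo ufw allow from 10.0.5.10 to any port 443 proto tcp

# Keep SSH reachable from a management subnet only (also hypothetical).
sudo ufw allow from 10.0.9.0/24 to any port 22 proto tcp

# As discussed, enable the firewall last, after the rules are defined.
sudo ufw enable
sudo ufw status verbose
```

The same idea translates to nftables, firewalld, or a rule on the main firewall between VLANs; the point is the default-deny posture plus a short explicit allow list.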
So, okay, the second part of the question is in regard to disabling unnecessary services running directly on the router's host OS, and is this a good practice? I say it's good practice on everything. I mean, if you're not using something, it shouldn't be running. And I would even go as far as to say, if you can uninstall it, that's even better, because then you don't have to worry as much about a CVE that can enable something that was disabled, if that's something that's currently happening. Something that's not present can't be used against you. Usually the appliance firewalls don't really have a whole lot enabled by default, but you still want to check: if you're not using NFS, why have it enabled? Same with Samba or any of the other things. So I feel like that's a good idea on routers and, like, pretty much everything: computers, your workstations, servers. You definitely want to make sure you audit what's running, especially what ports are listening; that's just a good idea. If it's not something you're using, especially if you have, like, an SMTP server, I mean, I especially hate it when a Linux distro has something like that pre-installed. If you don't plan on actually sending mail, then you probably want to turn that off. So I think that's definitely just a good practice on literally everything. It's not specific to your router; it's everything that has TCP/IP. Definitely make sure that you lock that down. Absolutely, everything that you said. Yeah, so the other part of it is asking about a security audit. Now, this is interesting because, you know, you would think, okay, nobody's homelab is going to be security audited unless somebody's, like, super wealthy and they can just afford that. But I still think it's a great way to think, not just because you at a company might somehow or someday need that knowledge, but because if you approach it from an auditing standpoint, you'll get some security ideas that might help you out.
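As a sketch of that kind of self-audit on a systemd-based Linux box, something like the following covers "what's listening, what's enabled, and turn off what you don't use"; the nfs-server unit is just a hypothetical example of something you might find enabled and not need:

```shell
# See which TCP ports are listening and which process owns each one.
ss -tlnp

# List services enabled to start at boot.
systemctl list-unit-files --type=service --state=enabled

# Hypothetical example: stop and disable a service you aren't using.
sudo systemctl disable --now nfs-server

# Better still, remove the package entirely if you'll never need it
# (Debian/Ubuntu syntax shown; adjust for your distro).
sudo apt purge nfs-kernel-server
```

Running the first two commands periodically and asking "do I recognize everything here?" is most of the value.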
So, when it comes to an actual audit, it depends on the audit itself, and it's more or less, you know, if you say server A can't get to server B, okay, they're going to try it. And if they can get from server A to B and you said that that's not something that's supposed to happen, or not something that can happen, then that's going to be a ding on your audit right there, because there are going to be some things that, you know, are just required for the audit. And a lot of it is just saying what you do and doing what you say. And they're not going to care so much, most of the time, and I put a little asterisk on this because it depends on the auditor, they're not going to care so much about how you accomplish it so long as you accomplish it. For example, part of an audit is often: every, I don't know, certain number of months you're going to audit your backups, you're going to audit your documentation. You know, if you audit these things regularly, then you're in pretty good shape. That's a different mindset in a way, but I feel like it's a good mindset, because even if you're not going for an audit, it still kind of forces you to take a look at the different things that are required for one. You could probably find the requirements online, and a lot of them are just good ideas. Having documentation, that's a good idea. Having an authentication system, that's a good idea. Having a system where, if you delete a user account, all the access for that user also goes away as well. Your backups have to work. How are you doing your updates, and how frequently? Those are all things that come up in audits, and I just think they're good things to focus on in general. And if an audit puts you in that mindset or gives you, you know, some ideas, then why not? Why not treat it that way and see just how strong you can make it? Yeah, that's the nice thing with these policies becoming more and more popular.
You can go through, I mean, NIST and the different frameworks are completely available, and you can kind of go through and do your own rough estimation of, like, how much of this do you want to implement? And I don't think it's a bad idea at all to go through these steps in your own homelab. Yeah, yeah, there have been a few I've just done habitually because I've been involved with audits before. So some of the things I do, it's like, oh yeah, that's from that audit I did way back when. And it just kind of sticks. It's pretty cool. And that's where Lynis really helps when it comes to audits. It's a free tool. We're talking about L-Y-N-I-S, not the YouTuber; that's a different Linus, the one that's in trouble. We're talking about the tool, Lynis. A different tool. A different tool. Okay, that's funny. But that's a free tool you can download. I have a video about it. I'm pretty sure we covered it on the podcast, and it's going to give you a ginormous number of things to look at. It's actually kind of cool to run that and see the report for the first time. I want to jump in and answer a really quick question I see in the comments here. It says: Lawrence, my question is, why does everyone on YouTube complain about 10G on copper and always resort to recommending fiber? I ran 3,000 feet last year for my house, so the drops were no problem. I don't complain; if you have copper, awesome. There's nothing wrong with 10 gig over your standard RJ45 Cat6 or Cat6A, if it's within the distance limits. As for the reason fiber gets recommended, and I have a video about DACs and I might do an updated one called "You Don't Know DAC" or something related to that: the bigger challenge is the heat dissipation and the more expensive switches that you get when you're using those. Pushing 10 gig over RJ45 does have a higher wattage requirement and a more expensive switch. That's why you'll find a switch with tons of SFP+ 10 gig ports but only a couple of RJ45 ports; it just comes down to design.
So it's a little bit more expensive in the process, and if you do use, like, those transceivers that convert SFP+ to RJ45, you'll notice that some switches have limitations on how many you can insert because of the aforementioned wattage and heat dissipation, whereas fiber uses less wattage. But I may do some updated videos on that. My previous ones are still accurate, but there's still more context to add, and the prices have gone down, so I can reference some of that. But it's fine to use either one, and if the cost is more to run the wire, because you've already got the copper infrastructure built for a client, then you go with the more expensive switch. If you're greenfielding it and you're going, I have a choice right now to get from point A to point B, and we're doing that with a client now, we are choosing to run fiber, because we're running it new, so we may as well run it all with fiber and pull multiple lines at once. So it comes down to a cost analysis based on that, but both work, both are fine. There's not enough difference between them outside of the aforementioned issues I brought up. And another thing to keep in mind is you don't always have a choice. If you have a device that has a network card and the network card is not removable; like, I'm seeing computers nowadays that are 10 gig capable, and they have RJ45. At that point, you may not even have a means of adding an SFP-based card if there are no slots, and if you just have, like, one of those Mac Studios or one of those others that are out nowadays, it's RJ45. So there are going to be some cases where you may not have a choice. Right, yeah, it comes down to each use case. But for a lot of you YouTube people, I mean, I think it's part of it: doesn't fiber look cool? I mean, I'm using lasers and light to get my data around. Doesn't that sound cooler than copper? Copper is, you know, how the phone lines started. So the cool factor is definitely there for fiber. Practical.
Well, I mean, correlation doesn't equal causation. You could have somebody who might have had a slow transfer speed over, you know, Cat6 or something like that, and then they replace it with fiber and everything's better, when, you know, the entire time it could have been a nick in the cable that they didn't find, and it really wasn't about copper versus fiber at all. But because now it works so much faster, if they didn't know that was the case, they're just going to recommend fiber, because in their experience that's what worked for them. So there's going to be, as much as I hate to say this, some bias in homelab. It's just human nature, because if you have, for example, a brand of hard drive that dies, some people don't even think further than that. That hard drive failed them. They don't want to buy that brand again, when it could just be happenstance; hard drives fail, that's just what happened. So you have to kind of look at this from that perspective. I'm not trying to say that any YouTubers I know of are biased; it's just, sometimes you really have to think about the bigger picture. And sometimes I think that people make a bigger deal out of things than they should. And that's just kind of how it goes. Yep, absolutely. But yeah, it's a good question. It's definitely where people spend a lot of time, as they should, thinking and planning this out and, you know, dealing with what you have, dealing with what you need to get done and how you're going to get there. Planning out how to build out your lab is always a challenge, because homelab is actually harder than the industrial and commercial work that we do. The reason we don't do residential work is because it's harder, I'm not going to lie.
Like, when you say, I want to run a cable across my house, that's a wild card; it's a wild cost difference versus when I'm in a commercial building with a standard drop ceiling, standard trusses up top, where I'm going to mount the J-hooks and everything else. It's way easier to path. So. Yeah, I would say I'm kind of with you. I use fiber, you know, by default; that's what I use. But then if there's something that needs something else, I kind of go that direction, or if it's, like, a short run and it's just a means to an end, you know, then I'll go that direction. But I'll just start with fiber and then see what the situation is and what the use case is and then decide from there. And I think that's probably the best way to do it. Yep. All right. Now the next question, what's the next one listed here? All right. So the next one is from Paul, in regard to ansible-pull. And what I'm noticing, and this is one of the reasons why I picked this question, is that I see this come up in comments a lot. ansible-pull is kind of like the inverse of Ansible; just for those that don't know, a quick background. Ansible is going to use SSH; that's normally how people set it up. There's a control host, or a source, that is using SSH to connect to the machines that you manage. And that's just how it works by default. ansible-pull instead downloads a repository and runs it against localhost. So in that situation, you have to kind of think about things differently, like the inventory file, for example: how do I make that have an impact on anything? In this case, Paul's asking about the vault password. So, the vault password. Essentially, what you look at is the ansible.cfg file, and you want that at the root level inside the repository. So when ansible-pull pulls it down, if it finds ansible.cfg there, it's going to take effect.
You can tell it where the host file is, you can tell it where the vault password file is. Keep the vault password file out of your repository and, you know, just side-load that any time you set up a machine. You never want that in your repository; it invalidates everything if you do. And I use one vault password, outside of the repository. The config file tells Ansible where on the file system to find it, and it's up to me to add it there, add it with the right permissions manually, so that when it does run, ansible-pull knows exactly where to find its file. And the same is true with the host, excuse me, the host file. You can still use roles and all those things. And there's a lot of misinformation, because I feel like there's almost some stigma, not from anyone in their audience; it just kind of seems like normal Ansible is going to get more attention. That's just the way most people do it. And I feel like ansible-pull just isn't really considered as much as it should be. And I think that's part of the reason: some of the documentation is just a little bit here, a little bit there; I had to find it on my own. So I definitely understand these questions, because, you know, some of this information is pretty hard to find. And I don't remember if I covered the vault situation in my ansible-pull video, so I'm not really sure if I want to point anyone to a video of mine; I'm not sure if I clarified that. But just configure the Ansible config file, put it in the root of your repository, and then in there you can point it to the files that matter for your configuration. And then that should probably fix it. And now we have an idea for another video topic: how to do passwords with Ansible. Yeah, I think Ansible Vault, maybe together with ansible-pull, you know; I'd watch that video, I'm just saying. Yeah, I think that would be a lot of fun. So yeah, just wanted to throw that out there.
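As a minimal sketch of that setup, an ansible.cfg at the root of the repository that ansible-pull clones could look like the file written below. The paths (hosts.ini, /etc/ansible/.vault_pass) are hypothetical examples; the key point is that the vault password file lives outside the repo and is placed on each machine by hand, never committed:

```shell
# Write a minimal ansible.cfg like the one that would sit at the repo root.
# The inventory and vault password paths here are made-up examples.
cat > ansible.cfg <<'EOF'
[defaults]
inventory = hosts.ini
roles_path = ./roles
# This file lives outside the repository; side-load it onto each machine
# with restrictive permissions (e.g. chmod 600).
vault_password_file = /etc/ansible/.vault_pass
EOF

cat ansible.cfg
```

When ansible-pull checks out the repo and finds this file at its root, the `vault_password_file` setting takes effect and encrypted vars decrypt transparently, provided you remembered to side-load the password file first.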
And I'm sure there are going to be more ansible-pull questions, because it's quite common. So if I didn't answer something about ansible-pull and you're still confused about it, just throw us a question and I'll see what I can do. Yep, we have a Bitwarden question. Is that the next one? I think it is. So let's see, this is from Dan. Let me pull this up here. I recently did a video on Bitwarden, so we'll jump right to it. Tom highly recommends Bitwarden. When we merged companies only a month ago, they were not using Bitwarden, but now, with the merger, all the people and the management, we're all using Bitwarden. So that's how much faith I have in Bitwarden. There's my answer. I use both. I use Bitwarden primarily, but there are some things that are more local, that aren't, you know, an online thing. Bitwarden is great for your online accounts, and KeePassXC is as well, because you can do auto-fill; there are keyboard shortcuts you can use. Well, actually, I'll back up a little bit. KeePassXC is a non-cloud solution. It's an application you download to your local computer, and you create a local password database, and that's what it uses. It's unlike LastPass and the others, where everything's in the cloud. Now, technically, you could put the database in, like, Nextcloud or Google Drive. I'm not saying anybody should do that. You could make it cloud, but inherently it's a local solution. And it has some interesting tricks: by looking at the window title, it can auto-fill even a website. But what's also cool about KeePassXC that not a lot of people know is you can also auto-type into apps on your Linux desktop, like Steam, for example. It can just see what the window title is and, if there's a match, fill it. And that's just so cool, because it's not limited to a web browser. So there's a lot of good use with KeePassXC. The problem is it gets somewhat confusing if you need to sync it between computers.
And I've seen people do all kinds of weird things, like put it in a Git repository, which I don't really think is a good idea. That might be the one exception, but KeePassXC is really good. I use it for the non-online things, like my local IPMI password, for example; I'm going to put that in there, and the things that aren't really required to be in the cloud, so to speak. And then Bitwarden is pretty much everything else. I absolutely adore it. I was using LastPass for a long time, before the controversy. And I felt like LastPass, the interface was okay and just not all that great, but it would make any web browser, well, especially Firefox, slow down to a crawl if the LastPass add-on was installed, to the point where the entire browser just feels like Internet Explorer on an old 386 or something; it was ridiculous. I noticed, as soon as I removed the LastPass add-on back then, and I don't know if they've since fixed that, the browser got a lot faster. But when I started using Bitwarden, it oddly just seemed like a masterclass in managing passwords. And I was nervous, because any time a new password management solution comes out, I'm like, oh, okay, how long until this one is blown wide open? And what stupid thing did they do this time? Because we've seen our fair share of these that just get blown open in a couple of weeks. But Bitwarden, I should never have had any stigma towards it; that's just the industry. It's just awesome. And when I had a chance to try it, it really blew me away. The interface is easy. I just have nothing but good things to say about it. Yeah. And other than having to manage it yourself, and someone suggested Syncthing, which I'd highly recommend for managing a KeePass database, KeePassXC is, like you said, still a good platform. But the fact that you are responsible for your own password database can be its own challenge. Yeah, somebody in the chat room is asking us, what's the meaning of life?
And I can't say that I have an absolute answer to that, but I would say the meaning of life is to be happy. Yes. Anyway, back to the tech. Back to the tech chatter. Sometimes we'll answer some off-topic questions, but we try not to spend too much time on those. My answer was 42. That's also true. Maybe 42 means happiness or something, I have no idea. So, another question came up that's pretty interesting, regarding linked versus full clones in Proxmox. This is something that Peter asked us. So the idea is, if you create a template in Proxmox, you know, not unlike other solutions, you can clone that template into a VM. And when you do that, it's going to ask you if you want a linked clone or a full clone. Now, a linked clone is going to require that the template is present. It's going to use it and do it kind of like a differential. A full clone is going to take a little bit longer to clone; for me, it takes, like, a whopping 30 more seconds or something like that, it's not really that bad. And it's going to take more disk space, but the resulting VM will be completely independent of the template. Now, my opinion on this might be a little controversial, but my opinion is: just avoid linked completely, because I don't really see much of a benefit. And here's why. Let's say you have a Debian template. Unless you're doing something crazy, I'm going to say it's probably a two gigabyte template; I'm just throwing a random number out there, but probably like that. And with SSDs and NVMe, I mean, what is it, like $20 for a 512 gigabyte SSD? If we're going to save, like, a gigabyte, or some subset of that two gigabytes, is it really worth all that effort to have something that's dependent on the template? But also keep in mind, if your template is older and then you create a VM and then you update it, it's going to download a bunch of updates, and at that point it's pretty much massively different from the original image anyway.
So then you're kind of offsetting some of that. Now, to be fair, there are block-level things going on here and I'm oversimplifying this, but I just use full clone. In my case, I just never use linked, and that's just what I recommend. Yeah, and you can do the same thing in XCP-ng. They just call it a fast clone, which is going to be the same as a linked clone in that it has a dependency on the original. Well, the dependency can be removed by getting rid of the original, and it'll say, okay, I have to fix all of this. The downside is the extra work it takes underneath the hood to do that and track those differentials, which is something to consider. This is where people have goofed up, and it's just the unexpected use case, and maybe they need more warnings, but people go, let me create 100 snapshots. What could possibly go wrong with a hundred differential snapshots on things, and lots of linked clones? And you'll just find that you're creating a bigger IO bottleneck, because the system can track this, but it has a performance cost. So this is one of those things to take into consideration there. A full clone is easy enough to do. Like I said, the same options in a way apply. Now, you don't need a template; you can fast clone any VM. This is actually something I do a lot for testing. I'll grab a VM that I know is up to date, I may stop that VM and quickly fast clone it, do a thing that I wanted to try but don't want permanent. So I try the thing, I test some software, then I just delete it and leave the original completely alone. So there are use cases for it, but when it comes to, like, if I'm going to build a new production machine and I have a built, working, latest version, I will do a full clone at that point to avoid that underlying extra overhead.
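On the Proxmox side, both paths go through the same clone command; here's a rough sketch from the CLI, assuming a hypothetical template with VMID 9000 and a storage named local-lvm (all IDs and names here are made-up examples):

```shell
# Full clone: an independent copy; uses more disk but has no
# dependency on template 9000 afterwards.
qm clone 9000 101 --name web01 --full 1 --storage local-lvm

# Linked clone: fast and space-efficient, but remains tied to the
# template (requires a template base and a storage type that
# supports linked clones, e.g. ZFS, LVM-thin, or qcow2).
qm clone 9000 102 --name scratch01 --full 0
```

The GUI clone dialog exposes the same full-versus-linked choice; the trade-off discussed above is all in that `--full` flag.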
Now, one exception to my own opinion is going to be for people that live in a country where there isn't easy access to technology, because I do understand that in some countries, they're still basically forced to use, like, really old hard drives. If you have a 40 gigabyte hard drive and that's the best you have access to, at that point, I mean, you probably do care about saving as much space as you possibly can, because that's going to make or break how many VMs you can run. So in a situation where you don't have easy access to decent hardware, that's going to invalidate my opinion, because you have a limited amount of hard drive space. And always try to be mindful that not everyone has spare parts that are actually decent in most stores, because sometimes parts are hard to get. But as long as that's not the case and you're not starving for hard drive space, I would say it's probably better just to stay with full clone every time. All right. The next question we have was indexing. Yes, the indexing question. I almost forgot about that one; that's a good one. Yeah, first, let's talk about Corosync, because it's a two-part question. I call it the indexing question because that's what they titled it, but there's more to the question; there are two parts to it. So answer the Corosync problem and the challenge you were having with Proxmox and making sure the two servers synced up fine. So, to the individual, Venu, if I'm saying your name right, apologies if I'm not, I mispronounce everything, believe me, people let me know, so apologies there. The first part of the question is about the issue I had with Proxmox freezing. It wasn't that it was freezing; it was that I had two situations where one of the Proxmox nodes would just drop off the cluster and be inaccessible. And there are two things that I fixed. The first thing was my 10 gig card.
I'm pretty sure it just went bad, because as soon as I replaced it things became a lot more stable and I didn't have as many problems. But I'd still have an issue where a node would drop off the cluster during backups. I'd have backups running overnight and wake up to find one of the servers offline, I guess because it was just too much overhead, and I didn't really understand why. I haven't had the problem since, so I'm going to claim the next change fixed it, because I literally haven't seen it again. What I did was put Corosync on a different network, and this is something that's actually recommended. Corosync, when you have a Proxmox cluster, handles the communication between the nodes, and if you set it up right it runs on a dedicated network that only those nodes can access. Nothing else on your network can even ping it; you want it extremely isolated to your Proxmox nodes. You could even go as far as buying a dumb switch that's completely disconnected from your main network and plugging the nodes into it, so only they can use it. The reason why is that the latency between the nodes is very important at this point. It's like a heartbeat, and if the ping time in milliseconds gets too high, Corosync decides there's a problem and something goes wrong. That's exactly why having Corosync on a dedicated network is great: you don't have to worry about any other traffic getting in the way. What I think was happening is that the backups I was running overloaded the IO just enough that the ping times weren't fast enough, and a node would drop off. So the idea is to run a dedicated Corosync network. The Proxmox documentation tells you how to do this, and it's actually pretty easy. You just choose the interface; there's a config file.
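For reference, the file in question on a Proxmox cluster is `/etc/pve/corosync.conf`, and the per-node link addresses live in its `nodelist` section. Below is a hedged sketch of what a dedicated Corosync network looks like there; the node names and the `10.99.99.x` addresses are made up for illustration, and note that `config_version` in the `totem` section must be incremented whenever you edit the file:

```
totem {
  cluster_name: homelab
  config_version: 5          # bump this on every edit or nodes reject it
  interface {
    linknumber: 0
  }
}

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.99.99.1   # dedicated, isolated Corosync-only network
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.99.99.2
  }
}
```

Because this file sits on the pmxcfs cluster filesystem, editing it on one node propagates it to the others, which matches the behavior described next.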
If I remember correctly, I had to edit that file on one of the nodes and the rest of them just picked up the change. Don't quote me on that, but I'm pretty sure that's the case. It took me all of ten minutes, and then of course a couple of hours of banging my head against the wall. I think I reached out to you, Tom, like, why am I able to ping this from this other network when I shouldn't be able to, that kind of thing. I can't remember what it was you told me, but you knew right away what it was. Yeah, we went through a whole review of your VLAN rules and subnet rules and all kinds of fun stuff. It ended up being something really simple that I hadn't even taken into consideration. I just wish I could remember what it was. But take a look at that: if you're serious about using Proxmox, just get a dumb switch, connect a dedicated network interface from each node to it, and use that for Corosync. That'll at least make your cluster that much more resilient. And if you only have two nodes in your cluster, you could probably just put a network cable directly between the two devices. Yeah, a crossover cable; I haven't used one of those in a while, but that would probably work just fine. Now, the synchronization between two or more virtualization servers, whenever you have a cluster of servers, or a resource pool as it's called in the XCP-ng world, is the same concept: they have to stay in sync with each other. XCP-ng actually has a database that's shared between all of the hosts. One is always labeled the primary, or master, and all the other ones are secondary servers, and every secondary is at the same level; there aren't any tiers below that. But each one always takes its cues from the master about which VMs are running where, so all commands are sent to the master.
And even if you're starting a VM on server 12 of your pile of secondaries, the command still goes through the master, because it has to hold the source of truth, and that source of truth is then replicated between all of them. But if they lose communication with the master, bad things happen. I've done a video on that and talked about it along the way, but it's certainly a problem we've run into: if you lose the master, you have to promote one of the other servers to become the master. They all have a copy of the database, but they become leaderless. And if they just lose communication, that can be a problem as well, because they also don't know what to do. The good news is all the VMs stay running; they just can't be changed. Everything has to stay in the state it's in until an election has been held, or you force one to be held. So it's a fun topic, and if you want to dive deeper, this is a computer science problem called split brain, and it's directly related to running HA as well. Corosync is the basis of what Proxmox uses for this, but the broader split-brain problem with HA is: when you have three or more servers, how do you know which one is really down, and how do you form a quorum between those servers? I'll leave you with that homework for further reading. It's definitely worth understanding the problem and the different nuanced ways it gets solved among these machines to make all this work. It's also frustrating when the nodes are fencing for the right to be the master, and then all of them just constantly reboot over and over again because they each lose and reboot and reboot and reboot. I'll never forget the first time I ran into that problem. So that's definitely a rabbit hole for sure. Yeah, I always like the further-reading parts of things.
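The core rule behind all of this quorum talk can be sketched in a few lines. This is a simplification of what clustering stacks like Corosync actually do (it ignores fencing, vote weights, and tie-breaker devices), but the essential idea is that a cluster only keeps acting when a strict majority of its nodes can still talk to each other:

```python
def has_quorum(total_nodes: int, reachable_nodes: int) -> bool:
    """A partition may keep operating only if it holds a strict majority
    of the cluster's votes; otherwise it must block to avoid split brain."""
    return reachable_nodes > total_nodes // 2

# 3-node cluster: losing one node still leaves a majority of 2
print(has_quorum(3, 2))  # True
# 2-node cluster: losing one leaves 1 of 2 -- no strict majority, so the
# survivor blocks rather than risk split brain
print(has_quorum(2, 1))  # False
```

This is also why odd node counts are recommended, and why a two-node Proxmox cluster often adds a third vote: with an even split, neither side can ever claim a majority.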
I don't just want to know which box I checked to make it work; I'm always curious about the deeper why. Not just why checking it makes it work, but why is it set up this way? Why does it work this way? That's always a deep fascination, and it's why I talk about this in a more expanded form frequently on my channel. You don't just check this box to make it work; the real why is that you need these synchronization mechanisms for these things to solve this problem. So definitely, it's a lot of fun. I like the engineering behind all the tools we use. Yeah, and it's also so fun when you finally figure out what the problem was, and then you pat yourself on the back after getting through a situation like that. It's super frustrating while you're experiencing it, but that feeling at the end makes it all worth it. Yeah. Now, the second part, and why we called this the indexing question, is that's what the subject was titled when they emailed us at feedback@thehomelab.show, which I should have said at the beginning, but it's not hard to figure out what our email address is. Email feedback@thehomelab.show to reach out to us and get your question right here on the air. The indexing part is about the challenge of figuring out where we said something. I am going to hire some people; it's been on my to-do list since the merger to really go through and index all of these topics better. It's just a big task. It's easy for us to put the words together, and when me or Jay produces a tutorial we do take the time to add a time index, but when it comes to things like the podcast or the live streams it's a lot harder. It takes a team of people who have to watch us, and if something's at the 37-minute mark in a show, someone has to sit through those 37 minutes making notes about what we said in each of the shows, and then there's the question of how detailed they get.
Now, I do my best to have these as write-ups in my forums. Whenever a write-up is necessary they're auto-posted there so people can ask questions, and frequently they do, so they get indexed that way. But it is a challenge when you're working with spoken word. There are some really cool tools out there, though; as a matter of fact we're using them in the business world right now: auto-summarizers. I'm trying to find a good one, and if you have a suggestion, send it over to feedback@thehomelab.show. Some of these new automated summary tools do a great job of transcribing and contextualizing everything, and I'd like to attach that output to my forum posts. My goal, and it's a lofty one, would be to build that as an automation, so if you're someone who can build that as an automation, awesome. But even if I at least had the summaries, that might be helpful, so all this content can get turned into words that are easier to find. Me and Jay both use DaVinci Resolve, which is also making this easier, because in the latest version they added an option, when we edit with our editing tool, to export all the words as it heard them. Even if we're not checking them for 100% accuracy, and it's definitely not going to get everything right, because some of the words we say in tech are just a bunch of abbreviations and acronyms, I think there's enough to get the general understanding and hopefully find things for us. So these are all goals we're working towards to make it easier for all of you to consume and read up on our content.
I would really love it, and I don't know if I'll have time for this so I'm not offering, but if I can make time sometime, I think it would be really cool to have a website hosted on GitHub or somewhere like that, where people could just submit pull requests. That's something people would want to practice anyway if they want to get used to Git, and contributing to the show would be cool: they could write up notes for an episode, put in a pull request, and once we accept it, it shows up on the website. That would be really cool with something like Hugo, I think. But again, right now I have a big editing queue, so maybe after that, that's something to explore, because I think it'd be kind of cool to get audience feedback on there and also give the audience a chance to practice version control if they don't already know how to do that. That might be something that'd be a lot of fun. So maybe Tom and I will talk about that another time when we can slow down a bit. Let's see, the next question here is the best one: why virtualize TrueNAS? Why? That's what I ask people: why are you virtualizing TrueNAS?
Now, for cost savings, I get it; that's usually why people virtualize TrueNAS. But for the production environments we work in, we're going to tell people to run it on bare metal, run it direct. If you're feeling adventurous and you like to virtualize it anyway: TrueNAS, and more specifically ZFS, has a requirement of talking directly to the drives, so you've got to set up passthrough to get your drives passed through directly, so the OS is talking straight to the disks. That's the challenge you face, and if you're a first-timer virtualizing things, that's going to be a bigger challenge. That's why I'm going to say bare metal; bare metal is always preferred in production. But I get it, you want one machine to rule them all. Check out Wendell from Level1Techs; he's got a video called "The Forbidden Router," which I think is a great title, where he talks about virtualizing these things. He uses XCP-ng in that video, which is why I mention it. My preference is always bare metal for both your firewall and your NAS, but in the homelab especially there may be constraints where you go, I don't have room for three or four servers, I want it all in one box because that's all I'm allotted. Those are the constraints you've been handed, and maybe I'll do an updated video on it, because I completely empathize with people in the homelab trying to save power, working within limits, and not running the kind of production environment I'm running. So I completely get the, frankly correct, assumption that, hey, it is best for me to run this all virtualized. But I do want to make sure everybody understands you are increasing your workload. I'm not saying you shouldn't do it if that's what you want to do, but it is going to be more challenging to maintain than separate boxes, because you have a single point of failure for everything at that point. Some people will virtualize pfSense, and then if the VM isn't working they can't route to anything to fix anything else, because that VM is down. And at that point, with TrueNAS, you're going to have CPU contention that'll make your NFS slower, just like you said. I always want everyone, if they can, to have separate hardware, but we don't all have disposable income, so sometimes we just have to use what we have, and that's just how it goes sometimes. Yep. Someone asked, have you seen the 45 Drives HL15? That looks amazing. Yes, that is the HomeLab 15. We talked about it on the 45 Drives live stream; I was part of the Creator Summit along with several other people. Check out the videos on the 45 Drives channel where it's discussed. There's even pricing; you've got to watch the live stream for it, sorry, because I don't remember the price off the top of my head. It's right around, I don't know, less than a million dollars, all right? I don't remember exactly what they said. Yeah, the pricing is very preliminary, but trust me, we all got to see what they're working towards, and we all got to contribute ideas back to 45 Drives; they're listening carefully. The HL15 is basically a 15-drive miniature server that can be used either as a tower or as a rack mount: you just leave the ears off, stand it straight up, and put some feet on it. There are demos; Techno Tim talked about it, and Jeff Geerling has talked about it. I wasn't physically there, I was remote, so I don't have any video of it, but check out 45 Drives; they've been talking a lot about this. This is an exciting thing for the homelab, because the idea is to build a premium server for you that's made of metal, not plastic, not cheap. So no, it's not going to be a sub-$500 homelab server; a more honest range is going to be $1,000 to $2,000, but you're getting a really premium product that holds 15 drives with a nice motherboard. So it's fitting in an interesting
market, because there are definitely people who ask how cheap they can get something, since they're on a limited budget, and there are people who say, I have a nice professional homelab, I can't afford one of those big $30,000 servers, but a couple grand for a nice 15-drive storage server? That's a good market, and it beats hodgepodging something together. They're putting a lot of effort into getting the price as low as possible, and they're exploring a lot of ideas, like how small a motherboard they can use; it was a big topic we talked about during the Creator Summit. 45 Drives is known for their enterprise gear, and we use them in the enterprise a lot, but they absolutely love the homelab people; it's near and dear to their hearts, along with being open source and offering people a lot of customization. That's why they held the Creator Summit: they really wanted to hear from people very directly, and 45 Drives is listening to all of you, so it's been awesome watching them build that. Yeah, I'd like to see it too, so that'd be pretty awesome. Yep. All right, well, do you have any more questions? If not, I've got to run. Nope, I don't have anything on my end. All right, well, I've got to run, and thank you for joining us. Email us at feedback@thehomelab.show. Sorry for the slightly abrupt ending; I have construction people here, I just realized what time it is, and they're knocking on the doors, so I've got to go take care of that. Once again, thank you all for joining; this was awesome, it was wonderful hearing from all of you, and we will be back with a regular show next week. All right, and thanks.