and welcome to episode 57 of the Home Lab Show, Q&A time. The questions and answers have been kind of fun. We actually really like these episodes, and we had a goal of doing them once a month, and there are enough questions coming in from the audience and people filling out the feedback form that once a month is absolutely achievable. It is, and it's even more achievable if you continue to send your questions in, because we love to read them and go over them. Yeah, I think these are a great option to stay connected with the audience and toss around some ideas and live show ideas. As a matter of fact, one of the things, and I didn't even mention this to Jay, so he's still learning about it right now: I actually had someone reach out who works on the NetData project, which is an open source monitoring, oh, reporting, not monitoring, reporting system, about some of the new features. So I'll probably be doing an updated video, but if there's enough content, talking about monitoring systems, things that can graph the performance of systems, how to do benchmarking, and how to monitor your workloads might make for a good episode. So there's the cue for you: you should ask more questions of us about how you want us to approach that topic. Jay and I will have plenty to say, and we'll probably come up with a topic and talk about it whether you ask questions or not, but hey, we love when we get the Q&A sessions, so we make sure that we're covering the topic in a way that you can understand, or make sure we're answering questions you may have about topics like that. But before we dive into these Q&A topics, we do have to thank a sponsor of the show, and that is Linode, which has been sponsoring the show for quite a while. If you have downloaded this from the podcast, and by the way, yes, we're doing a better job.
We've hired, I've got more people involved now to help keep the production happening faster, because those of you listening to us as a podcast may know there was a delay of about seven days getting some things out, but we're working on it. It's a challenge sometimes, putting the teams together to make sure these things go out from the time you're hearing it live; not all podcasts, by the way, tell you the delay time. But nonetheless, back to Linode. They've been a great sponsor. They host all the infrastructure, Jay maintains all the infrastructure, and the infrastructure had nothing to do with the podcast being delayed, which is what I was going for before I started rambling a bit. Nonetheless, the servers have had wonderful uptime, and they've been pretty public now about their partnership with Akamai, which only adds more features to Linode and makes it a more compelling option for tying in things like load balancers in case your project gets a little bigger. But if you're starting out with Linode, we have an offer for you: head over to linode.com/homelabshow and it'll get you started. Just sign up and let them know that you appreciate their sponsorship of this podcast. So thanks, Linode, for sponsoring, and thanks to all of you for listening. And did you know we actually have another sponsor today? Oh, you? Yeah, we absolutely have another sponsor today, and it's actually me. So I just wanted to let everybody know that the writing process is very close to wrapping on the fourth edition of Mastering Ubuntu Server. It might be out by July, I don't have an exact date yet, but it's up for pre-order right now. So I just wanted to let everybody know the fourth edition is coming, and you can pre-order it; just check out the Packt Publishing website, or even Amazon, or whatever you prefer.
More importantly, learnlinux.tv is where you'll find the link to it; always start there. Yeah, and this is the one time where I don't have a link on there yet, but by the time this podcast circulates, I'll actually add it right after this recording. Well, you can get the previous versions on there, and Jay will get it up to date. That is your source for all things Jay is doing; it starts at learnlinux.tv, just like how mine starts at LawrenceSystems.com. Yep, and I'm also, excuse me, I'm also creating a special standalone website, or web page, for the book as well. I just wanted an excuse to write in Hugo, so if it goes well, we might even be able to do an episode on Hugo, which would be a really cool thing when it comes to developing websites internally. But yeah, if you're interested, just check it out; it's up for pre-order right now. The final release date is pending, but I'm about to submit the final chapters this week, then it's going to go through the editing process and it'll be ready. Absolutely, so there you go. We've got our two sponsors out of the way; now we can start doing some Q&A. Let's do some Q&A. All right, what was the first question we have here? I'm going to laugh, because I'm going to read a question that we don't understand: "how I got here, I'm not sure if this is the place." That's all that's in the form. That got us thinking, is this just rhetorical? Is this kind of a philosophical thing? What's the place you think this is? Is there a place we should go? Is there some magical server we should SSH into that has all the answers? Is it server 42? We have to find server 42, because it has all the answers. I don't know. We just laughed, and our answer is like, okay, I don't know if this is the place either. I don't know. Someone right before that asked if we wanted to, or had thought about, doing an episode on server rooms, and we've covered it throughout the podcast.
I mean, we've covered every aspect, but I think it's a good idea though, because it would be nice, in my opinion, to have an episode to kind of bring it all back together and talk about the individual components, especially as a jumping-on point for people that are new to the podcast and may not have had a chance yet to listen to or watch the earlier episodes. So that might be something we'll consider doing. That sounds like a good idea to me. Yeah, for server room design, we can talk a little bit about putting in the anti-static floor, putting in a proper split unit in there for HVAC, maybe securing the door. There's a lot we can talk about from some of the higher-end standpoints, and then you can extrapolate from there what may apply to your home lab, because obviously we all strive to build the beautiful clean-room data center. Have you walked into the data centers where they have the sticky floors at the entrance? No. So instead of a mat, if you go into the really nice data centers, there's a roll-out sticky sheet in place of where a floor mat would be. And it's literally, like I say, sticky. It's like double-sided tape they put on the floor, and your feet stick to it, because it's supposed to pull any type of dust or anything off of your feet when you're first walking in. So yeah, if you go into some of the nice data centers, you see all kinds of cool things. You know, double locks, and what is that other one called? The man trap is what it's referred to as. A person trap, or whatever you want to call it. There's a room where only one person at a time can get into the data center room. That way they can verify who's in there. There's usually really heavy glass so you can see the person in there. So when they go, all right, we're going to authorize this person to go,
we authorize them to get this far with their badge, but then they're stuck in the man trap and you're staring at a bunch of people in the data center room. At least at the one I was at, I thought it was really clever how they did it. You have to stand there. I waved at him like a goofball, and then he hit the button and let me in the other side. I was not allowed to ever pull my phone out at any step once you step inside there. That's one of the reasons you don't see a ton of data centers; a lot of companies just keep them pretty tightly locked down. But I think it's probably worth talking about. I'll even mention the flywheels that were there, because I can say that they were there, I just can't take pictures of them. I don't know if you've ever seen a flywheel battery backup system. I have not. They're really clever. They use a spinning flywheel in some data centers. Actually, one of the ones local here in Detroit has them installed. They have two massive spinning flywheels, and what they do is, instead of having lead acid batteries, you have a spinning flywheel in there, and the power maintains it. I forget how fast they spin. It's not incredibly fast, because they're super heavy, so they don't need to spin fast, but the incoming power is used to maintain the speed of the flywheel. And if the data center loses power, on the other side the power output converts the power back. So it takes the flywheel, the rotational motion of it, the mass of the spinning mass itself, to power a generator temporarily until the main generators kick in. Now, the reason they do this is because lead acid batteries are expensive and not the most environmentally friendly, and they can be dangerous: you can have leaks, you can have acid leaks, you can have a cell go bad on them. And I've actually seen the aftermath of a data center that had an overload of those, and it broke. Angular momentum, that's the word we're looking for.
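For the curious, the physics the hosts are describing can be sketched in two lines. The numbers below are purely illustrative, not specs from any real data center flywheel:

```latex
E = \tfrac{1}{2} I \omega^{2}, \qquad I_{\text{solid disk}} = \tfrac{1}{2} m r^{2}
```

So a hypothetical 1000 kg disk of radius 0.5 m has \(I = 125\ \text{kg·m}^2\); spinning at 3000 rpm (\(\omega \approx 314\ \text{rad/s}\)), it stores \(E \approx \tfrac{1}{2} \cdot 125 \cdot 314^{2} \approx 6\ \text{MJ}\), roughly 1.7 kWh. Because energy scales with mass but with the square of speed, a very heavy flywheel really can get away with spinning relatively slowly, just as described above.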
Thank you very much, AstroCat3D, for that; they are using the angular momentum of the flywheel to do it. It's a really clever design, and they keep two of them because you can replace the bearings in them every six or seven years. I think that's what the service maintenance interval was; it was a pretty long time, and lead acid batteries don't even last that long. Bearings, it turns out, are cheaper to replace than lead acid batteries, and there are hardly any environmental problems replacing bearings. Yeah, it's really cool. There are so many different things when it comes to data centers. I mean, I'm just waiting to hear about a physical honeypot, because I can picture it in my head: someone sneaks into a company thinking they're doing the best job, they get into the server room, and there's just a stack of servers that look very important, and then a monitor attached to the KVM with Captain America coming up on the screen: so you want to hack our servers, huh? On a video recording, that'd be hilarious. But then again, I'm just devious. Don't do that, because that could get you, well, you don't want to physically trap anybody unless you're really strong anyway. Yeah, nonetheless, there's some discussion we can probably put together on data center rooms. Like I said, we'll talk about the high-level stuff because it's interesting, and then you can take it from there and start extrapolating what maybe fits for your home lab and data center. On something I'm trying to work on a longer-term review of: I actually switched two of our UPSs to the LiPo, or I'm sorry, Lithium Ion, I think they're Lithium Ion or Lithium Polymer. Either way, I have the newer ones that are Lithium-based UPSs. They've worked really well. We've had them for a little while. I didn't want to just review them out of the box; I wanted to see if they worked over time. I think I bought them like six months ago. Maybe it was last year.
So, it's a good topic, and I wasn't going to bring it up, but someone in the chat mentioned that their, what do you call it, their 3000 VA UPS exploded two days ago. Yeah. That scares me a lot; that's scary. And I have a couple of UPSs. I mean, that'd be horrible. Like, I'm just sitting here and then, bam, what? Because I'm literally looking right over there; you can't see it, but there's a big UPS and a little server rack there. So hopefully that doesn't happen. But yeah, sorry to hear that. Yeah, and there's obviously risk with any of them. I'm actually kind of surprised how small the batteries are. So thermal runaway is obviously a problem with Lithium. It can be, I should say. But the companies, and these are APCs that I got, APC is a very reputable brand. I've actually talked to other engineers. I was supposed to get some from another company, but their demand was so high. It's funny, they reached out to me like a week or two ago saying, hey, Tom, would you like to review our products? I'm like, I remember asking you that over a year ago, and you couldn't get me your product for a year. But it's an engineer I was talking to at a completely different company, and I asked him about the safety concerns, and he actually said that they've become quite safe. So... Yeah, well, technically everything is safe until it's not. Everything's safe until it's not. I mean, it's a problem with any of them. You're storing a bunch of energy, and releasing that energy at a rate faster than you normally would is going to be risky at any given moment. That includes lead acid. They definitely can go wrong as well.
So definitely, anything like that can happen, as much as we never want to talk about it or think about it. Because, you know, you might have servers running, you leave the house to grab dinner or something, or maybe you have a life and you're actually going out, and you come back hoping nothing exploded, but you don't always think to turn everything off. This has happened a few times, actually, when I was doing help desk at different companies, but at one of them I came in on a Monday, and through the entire building I smelled some weird odor. It was horrible. And I tracked down the odor to the computer lab. There were about 12 different computers in there, and the smell was coming from one of the power supplies. So obviously that's a problem. But what's interesting is, you have intake fans in each of the computers that are sucking in the odor, so it was really hard to determine which computer was producing it, because everything in there smelled bad. And you don't think about that, but technology fails. So it's one of those things where, thankfully, everything seems to be more solid nowadays, but you never have a problem until you do, and then you learn a lesson, and then you have a different problem. So I'm not trying to scare anybody, but if you don't need to have it on, turn it off. That's probably the best way to handle it. For sure. All right, next question on the list, because we're going to do an episode about that; we'll just table it there. Now, Squid filtering. Someone says this is always an aggravation, and Tom talked about using Squid as a web proxy, and yes, I have. I've talked about why it's a real pain to manage as a web proxy and what it breaks. I've been working slowly on a video, I've got a lot of notes together but haven't got it released, on how hard it is and what challenges you face, I should say, when you're proxying.
And it says: I'm curious what you use and why you prefer it to Squid, because I've said not to use it in some videos on Untangle in the past, so I'm not sure if you're using something else. Well, here's the problem: everything uses Squid, whether it's Untangle, whether it's Cisco, pretty much all of them. Squid is the de facto thing. The good and the bad of it is the tooling that comes around it. Untangle's done a good job, and by the way, if you don't know, Untangle is now owned by Arista, so there are probably more changes coming down the pipe. But ultimately, the reason their system works the way it does is they put a lot of engineering into it. Now, Squid itself is an open source project, but the engineering that goes around it, embedded in all these other companies using it, is usually not open source. That's their secret sauce of managing it. It's also just a pain to manage. Doing man-in-the-middle certificates adds a level of complexity; there is no way around it. The only way you can get in there and do full web filtering and proper URL filtering is to do man in the middle. The solution we use is a commercial one, and it's not really something you can use for a homelab, because it has a minimum sign-up and it's pretty expensive. By the way, none of these web filtering things are free, because a big component of web filtering is keeping lists of websites. Websites do not identify themselves into categories. There are of course the big ones; hey, Facebook and Twitter are easy to figure out. All the other websites are much, much harder to categorize. I mean, it's just a really challenging thing to do. DNS filtering is usually what I recommend, especially for the homelab people. It's way less intrusive, and it's going to be way easier to manage, because you're just denying lookups of websites based on DNS.
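As a rough illustration of what DNS-level blocking looks like under the hood, here's a minimal Unbound configuration fragment; the domain names are just placeholders, and real deployments (Pi-hole, pfBlocker's DNSBL mode, and so on) do essentially this with much bigger generated lists. pfSense's DNS Resolver is Unbound, so the same directives apply there via custom options:

```
# unbound.conf fragment: refuse lookups for blocked domains (sketch)
server:
    # Answer NXDOMAIN for this domain and all of its subdomains
    local-zone: "ads.example.com." always_nxdomain
    # Or refuse the query outright
    local-zone: "tracker.example.net." refuse
```

The client never learns the IP, so the connection simply fails, with no certificates to install and no man-in-the-middle involved, which is exactly the trade-off described above.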
And because DNS queries go through your local DNS server, provided you don't turn on DNS over HTTPS, which would bypass it, and you can force-disable that, another solution is forcing all the DNS traffic to one place and putting some filtering on there. That's the better way to do it, because you're not dealing with all the man-in-the-middle problems and everything else. But unfortunately, if you really want the full filtering, you absolutely can do Squid. You can load those certificates on there. But as the person asking the question says, isn't it really cumbersome? Absolutely, it's cumbersome. And you'll find yourself putting a lot of blocks in, not blocks but bypasses. I'm trying to remember the term they use; it's really strange. There's a bunch of language that's native to Squid, and they reuse words in a weird way. It's in my notes because it's so strange what they call it, but it's the way you bypass certain things inside of Squid, depending on whether a site has its certificates pinned or not. Banks don't like man in the middle. Google doesn't like man in the middle. So you have to exempt those services. By the way, Squid starts breaking things like the QUIC protocol, which we enjoy: on many websites, Google being the easiest example, when you start typing, it immediately starts giving you results. Well, that's all done over QUIC, which is a UDP-based protocol that Squid can't handle. So you have that as a problem. I actually think there's probably a way to make it handle it, but it involves a lot more things that just break more things. It's really convoluted. So that's why we don't really recommend Squid. It's great for learning, I think, because what this person has undoubtedly embarked upon is all the reading that comes with it, and understanding how the protocol is transferred and how HTTPS works.
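For reference, the strange Squid vocabulary being reached for here is most likely the `ssl_bump` directives, whose actions are `peek`, `splice`, `bump`, and `terminate`. A hedged, minimal squid.conf sketch of the pattern described above, splicing (tunneling untouched) pinned sites like banks while bumping everything else, might look like this; the ACL names and domains are illustrative, not a production config, and the `cert=` option is the Squid 3.5 spelling (newer releases use `tls-cert=`):

```
# squid.conf fragment: selective TLS interception (sketch only)
http_port 3128 ssl-bump cert=/etc/squid/myCA.pem generate-host-certificates=on

# Sites known to pin certificates: do not intercept these
acl pinned_sites ssl::server_name .bank.example .google.com
acl step1 at_step SslBump1

ssl_bump peek step1            # read the TLS SNI without decrypting
ssl_bump splice pinned_sites   # tunnel pinned sites untouched
ssl_bump bump all              # man-in-the-middle everything else
```

Every new pinned or broken site tends to need another entry in that exception list, which is exactly the ongoing maintenance burden the hosts are warning about.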
You get a great understanding of it by using Squid, because as everything breaks and as you troubleshoot it, you get a deeper understanding. My understanding comes from having run Squid, not really much anymore, but this goes back to, I don't know if Jay's ever heard of this, Mandrake Firewall, which I think later became Smoothwall. We used to run a Squid proxy on it. I mean, I've heard of Mandrake, but not their firewall. This was an early project, circa 2000, and we actually used it when I worked in corporate, because it was based on the earliest versions of Squid proxy and such, and we had a very limited fractional T1, so you cached things. Caching isn't really much of a thing anymore here in 2022. Most of the stuff doesn't lend itself to being cached easily, because web pages are so dynamic; people aren't usually requesting the same standard content over and over. We've also, through things like QUIC, become more efficient. So when you start putting in filtering tools like Squid, you end up breaking things like QUIC and ultimately making the experience slower and worse for the user. So that's why I don't really recommend Squid. It's a great learning experience. DNS filtering is still not a precise art, but it can definitely get you further along the way. There are also, and I don't know exactly what consumer tools are out there, tools you would load as an agent on each computer that would restrict things in a more granular way. But obviously those all come with subscriptions, and you may or may not want those companies in there. One of them, and I can't remember which one it was, got in trouble for collecting the data and selling it. That was a while ago; I remember them being in the news. So I'm always very suspicious there, because you're letting something intrusively watch all of your browsing history on a system.
And with the way companies can monetize your data, I don't know, I'm skeptical, and I wouldn't endorse any of them. We use a commercial product, and just so people know the name of it, it's in the list of software we use over on my forums, but the name of the software is Zorus. You pay a healthy subscription fee to sign up with them, and you have to have a minimum number of endpoints. I think it's like 100, 200, whatever it is; we're well beyond that. That's how we manage it for our customers, with a tool. So there's kind of a long answer on web filtering: it's not easy. And as the privacy of the web advances, especially as we go from SNI to ESNI or other encrypted ways of transporting the name headers, it's going to get trickier unless you have an agent on the system. Yeah, so I would probably consider using Squid proxy just as an excuse to have a server named Squidward, because why wouldn't you want that? But in all seriousness, one of the reactions I had to this question was realizing that when we say we don't use something, that doesn't mean quite the same thing as when a lot of other people say it, because we are constantly creating a bunch of content, and that takes a long time. And when it comes to homelab, you kind of get to a point where at first you have a bunch of different services and applications running. You might have pfBlocker or Squid, I almost said Squidward, probably Squid proxy and a number of others. But then later on, after you learn those things, you kind of look at everything and you're like, I have a hundred things to manage right now; how many of these do I actually need? Now, depending on how into homelab you are, you're probably okay with a bunch of things to manage. But if life ever gets really busy, you might not be able to keep up with that. And for you and I, I mean, I'm writing a book right now, plus I'm creating content at the same time, among other things.
And sometimes things just stay broken. And if something's going to stay broken and it doesn't really benefit me, then it might just have to go away. Right before we started recording, I was complaining that as of this week, pfBlocker has decided to block all of the links on Twitter, so I couldn't even click on the YouTube link to get to this particular live stream in my browser. Now, obviously I can log in and fix it; it's not hard to fix, it's just another thing to fix. So then I asked myself, and you mentioned that you use uBlock Origin, while I'm technically using both, uBlock Origin in the browser and then pfBlocker: do I really need pfBlocker? Not really. So I just turned it off, because at that point, I just don't have the time. So it's kind of one of those things where we love what we do, we love the hobby, but we don't always have the time to focus on this. So not using something doesn't always mean that there's something wrong with it. It just means that, in the grand scheme of things, when our use case is considered, we might not feel as strongly about running that service as we did when we first discovered it, and that changes over time. Yeah. It comes down to what you want to manage. Everyone gets all excited about the ad blocking and things like that, but with uBlock Origin, I really care more about blocking ads in the browser. Matter of fact, it's one of those silly things: if you turn on pfBlocker and you get really aggressive with ad blocking, you may have other people in your house complain that their clicker game stops working, because once the clicker game realizes the ads are blocked, they can't play it anymore. So you'll find that sometimes it can backfire on you. I like doing it at the browser level with uBlock Origin. One thing I've never understood about pfBlocker and that mentality is, let's just say, you know, a family member, right?
Or someone that doesn't really know technology very well, because it's not their thing. They could have somebody in their family, maybe it's you, that manages their firewall for them. Maybe you were nice enough to set that up and get it going, and they really love it. And then they go to a website and they get this message: you have an ad blocker, turn off your ad blocker. But at that point, they won't know how to do that, or even whether they have one, because you might have done that yourself, or maybe they hired somebody to do it for them. It's really hard to navigate, because the website is detecting an ad blocker, but if you don't have an ad blocker in your browser, what do you do? So there's a whole new layer when it comes to that, especially considering the experience level. Like other people in my house: why isn't this website working? It's just like you said, right? They don't know. I know, because I'm going to blame that as the first thing whenever I see that white screen with no text, because every time, that's what I end up seeing. So there has to be a better way to handle this. I'm sure we'll get to that, but I'll be interested to see how this actually plays out. Hey, look, we went from pop-ups to pop-overs now. So there have definitely been changes. So much. I know. It's like I could be reading an article for five minutes and then it just pops up. I get anxious because I know at any time I could be interrupted and that thing comes up. The only way I've found to deal with that, that has worked so far, is to always turn on reading mode in your browser when you read an article; that strips the UI and everything and just shows the text. But I always forget to do that, so I still see those. Do you make it a habit to use reading mode?
As far as I know, that's the only way to block it, because with the way they serve those pop-over ads, as I understand it, it's really hard for anything to tell the difference. Yeah, they become part of the page. Yeah. Unfortunately, the trend, I think, is going to be more of those in the future, not fewer, which creates its own problem. And if they're using same-origin domains and embedding it that way, it also breaks the ability of things like pfBlocker, because it's not third-party injected ads; it's part of the page. So yeah, the ad cat-and-mouse game will carry on until a better solution is found. I was just going to say one last thing about that. One of my theories, and I'm 50-50 on this, it's a possible outcome: I'm kind of wondering, if they just keep this going and make it more and more common, whether they'll end up defeating the technique altogether. Because for me personally, I've become really good at finding the X. I can't even tell you what any of these pop-over ads say. I can't tell you a single word on the page where the picture is. I've become numb to flashing things: big photos, big text, flashing text. I look for that X and I just subconsciously hit the X. How many other people are numb to it as well? So the more common they make this, I kind of wonder if the return on their investment in creating it is eventually going to diminish to the point where it's no longer worth it. Yeah. Now, before we move to the next question, which is a good one regarding remotely accessing and monitoring systems, like screen monitoring, I'll make two pfSense comments. Someone asked about Wi-Fi cards and pfSense, and you can simply Google supported Wi-Fi cards for pfSense. There's a link I dropped in the chat, but it's easy enough to find the wireless hardware you can use with pfSense; it's documented by Netgate.
They have a list of what cards they support, and you can also find the video on my channel about using Wi-Fi with pfSense. It's not a wonderful solution, but it is a solution. It might be fun for the homelab; I think it offers you some cool insight into how wireless works, and it gives you some very granular options, since you're configuring it right inside of pfSense. But it's not the same as setting up half a dozen Wi-Fi access points around your house and creating a large seamless network; it's not really for that. Second, backing up pfSense. I've got a video on that topic, but for automated backups that don't go to pfSense's cloud (they have their own cloud backup system), there are probably some writeups you can find. I've never really done a dedicated video on it, because it's really no more complicated than just grabbing the config.xml file. That's all you need. So if you have something that can SSH in, and it doesn't matter what that something is, something that runs a script that SSHes in and grabs that file, and you set up SSH keys, that's all you need: the one config.xml file. The entirety of pfSense is in that file. It's literally just a bash script that says SSH in, grab this file, and put it here, on some schedule if you want. But you don't really need a schedule to back up your config.xml; you really only need it backed up when you make changes. Then again, if you're a homelab person, or like me and Jay who play with things a lot, you might make changes often, and maybe an automated hourly backup is good for you too, because we don't know which hour you're going to make that change, but you may as well have all of them, because it's a really tiny file. And it's up to you how many versions back you even need to keep. By the way, the file itself has the change history in it.
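A minimal sketch of what that bash script could look like. The firewall address, user, and directory are placeholders you'd swap for your own, it assumes SSH key auth to the firewall is already set up, and the real `scp` line is commented out and replaced with a local stand-in so the sketch runs anywhere:

```shell
#!/usr/bin/env bash
# Sketch: pull pfSense's config.xml and keep timestamped copies.
set -euo pipefail

BACKUP_DIR="${BACKUP_DIR:-./pfsense-backups}"
KEEP=48   # how many timestamped copies to retain

mkdir -p "$BACKUP_DIR"
stamp="$(date +%Y%m%d-%H%M%S)"
dest="$BACKUP_DIR/config-${stamp}.xml"

# The real thing (host and user are placeholders for your firewall):
# scp -q root@192.0.2.1:/cf/conf/config.xml "$dest"

# Stand-in so this sketch runs without a firewall on hand:
printf '<pfsense><version>demo</version></pfsense>\n' > "$dest"

# Prune old copies, keeping only the $KEEP newest
ls -1t "$BACKUP_DIR"/config-*.xml | tail -n +"$((KEEP + 1))" | xargs -r rm --

echo "backed up to $dest"
```

Dropped into an hourly cron job, this gives you the rolling history of tiny config.xml snapshots described above; `/cf/conf/config.xml` is where pfSense keeps the live configuration.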
So I think some of the change history is in it, and there's also a backup folder underneath that has more change history. So watch my backup video; I cover a lot of that. Plus, it's also well documented; that file is covered in the Netgate documentation. Yep. And another thing you could consider is just trying to make it a habit, which I'm cautious about, because we forget things easily, because we're human. But if you're making changes, when your session is done, before you close out that browser tab in pfSense, back it up then. You know what I'm saying? Because in my case, I could probably leave it for several months and not even make a single change, and that's usually the case. And then all the things I was thinking about changing, I'll spend an hour in one session just going through and implementing everything, and then you can back it up. I like that automatic backup solution that pfSense has, where you have an encryption key that you keep, and you need to keep it, because if you lose that encryption key, the backup is useless. But if you have that key in a safe place, then it's just going to keep updating as you make changes. My understanding is that nothing is personally identifiable; they can't even look at it. So as long as you have the encryption key, it's just going to send the blob up to, is it Netgate in this case? That's the receiving end? Yep, Netgate set that up as a service. So I really do recommend using it, because why reinvent the wheel? It has an automatic option to back up on change. And if you do want to reinvent the wheel, go to the source code and redirect where that goes. You can always just modify the code to go somewhere else, you know, back up on change but send it over here. We recommend it to a lot of people all the time, because sometimes when they've had a catastrophe, it isn't just their pfSense. If you have a larger catastrophe in your lab and lose files, cool.
As long as you didn't lose the backup password, you can just load a new pfSense and pull the config all the way down from the last known good. That said, don't only rely on their backups; have your own in case something goes wrong on their end as well, because if they're doing it and they stop doing it — things can happen on their end. They're people too, with servers, and things happen. So always have multiple ways to back it up: set up the pfSense one so you have that, try to remember to do it yourself, and if you want to go further, as I said, just use some type of script running on an automated cron job that SSHes in with the keys and grabs that config.xml file every hour to make yourself happy, and away you go. So if you're going to use the bash script style for something this important, I highly recommend healthchecks.io. Oh, yeah. Put that in your bash script. And then if you think you're going to make a change in your pfSense, let's just say on average every week, right?
Put the schedule for every week, such that it'll alert you if healthchecks.io doesn't receive a ping from your script in that amount of time. And make sure you put the ping at the very end of the script, not at the beginning, because if the script fails after it starts, that doesn't help you — healthchecks.io will show it as green, as good, because it ran. You want the last thing to be the healthchecks.io ping, or whatever they call it. And make sure you have it check the return code, and if there's a failure, it exits and never gets to the point where it alerts healthchecks.io. You want the script to abort so it doesn't ping anything, and then you'll get the alert that it didn't run. You get a certain number of checks for free — I can't remember how many — but if it's important, make sure you use something like that, because we're human, we will forget. If that cron job starts to fail — and I've had this happen — and you're not regularly auditing it, then guess what, it's silently failing. That's why I don't really like the bash script thing on its own, but with healthchecks.io I think it becomes more reliable, because it'll alert you if they don't receive anything from that script, and that's very important.
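The ping-last, abort-on-failure pattern described here might look like this as a sketch — the hc-ping.com UUID is a placeholder (each check gets its own URL from your healthchecks.io dashboard), and the router address and paths are assumptions:

```shell
cat > /tmp/backup-with-ping.sh <<'EOF'
#!/usr/bin/env bash
# set -e aborts the script on the first failing command, so the
# healthchecks.io ping at the bottom is never reached if the copy fails.
set -euo pipefail
PING_URL="https://hc-ping.com/00000000-0000-0000-0000-000000000000"  # placeholder UUID
mkdir -p "$HOME/backups/pfsense"
scp admin@192.168.1.1:/cf/conf/config.xml "$HOME/backups/pfsense/config.xml"
# Ping LAST: a green check means the whole script actually succeeded.
curl -fsS --retry 3 "$PING_URL" > /dev/null
EOF
bash -n /tmp/backup-with-ping.sh && echo "syntax OK"
```

If the scp fails, the script exits before the curl line, healthchecks.io never hears from it, and you get the "check is late" alert.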
Yeah, that's an important aspect whenever you have anything set up that you rely on. So often — and we've run into this so many times when we take over IT — there's automation in place that isn't monitored to make sure it's doing what it should. Then at some level a human interacts with it, and at some other level you do your DR testing and you explicitly make sure that file is there and can be restored. Off camera we talked about our jobs in the enterprise space and all the different levels we have to go through to ensure these things are happening, and the processes we put in place for it. So this is a very frequent topic, and it's less technical; it's more just making sure you have methodologies in place and documented processes. It might be worth an episode in the future, though, because I feel like a lot of what we discussed off camera would actually translate very well to homelab. I brought up the fact that I've had clients with tens of thousands of files where some were corrupted and most were fine, to the point where the client signed off — everything looks perfect, thank you — and then a week later, we can't view these files for some reason, they were gone, which is horrible. But then again, whether you're enterprise or even homelab: if you have, say, 10,000 photos, are you going to regularly open every single one of them and make sure they're not corrupted? Who has time for that, right? So there are some real challenges when it comes to keeping your data safe from things like bit rot. It sounds easy but isn't always easy, so that could be something we talk about later. Yes, absolutely.
All right, the next question. Well, first I want to comment — I like what they have running here, because it sounds like, as we talked about things in your lab, they've got a TrueNAS Core system and a Plex server. They're also using a tool we talked about called Shinobi, which is on my to-do list; I've just been really busy and haven't had time to look at it. I've not heard of that — isn't that for cameras? Yes, it's an open source CCTV system. My impression from the comments is that it's a cool homelab project, not ready for commercial production, and it also breaks occasionally with updates — all the feedback we've gotten from people who have tried it tells me the same thing, which just pushes it down my list of things I'm going to try. But for those of you who want to play with it, it is out there. The question, though: currently their solution for getting to their desktops — you want to get to the UI of a system — is the Google one, Chrome Remote Desktop. Now, the Google one's not bad. Google offers a free remote desktop tool that works through Google Chrome; it's probably the most recommended free one out there for people looking for something like that, and I've done a couple of videos on it. It's done through Google, it's secured through Google, it's a little app tied to Chrome and to your Google account to allow you to log in. I think that's kind of cool because it doesn't require you to do any port forwards or any special weird software installs, and it's really simple for being able to remotely access things. But if you're looking for something a lot better, I've done a couple of videos on X2Go. For managing your Linux desktops remotely, X2Go is solid, because it's not just that you can see the screen — X2Go does things way more advanced. So yeah, cool, we can see the screen, there's the
basic functionality. Let's talk about remote application publishing: you can actually run an application on a remote server but bring it to you locally, as if the application is running on your system. These are some really cool features I covered. I actually tried it for a little while; it was a little tricky to get the level of synchronization I needed for Kdenlive while I was doing some editing, but I did confirm you could get Kdenlive to work over X2Go — physically running on a server but ported all the way to my screen. There were always these weird little quirks when it came to timing. You showed me that, and I was so blown away by it that I immediately put it into use — that was back when I was using Kdenlive. And that's just an example of some of the clever things you can do, if I do say so myself. I had Syncthing at the time syncing the video footage and B-roll for all my videos — it was syncing my working folder for editing to a server that had, I don't know how many cores, like forty-something. That server was completely headless; it had X2Go on it, it had Kdenlive installed, and I was just publishing Kdenlive through X2Go. After I edit the video on my computer — because you don't wanna edit through X2Go, there's gonna be some lag — and the files sync over there, I just open up the same project file in Kdenlive through X2Go, hit the render button, and the server just goes nuts, the fans go crazy. Then, since the folder is synced, when it's done exporting the video, it's synced back to my computer and I upload it. It's just one of the many things you can do with X2Go: if you don't wanna see the whole desktop, you can have just one app be accessible that way, which is really great.
Yeah, so the next thing you can do with X2Go that's even cooler: you can have multiple users logged in, and it creates essentially an RDP-like session for each user — those of you from the Windows world would call them RDS sessions — so each person has their own little environment, and you can run multiple environments on there. There are some really clever things you can do; it's a really advanced, well-integrated project. There's still some quirkiness with it, so before I oversell it, I will admit I believe it works best with the MATE interface. It doesn't work very well with Pop!_OS, and there are some things you have to do that I cover, because it doesn't like some of the newest versions of GNOME — it's just not the most GNOME-friendly thing — so you end up using different desktop environments, but that's not a big deal. Actually, one of the clever uses of X2Go I cover is using it with Kali Linux, because you can load Kali on a bare metal machine somewhere, or even a Raspberry Pi, and then use X2Go to get to the screen, kick off jobs, and use a UI remotely. It traverses well over the internet or a VPN; it generally runs over the SSH port, and that's generally how the security is handled. So it's definitely a flexible option. For Windows machines, though, I'm less certain — there's some tooling out there, I just don't know it very well, because VNC is what a lot of people talk about. Connecting to a Windows machine, you mean? I would recommend Remmina for that. No, no, no — that's one way. Yeah, using Remmina, that's one way; you're using the native RDP inside of Windows, and that's actually not a bad option. The commercial tools we use are super nice, but they're expensive — they're subscription-fee tools we use commercially in business — so that's how we're doing
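As a rough sketch of standing up an X2Go application server like the ones described above — the package names are the stock Debian/Ubuntu ones and MATE is chosen per the discussion, but verify both for your distro:

```shell
# Hypothetical provisioning script for a headless X2Go box.
cat > /tmp/setup-x2go.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
sudo apt-get update
# x2goserver-xsession lets X2Go start desktop sessions;
# MATE is used because it behaves well over remote links.
sudo apt-get install -y x2goserver x2goserver-xsession \
    mate-desktop-environment
EOF
bash -n /tmp/setup-x2go.sh && echo "syntax OK"
# On the client, x2goclient connects over SSH; a session can be a full
# MATE desktop or a "single application" (e.g. the command "kdenlive").
```

Since everything tunnels over SSH, the only port that has to be reachable on the server is the one sshd listens on.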
it commercially. For home users, there are a couple of free RDP systems out there; I've just never tested them, and I'm not 100% sure how good the security is, which is why I'm hesitant to recommend any — especially because there's not much in the open source realm that manages Windows like that. Not fully open source; some of them are partially open source, things like that. But if you use RDP on Windows, on the other hand, that's great, that solves the problem, because RDP is built into Windows, so you have really good support. Now, RDP is a little confusing because you're not exactly sharing the desktop simultaneously — but maybe that isn't your use case. You just wanna get to your desktop, and RDP is a solution for that, because when you RDP into, say, a Windows 10 machine, it's going to lock out the local interface. Maybe that's the ideal situation for you and that's what you want; that's what it comes down to, but RDP does work really well. Yeah, on my end, the reason I don't cover RDP and VNC on my channel — this is just personal opinion, by the way — is that I think RDP and VNC are just terrible solutions, and that's why I don't cover them. I think X2Go, when it comes to Linux, has pretty much defeated RDP and VNC in every way imaginable, in every category: speed, ease of access. VNC is so annoying to work with, and there are so many quirks, that it just becomes a frustrating thing to deal with. RDP works exceptionally well on Windows because, like you said, it's natively integrated, but I'm also nervous about using RDP or a Microsoft-centric tool for this purpose, because yes, they love Linux right now — will they forever?
We don't know. I'm not trying to be critical of Microsoft; I'm literally saying I don't know, I have no idea. So of course I'm gonna go towards X2Go, because that's the safer bet. And when it comes to connecting to a Windows machine via RDP from your Linux workstation, that's Remmina in my opinion, because that's a really awesome tool with a bunch of different remote desktop protocols built right in — SSH, you could use that, you could use RDP, I think there's a VNC client in there if I'm not mistaken, I can't remember what else — it's like the Swiss Army knife of connecting to things. So Remmina is an awesome tool for that, but if you're connecting Linux to Linux, it's X2Go. Now, what's interesting to me: if I'm not mistaken, I think GNOME actually built in RDP support in one of the newer versions, and I'm thinking, why? Why RDP and not something like X2Go, or something more friendly to the Linux ecosystem? That must mean there's a case to be made there — I could be wrong, and I'll admit it if I am — but RDP, I just don't really care for. Also, when it comes to GNOME, it's basically a problem in every remote desktop solution in existence today, or virtualization — anything where GNOME isn't running on your actual computer. GNOME is gonna run slow otherwise, because it uses the GPU; it needs some acceleration. Mind you, it doesn't need much — you could have an older computer with a GPU and that's gonna be fine — but in virtualization you won't have a GPU, or you might have a little bit of one that's kind of okay, but you're gonna notice lag. So I totally agree about the MATE environment, because that doesn't really need that. And not only that, the MATE environment can detect when it's being run in a remote session and adjust itself accordingly — they thought about this. It's not that GNOME didn't think about this; I think it's just more of a conflict between what GNOME is and using it in a
virtualization platform or remote desktop platform — it's not really what GNOME was built for. Yes, they want it to work well everywhere, some patches have been put in to make it work better, and it's getting better all the time, but it's just not good enough yet, and at this rate it's gonna take years. So I would think of GNOME as the thing to use when you're not doing remote desktops and you're only working locally; anytime I'm connecting to something remotely and I wanna set up a remote desktop on Linux, it's going to be MATE every time. Yeah. Now, another question I can answer, since people are asking about it: the Apache Guacamole project. Well, Apache Guacamole is an intermediary, and what I mean by that is you can set up Apache Guacamole and it serves up a really cool HTML5 interface to get to the things you wanna get to. It bridges. So from the perspective of: I have my Windows box, cool, I need to get to it, how do I get to it? Well, I'm gonna set up Apache Guacamole on another server, and then I'm gonna put in all the RDP credentials so Guacamole can talk to that box. Then I access that box by going to Apache Guacamole, which initiates the connection back to it. What I mean by intermediary is that it is not a direct connection to the server; you're connecting each time to Apache Guacamole, through which you can tie in other servers, but you're still reliant on RDP being available. And if you're a Windows 10 Home, not Pro, user, for example, RDP is not available by default — I believe it's still only available in the Pro versions of Windows. So it comes back to the same limitation. Apache Guacamole can also connect over VNC and some of the other protocols, so it's a different layer with a different use case, as opposed to direct connecting — when we mentioned things like the Remmina remote desktop tool on Linux, that's a remote access tool that directly connects,
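A minimal sketch of that intermediary layout, using the official Docker images: guacd is the proxy daemon that speaks the actual RDP/VNC/SSH protocols, and the web app serves the HTML5 interface on port 8080. Note the web app also needs an auth backend (the database extension or a user-mapping.xml file) to define connections, which is omitted here.

```shell
cat > /tmp/run-guacamole.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
docker network create guacnet
# guacd does the protocol work (RDP/VNC/SSH) on behalf of the web app.
docker run -d --name guacd --network guacnet guacamole/guacd
# The web app is the HTML5 front end you browse to; it relays to guacd.
docker run -d --name guacamole --network guacnet \
    -e GUACD_HOSTNAME=guacd -p 8080:8080 guacamole/guacamole
EOF
bash -n /tmp/run-guacamole.sh && echo "syntax OK"
```

Your browser talks only to port 8080; Guacamole then opens the RDP or VNC connection to the target machine, which is exactly the "you're still reliant on RDP being available" point above.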
probably, most likely, over something like a VPN, to a machine that you have RDP or VNC open on. Yeah, and some people are saying Guacamole can be a pain to configure. Yeah, it's an extra layer, and if you're only going to one or two computers, maybe it's worth it, maybe it's not. I think it's a fun learning project, but I don't use it myself — I don't really have a use case for it. That doesn't mean you don't, and a few people have done videos on it. It's a cool project, don't get me wrong, it really is. Yeah, there are a bunch of different things out there you can run; anything you want to solve, someone's probably thought of it, and they probably have it on GitHub right now. Yep. So one thing I wanted to mention: we had someone write in with a description of their homelab, and since we're running a little over, I'll keep this as short as I can. Intel NUCs appear to be highly central here, and I want to mention that because you'd be surprised how far you can get on an Intel NUC. They're low power, low noise, they're just great. You might not have as much RAM as you'd have on an off-lease Dell server from eBay, but you might not need that, especially if you don't run VMs. For example, if you have an Intel NUC with four gigs of RAM, you might be able to run one VM on there, so you probably don't want to load Proxmox on something like that — or actually you could, because you could just run containers instead, which have a smaller footprint, and you could really get a lot out of that. For the longest time I was running a container server with just four gigs of RAM, and at one point that held everything, because containers allowed me to really reduce the footprint. So I definitely want to give a thumbs up on that comment. And there's a lot to unpack here — maybe we could go over this person's homelab again,
because I don't want to sell anyone short, but the NUC thing really stood out. Was there anything in particular that stood out for you? I do laugh, because this happens all the time: it says, for the main router, "I got rid of my USG a few months ago because it was awful and I'm now running pfSense on a passively cooled box." So that's very cool. They're also running three Synology NASes, which is kind of cool — so you've got lots of data replicated, plus data replicated to the Google cloud, another backup place. Yeah, I like this. The NUC is definitely really cool for people building small home labs. And related to this: there's the NUC, and there's also the Protectli boxes — I think I'm saying their name right. I did a demo of setting up XCP-ng on a handful of those. They're all passively cooled and relatively expensive — I say "relatively" because it's relative to whether or not you think they're expensive — but a few of those set up as a passively cooled system makes a very nice fit on your desk: quiet, because there are no moving parts or fans, and each one has a series of network cards, so you can build some cool things, including loading a hypervisor on them. They mentioned they were looking at XCP-ng. These are just great for learning. Are they fast? Are they incredible for workloads? No. But is your homelab workload really that intensive? Is it about learning, or is it about building the "I get these incredible performance numbers" type of system? Balance that out when you're thinking about it, because one of the most important things is just having the knowledge of how all these things operate, even if they operate a little slower. They're still great choices — look at things like the NUCs or these Protectli solid-state boxes.
There's no shortage of these smaller computers. You know, I kind of wish I could show myself when I was first starting in computers — just go to past me and say, check out this Intel NUC, this little tiny thing is like ten times more powerful than your current desktop. It's just so amazing how far we've come and how things are shrinking down, to the point where I predict that when it comes to desktops in the future, every single one of them is gonna be small like that, with the only exception being gamers' desktops with the big GPUs and things like that — those are gonna be the only things that are huge. But if you're not playing games and you're just a casual user, you'll probably have something like a NUC if you're gonna have a desktop at all, or whatever the equivalent is at that time. It's just amazing. The Raspberry Pi — yeah, they're expensive right now, but even those are more powerful by far than my first computer. Yeah. Two quick questions about UniFi in here, so we can answer some of the live viewers' questions. One of them: does the UniFi Dream Machine Pro SE support link aggregation?
I don't know; I'd have to read the specs. I don't have a Dream Machine Pro SE. The second one: does Tom have a video on mesh versus AP? It's not mesh versus AP. First, don't mesh unless you absolutely have to — but yes, I do have content on that topic. You always want your APs, whenever possible, to be all hard-lined, not meshed to each other. Each time you add a mesh hop you, one, reduce speed and, two, add latency, so only do it in circumstances where you need to because there's simply no way to get a wire over there. Would you use a mesh system to extend the range of an AP? We have done it commercially, in a very limited way. We had a library — there was power on one side; the library was built in the fifties, a beautiful art deco building — and there was no way, without tearing up the floor (which the library was not fond of), to get Wi-Fi to one wing of the library. They have electricity there; that's it. So we used a mesh. That's our exception, because it was the only way to get access to that area of the library; everything else is hard-lined in. If you're doing home, you can run into some of those problems too, where you just can't get into a wall to run a wire — and if you have to use a mesh, UniFi does support mesh interfacing. Would you agree with this statement I'm about to make? I would kind of rather see people use a really good — and I'm gonna really focus on that, a really good — powerline adapter with really good throughput, and then just have your access point connected to that. Yes, you'll still have some latency, it's not preferred, you're gonna drop a ton of packets, and you're not even gonna get the full speed out of the access point, but I would argue it's probably gonna be better than mesh — maybe not a lot better. You might not get nearly the throughput of your internet connection, but you'll get something. In that case, would you agree with that? Have you tried that out? Well, so
you get a powerline adapter, and that connects two points: you have your access point connected to a powerline adapter somewhere, with your switch going into one end and the other end out to the access point. And keep in mind, for example, a "two gigabit" powerline adapter — you're probably gonna get 150 megabits out of it, so keep that in mind; the speed drops considerably. But I just don't like meshing, so I'm almost inclined to go that direction before I go with meshing, because at least it's hardwired. If it's limiting your speed, that does suck, but if it makes or breaks your ability to have Wi-Fi somewhere, it might be something to consider. Right, yeah, it's a toss-up. UniFi does a pretty good job with their backhaul, but there are always trade-offs when you're not doing a home run back from each Wi-Fi access point to the switch. Think about this from an engineering standpoint — this is the perspective I try to give a lot of people: we have the protocol that runs across Ethernet, and we have to make a conversion of that protocol to get it into Wi-Fi, the Wi-Fi protocol. So we've taken one standard and muxed it into another standard; then it has to go to your device, where it gets converted back to the original standard. Anytime you add those complexities of conversion out and conversion back, there is latency and potential for problems; you're adding complexity. So when you add mesh — because, by the way, a mesh doesn't just repeat a wireless signal, that's not what's happening — it is taking the signal from Ethernet, converting it to wireless, then it hits the backhaul part where it also has to be converted back, and over to the next mesh node, and the next. Each one converts back before it transmits; it's not just grabbing the signal and going "take signal here, reply over here." That's not how it works. It actually goes through that conversion for each hop, and that's why there's such a diminishing return over the hops. By the
way, mesh is not roaming. I have a roaming video, and I probably should make a new one because I want to go more in depth, but basically I want to title it "mesh is not roaming," because those are two different things. People make this assumption that APs need to be meshed together to roam from point to point. I have a video where I talk about roaming — just search for "roaming," I've covered it. There are plenty of comments from people who say I just didn't go deep enough, and I said, yeah, I could have gone deeper. I've got to figure out the balance of how deep to go into how the roaming protocols work. I went far enough, but at some point the video has to be watchable. Maybe some of them would watch a deeper engineering video on it, I don't know. It's always a hard balance, just how deep do I go — my "ZFS is a COW" video has a lot of views and I went deep on that one. So it's hit or miss when it comes to these topics, it really is. I could do a deep dive on something that I think is gonna go over well and no one really seems to care, and then something else I don't think is gonna catch on and everyone loves it. You never know — just throw some content out there and see what sticks, I guess. Yep. And I see people talking about the challenge with roaming, and I cover this in a video, so, too long didn't watch: the problem is mostly the devices, not the access points. Modern access points understand roaming well; for devices, the standards kind of leave the parameters up to the device manufacturers, and in the case of IoT device manufacturers, I don't think a lot of effort was put forth — we'll just say that. So you run into these problems where they get stuck to certain APs. Now, if the IoT device is at a fixed point, it's easier to deal with, but when it's not, or it straddles the distance between two APs, that can be a problem. Older phones, especially some of the cheaper ones, were more notorious for being stuck
on the further-away AP. They first latch on to the AP they find, and even though you've wandered past where you walked into the building — you're further away from that AP now, and you've sat down at a desk where there's one right above your head — sometimes the phone goes, you know what, I like that first one I found, and I don't feel like I should go to this closer one. It's not a problem with the APs, but this is where you can get into some of the tuning of the APs, and Jay knows all about setting minimum RSSIs to get things to connect to the right places. Oh yes, I do. I've been through a whole ordeal of trying to make Wi-Fi sane within my household and office, and I feel like I've got it perfect for me right now — I have zero complaints, literally. But it's one of those things that takes a little bit of time: researching the best places to put your APs, how far away from each other they can be, how many you need, things like that, and which ones you go with, because there are gonna be feature discrepancies between them. One thing I ran into is that unlike models were a problem, which really, technically, should not be the case — there's no reason why having, say, a Wi-Fi 6 device here and a non-Wi-Fi 6 one somewhere else should be a problem, because it's backwards compatible, but it was. At least I think it was, because everything's fine now. There are just all these different things; Wi-Fi is hard, that's really the only thing I can say. Well, yeah — dealing with wireless signals going through the air: on the scale of humanity, wires have been around a lot longer. Radio frequencies and pushing data over them — it's not that it's not an understood science, it's just a really complicated one, and every environment is different. RF environments are notoriously tricky, and we keep sticking more and more things that put RF noise into the airwaves around us — like, whose idea was it to start using 2.4 gigahertz? Because, by the way, that's a lot of the area where
your microwave operates, along with other things that make RF noise, so there are all kinds of challenges you can have. Yeah. Well, to add insult to injury — sorry, a bit of an audio lag there — I was just gonna say, to add insult to injury, when Wi-Fi was gaining prominence, 2.4 gigahertz cordless phones were probably, I'm guessing, something like ten times more common than they are today. I mean, who has a landline phone anymore? But back then everybody did, and cordless phones were really popular, and they chose the same band. Anyway, I digress. Yeah, because it's an unlicensed band, there's just a lot of noise in it. And then you get into the topic of Wi-Fi deauthentication and ways you can be highly disruptive with Wi-Fi when you go places. Yeah, that's the thing — you can build pocket-sized Wi-Fi deauthers that just send out signals and force everyone to disconnect from Wi-Fi. It can be really messy at times. It's less likely now, but there were certain attack scenarios that would be the precursor for: there was a flaw in the way they would negotiate, and if you got enough renegotiation packets, there was a way to extrapolate data. Most of that's been patched, hopefully — depending on what you're using, of course, and whether or not patches are available. But yeah, with Wi-Fi, everyone wants the number on the box that marketing claimed. I live in the real world, not the sales and marketing world. Yeah, sometimes I wonder if the sales and marketing people have to use the thing they're selling. Yeah. Oh, and then someone asked: what about five gigahertz? The nature of going up in the gigahertz band is that it doesn't go through things as well, so while it does have less noise, it has a lot of distance restrictions, essentially. On five gigahertz you're not gonna be able to go through as many things and you're not gonna have the same distance — but it works well over shorter distances, which is good for the people at UniFi
who sell lots of access points, and this is why we'll do a higher-density solution — that's the answer, just put more of them in there. You know, not to digress too much, but it fell off the map: light-based networking was really promising for a while. A few people built some really cool proofs of concept where they could do line-of-sight networking with light. Now, it was outside the visible wavelength, I believe, but they were able to put up light beacons, and as long as your laptop was in view of these beacons — and they could put a lot of them out there, because there's no interference since the beams are so focused — you could have connectivity over light beams. I still think that's a promising technology. From an engineering-thinking standpoint, without necessarily knowing the details of how well it scales, I think it's a clever idea: you could build some type of omnidirectional beacon in the ceiling, and as long as your device has a surface exposed, like the top edge of a laptop lid — it was a rather small part — it works; you can find demos from people who engineered this. I think that's a clever way to do it, because you can send a lot of data over light. The downside is, the moment a shadow hits it, or you slide your laptop under the cubicle with one of those little overhead shelves, it loses its line of sight to the beam and you have a new problem. But the concept's there, and I think there might be ways to mitigate and solve it if we could get it everywhere, I don't know. Li-Fi — that's what it was called. Yeah, you know, something that's always been interesting to me: we can send a satellite, New Horizons, to Pluto, which is about 2.6 billion miles away, and get high-definition pictures back from there, but we can barely facilitate Wi-Fi to our couch. That's kind of where we are today, but, you know, we're getting there, I guess, eventually. Yeah. All right. Well, we've rambled on enough
for the show. Please send us your Q&A so we can do more of these episodes. This was definitely a fun live stream — we love having all of you here, and we're looking forward to more Q&A. Yeah, I think that's about it. Jay, anything else? I have a ton, but I think we're about out of time. That's how I look at it too. All right, see everyone next time, and thanks for joining us. See you later.