Welcome to the Home Lab Show, episode 83, questions and answers. Do we have some questions and maybe a whole lot of answers, Jay? Well, we hope to have answers. I think that's always the goal. Whether we actually do, I mean, that's always the struggle, right? Yeah, it's always the struggle. We're here to do a Q&A episode. We would like to do these at least once a month, and there are a few things we want to change. One of those is getting that information out to you easier, and that is feedback at the Home Lab Show. We were putting the year after it; we decided just to make it feedback at the Home Lab Show, to keep it easy. And we want to make sure we start saying this at the beginning here, because I know some people just want to be able to send an email and not fill out a form. If you don't want to give us your email address, that's fine too; you can use the form. And because this was something that was discussed before: yes, anonymous email addresses work for now. I don't think they'll get caught up in the spam filters, but just in case they do, it is a risk that may happen. So, feedback at the Home Lab Show, or head over to the Home Lab Show site, where we have a contact form you can drop some data into. We like hearing from you, and we love the viewer-suggested content: topics to cover, or just questions we can help you with to get your Home Lab started. But we have to pay the bills here for our Home Lab on the Home Lab Show, and that is done with Linode. They are our sponsor for the day. They've been a sponsor of the Home Lab Show since the beginning. It is a great place to host your Home Lab projects that maybe belong in Linode's lab. You know, some things you might want public-facing, not on your IP addresses. Or if you're setting up a VPN, that's a great thing to do with Linode, so you can start testing all that. It's a great place to test all those fun things that you need to keep up and running, and run them in their data center, not necessarily yours.
So it's a great place to learn, with lots of templates to play with. And we thank Linode for sponsoring the Home Lab Show. To get started with them, we have the Home Lab Show offer, and it is linked down in the description as well. All right. What's the... Should we start with what we were ranting about yesterday for like an hour? I think so. We had an interesting discussion during our weekly call, so I think it's probably worth bringing up at least. Yeah. Now, this started, and I'm not specifically picking on TrueNAS Scale, but it is where the discussion started, with TrueNAS Scale. What it comes down to is using things that are built well with, like, Kubernetes and Docker and all these fancy deployment tools that can get you up and running fast. The problem that people run into... One of my other YouTube friends had actually made a mistake, and he made an updated video to correct it. I remember a few years back he was talking about using Docker containers for running UniFi, and he took down the video and made an updated one because he had forgotten to mention the storage volumes. What happens when you're setting up any of these is, if you're not clear on how storage volumes work... You treat your Docker systems as very ephemeral. They can just be deleted and rebuilt at any time. The problem is, can they be? That's something you should test, and test thoroughly, before you put any data in there, to make sure that you have the ability to back it up. It's a process a lot of people struggle with when they're learning, and I get it. It's a lot of knowledge together. And they're like, hey, I did a docker pull, and I Docker'd this and Docker'd that, and a whole lot of commands later, it's suddenly up and running. Awesome. Let me start using this. One of the different tools I was testing, and we'll talk about that in a minute, is Joplin, the notes app. And I was like, hey, great, the folks at TrueCharts have a Docker image for this.
This looks like it's going to be really easy to install. And it was. And it even had a spot where it said, store your data here. So I did. The puzzle I have, and this goes back to the testing: I synced it with my notes, but then I realized, when I started looking at the data folders, there's no data in them. So if you rebuild the Docker container, all the data is not going where it's supposed to, and the rebuild wipes all your data with it, which is not the ideal situation. My bigger thing is people should spend more time testing. And I've run into this, unfortunately, where people contact us after they set up an app, or they set up a jail in the older-style TrueNAS Core, or it's a TrueNAS Scale setup. You know, I've ranted in one of my reviews of TrueNAS Scale about NextCloud, where it does properly put the data in the storage volumes, but it does not have an easy restore method. And so what happens is, when people try to reinstall it, it wants to generate new passwords for databases and things like that. And if you didn't take the time to dig into how those passwords are saved, you have a database with a password that's different than the one you're reinstalling, and the restore process then becomes substantially more complicated, and that's where people get really agitated. It can be very disheartening to spend a lot of time building something, getting it to work, but then find out that from a minor mishap or just an update, you lost all of your data. So we always encourage people to really spend the time making sure that once you get something up and running, and you should be taking notes along the way of how you got it up and running, then try to rebuild it again. Then put some demo data in there and rebuild it again. Now you have a process. Your demo data should still be there after the rebuild. That is something you should do before production.
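One way to check that point about storage volumes is to confirm, on the host, that data actually lands outside the container before relying on it. This is a generic sketch, not the TrueCharts setup: the image name, container paths, and host path are placeholders for whatever app you're deploying.

```shell
# Launch with a bind mount so /app/data in the container lives on the host.
# "someapp" and the paths are hypothetical; adapt them to your app.
docker run -d --name someapp \
  -v /mnt/tank/someapp-data:/app/data \
  someapp:latest

# Put some demo data in through the app, then check the host side:
ls -l /mnt/tank/someapp-data   # if this is empty, your data is NOT persisted

# Prove the rebuild works before trusting it with real data:
docker rm -f someapp
docker run -d --name someapp \
  -v /mnt/tank/someapp-data:/app/data \
  someapp:latest
# ...then confirm the demo data survived the rebuild
```

If the host directory stays empty after you've added data, the app is writing somewhere inside the container filesystem and a rebuild will destroy it, which is exactly the Joplin situation described above.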
I know it goes without saying that you should do this, but you would be shocked at the number of people that go as far as production, because those are sometimes the people who are contacting us. It's not just a random person, a home user, that has lost some data. We've had businesses contact us, internal IT teams, that just deployed something and now it's not working because they can't... well, it is working. It just has no data in it. I do consulting, so I'm talking against myself here, but I'd much rather help people with an innovative project, getting something set up, than try to recover data. Those recoveries are always way more tedious and way more expensive than a properly set up system would have been ahead of time. So this goes against my business model in some ways, but I really want people just to have things set up right. I'd rather always work on innovation, not recovery. That's my opinion on that. So, a question: will I answer? Yes, we are taking Q&A from the live feed comments as well. Yeah, so kind of in line with that, there is something that I think should be obvious, but never is, when it comes to restoring data, since we're on the subject. So this is just a tip, but I feel like a certain percentage will be like, of course that's what I'm going to do, why would I do anything else? But you'd be surprised. If you're like me and you use LVM, and you really should use LVM, it's really good, but if you expand your VM, like a cloud VM, for example, onto another disk to grow it, make sure you're not just grabbing the one disk. You know, update your backup to get all of the disks in your LVM config, because if you're only backing up one disk and then you try to restore it, that could be a little embarrassing. I've actually seen that happen. Keep an inventory of your virtual disks and what you have, and try a test restore, just like you were saying. And to your point earlier, it's kind of like how I make videos. I think you're probably the same.
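For that inventory of which disks are behind your LVM setup, a few standard LVM commands will show it. This is just a generic sketch of the idea, run on the VM itself:

```shell
# Every physical disk (PV) that belongs to each volume group:
sudo pvs -o pv_name,vg_name,pv_size

# Which underlying devices each logical volume actually spans:
sudo lvs -o lv_name,vg_name,lv_size,devices

# Cross-check against the virtual disks attached to the VM:
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```

If `pvs` lists two disks but your backup job only grabs one of them, a restore will hand you a volume group with a missing physical volume, which is the embarrassing scenario described above.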
I come up with some commands to arrive at a state, but before I record it, I destroy everything, rebuild it, destroy it, rebuild it, try to take away a few commands, try to find out if there are any commands that just aren't necessary, so you could eliminate some steps and get it down to a distilled process, just like you were saying. And that could really help. It's just strange to me how so many people get set up and running with something, trust it immediately, and then just keep using it. Kind of like how some people will complain that Wi-Fi doesn't work in Ubuntu, and then I'm thinking, if you had just tried it in live mode, you would have known that ahead of time. You replaced your entire operating system and now you have no network. But you know, it is what it is. You have to train people. And I think too, Jay, we touched on this, and I want to wrap this up: it was kind of that automation mindset. One of the things that Jay does is, well, I think, I'm going to guess at how Jay does this, because I know Jay well enough that I think this is the right answer. Jay doesn't ever install anything directly on a system. He updates his Ansible script and makes sure it installs it. So if Jay says, hey, I would like this utility added to my system, this extra package, he doesn't go and apt-get install it on his system; he updates his Ansible script. That way he knows it's updating and installing on his system. And if you are doing it from that methodology, like you're always starting from a build script, that makes it extremely repeatable. You know, I've got to probably do an updated video on MinIO, which is an S3-compatible storage system. One of the things that I did was I built an install script, and I kept rolling the system back and running the install script. If I didn't have the outcome I expected, roll the system back again. Like Jay said, learn LVM.
I was using a VM, so it was a snapshot. I just hit rollback snapshot and, you know, re-ran the script. Oh, it's broken here. All right, fix the script. But now I have that deploy script, and it had to be deployed on several different servers to create a similar setup at all these different sites. One script deployed to all the sites at once, and that same script can be run again to easily push a new password, for example, because when you're building MinIO and storage buckets, you want the same password set for the storage targets on these systems. You do that all through the command line, and scripts are really handy for that. So as long as you start thinking from that mentality... Matter of fact, I'm going to work on writing an installer, because we had a discussion about Graylog. Graylog 5 is out. I have not upgraded to it because you can't just hit upgrade from 4 to 5; there's some trickiness to it. So I'm going to do a new install video for 5, because they have some very different changes, but I want to build that as a script that I'll give away on my GitHub so people can follow the process for Graylog. I'm copying it, just like in my UniFi video, from the official documents, but I'll add any notes in there to make it clear how to get that done. And you are right, by the way, that's how I roll out apps. Basically, every application I install, even on laptops and desktops, is always a git push every single time. But there's one more layer, because I'll push to a staging repository where it's tested, and then if I get a message back that it applied successfully, I graduate it to the production repository, and then, you know, all my systems have it. But yeah, you were pretty spot on though. Yeah. I see a question, a pretty good one in here, that we haven't even talked about much. Wendell did a good rant on this, Wendell from Level1Techs. I can't remember the name of the video.
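The MinIO setup described, pushing the same bucket and credentials to every site from the command line, can be sketched with the MinIO client (`mc`). This is not the actual script from the video; the alias, hostnames, bucket, and credential variables are all placeholders, and the `mc admin policy attach` form assumes a recent `mc` release (older releases used `mc admin policy set`):

```shell
# Register the site's MinIO endpoint under an alias (placeholder values):
mc alias set site1 https://minio.site1.example:9000 "$ROOT_USER" "$ROOT_PASS"

# Create the bucket if it's missing; safe to re-run:
mc mb --ignore-existing site1/backups

# Same service credentials pushed to every site, so the storage targets match:
mc admin user add site1 backupuser "$BACKUP_PASS"
mc admin policy attach site1 readwrite --user backupuser
```

Because each command is idempotent or harmless to repeat, the same script can be re-run against every site, including after a password rotation.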
I think it's actually titled something about Windows storage performance. Mr. Pixel here asks: is there a good alternative to Windows Storage Spaces for RAID, pooling, NVMe disks, and bifurcation cards in a server? I would never recommend using Windows Storage Spaces unless you're absolutely locked in and have to use it. I saw Wendell rant about it; he talked about how disappointed he was in the performance just not being there. One of the ways in the enterprise world that you can get a large volume of storage inside of Windows is, and I'll throw TrueNAS out there because it's a popular solution, you build your TrueNAS server to a high-speed spec. You can either, A, have people directly connecting to it as a NAS, or, B, because you want to use all the features of a Windows file system, you can present a TrueNAS iSCSI target as a storage option for a Windows system, and then you get to use all the normal Windows tools that you're used to for managing it, and it shows up as a drive. So you can mount an iSCSI target as a block device in Windows and present it as a drive. I think that would probably be a good alternative to Storage Spaces. I'd need to know a lot more about your use case to give a more complete answer, but that's a path you can look down and see if it matches the features you're looking for. Yep, absolutely. Yeah. I haven't looked at Storage Spaces in a long time. I haven't either. I haven't used Azure much. I mean, I used to use it for work, so I did kind of know it, but it just kind of disappeared after I left. Yeah. So, yeah, that's that. So let's see. There was a question I thought of answering. Let's see here. I'll answer this one real quick. I see someone asking about OpenAppID in Snort. It's just not great. It's going to miss some things, and you'll find some applications aren't identified.
This is the challenge with identifying applications, even with firewalls with deep packet inspection. They don't always get it right. It is hard to identify traffic, because the more traffic that's encrypted, the more blind the firewall becomes to said traffic. So this is always a cat and mouse game. I've got an entire video dedicated to content filtering and the challenges within it. That's a good video where I break down why it's so hard to do that on a firewall. So hopefully that makes sense. It's a complicated topic; it's an entire long video on trying to identify things based on just the traffic passing through the firewall. So I'm going to answer this question. Basically, someone is asking how they would do a one-to-one clone from a VPS to a local server. You could probably ask the same question in the opposite direction; it'd be the same. So, in Linode's documentation, and this will probably work with other cloud providers too, I just can't confirm or deny that, but I do this a lot on Linode, probably every month. There's an article in Linode's documentation that goes over a dd-over-SSH process for backing up a disk. And this is something that you could use to even send something up to Linode. Let's just say you had a custom Debian VM that you have set up the way you want it. You can literally just dd that over to Linode. I haven't tested that direction, that's more of a theoretical thing, but I have done Linode to local, and then local up to Linode, and it works just fine. So the process is basically rebooting your instance in recovery mode, which in the case of Linode is just a Finnix ISO. It gives you a command line. You basically set a password or a key or whatever and start SSH. Then locally, you just run the dd command that's on their documentation page. It takes a while, because, I mean, you're pulling down an entire disk, but it does work.
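The general shape of that dd-over-SSH clone looks like this. Linode's documentation has their own exact version; this is a hedged sketch where the IP address, disk devices, and filenames are placeholders, and the source instance is assumed to be booted into rescue mode with SSH running:

```shell
# Pull the remote disk down into a compressed local image file:
ssh root@203.0.113.10 "dd if=/dev/sda bs=4M status=progress | gzip -c" \
  > linode-backup.img.gz

# Or write it straight onto a spare local disk (double-check of= first!):
ssh root@203.0.113.10 "dd if=/dev/sda bs=4M | gzip -c" \
  | gzip -d | dd of=/dev/sdb bs=4M
```

Running the same pipeline with the `dd` directions swapped is the "local up to Linode" case mentioned above: the remote side runs `dd of=` against its disk while the local side feeds it the image.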
And that's actually how I back up the instances that I manage on Linode's platform. I mean, I have other backups, too; that's not the only way I do it. But the dd clone is probably the way I would go. Yeah, a dd clone is definitely one of the ways to do it. The most ideal way is, if you have a script that built whatever services you're running in the cloud, you should be able to rerun that script again on a local instance. But dd is definitely an option. Yep. And it takes a while to get your automation to that point. So I feel like for pretty much everyone, they either stay with this process, or they start with this process and back up this way, and then, you know, eventually they'll build their automation up and get used to that mindset. But yeah, definitely agree. Yeah, the automation is where everyone wants to be. But trust me, there are still a lot of things Tom doesn't have automated. I just haven't had the time to redo them as automation. Graylog is an example. I set up Graylog like three years ago, and now I need to set it up again with 5. So even if I did have an automation script, it wouldn't even be valid, because it'd be completely different. So I want to do it, mostly so you have an automation script. I'm hoping to, but the downside is it does make my video a lot harder to do, because I'd have to first build the automation script, then do Graylog. I could just follow their install instructions, which would be fast; I can copy and paste those real quick and do it once. But I could have you automate that so quick. And I even have my Graylog server automated to where it's a variable which Graylog server an instance is supposed to report to. So I literally just change the variable per instance to tell it where it needs to send its logs, and then the firewall will allow it through from there. So we need to talk about it. We need to get you automated. Need to get me automated.
Well, I understand your side of the automation, automating the service to talk to Graylog. What I need is an automated installer right now for Graylog 5. That's not hard to do; I've done way worse. Yeah. You made that MediaWiki installer, and that was amazing. Yeah, because MediaWiki can be a pain to set up. We did a video on that. Yeah. That would be something too. All right. So, a couple of quick ones to knock out, because these are more or less statements rather than questions. In a recent episode, I think it was the previous one, or the one before that, I'm getting a little confused, basically we were talking about monitoring, and someone asked how I do it, how I get notifications. And I mentioned how I get mine, and I mentioned that I don't use SMS. I mean, there's nothing wrong with using SMS. And someone, hacks with acts, pointed out that many cell providers have email-to-SMS gateways, allowing you to send SMS messages via an email address for free. And someone else, someone named Scott Wilson, mentioned Twilio for notification options. It's easy to integrate, pay as you need, which is all true. All of that is absolutely true. There really isn't any reason for me to stay away from SMS, other than I get a lot of text messages and I just don't want alerts buried. And I know there are ways to have different notification alert settings and whatnot, but I really like having everything in a standalone application rather than being lumped in with, you know, my kid talking about a movie trailer he watched, and right next to that is a server alarm. But there's nothing wrong with that approach. And yeah, Twilio was really easy. I have set it up before; it took me all of maybe 10 minutes. It's surprisingly easy. And just like one of our commenters mentioned, there's an email address associated with phone numbers.
So you could just use that, and that could be even easier. But yeah, we understand that stuff exists; we just sometimes go a different direction because of reasons, I guess. Yeah. And the mail server, I've seen someone ask about that for getting your notifications. I actually commented that, you know, Synology has a mail server built in, and there are a lot of different projects out there that make it easy to get started with a mail server. Me and Jay talk about this all the time, and we say the same thing: receiving mail is easy. Setting up some of the mail servers is easy; there are a lot of automated scripts. Getting off of spam lists is the hard part. The only real solution to the problem is kind of a workaround, but, I guess this is about points of view, you pretty much have to pay for a legitimate relay service, such as, I think Mailgun is one of them, there's MailHop, there are a couple of different relay services from a couple of big companies out there. And those relay services are almost a necessity to send email. Because if you set up, especially if you're a homelab user doing this on a cable modem, pretty much all the ISPs that provide internet for home users have their IPs automatically on a list. Receivers just say, you know what, we don't accept email from those. That's kind of a default spam block, because that was a big part of solving the spam problem. So as much as mail servers are a really frequent question we get when people ask about homelab, hey, we'd love to set up a mail server, I guess maybe it's worth doing as long as you accept that you're going to have to buy relay service access in order to get emails to leave your system properly. Because otherwise you're not going to be able to email most people. You're going to end up on almost everybody's spam list just because of the IP block you belong to. Right.
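Wiring a homelab mail server to one of those relay services usually comes down to a few lines of Postfix configuration. This is a generic sketch with Mailgun's SMTP endpoint as the example; any relay works the same way, and the credentials file path is just the conventional location:

```
# /etc/postfix/main.cf fragment: route all outbound mail through an
# authenticated relay instead of delivering directly from your home IP.
relayhost = [smtp.mailgun.org]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
```

The `/etc/postfix/sasl_passwd` file holds the relay credentials (then `postmap` it and reload Postfix). With this in place, receivers see the relay's reputable IP instead of your cable modem's blocklisted one.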
I would even go a bit further with the mindset and kind of frame this with a series of three questions. Like, should it be in your homelab, right? How do you decide that? So I've come up with three questions. One is, is it fun? The second question is, is it educational? And the third question is, is it maintainable? So the first thing being, is it fun? Because if you're just doing it for work, and that's the only thing you do it for, and you don't actually have fun with it, it's just something you're required to know for work. Well, I mean, that's not really all that fun, but it is important. The second question, is it educational? Is this something that you want to learn? Is this something that's been on your radar for a while that you just want to dive in and learn? That's another thing also. And the third question, is it maintainable? Do you have the time to maintain it? Right. If you don't, then no matter how fun and educational it might be, if you don't have the time to put into it, you're not going to be able to maintain it. So take the mail server. Is it fun? No. Is it educational? Yes. It is educational because you will absolutely learn how to maintain an email server. Is it maintainable? No, it normally is not, because like you're saying, there are the blocklists and whatnot. I feel like you have to look at the anxiety level, because email is something you're going to rely on, and hosted email services are really not that expensive. So that way you don't have to maintain it, unless it's something you want to do. Maybe you want to be an email administrator; if that's what you want to do, then absolutely dive in. But if you can't answer yes to at least two of those questions, don't. Just don't. Yeah. Kind of back to a TrueNAS question I've seen come in here. I am almost done with all my questions.
And I'm going to do a bunch of projects with TrueNAS Scale and virtual machines, and that way I can tell you whether I love or hate it, because I really didn't like it in TrueNAS Core. TrueNAS Scale, I think they did a good job. The good news is it's fast. It has direct access to ZFS, and it uses zvols for storage. The storage performance has been wonderful on it; I've been happy with all those features. But they don't have any type of, like, download-as-a-VM. They don't seem to have an import. So there's no tooling native to TrueNAS itself to allow you to back a VM up. Well, technically you can clone it to another TrueNAS; that kind of is an option. But you're only cloning the drive, not the metadata, like the settings, the network settings that you put into it, or any of that stuff. So all you're doing is cloning that VM's disk. There's no automatic export of that VM to another TrueNAS. It's not like Proxmox or XCP-ng or VMware, where you can take a VM and say, hey, place this VM on another server, or give me a backup file of that VM so I can back it up to a file store and then upload it again in case something happens. Those features just don't exist. And that's why, when people ask me to compare TrueNAS Scale to XCP-ng or Proxmox, I'm like, they're not playing the same game. It's like TrueNAS has this little add-on feature where you can run a VM in it, but that's kind of it. It's an add-on feature to run a VM. Even if you had ten TrueNAS Scale systems on the same network, they can't migrate to each other. There's no migration, not even live or otherwise. You can just clone the zvol with ZFS replication to another system, but that's not really the same as what you'd be doing with Proxmox or XCP-ng or VMware. So yeah, there's not really any native backup method, is the short answer. But there are also not a lot of other things.
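That zvol-replication workaround can be sketched with plain ZFS commands. The pool, zvol, and host names here are placeholders, and this assumes the VM is shut down first so the snapshot is consistent; it moves only the disk, not the VM's settings:

```shell
# Snapshot the VM's zvol, then replicate it to another system over SSH:
zfs snapshot tank/vm-disk@migrate1
zfs send tank/vm-disk@migrate1 | ssh root@other-nas zfs recv tank/vm-disk
```

On the destination TrueNAS you would then create a new VM by hand, recreating the CPU, RAM, and network settings yourself, and attach the received zvol as its disk, which is exactly the metadata gap being described.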
It's just kind of a, hey, cool, we can have one consolidated box that runs this cool VM. But as we said earlier, you should have that mindset of building it with an automation script, so you can rebuild it again if something happens. Yep, absolutely. There was a question from Christopher S. And I pulled some of these questions from the YouTube channel, so I don't remember which video; actually, I think this one came in from the automation mindset episode. So Christopher S. asked, and I'm going to summarize this one: basically he's talking about having a base image in lieu of Ansible. There are a few things I wanted to comment on. For example, he mentioned he doesn't want to spin up anything using Ansible. So the first thing is, while you can use Ansible to spin things up, that's not really what Ansible is for. You could use Terraform and Packer to create images and things like that, or you could use none of that. But the meat of the question here is, for this individual, he feels a base image is going to work better: maintaining a base image and just restoring that. There's nothing wrong with that necessarily, and I used to do the same thing. But there are some weaknesses when it comes to base images versus using Ansible for configuration. With a base image, what I ran into was it just became a pain to maintain. Let's just say, for example, you're using a specific application and you want this application on every single machine, and you put that in your base image. And then later on, you stop using that application because you found a better one, right? So now you have to go into the base image and remove the app that you're replacing. Otherwise, every single server you spin up from that image will actually have that application baked in, and on every single instance you'd have to uninstall it. So rather than do that, you go into the base image and remove it from there.
So none of the machines that you roll out will have it. But you'll find yourself continually doing that, because later on, you know, you have a bunch of updates to install, so now you've got to update the base image for that. And then it just becomes like a weekly thing at some point, because you're constantly making changes. Now, if you don't make many changes, sure, it's great. But if it's in Ansible or something like it, and it could even be a bash script, it doesn't really matter, then you could just remove it from the script and set the script up to run afterward. Honestly, you could even have a base image that pulls down a script and runs it on localhost, and then your image will be updated that way. I just wanted to bring up some of those weaknesses as food for thought. I'm not saying don't go that direction, but keep in mind that that often happens. Personally, I got away from base images for that reason. What I like is having a script run when an instance comes online, and that makes the rest of the config happen. You could even set up a web server, just make sure it's not externally available, like a local web server, that just shares the bash script. And then you could just curl the URL of your server, pipe it to sudo bash, and have the instance do that. And that could be the easiest way to get started, even without Ansible. So I think, honestly, for most people, automation does start with a bash script, and then later on it becomes so hard to maintain that they start looking into other solutions. There's a natural path here. But I think this is a question that's going to come up a lot, and there is a bit of a debate around having a base image or not, with some people feeling like you probably shouldn't and other people saying you should. It really doesn't matter; it just matters what is the best fit for you. Yeah. Maintaining the base image becomes kind of the tricky problem. Right.
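That "pull a script and run it" pattern looks like this in practice. The real-world form would be something like `curl -s http://deploy.lan/bootstrap.sh | sudo bash`, where `deploy.lan` is a made-up internal-only web server; here it's simulated entirely locally so you can see the moving parts:

```shell
# Write a stand-in bootstrap script (in reality, a web server would serve this):
cat > /tmp/bootstrap.sh <<'EOF'
#!/bin/sh
echo "installing baseline packages..."
# Stand-in for the real configuration work (package installs, config files):
touch /tmp/bootstrap-ran
EOF

# What "curl ... | sudo bash" does, minus the network hop:
sh /tmp/bootstrap.sh   # prints "installing baseline packages..."
```

Keeping the script on an internal-only server matters: piping a URL into a root shell means anyone who can tamper with that URL owns every instance you boot.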
For me, and this is what I'm doing specifically in my TrueNAS Scale system: I leave a VM running all the time with automatic updates enabled, so it's always up to date if I need it for something. It's just sitting there doing nothing most of the time. But hey, my NAS isn't doing anything unless I'm recording a video anyway, so it's just kind of running in the background. And whenever I need to do a demo or a test and I need a VM real quick, I stop it, I clone it real quick, and then I start the clone and give it the name of the project it's going to be for that day. I set up all the things that I'm going to test on it, see if it works, and then I can delete it and destroy it. But this way I leave something running all the time. Now, technically, I should clean things up, because I will get an error message, as Jay has mentioned many times, if you don't reset machine IDs and SSH keys and stuff like that. I do have a quick little script. I think we mentioned it last time during our automation episode, but basically all I'm doing is deleting the keys, and you do a dpkg-reconfigure of the SSH server after you delete all the keys, and it rebuilds them. Depending on how long I'll let that VM live, I may run that script. Sometimes I don't do it, and the reason why is that I shut down the main one, so when I SSH into the clone, my machine just thinks I'm SSHing into the main one. So I don't really get an error message, because it's very ephemeral. I only need it because I have an idea, a thing I want to demo. So I'll do it, but I don't feel like removing the thing later, so I clone the system, or you can snapshot it. Yup. Yup. So, there's actually a question I wanted to answer that's kind of right in line with the one I was already going to answer anyway. It looks like the username is ShadowWee, if I'm pronouncing that right.
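That clone-cleanup script boils down to a few commands on Debian or Ubuntu. This is a sketch of the approach described, not the exact script from the show; it needs to run as root, and the machine-id steps follow the usual Debian convention:

```shell
# Regenerate SSH host keys on a cloned VM:
rm -f /etc/ssh/ssh_host_*          # delete the old host keys
dpkg-reconfigure openssh-server    # rebuilds a fresh set of keys

# Reset the machine ID so the clone isn't identical to the original:
truncate -s 0 /etc/machine-id      # regenerated on next boot
rm -f /var/lib/dbus/machine-id
ln -s /etc/machine-id /var/lib/dbus/machine-id
```

Skipping this on a throwaway clone is mostly harmless, as noted above, but duplicate machine IDs can confuse things like DHCP leases and monitoring if both VMs ever run at the same time.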
I'm sorry in advance, because I'm probably not saying that right. So basically, the question is whether Ansible tells the system what to download from where, with what parameters, and launches it. I mean, yes and no. The quickest explanation I can think of: by default, unless you use a different style, which I won't get into, Ansible uses SSH to connect to the system that you want to configure, and you write the playbooks in the YAML format. There are different modules. So there's a module for copying files: you want to copy a file to the system, there's a module for that. There's a module for apt: if you want to install apt packages, there's a module for that, and likewise for, you know, dnf and so on, whatever the package manager is. You could tell it to install packages. You could tell it to update packages. You could create a template. There's a module for making sure a service is restarted, or even just enabled, after something is installed. Basically anything that you could do in, like, Puppet or Chef, as far as I'm aware, Ansible can do too; I don't think there's anything it can't do. It can do all the same things, it's just simpler doing it. And the YAML syntax, you don't have to, like, know YAML, but you'll learn YAML through the eyes of Ansible by using Ansible. It's pretty simple, because the simplest play could literally just be apt, colon, and then, I don't know, nginx, for example. And that's it. That's the entire thing. Whereas with Chef and Puppet, it's an entire code block you have to write for that one package. But it's really easy to use and I highly recommend it. Just start out small. Some of the tutorials will have you just use Ansible to ping servers, to prove that there's connectivity. Then you use it to install a package, and then next you copy a file. We also had a person on the YouTube channel ask: how do you deal with configuration files in Ansible? So far I haven't seen a lot of Ansible plugins that can do more than just check out a file from Git.
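The "apt, colon, nginx" example above, in the context of a complete minimal playbook, looks something like this. The group name `webservers` is a placeholder for whatever is in your inventory:

```yaml
# A minimal playbook: install nginx and make sure it's running.
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present
        update_cache: true

    - name: Ensure nginx is started and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Run it with `ansible-playbook -i inventory playbook.yml`. The same two-task shape works with `ansible.builtin.dnf` on Red Hat-family systems; only the package module changes.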
So actually, Ansible can do a ton of stuff there. Ansible uses Jinja templates, J-I-N-J-A. It's a weird word, and it's not specific to Ansible. You could use a Jinja template with Bash, and I've done it. Any programming language can use it. But basically what it is: you take a config file, let's say, I don't know, the SSH config file. It defaults to port 22. You could make that a variable instead, and then just feed the variable to the instance, and the file will be the same for each, with the only differences being the variables. You just put the variables in brackets, and it literally takes care of the rest. So the template module in Ansible is absolutely the way to go, because that way you don't have to maintain a config file for this server and a config file for that server, which just becomes a mess. You maintain one config file for each thing, and anything that's different from one system to the next is in the form of a variable, so it's a lot easier to maintain. That's the way I would do it, for sure. I've got to dust off my Ansible stuff. Yeah, I mean, we could just do a sit-down, and maybe even turn it into a collab or something and figure it out on there. Yeah, I just got to sit down and watch a bunch of Jay's Ansible videos. Well, I think I could just create a template for you and walk you through it, honestly. Just a skeleton: implement this, and it installs htop on anything you run it against, as a proof of concept, and then you could just add to it from there. Oh, let's build an Ansible playbook and make a video about it at the same time, because I'm positive a lot of people would love a Graylog 5 installer. That'd be fun. I mean, I did the one for MediaWiki, and let's be honest, MediaWiki is not the easiest thing to install. No, it's not. There's some trickiness to getting MediaWiki set up. And yes. Yep. Ah.
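The template approach Jay describes above can be sketched roughly like this; the file names, the variable name, and the handler are all illustrative, not taken from the show:

```
# templates/sshd_config.j2 -- the relevant line of the config, with a variable
# in Jinja's double-bracket syntax instead of a hardcoded port:
Port {{ ssh_port }}

# deploy_sshd.yml -- the play that renders and deploys it per host:
- hosts: all
  become: true
  vars:
    ssh_port: 22          # override per host or group to vary the port
  tasks:
    - name: Deploy sshd_config from the template
      template:
        src: sshd_config.j2
        dest: /etc/ssh/sshd_config
```

The point is exactly what was said above: one template file for everyone, and anything that differs per system lives in a variable rather than in a forked copy of the config.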
Oh, this is probably a good clarification here, Jay, and I think he's referring to you: do I need Linode for Nextcloud? As a sponsor of the show, I would suggest you use Linode and use our offer code for Nextcloud, but no, you don't need Linode for Nextcloud. You can absolutely run it locally, run it on any cloud platform you want, or anywhere you can get it set up. I believe Jay's got some really solid tutorials on that. Yeah. And I mention it in mine. I don't know if this person was watching mine. I'm going to assume yes, because I don't have a Nextcloud video. Yeah. Well, that's true, but I usually mention that the instructions I show are specific to Linode, and I try to make sure to mention that. I use Linode because when a company sponsors my channel, I have to like the product. I'm not going to just go, oh, you gave me all this money to recommend it, sure, I'll do it for that reason. For me, I have to like it. I have to know it. I have to be able to double down on: yes, this is good, because I use it. So I have a Linode demo account, and it's just stupid easy to use it for filming footage and things like that. So I try to make sure to mention that it's not Linode exclusive. Like you said, any cloud provider, or local, is fine. I usually show the process of creating the instance on Linode, but once you have the instance, however you get it, whether you get it from Linode, DigitalOcean, Amazon, whatever it is, from there they're just, you know, normal commands. You're downloading a file, you're putting it in the right location, you're creating the config directories and whatnot. From there, it's really kind of vanilla, honestly. Yeah. It's a great project for learning, too, because it updates rather well once you get it set up. It's pretty stable, and there are so many different places you can run it, because there are Docker versions of it.
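Since the Docker versions just came up: as one hedged example of that route, using the official nextcloud image on Docker Hub, a throwaway local instance can be as simple as the following; the port mapping and volume name here are just one reasonable choice, not the show's recommendation:

```
# Pull and run the official Nextcloud image, persisting data in a named volume.
docker run -d \
  --name nextcloud \
  -p 8080:80 \
  -v nextcloud_data:/var/www/html \
  nextcloud
# Then browse to http://localhost:8080 to finish the setup wizard.
```

For anything beyond a quick experiment you would add a real database container and TLS in front, but as a learning sandbox this gets you clicking around in minutes.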
That's actually one of the apps that's officially supported by TrueNAS SCALE, and I believe there are a lot of write-ups on how to get it loaded on your Synology. So yeah, it's a fun project with a goal, because it helps you learn things. It's not just playing. I mean, messing around and playing is fun, but when you start with: I'm determined to get Nextcloud set up, what do I need to know? I probably should learn some Linux command line. All right, I've got to learn, you know, how to load Linux first. All right, cool. You've got to start finding all the building blocks that let you get to the top of that goal. And, you know, a lot of cloud providers have a marketplace, and I'm sure others have the same thing, where you can literally just click a button, name it, and you have Nextcloud, and it's super easy to do. Some people ask me in my videos, like, why don't you just use the marketplace, or insert name of app store here? And I'm thinking, well, I could do that. The video would be 30 seconds long, because you'd just be seeing me click one button and then it's available, and that's not very entertaining. But the bigger reason is, I feel like if you're in a time crunch, you don't really have that much time, and maybe learning isn't really at the top of your list right now, then the one-click apps are absolutely fine. It's just that they don't make good videos, and they don't make good educational content either. So if that is your use case, then that's not something my channel helps with, because it's all about educational stuff. But it's absolutely a good way to get started if that's the way you want to go.
So yeah, it also leads a little bit back to the beginning of this episode, where we mentioned that understanding how something got there is really important. If you're an IT professional maintaining this, even in your home lab, it's part of the learning experience, but you want to make sure that, cool, it sets itself up when I install it from their app store, which is great, but can you back up your data? You're going to have a harder time troubleshooting why it broke, and you may even have more trouble getting the data out of it to set up a new instance. So those are a few things to consider. And on this too, rather than me type all this out: the question comes up a lot about Proxmox versus XCP-ng. There are like two or maybe three videos I've done on it, at least two, because I did a main one and then kind of a follow-up talking about all the different things. I favor XCP-ng because we do so much enterprise consulting with it. We have companies that have data centers with many, many hosts. One of them, we helped them migrate over 2,000 virtual machines off of VMware. I've always liked it from the scalability perspective. That's why I choose XCP-ng, and I don't really have the time to mess with Proxmox. Jay likes Proxmox, and Proxmox is solid. It's reliable, it's secure and stable. You would say it's stable, right, Jay? Absolutely. Yeah. So there's no reason not to use Proxmox. If you're on the fence about which one to use, it comes down to preference. When it comes to scalability and interoperability, for the backups and everything else and some of the really slick features, XCP-ng is really enterprise-grade. So if you want that high-end scalability, you can absolutely use it in a home lab, because it's all still free. They're both very accessible, but it still comes down to choice. They're both good choices in the market, whichever one you want to build out on.
I feel it's a little funny, or a little amusing, because, you know, everyone kind of thinks it's like team Proxmox and team XCP-ng, and I'm just sitting here like, I like both. The only reason why I haven't done XCP-ng content, or the only reason why I don't use it, is that at the time when I was actually picking a solution, I had a server that didn't have a lot of RAM, and I think the tiebreaker for me was the fact that there are containers built into Proxmox. I needed to use containers to make good use of my memory, because I needed to spread it kind of thin, and that was just the tiebreaker. But I had maintained Citrix VMs and become very familiar with that, so technically I kind of went outside of my comfort zone. A lot of people will assume, because you make XCP-ng videos and I make Proxmox videos, that we're on different teams, but honestly, I love them both. I'd go as far as to say Proxmox is really good for small business, and XCP-ng is better for, like, the multi-location, big corporation type thing. XCP-ng is going to absolutely flourish there, but Proxmox is also good. Who knows, I might even make XCP-ng content. It's just that I have a lot of content to make, and sometimes that's a challenge. Me and Jay are both pretty diverse. There's a lot of overlap in what we know, but there are also some gaps, and sometimes it's kind of a divide-and-conquer thing. It's not that I wouldn't like to cover Proxmox; it comes down to time. There are so many things I'd like to review that I don't have time to do, and Jay's got the same thing. He's busy cranking out these other videos. It's like, what do I do? Do I stop and learn another hypervisor, or do I continue and update my Bash series, or my Python series, or my Ansible series? Jay creates a ton of content, so something has to give. It's hard for us to sometimes review exactly the same
product. It's like, Jay doesn't take the time to learn as much pfSense, because he doesn't use it day in, day out, so I usually have more pfSense content. It's not that I'm against Jay or anybody else doing pfSense content; it's just a divide-and-conquer type thing of, let's see how much content we can get out there. And talking about time, Jay, I don't think you've set up WireGuard yet, have you? No, it's on my list. But, and this is the thing, recording is the easy part; editing is the long part. I can sit in front of a camera and talk. That's not hard, at least not for me. But when it comes to editing, like, I have 300 gigs of footage right now I'm just chewing through, and I just finished, I think, the last video I recorded back in September, along with some new ones. At the same time, I'm creating a video about the new Launch Heavy keyboard, which I think I'm going to put out tomorrow, so I'm doing new content in addition. It's quite a big operation for just one person. Yeah, and the challenge with hiring an editor is you need someone who can catch things. Like, I edited out one single word because it would have changed the technical accuracy of my encryption video I did yesterday. The word was inaccurate; I had to trim that one word out. It's a quick one-word trim, but it makes a difference for technical accuracy, and an editor would not have caught it unless they were familiar with ZFS encryption. So that's one of the challenges. It's not just editing; auditing probably describes it better. We spend a lot of time auditing to make sure everything is technically accurate, so you can follow our instructions with absolute, you know, accuracy. That matters a ton to me and Jay. Yep. And one thing I'm planning, I'm not going to give a time window, because right now my workload is way too high, but the people that apply and are chosen for this, they could watch content that may not
even show up for a couple of months sometimes, and just watch it before anyone else. Then, in exchange, they try all the commands, go through the process, and let me know if anything, you know, has broken or isn't working quite right. I do plan on rolling that out to give people that experience, and I think that'll help, because, you know, then again, there's that editing queue. Rolling out anything, even that, is going to take a while, but I'm getting so caught up at this point that I think within a month or two I should be able to start rolling out some new services. So we'll see how it goes. Hmm. Well, here's a fun question. One day I will do a video on this, but in the meantime, someone has done a better video than I have time to make. It's such a niche thing, and not many of my clients have a specific use case for it. There are clients that do; the ones we have just don't have enough files to need it. I have one potential client we're talking to that would probably need it, because they have like two million files, something like that. But you can use what they call special vdev drives in ZFS. They're basically a way to take small files and make them go faster. This is something I think you can do in Btrfs too, and ZFS is coming along there. There are ways to basically say files under a certain size threshold go to different drives, to that designated class of special vdev. Wendell did a video on how to reindex some of that, and Btrfs, I believe, has policies you can apply to files for where they get saved, for speed. This is one of those large-scale things. It's not something you need unless you're a home lab user that's collecting two million of something, where you need to index two million, three million files. Usually the majority of people, and home lab people are like this as well, are just storing a bunch of video files that they have found somewhere, that they would like in their Emby server or their Plex server or their Jellyfin
server, whatever media, some type of, you know, data like that, and you would want something to speed up the process there. So it's low on my priorities list to do that. I've retweeted and shared 45Drives frequently. They have engineers that work with this all the time, because they are storage consultants at 45Drives, and they've done videos on Ceph. They are the Ceph experts. Man, they are probably the only public-facing people I know that put that many Ceph videos out there. They have a whole list of them. They even have a bunch of scripts for getting Ceph deployed, and they've made some UI systems for helping you get Ceph set up. They have like a wizard system that, as far as I know, no one else besides 45Drives has done, and it's all open. Someone in the YouTube realm, the username was Lensherm, basically commented on what I brought up about there being an iDRAC controller in a container that you could download. I didn't try it personally, but this individual recommends DomiStyle, D-O-M-I-S-T-Y-L-E, slash docker-idrac6. That's the container this individual uses for iDRAC. The idea is that the older iDRACs, before they could be upgraded to HTML5, required Java in the browser, which is a bad idea. You don't want to do that, but unfortunately, there's really no other way. You have to have Java for these old ones. So what do you do? Do you just have a dummy browser that has Java in it and never use that browser for anything else? I would say no, because a container is probably a better way to go here, because, well, it's containing something. It's containing something that you don't want on your main system, and I think that's a really good use case for it. Rather than trying to figure out whether you can update the iDRAC controller, or buy a new one, and is it licensed or not, no, just don't worry about all that. Just use a container that has that in there. Make sure the container is only running when you
actually need to use it, and make sure it's shut down otherwise so it doesn't become a loophole. I think soon those will age out and we won't have to worry about them anymore, but that's not today. Things take so long to age out, because, I mean, some Linux distributions still maintain 32-bit for some reason, and we've had 64-bit support since the Pentium 4. So, you know, here we are. We'll probably still have this for another 5 to 10 years. Coming up, here's a weird one, unfortunately. I don't think there's an answer for this: you're running NetApp in an AWS instance, but have you ever tried running TrueNAS in the cloud? I'm not aware of any TrueNAS cloud instances working, so no. And you have to be really careful about this, because I have used NAS solutions in the cloud. There are actually NAS solutions that exist just for this purpose and are only available there. That's not a recommendation; it's just letting you know that they exist. But a lot of cloud providers will throttle transfers, and I remember, when I was working with AWS more often, the issue was that if you had a file server in AWS, it's going to throttle unless you use their equivalent. They have an equivalent of a NAS solution you can use to not get throttled, because normally you could just pay a higher tier and not have to worry about that. But at the company I was working for, we were literally running into issues where we'd have a lift and shift, moving a client from on-premises to the cloud. We needed to transfer all of their data, and we had clients where that literally took a month, and there was nothing we could do with AWS. That wouldn't be a problem nowadays, but it was then. You just have to be really careful about that. Don't get me wrong, the cloud is a great solution most of the time, but storage is always more expensive in the cloud, every single time. Every client I've worked with that had high spending in IT, it was almost always storage in the cloud, and I've seen $20,000 bills on the low end of
this. So I would be very careful with running anything in the cloud on that level. Because of this discussion, I might do a dedicated video on my channel, because there have been more and more articles about people pulling things out of the cloud. It's weird to me when people insist that the cloud is where everything's going. And I know where you work, so basically what your answer to this is. I'm like, oh, you serve small business clients. When you talk to people at the enterprise level, it's a big mixed bag. We just helped a company build out; they spent half a million dollars, and that's just the server hardware they bought, pulling things out of the cloud to put in their new location. That's common in the enterprise market. The quote we were working on before The Home Lab Show, and why I was actually late, was reviewing another large quote for another in-house data center build. Well, mini data center, small private cloud, call it what you want. They've got a $200,000 budget to buy some servers, they've got a nice building, and they evaluated the cloud options and said, well, this is going to be a big capital expenditure now, but the cloud bill, like Jay said, was $20,000 a month. So the ROI, with systems that last five years, the ROI is less than a year. You're like, oh cool, we got that $200,000 back. I mean, for that $20,000 you could probably buy several really good NAS units. But I think what it all comes down to, and this is just my personal opinion, coming from someone who's been in the industry like two decades: if anyone is all in on anything, that's the wrong mindset, every time, 100% of the time. I've heard people say everything needs to be a container, we need to put everything in that one solution, and before that, everything needs to be a VM. Nothing needs to be any one thing, I want to be clear on that. The most talented IT people and system administrators out there treat every available solution as a potential tool, and they match the tool to the use case,
to the business, to the need. Maybe a container is the best fit, say they have an old iDRAC controller or something, or maybe VMs are better, maybe physical is better. I mean, it always depends, and it's often a mix of all of that, because I really don't feel we will ever reach a situation where any one thing fits 100%. On that mindset, I'm not calling anyone out, I'm just saying please reconsider and think about it, because again, the most talented people understand: match the tool to the need, and everything else is going to be a lot better. Trust me. Yeah, there's not a one-size-fits-all for IT. Nope. You know, I'll use an example of one of our clients. We have a carpet store. They have, I don't know, 10 employees. Would I recommend an on-premises mail server to them? No, that makes no sense at all. Would I recommend putting a small data center in for them? No, that makes no sense at all. Hosting their stuff in the cloud, for that particular client, makes the most sense. It's practical, it's cheap. And you have to really consider, and ServeTheHome, great place by the way, great forums, especially for the home labbers here. One of the things Patrick at ServeTheHome did that I really liked was pricing in a good technician wage, someone who's competent at servicing these servers, to go into a data center. So a technician, and he's in the Bay Area, so like $150,000 a year. He even included pricing for drive time, for service outages, et cetera, et cetera, to calculate whether or not it beat out some cloud-hosted service. And I like it, because the tools he gives you by doing that calculation are the tools that you need. You can't just say, well, it's cheaper if I have it in my office over here. Yeah, but now you don't have redundancy, and how long is the life cycle on that? Do you know how to fix it if it breaks? So there are all these different extra things. We're talking home labs here, but whether you should host it or
not: if it's in a home lab, always host it, because that's how you learn. That's how you get to be that technician that goes out there to the data center. Also ask yourself the three questions: is it fun, is it educational, is it maintainable? Yes, those are all really important ones. We're close to the end, and Jay, I know you have a hard stop, right? So yeah, do we have any more questions, or... I think we'll wrap it up on that note, and we'll do more. So, feedback at The Home Lab Show. We're going to try to remember to say it all the time. We like hearing from you, and we love answering all your questions. We're going to try to do these, and if there are enough questions, we'll do it twice a month. It just kind of depends on how many questions come in. We always encourage and like to talk to all the people and help you in your home lab journey. So thank all of you for joining us for this episode, and we look forward to hearing from you.