Welcome to the Homelab Show, episode 90: Netdata, plus a few software project updates we'll be talking about today. How are you doing, Jay? I'm doing great. How are you? Man, I'm excited to talk about this one because it forced me to look. I've been using Netdata for a while, but it forced me to look at some of the extra things that maybe I don't use in Netdata, so we'll be diving into some of those topics today. And if you're a homelab user, go use Netdata. This is an easy no-brainer if you're running Linux stuff. Sorry, for those of you who were hoping otherwise, the one thing people complain about is: does it support Windows? And the answer is no. It'd be cool if it did, but that's a very, very different animal. For monitoring, they focus on the Linux and BSD world. So yes, our friends who like BSD can celebrate; it supports that as well. Yep, we're going to talk about it. First, though, we like to do an ad read, and it's a long-time sponsor with a new name. Linode is what they've been called, and they're still Linode in some places, but they're rebranding to Akamai Cloud, so we want people to see the connection there. We've continued on with them being the sponsor of the show, the Akamai Cloud, and if you occasionally catch me saying Linode, I will try to correct myself, but don't worry, it redirects to the same place. It's a great place to host a lot of these projects, and if you want to monitor those projects, you can use Netdata on the projects you host there in a monitoring role. They've been the sponsor since the beginning. A lot of the projects we talk about may be well suited to living outside your environment, maybe they need to be external-facing for whatever reason, and Akamai Cloud is a great place to host those projects. They have a cool app store, so you can get a lot of them up and running really easily.
We thank them for being a sponsor of the show. We have an offer code to get you started down in the links below, and yes, our old offer code works in case you click the old one; they have everything set up to redirect. End of ad read, personal opinion time: when a big company buys a little company, we worry, so we've been cautiously watching, but so far they've been good. For anyone wondering, we're keeping an eye on it for you, because we have questions too. And since I sometimes record two months ahead, I'm going to be saying Linode in sponsored ads for the foreseeable future, even though the name changed, which is a little awkward. But they redirect, like you said; it's a transition period, so it is what it is. Yeah, we've been in close contact with them and talking to them about it, and we haven't really run into any issues. We don't have any reservations about using it and continuing to bring you the Homelab Show right from that hosted instance, because the hosted instances haven't changed. If you downloaded this, or went to the homelab.show, that's still the same server; the name changed, but actually I think only the IP address changed, unless Jay moved it. Well, we did change it recently, but it was not in regard to Akamai, Linode, or anything like that. I just noticed that I had the instance so over-specced, so far beyond the usage pattern, that it was just a waste of money. I think we were using maybe 10% of the instance; I gave it a ridiculous instance type. So I created the new one just to dial it down a little bit, because there's no point in paying for something no one's using all of. I just lowered it, and no one's noticed a difference, so I think it all worked out.
Yeah, and that's something you can use Netdata for, so we'll cover that later. Oh yeah, absolutely. That is a very good point. And I'll be talking about some real-world troubleshooting with it as well, because that's really what this is about. Want me to start with the first software update? Well, first of all, I just wanted to get a quick mention out of the way: I went to SCALE last year, and I won't be there this year, unfortunately; I wasn't able to make it, so I just wanted to mention that. I know I probably could have mentioned this sooner, but working on OpenStack was a big but fun project that kept me busy. Anyway, if anyone is in the Pasadena, California area, it's tomorrow. Surprise. So if you can make it out there, that's great. Unfortunately, I will not be there, but there are going to be a lot of really cool people there, because I've had a lot of really cool people ask me if I'm going to be there, because they are going to be there. So trust me when I tell you, there will be some really amazing people at that event. If you're close enough, or have a means to get there in less than 24 hours, go on ahead; you will not regret it. And I'll mention our last episode, because I think that's where the SCALE talk first came up: episode 89, with Rocky Linux. Check it out, it's a great interview. We learned a lot about the Rocky Linux project, how it formed, how it's put together. There's a lot that goes into maintaining a distro, and I thought that was a really great conversation. A little privilege me and Jay have is the deeper conversation that came after, because we chatted with him a little bit longer after the show.
Just a genuinely nice person. He talked about a lot of history stuff; maybe one day we'll do a follow-up episode with some deep dives into historical things, because that might be fun. Outside the scope of Rocky, he's got a long history in tech and in community, so there are definitely some fun topics there. All right, so now we wanted to get into some software updates. We don't have a brand new format chiseled in stone at this point, but we figured it'd be a good idea to mention updates, new versions, or anything noteworthy that comes across. We have a number of those to mention, then some feedback, and then we get to our main topic. I think you had more updates than I had for today's episode, three out of the four or something like that, so if you wanted to start. Yeah. So, TrueNAS CORE 13.0-U4 was released, and just before that was TrueNAS SCALE 22.12.1. I have more in-depth videos where I dive deeper into them, and since we've covered both TrueNAS SCALE and CORE, in those videos I also dive deeper into my thoughts. The too-long-didn't-watch is this: if you want to use apps on your NAS, TrueNAS SCALE has been making a lot of progress to make that path easier and simpler, and I've done more tutorials diving into how you can get apps working better. If you're looking for NAS functionality only, with TrueNAS CORE, it is still a solid product. Of course, you're limited to the hardware supported by FreeBSD, which, don't worry, is pretty broad support, but I'll bring that up for anyone who insists on using Realtek.
I just tell people not to use it, because Realtek and Broadcom network cards are a problem even in the Linux world, and an even bigger problem in the FreeBSD world. That really hasn't changed. But in terms of a stable NAS solution, TrueNAS CORE is not going anywhere. It's just very focused on being a NAS and nothing more, which means the updates are more stable. We do a lot of enterprise consulting, so we're seeing a lot of it there, and we know TrueNAS is incredibly well used in the homelab, so it's still a great solution, and those little internal updates help. Hopefully that answers the question. Oh, and one final thing: yes, I did do a review of the performance between the two of them. That video is out there, and there's a link in a forum post I have that really gives you the details on how they perform. They're really close in performance, but there are some nuances for certain workloads, and I'll leave those to you; I won't dive into every one of those details here, because I have an entire write-up and a video to go with it. Cool. And for those of you who haven't made the plunge to move over to XCP-ng, the big announcement, something they've been working on that I think is really cool, is that they have really enhanced the VMware migration. They're making it easier and easier to get off of VMware.
I mean, they know people are migrating off of it, and Jay's going to talk about something you can use to help migrate off of it in a moment, but they actually have natively built-in tools that are really being ramped up to make that transition easier, to get off of that old VMware stuff and move on to a nice, modern, fast system without crazy licensing. Especially once the Broadcom deal goes through, which is very likely; Broadcom is buying VMware, for those who don't know, and the plan has been pretty simple: buy them and raise the prices. So if you want to get out before the license renewals are due, hey, there are some options; come on over to the open source world of XCP-ng. Yeah, that would be a lot of savings. Mm-hmm. And Rescuezilla, right, Jay? Yeah, Rescuezilla 2.4.2. This is one of those tools that I think everyone should have, either on a flash drive, on a Ventoy, on a PXE boot server, on a carrier pigeon that's able to boot your computers, whatever it is you do, I don't judge. It's just something to have on hand, because it has so many really cool tools built into it. It's easy to do file recovery, image a hard drive, recover a hard drive; there are so many different things. I mean, normally I use Clonezilla for imaging, but you could use Rescuezilla for that too. It's basically a Swiss army knife, and I thought it was noteworthy to bring up. There are some new features; I don't think there's anything here that's going to make anyone jump for joy, but they are welcome improvements. For example, they have an alternate version based on Ubuntu 22.10. It's still based on Ubuntu 22.04 normally, but there's an alternate version based on 22.10 for those of you who have bleeding-edge hardware and need a kernel that supports all that. So if that's you, they have that version for you.
They also reintroduced the 32-bit version, for some reason. I didn't even read why, because even computers people don't think are able to support a 64-bit operating system usually are; that's been the case since, I think, the Pentium 4 HT, if I remember correctly, on the Intel side. Yeah, it's been a long time. It's been a long time, but they reintroduced the 32-bit version for those of you who still need it. They warned that there are some issues with Partclone, on account of it not being the same version that would be in the proper release, for reasons that are too long to go over in this podcast unless we made it all about Rescuezilla, which we're not doing today. They also have some improvements built in for working with encrypted drives. It has PCManFM as the file manager. I haven't tested that yet, but apparently it's supposed to work better with encrypted drives if you need to do file recovery. I didn't really see details on how that works, but my assumption is that, just like it works everywhere else, you click on the drive, like Nautilus on GNOME, it gives you a password prompt, you type in your password, and it unlocks the drive. So apparently you could use that with the PCManFM file manager. Either way, the main reason I'm mentioning this is just to, well, mention it, because it's one of those things I think everybody should have at their disposal. Hopefully you won't need a Rescuezilla image because everything will just go great, but we're homelab people; I think we know by now that chaos theory often comes into play while we're building things. Yeah. And I'll mention, I've talked about using Clonezilla to migrate between hypervisors. It's still an easy way to do it, and Rescuezilla is the same answer.
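As a rough illustration of the receiving end of that cloning workflow, here is a sketch of bringing a cloned raw disk image into a new Proxmox VM. The VM ID, name, image path, and storage pool are made up for the example; the `qm` subcommands themselves are Proxmox's standard CLI. Treat it as a starting point, not the one true migration path.

```sh
# Hypothetical VM ID, image path, and storage pool; adjust for your setup.

# 1. Create an empty VM to receive the cloned machine:
qm create 120 --name migrated-box --memory 4096 --net0 virtio,bridge=vmbr0

# 2. Import the raw disk image that Clonezilla/Rescuezilla produced:
qm importdisk 120 /mnt/backups/migrated-box.raw local-lvm

# 3. Attach the imported disk and make it the boot device:
qm set 120 --scsi0 local-lvm:vm-120-disk-0 --boot order=scsi0
```

Clonezilla can also restore directly onto a blank virtual disk the new VM already owns, which skips the import step; this is just the image-file variant.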
If you're trying to get off of whatever hypervisor, even if you started with something basic like VirtualBox, where there's not really a direct migration path onto Proxmox or XCP-ng or something like that, spinning up something like Rescuezilla or Clonezilla and literally cloning the machine and bringing it over is a pretty viable way to do it. And that's not just for Linux, by the way; you can do this with some Windows systems as well. Microsoft actually has their own P2V converter, which I've done a video on before, where you can take the physical machine and virtualize it, and that target does not have to be Hyper-V. There are ways to virtualize a Windows instance for other hypervisors; it creates the format for you, which is kind of cool for doing some of the imports. So there are a lot of different ways to do it, but definitely keep Rescuezilla in your back pocket, on your Ventoy. Did you release that Ventoy video yet, or did you do one? It's scheduled for Friday. Okay, it's coming Friday. It will be scheduled for 10 a.m. on Friday. Yes. So do watch that video when it comes out this coming Friday, so about a week from now. Today's March 8th. Wait, no, it's two days, I'm sorry, not a week. Yeah, you're probably just excited because you have a cameo in it. I do. Yeah, that's going to be a fun one; I had a lot of fun with that one. That's why I wasn't sure; I didn't think it got released just yet, but I knew it was soon because we talked about it. So it's coming. All right, feedback. Yeah, we have several pieces of feedback. I have a few I was going to go over, two of which are from Mike; they're really good questions. I'm trying to summarize the first one, though. Basically, this is in regards to the Linux distro episode we did, where we talked about the different kinds.
There's still a little bit of confusion about whether one distro does X better than another, whether there's a reason to go with the Red Hat family versus Debian. And the thing is, there are so many ways to answer this. Part of it is, if you're employed in IT and your company is all in on a distribution, sometimes you can make the argument that you could standardize on that at home too, because then there's a greater chance the things you learn with your homelab translate to work. I'm not saying you should do that, but some people like to. Other people might want to be completely different from work, because they're on those systems all day and might want something different at home. So that's a bit of a preference. I don't feel like there's any way to say one distro is better than another, because there are so many trade-offs, and when it comes to a distribution being better than another at a very specific thing, there are tens of thousands of different things, so which one should we double down on? I think the best thing to do is look at the community first and just see how the community is, because if you're choosing a Linux distribution, the community is really important. If you go to the community and everyone is toxic, yelling and flaming at people, it's probably not a good distribution to go with if everyone's just being mean. If the community is practically nonexistent, with two or three posts on the message boards a month, that would mean getting support might be a little difficult. Now, I don't think getting support for any of the distributions we mentioned is going to be hard, because they're all very popular, but I think those are very important things to look into. The work thing might be a factor for you, or it may not be. Other than that, just take a look at the health of the distribution and the people driving it.
I mean, is it a development team of two people, or is there a bunch of people working on it? And just try them out; there's a reason why we have live CDs, although for servers that doesn't really factor in as much. My opinion is just try them all, and I think you'll know the one that's for you when you get to it. And from there, any other question about this topic should probably be more targeted. If you're doing a very specific thing, and that thing is mentioned, we could probably talk about it and come up with a more exact answer. But all in all, just try them out, check out the communities, and make sure it checks all the boxes. From that point, you probably can't go wrong. Yep. So the second one, also from Mike, let's see, let me get caught up on this one, was asking about my Plex server. This is part of the larger topic where I like to have disposable containers and VMs that don't really contain anything stateful, unless I can't do without it; if something could be regenerated, I'm not really going to care all that much about it. In the case of Plex, just for those of you who don't know, I run it on Proxmox, and anyone who hasn't listened to us before is probably like, you're running Plex on Proxmox? Oh my God. Yeah, I am, actually. But the thing is, I don't have a VM with three terabytes of virtual disk. I have a virtual machine with something like a 16-gigabyte disk, and then I use autofs to make sure the file share is available. All the media files live on the file server; that's where Plex gets its data from, but it's mounted with autofs. And the beauty of that is Plex never knows the share was ever unmounted, because it mounts only when it's needed. So when Plex goes to scan, it automatically mounts, and Plex will never know that it wasn't there.
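For anyone who wants to try that pattern, here is a minimal sketch of the autofs side. The server name, export path, and timeout are hypothetical, since the episode doesn't spell them out; the two-file layout (a master map entry plus a map file) is standard autofs.

```
# /etc/auto.master.d/media.autofs -- register an on-demand map under /mnt
/mnt  /etc/auto.media  --timeout=300

# /etc/auto.media -- mount the NAS export only when something touches /mnt/media
media  -fstype=nfs4,ro  nas.example.lan:/srv/tank/media
```

After restarting the autofs service, the share appears at /mnt/media on first access and unmounts itself once it's been idle past the timeout, which is why the VM never sees it as missing.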
But his question was more about the database, because I've said it's stateless for the most part, and I forgot about the database when I mentioned that. And it's because I don't care. I figure, if the database is rebuilding because I had to rebuild a VM, and people can't watch TV because the CPU is pegged, we have books. I'm also very patient, so sometimes with the homelab, if something goes down, I understand other people in the house might really miss it, but it is what it is. We don't expect to have any failures, and I don't think I've ever seen it crash on my end, but if it did, if it had to regenerate the database, I don't really think I'd care all that much. I would just find something else to do. Maybe go outside; that's always good, there's a real world out there. Yes. I'm being silly, but the honest answer is, sometimes you have to ask yourself, does it matter? And the answer is probably yes, it does, but how much does it matter? How much work is going to go into architecting something versus just fixing it? If you don't really intend on fixing it all that often, and if you have to fix it every month there's another problem, and it's definitely not Plex. So I guess the short answer is yes, that's just something I feel can be a consequence of the build, and it's not something that bothers me, for the database to maybe possibly go away and need to be regenerated. There was one more question I didn't see through on here, but is that the end of that question? Yeah. Okay, perfect. It was about static IP management. Did you see that one? I have it pulled up. I did not see it, no. Okay, really simple: Robin wants to know about static IPs. Do we use them? Do we use DHCP? How do we handle DNS? And there's one more part of the question that I'll answer in a moment. Me and Jay probably use more IPs than DNS names.
But when I like something and I'm going to keep it, when it's not ephemeral, it does get a DNS entry. Not everything I run gets a DNS entry, because I have a bunch of separate lab pools that just grab whatever DHCP address they get. But when something goes into production, something I like that I want to keep running, it gets a static IP all the way, and usually, 99% of the time, it's through a DHCP reservation, with the firewall being the one handing it out. The exception: you always set your firewall itself to be static, and maybe if you have a couple of critical servers with dependencies on them, you'll set them static too, but you'll also add a reservation. And the reason I do that: take something critical like my UniFi controller, which is a commercial tool we use for our businesses. It has a static IP, so it boots up and doesn't need DHCP, but there's also a matching reservation, which came in really handy when I decided to rebuild that virtual machine, because in the rebuild and upgrade process I just put the same MAC address on it. So even before I set the static IP, it automatically got that IP, because I'm always using the same MAC address on it. So I have the reservations in there, but then some servers may be better off with, or need, a static IP. TrueNAS actually does well with static IPs, because when you're setting up a couple of different networks under a couple of VLANs, you're going to have to start statically assigning those, but I'll still put the reservation in there. That way, all addresses are known. And there's an option in pfSense's DHCP where you can say, only give out IPs to things that are statically mapped in here. That's another kind of security; that way, not just anything gets an IP address.
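pfSense does all of this through its GUI, but the same two ideas translate to most DHCP servers. As a sketch, here is what MAC-based reservations plus the "only serve known clients" restriction look like in dnsmasq; the MAC addresses, hostnames, and subnet are made up for the example.

```
# dnsmasq.conf -- hand these MACs the same IP every time (DHCP reservations)
dhcp-host=aa:bb:cc:dd:ee:01,unifi-controller,192.168.10.5
dhcp-host=aa:bb:cc:dd:ee:02,truenas,192.168.10.6

# Ignore DHCP requests from machines without a reservation,
# roughly the equivalent of pfSense's "deny unknown clients" option
dhcp-ignore=tag:!known
```

With `dhcp-ignore=tag:!known` in place, a machine that isn't listed simply never gets a lease, so the reservation list doubles as an allow list.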
And as far as DNS, yeah, you just have to create the entries for it, and there's a way to map your hostnames in pfSense: you can tell it to register DHCP leases with the DNS server. The next part of the question was, can I use a Pi-hole for this? Yes, you can; I think Pi-hole even has a DHCP server. But I don't use Pi-hole for that. I keep everything in one place so I have one source of truth, and it's just fewer things to manage by having it all registered and mapped in my DNS. We use pfBlockerNG, and that's all in pfSense, so it's one source of truth for all things DNS. And the one last little follow-up was, can I detect Tailscale running on a node in my enterprise network? Yeah, you can actually just look for the IPs. Tailscale traffic is WireGuard traffic, so if you see WireGuard traffic, you know someone's using a WireGuard tunnel, and if those nodes are also talking to the block of Tailscale IP addresses, you can detect it. So that is possible. What are your thoughts on the static IP question? Is that pretty much how you do yours, Jay? Well, yeah, for the most part, but I feel like I'm a little bit more, well, maybe this isn't true, but I feel like I'm more absolute about this. My policy is: static IPs, hard no, never. Not considered, not allowed. I hate static IPs. The only exceptions are, if my ISP gives me a static IP, fine, and cloud instances have static IPs, but that's managed by the cloud provider. When it comes to my use case, if I'm spinning up something temporarily just to play around with it, I'd never give it a reservation, a static IP, or anything, because I'm just testing it out and I'm going to end up deleting it anyway. The lease is just going to renew with whatever IP it gets, and it's just not going to be an issue for me. So I don't really think there's ever a reason to do that in development.
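A quick aside to sketch that Tailscale-spotting idea: Tailscale assigns node addresses out of the CGNAT block 100.64.0.0/10 (100.64.0.0 through 100.127.255.255), so a crude first pass over logged source or destination addresses is just arithmetic on the first two octets. The function name and sample addresses below are our own illustration, not a Tailscale tool.

```shell
# Tailscale hands nodes IPs from the CGNAT range 100.64.0.0/10.
# Check whether an IPv4 address falls inside that range.
in_tailscale_range() {
    first=${1%%.*}         # first octet
    rest=${1#*.}
    second=${rest%%.*}     # second octet
    [ "$first" -eq 100 ] && [ "$second" -ge 64 ] && [ "$second" -le 127 ]
}

in_tailscale_range 100.101.5.9  && echo "looks like Tailscale space"
in_tailscale_range 192.168.1.10 || echo "ordinary RFC 1918 address"
```

In practice you'd pair this with watching for WireGuard flows, since CGNAT addresses can also show up on some carrier networks.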
When it comes to production, it's always static leases. Not static IPs, static leases. I use static leases. Yeah, my term for that is DHCP reservation; is static lease the right term? Well, I don't know about right, it's more of a preference in my opinion; both are correct. Static IP is what I was saying I don't do, and a static reservation, that is, a static lease, is basically another term for, gosh, I'm confusing myself now. Yeah, a DHCP reservation is the same thing as a static lease. There we go. Okay. So there are a lot of benefits to using a static lease over a static IP. For example, let's say you have to do some work on your file server, so maybe you boot it up with a live CD or something. It's going to get the same IP address always, even if the operating system changed, even if you wiped the hard drive, because your DHCP server always sees the MAC address and hands it the same IP. Another benefit is that your DHCP server becomes, as you mentioned, the single source of truth for which device has which IP address; pfSense, for example, does a really great job of this. So I just don't feel like there's any reason for static IPs anymore. At one point, when I first started, I was putting static IPs on everything, and then I would maintain a spreadsheet just to keep track of which ones I'd used, and I would use Angry IP Scanner, which is a hilarious name for a tool, to scan the network and find out which IPs were available whenever I forgot to update the spreadsheet. It's just a lot of work, and there's no gain, no value whatsoever. Use static leases, and I will double down on that. I just don't even know why people use static IPs anymore. Well, TrueNAS is the exception: if you put a couple of legs of TrueNAS on a few different VLANs, DHCP is more of a headache to deal with. Right.
But that's kind of the exception. Well, okay, I'm an exception; I have a bunch of TrueNAS servers. Normally, a TrueNAS server just has one IP address, and mine are actually all set via pfSense, but when they have multiple VLANs, they have a static IP in each VLAN they're attached to. And even if they didn't, using DHCP is not really going to be an issue for secondary networks that just need to exist on that VLAN, or just have a connection to that VLAN, if it's not something you regularly interact with. But like you mentioned, you also have the setting in pfSense, and probably other firewalls too, to register every DHCP lease in DNS, so I never have an issue where I have to add something to DNS. It all just works, and in my opinion it's the most efficient way to handle it. Now, when it comes to Pi-hole: the way I implement Pi-hole, and I don't really feel like I've had a use case for it for the most part, because I would just have an ad blocker in the browser, but what really gets annoying is trying to install ad blockers on phones. Yes, you can do it, and I know there are some out there, but that's where Pi-hole really fits for me: if I'm on a mobile device, I really do prefer to have something at the firewall level, or close to it, that blocks things for me. So what I do is set Pi-hole as the upstream for outside DNS. Basically, pfSense will try to resolve anything local first, and if something doesn't exist here locally, the query goes outside of my network through Pi-hole, which has the outside DNS server set in it. So it goes through Pi-hole and then out. That works better for me. I know it might not work for everyone, but I do feel it's a good solution for those of you who don't mind that.
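The chain just described, with hypothetical addresses filled in, looks something like this:

```
LAN client
  └─> pfSense resolver          (answers local names itself)
        └─> Pi-hole             e.g. 192.168.10.53 (drops ad/tracker domains)
              └─> upstream DNS  e.g. 9.9.9.9 (configured inside Pi-hole)
```

Local lookups never leave pfSense; only the queries that miss locally pass through the Pi-hole block lists on their way out.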
So that way, you don't have to maintain anything in Pi-hole, no DHCP or DNS entries or anything other than the block lists. Do all of that through your firewall, then have your firewall hand queries off to Pi-hole, which hands them off to the upstream server. I think that's a good way to do it for those of you who think it might work for you, just a little tip. There are other ways to implement it as well, but that's the way that works for me. And because I've seen this discussion going on in chat: if we're dealing with a business environment and you have Active Directory, Active Directory is going to be your DNS and DHCP for all those AD systems. A common setup is, maybe you have pfSense, and the section of the network where all the Windows computers and the users are gets its DHCP from AD, but other segments, like the guest network, or maybe a separate phone network or some other devices, pfSense would handle the DHCP for those segments. Right. And in our chat room, Elminster, I hope I'm not pronouncing that wrong, mentioned that they use a static IP for the router and everything else is a reservation. So yes, the one thing that stays static is going to be my firewall, but I don't really think about it, because the way I do it, the .1 of the subnet is always the gateway, the firewall, the next hop. For me it's almost like a different category in my brain, even though it's really not, because it's all networking, it's all TCP/IP. But after I set up the firewall, the router, whatever it is, anything after that is going to be a reservation. So basically the same thing on my end as well. Yep. All right, that was the only other question I had in there, so there's one more piece of feedback I wanted to grab. This one has to do with databases.
And I'm actually surprised this hasn't come up before, because it comes up constantly in consulting and working with companies. It's the age-old debate: should you have a central database server, meaning one server that acts as the database server for all your other servers, which all connect to it for their database needs? Or do you have a database built into each VM? Now, I can make equal arguments on both sides, so I don't have a strong position. I mean, I do have a preference, but it is just my preference, and I'm not going to tell anyone to go my direction on this unless you happen to like it. If you have a central database server, you also have a single point of failure, so if that server goes down, you have to be aware that it's going to be a problem for all of your apps. Now, you can go crazy with this and have a backup server, maybe even a cluster of database servers if you want to, which I've never really wanted to do, because it's just not something I find fun. Being a database admin is a very important job, especially because I can't really do it; I mean, I could, but it's just not what I find fun. So usually I go the easy direction, as long as I'm also paying attention to security. I like to have a built-in database, and one of the reasons I like to have it built into each VM is that when I'm recovering the VM, I'm recovering it and its database at the same time. I don't have to recover two different things; I don't have to recover a VM and then go into the database server and try to fix anything that might have happened there. Everything is self-contained. But that also means the VM has a lot more running in it: more CPU cycles, more memory, more TCP/IP connections, those kinds of things we need to think about.
So what you go with depends on what's more important to you. If you don't mind having that single point of failure — and maybe, if you're really into database stuff, you want to build a cluster and have redundancy there, which would be really cool — then absolutely go with a central database server. But if you favor being able to restore something quickly, in one shot, with each app being a self-contained instance like I prefer, then you'll go that direction. I don't feel there's a right or wrong answer for this; it's just whatever you like personally. Yeah, there are a lot of different pros and cons to each type of architecture, depending on your design needs. Absolutely. And that's all the feedback I had. All right, we're ready to jump into Netdata. I am. I finally had a chance to try it out. It's crazy that I'm about to say this — I don't think anyone else will think it's a big deal — but I've had so many projects I've been working on, and I'm finally at a place where I'm comfortable. I'm not overwhelmed like I normally am, going "oh my God, there's too many things to do." I feel like I'm getting back to a status quo. So I'm finally able to go back to my list of things to check out, which is extremely long — as I do all these videos, I write down all the things I want to check out, Netdata being one. I was going to look into Grafana; I've used Grafana, and I've decided it's not really for me. Personally, Netdata seems to check that box so far. So I decided to check it out. I added it to my Ansible config, and it was just so much fun watching an empty Netdata Cloud account with nothing in it — I pushed my change with Ansible, and then Netdata shows everything basically registering. But wait a minute — what is Netdata? And what the heck am I talking about? I think that's probably a better place to start.
The way I like to describe Netdata is that it's a monitoring solution. At first, I thought it was just a graphing solution — pretty graphs of your CPU, memory, and all that — which it does do, and I thought maybe that was it. But it's more than that, because you can look at trends, including negative trends. Maybe you have a hard drive that's not full, but it looks like it will be before too long if it keeps going the way it's going — you can see things like that, which I think is really important. When I installed it, I realized it does more than that, because I started getting alerts about things I maybe didn't think to check. And speaking of Pi-hole, funny enough, it told me that my block list was out of date. I didn't even think to put anything in my Nagios to check for that; it just didn't occur to me. So I'm like, oh, that's interesting — I didn't know it did that. But I'm really happy to know about this problem so I can go and fix it. That was really cool, and I was not expecting it at all. I thought what I would get was just graphs and nothing more. Apparently it also has the ability — and this might be just with a paid account — to send alert payloads to servers as well. I feel like I'm just scratching the surface of it at this point, but so far I really like it. So I suggested it as a topic. Yeah, I've been using it for a while, and I've done videos on it in the past. It's really worth revisiting because it's an incredibly actively developed project with, equally, incredibly actively developed use cases. And I say it like that because it's not just about the tool — the tool is fun in terms of how easy it is to install.
Well, that part is the part I had to talk about the least when I did my tutorial on it — it was basically copy-paste. You can skip the one-line installer if you want; they give you instructions you can follow by hand, or you can just copy and paste their kickstart script and it will grab Netdata — a fully open source project — and set it up on your machine. And then you're going, "but Tom, I'm running a Redis server and a MongoDB and I have all these things — how much configuration time do I have to spend on all that?" And actually the answer is zero. Their installer figures it out. If you're running a Pi-hole, as Jay just pointed out — Jay did not configure Netdata to look for a Pi-hole. Netdata discovers what you're running on your system and starts activating charts related to that. It goes even further: it understands even some of the temperature sensors. We've actually used this to monitor temperature under load, so you're correlating CPU load with the thermals of the computer. There are all kinds of different sensors it can get its hands into, so the more it has available to discover, the better this gets. And it's extremely automated — you don't have to be a highly skilled user to be able to do this. It opens up a port on your system locally, and you can use it just that way. We mentioned Netdata Cloud being able to consolidate data; we'll get there. But this does not require any type of cloud tie-in or payment or anything. It's completely open source and runs on each one of your individual machines. There are a couple of different ways to install it depending on your distribution, but it supports FreeBSD, your Red Hat distributions — CentOS, Rocky Linux, whichever one you're using — and of course all the Debian and Ubuntu installations. The install goes pretty well for any of those; I really didn't run into any issues. I've even used this on some of my XCP-ng systems.
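For reference, the one-line kickstart install being described looked like this at the time of writing — always grab the current command from Netdata's own site rather than trusting a transcript, and review the script before running it if you prefer:

```shell
# Download Netdata's kickstart script, then run it; it detects your
# distribution and installs or updates the agent accordingly.
wget -O /tmp/netdata-kickstart.sh https://get.netdata.cloud/kickstart.sh
sh /tmp/netdata-kickstart.sh
```

Once it finishes, the local dashboard is served on port 19999 of that machine.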
This is part of something they're starting to integrate into XCP-ng — I think there's been some communication between the teams to take that even further — because it understands virtual machines. It understands if you're running Kubernetes, it understands Docker, it understands ZFS. It can understand all of those and then give you insights into each of them in terms of real-time monitoring and correlation data. And the correlation is really cool because the UI is just beautiful; you don't have to spend any time customizing it. On the other hand, it doesn't give you — compared to, say, Grafana, Prometheus, and some of the other tools out there — the customization you might be looking for. This is a good base for monitoring, and "base" would probably be an understatement: it's an incredibly advanced monitoring tool that comes very preconfigured, and the visuals are all laid out vertically, so you can really look at everything at once. I can see, okay, the CPU is loaded — all right, but what is the file system doing? How many TCP connections are open? In the case of a hypervisor, which of these VMs is also peaking at the same time my CPU is peaking? They give you all this data in one column format that makes it easy to use. I mentioned Prometheus and Grafana because Netdata actually has all kinds of hooks, so you can pull that data out and tie it into those tools. This is not something that lives in a silo; it's interoperable with other things. It also has some options for tuning in terms of anomaly detection, so you can say: here's my baseline for a system, it runs this load over this time — let me know when something changes.
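As one example of those hooks: a Netdata agent exposes its metrics in Prometheus exposition format at `/api/v1/allmetrics`, so a Prometheus scrape job can pull from it directly. A sketch, with a made-up target address:

```yaml
# prometheus.yml fragment — scrape a Netdata agent's metrics endpoint
scrape_configs:
  - job_name: 'netdata'
    metrics_path: /api/v1/allmetrics
    params:
      format: [prometheus]      # ask Netdata for Prometheus format
    static_configs:
      - targets: ['192.168.10.50:19999']   # hypothetical Netdata node
```

From there the data can be graphed in Grafana like any other Prometheus source, alongside or instead of Netdata's own dashboard.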
This is actually kind of interesting because, for example — and there are ways around this — it was flagging when I run the backup on my Graylog server. I shut down Elasticsearch, and starting Elasticsearch back up puts it under load and it gets a flood of UDP, so Netdata says, hey, that's an anomaly — you suddenly have more UDP than before. So there are times when it will alert you for that. It's also interesting to tune it, because it can start sending you alerts, or do anomaly detection, based on some type of spike in traffic coming into a system. So you can start deciding what you want to monitor. Now, back to what the cloud does. This is how they make money, and I like an open source project that has a business plan, because then you're not going, "I love this project — oh, everyone had to go get a job and they didn't have time to work on it anymore." The team is behind that cloud — Netdata Cloud, that is, not Nextcloud. Netdata Cloud is the way you can tie all of your nodes into a single dashboard, and this is where things get really cool. If you want to know what's going on in your network, you pull the nodes all together: instead of looking at them individually, you have them consolidated, and you can look across your nodes in a different type of view. I can see TCP traffic over here correlating with CPU load over there, I can see my database server over here, and you can start to figure out what's being overused or underused and draw correlations between all the nodes together through their cloud. They offer this for free, but they have some upsells if you want better data retention and team management, and it's pretty inexpensive — I found their pricing actually really reasonable. And by the way, the free tier works great even for homelab people.
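Alerts like the ones being described live in Netdata's health configuration. As a sketch of the syntax — this mirrors the style of the stock CPU alarms shipped in `health.d/`, edited with the bundled `edit-config` helper; the alarm name and thresholds here are arbitrary examples, not shipped defaults:

```conf
# opened via: sudo /etc/netdata/edit-config health.d/cpu.conf
 template: cpu_10min_high
       on: system.cpu
   lookup: average -10m unaligned of user,system
    units: %
    every: 1m
     warn: $this > 75
     crit: $this > 90
     info: average CPU utilization over the last 10 minutes
```

Raising, lowering, or silencing thresholds this way is how you tune out known-noisy events like a nightly backup spike.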
But if you do want, say, team members so you can have a project meeting diving into correlation data, or you want more retention of all the data they record, then it's a few dollars a month based on how many nodes you have. They have pricing plans posted — I'm not going to dive into them here, but the pricing is public. You don't have to talk to a salesperson; you just click sign up and convert your account over. I think that's pretty cool, and having that business model means continued development. Now, use cases and how you actually use it. Once you start diving into how you use it — they have blog posts, and it had been a little while since I checked their blog, so I checked it before the show. How to monitor and troubleshoot SMART attributes (as in hard drive SMART attributes), how to troubleshoot memcached, the NTP daemon, MongoDB, NGINX Plus — these are all blog posts where they actually show the uses for these. They have blog posts, and sometimes videos as well, on how to troubleshoot things like Postfix, or a whole list of other things you may want to understand how to better troubleshoot. I actually used this the other day, and Jay and I had a discussion about it — I might make a video of it because it's a fun troubleshooting story. I was running into some challenges. Jay and I are both creators and we both use DaVinci Resolve, which means we deal with a lot of media, and TrueNAS is both of our go-to for storage. But there was some nuance we were running into, especially when I was using proxy media. What does that mean for homelab people? I was moving big files around, and I found some bottlenecks. How did I find those bottlenecks?
Whether it was my computer or the TrueNAS server — it turns out Netdata was incredibly helpful on my TrueNAS SCALE system for understanding where the bottleneck was and what the CPU was doing when I had these problems, because I could visualize it while moving files and go, "that's weird, it moves slow." Let's try changing these parameters. For that, you have to have clear metrics, because otherwise I don't know if an adjustment I made fixed something or was just an anomaly. So you find a repeatable test — we're going to move a 35 gig file back and forth between me and TrueNAS and see what the performance difference is — and it's very visual. I can look at the performance of moving it each way. I was oddly getting really slow read performance, but my write performance was not affected, so it was one of those puzzles that keeps getting deeper. But Netdata was really key in helping me understand it. It's just an easy tool to load and set up, and running it in Docker is how I run it on TrueNAS SCALE. You don't have to load it natively; it's available through their app catalog as a Docker app, so you just click it and install it, it's there in a few seconds, and away you go. Now, as far as their cloud goes — because I've seen someone ask, and this was a question that came up in the video comments — are there privacy concerns? That's up to you. You don't have to share anything with their cloud; it does not, by default, send the data anywhere. It just lives on your system. You install it, it lives on your system, and it's up to you whether you'd like to send that data to the cloud. That being said, there's nothing in there that's personally identifiable. That was a concern, because when I say it monitors IP, it does not give me a list of IP addresses. It logs performance metrics and connection metrics, but it's not a drill-down — it's not "hey, I got a connection from this computer or that computer." That is actually not in there.
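For anyone running it in Docker the way Tom describes, Netdata's documented `docker run` invocation has looked roughly like this — the exact mounts and flags have shifted between releases, so treat this as a sketch and check their current Docker docs before using it:

```shell
# Run the Netdata agent as a container with enough host visibility
# (proc, sys, passwd/group) to collect system-wide metrics.
docker run -d --name=netdata \
  -p 19999:19999 \
  -v netdataconfig:/etc/netdata \
  -v netdatalib:/var/lib/netdata \
  -v netdatacache:/var/cache/netdata \
  -v /etc/passwd:/host/etc/passwd:ro \
  -v /etc/group:/host/etc/group:ro \
  -v /proc:/host/proc:ro \
  -v /sys:/host/sys:ro \
  -v /etc/os-release:/host/etc/os-release:ro \
  --restart unless-stopped \
  --cap-add SYS_PTRACE \
  --security-opt apparmor=unconfined \
  netdata/netdata
```

On TrueNAS SCALE, the app catalog wraps an equivalent deployment for you, which is why the click-to-install route works so quickly.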
I really like that, because it means it's easy for me to do videos and never have to worry about sharing anything about my computers — outside of the name of the node — in terms of personally identifiable information. So for those of you wondering, "can I use it in a lab? Is there a security concern?" — no. Connecting it up to their cloud and aggregating the data is an optional move you don't have to make. So it's a really cool system for doing, I would say, real-time data analytics. If you're looking at a web server and asking how loaded it is — something Jay mentioned, like how under-loaded our server was for the specs on it — it's a cool tool to help make those determinations. Is this an anomaly? Is this traffic spike something I need to be concerned about? Is it sustaining high load over time? Is it hard drive load — do I have a lot of read/write going on? Is it a database that's loaded and there's some problem? How's my ZFS cache doing? That's something you can read from inside it — even though it runs in Docker, yes, it reads all the ZFS data out. And by the way, even if you're not using TrueNAS — if you're using ZFS in general, say an Ubuntu build with ZFS, that'll work too. You can still get all that data and all the statistics out of there. Yeah, you put it really well. That's pretty cool — how do I follow up on that? So yeah, obviously you've been using it longer than I have; I'm still kind of spinning the wheels. Now, I move really fast — when I dive into something, I dive in deep for a week or more until I can't stand it anymore, and then I have to make a video. So I have a feeling there's probably going to be a video on my channel about this, whenever I feel like I'm comfortable talking about it. But yeah, everything you said is totally spot on. It's absolutely all of that.
And I like the fact that you can run it locally on your machine and not expose it to the cloud at all — just access it at the port on the machine's IP address — or tie it into the cloud instance if you want to. You have that control, and I think that's a really great thing. The correlation and the information it provides — I feel like I'm just scratching the surface. I have to go find out why my block list didn't update last time, because now I'm aware that it must not have done so. And I'm sure there are other things — it's finding packet loss right now, and I don't know if that's a false positive or not, but it's definitely something I need to check into. I think I'm finding things I probably should have had Nagios checking for — but I built my own Nagios instance, and this one is just finding things to complain about without me having to put in the configuration. So I actually like that a lot. Yeah, something I'll mention — and actually, I just noticed this as I clicked onto their site: they have a demo there, so you can just look around. And I guess I was kind of wrong earlier — it looks like they do have some Windows options now, so I'll correct myself. That's something I didn't see before; I just saw it in their demo. I've always thought of it as a Linux tool, but check out their demo — they actually have a demo showing Active Directory connections. So that's something Tom is going to have to go revisit. Well, I think they're moving fast, because one of the things that really impressed me was that I went to Google something — I don't remember what it was now, either installing Netdata on TrueNAS as a plugin or reading TrueNAS information out of it; one of the two wasn't available at the time, I don't remember which.
And there, one of their developers was in the TrueNAS forums actually saying, hey, can we help make this happen? What can we do? How can we get in touch with you to work on this? I have a very special place in my heart for a project that will actively engage other communities and ask to work with them — to find out how they can implement their solution or work together on something. I feel like that's something a lot of projects out there just don't do, and the fact that they're doing it — it's not a required box to check, but it's very nice icing on the cake for a solution to do something like that. I think it sets an example: definitely engage with other people. That's what people forget to do, but we have to engage with others if we want to work together. You can't work together if you're in a silo. So why not just reach out? And that's what they did, so I think that's a good sign, too. Yeah. I'll also mention — it's not just a drop-in install on Windows. Looking at the demo, it appears to be Linux collecting data from Windows. I mention that because I was confused — they do show Windows in the demo, so at least I've circled back in my head on why I wasn't just dropping it in. I think it goes to show that it's being actively developed. Anything that doesn't exist today — we don't know, it might exist tomorrow. Maybe if we revisit this in a year, we'll be like, hey, remember when that didn't exist? And now it does. So you never know; maybe that's something that will exist. But if something doesn't exist, then I think a really good thing to do — not just for Netdata, but for anything — is to let people know. Put in a wish-list bug or something if there isn't one there. A lot of people don't think to do that. Right.
You have a feature that you think would be really cool and it doesn't exist — I don't think there's any harm in filing a bug report if there's an option for a wish-list item, and many bug trackers have that option. So just file a bug and make it very clear it's a wish-list item: it'd be really cool if this existed. And who knows — you might get enough people to back you, and they might say, hey, maybe we should consider implementing that. That's a really cool thing to do. Yeah. And one last thing I'll leave you with on Netdata, since I kept using the phrase real-time monitoring: if you're looking for something to collect a lot of historical data — and you can keep some historical data in there — once you start using it, you realize it's not the most ideal tool for history. That's where your other, more in-depth tools come in. If I want to know what my hard drive was doing for the last year — in my case, I use Zabbix to monitor my pfSense infrastructure — you can use these side by side. They're not either/or; they're complementary to each other. There may be some overlap — yes, I can look at my real-time CPU usage in both — but when I say real time, Zabbix is polling and has a queuing system to get data. This is where Netdata shines, in my TrueNAS example: I'm dragging files, and I've got Netdata running in another browser window while I'm dragging files back and forth, just watching the data move in real time. There's no polling; it's up to date to the second, watching it happen. So I can keep diving deeper to figure out where those problems are.
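On local retention: how much history the agent itself keeps on disk is governed by the `[db]` section of `netdata.conf`. A sketch, with the caveat that these option names are from an older release and have been renamed across Netdata versions — verify against your own generated `netdata.conf` and the current docs:

```conf
# /etc/netdata/netdata.conf — local metric retention (names vary by version)
[db]
    # dbengine stores compressed historical metrics on disk,
    # as opposed to the purely in-memory round-robin modes
    mode = dbengine
    # approximate disk budget for the metrics database
    dbengine multihost disk space MB = 512
```

Long-range history beyond that budget is where Netdata Cloud's paid retention, or a companion tool like Zabbix or Prometheus, comes in.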
When you started talking about historical data, I had this weird but funny picture in my head of a police officer in an interrogation room with a hard drive on a chair under a spotlight, being asked, "what were you saving on the night of September 17th?" — going back a long time, trying to figure something out. Because it kind of feels like that when you're doing it. Obviously we hope police aren't involved in anything we're involved with — if something were catastrophically bad, that would be a problem. But just having an understanding of what happened at a specific time matters, because sometimes you need to know what happened, say, six months ago, since something currently could be behaving the same way. That can be important. But isn't it true the paid account gives you more history? I wasn't clear on that. Yeah, there's some extra data retention you get with the paid accounts, and there are role-based controls you get with the more enterprise account. They have it all detailed on their site, and it's one of those things that has actually changed — they had some really basic tiers, and they've added more since I last reviewed it, so I don't want to date the video. They've got it outlined pretty well, with a nice comparison chart. The free plan covers anything a home user would want to do, which I think is great. I'm still using the free plan myself; I might move to a business plan just because I want to support the project more than anything else, because I actively use this and it actively helps me correlate data when I have silly troubleshooting problems. But nonetheless, there it is. All right, do we have anything else? I think that's it. Yep — send your feedback to the Homelab Show. We love hearing from you, and we like answering your questions.
Great chat with everybody here — that was definitely fun, and we'll see you next time. Check out our other episodes on some of these other topics. And Rocky Linux gets a shout-out, because that was a good interview we did with them. It was fun — that was a lot of fun. All right, see you guys next time.