Welcome to the Homelab Show. I had to pause because I was like, wait, what show is this again? This is the creator's life, when you go, what show are we doing right now? Oh my God. I mean, yesterday you and I had a meeting with a certain person, and I kept losing my voice because I'm recording so many videos and doing so much talking. I literally lost my voice yesterday because I'd done like 12 videos this week so far. Yeah, just like you said, the life of a content creator. Wait, I think it's you, Jay. Is there a delay? Oh my gosh, I can hear myself. So, technical difficulties live on YouTube, everybody. Yes, I fixed it. It was actually one of the pages I had open that started playing. Sorry about that, everyone. Oh, I thought it was seriously my problem; I was blaming you because I didn't see which one was open. So normally it is my fault. All right, now I'll fix the intro. Those of you watching live, I'm just gonna start the intro over again, because this will get trimmed out in post. Welcome to the Homelab Show, episode 56. Is it? Wow. Wow, yeah, it is. That's crazy to me. How did that happen? 56 episodes later. Oh, and this is a Dev Random episode. What that means is these are the things we couldn't fit into one topic. There have actually been a lot of changes to products we've already talked about, but the updates aren't enough for an entire show per product, so we're just gonna cover a bunch of them. These are ones we've dived into in depth in other episodes, so we can reference back to those, but these are some of the new changes that I think are noteworthy and relevant to the homelab people who listen to this show. So thank you all for joining us.
Before we get too far, though, let's go ahead and get the sponsorship and money thing out of the way, and that's Linode. We'd like to thank them for sponsoring this show; they've been a sponsor since pretty much the beginning. They're a great place to host many of the projects we talk about on here, so if you don't wanna use your lab, you can use their lab. And we have an offer code you can use to get started over at Linode: go to linode.com/homelabshow, redeem your offer code, and get started using Linode. We thank them as a sponsor. And by the way, if you're listening to this, you downloaded it from Linode; that's actually where we host the Homelab Show, that's where the servers are. Jay maintains all that infrastructure over there. All right, now that we've got our rough beginning out of the way and covered the sponsorship, we can start on our dev random list of things. Yeah, yes we can. And we have a number of things to talk about, including but not limited to pfSense and, I believe, TrueNAS. I have a couple of things to mention as well, and if something interesting pops up in the live stream chat, maybe we'll grab one of those and talk about that too. I wanna address something right away, though, because our last episode was about Btrfs, and someone had asked, because I didn't know and hadn't really dug deep into NAS software that supports Btrfs. That was episode 55. This is actually kind of cool, because Rockstor not only supports it, it's also built on openSUSE. And we were saying how SUSE Linux doesn't get enough love, so you have a combination of things there. Now, I'm not gonna try all of it, but I'm throwing it out there that it exists. It's kind of novel because it works on Raspberry Pis as well; they've got an image for that. So it might be something Jay explores at some point.
If he wants to dive deeper down the rabbit hole that is Btrfs, but I thought it'd be worth mentioning. Oh, and this is actually kind of funny and I think it's worth addressing: an extra topic would be the difference between /dev/random and /dev/urandom. So maybe we'll throw that into one of these episodes sometime and discuss some of those things. What I love about this is that there's never a shortage of things to talk about. People who aren't technology people like we are find out what I do and they're like, well, okay, that's great, but what do you do when you run out of things to talk about? I'm like, we never, ever run out of things to cover. There's just an endless number of topics; we'll never get to it all. But that's the fun of it: there's always something new. Yeah, absolutely. Let's see, we could probably start with the big one, which is gonna be the changes coming in pfSense. Now, this is a mixed bag, because some of these changes are only gonna be for pfSense Plus. And pfSense Plus has a free tier for the homelab users. It's still a little bit fuzzy, in case anyone's wondering, but this hopefully clarifies it, because I know people go, what's the difference between CE, or Community Edition, and Plus? And of course the haters, when they came out with Plus, were saying they're gonna stop all development of Community Edition, which isn't true, because Plus is based on Community Edition. Plus is the add-on features that come from Netgate, kind of an upsell type thing; you can look at it that way, it's not wrong, because it's some of their business offerings. But one of the interesting things in there, where the engineering is going, is OpenVPN DCO, or Data Channel Offload. Now, this is OpenVPN functionality, and you can use it with regular OpenVPN on a Linux system. Right now, today, you can set this up; it's part of OpenVPN 2.6.
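As a quick aside on that /dev/random versus /dev/urandom topic they floated, here's a minimal Python sketch (illustrative, not from the episode) that reads from both the portable API and the device file; on modern Linux kernels both draw from the same CSPRNG pool, and neither blocks once the pool is initialized:

```python
import os

# os.urandom() uses the kernel CSPRNG (getrandom()/dev/urandom under the hood)
key = os.urandom(32)  # 32 random bytes, e.g. for a session key
print(len(key))  # 32

# Reading the device file directly works the same way on Linux/BSD
with open("/dev/urandom", "rb") as f:
    sample = f.read(16)
print(len(sample))  # 16
```

For anything cryptographic, `os.urandom` (or the `secrets` module built on it) is the usual recommendation over the `random` module, which is not a CSPRNG.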
Now, Data Channel Offload is a way to offload the data channel so it's substantially faster. And this is one of the challenges, because everyone I know is going, but what about WireGuard? Tom, isn't that the solution to all my problems? But it's not, and the reason it's not is because WireGuard is a protocol, but not a complete VPN solution when it comes to things like user management. Don't get me wrong, WireGuard is great; we've talked about it, I love using it on my phone to easily connect to my home network and things like that, and it's greatly convenient. But when you have a business, or just a lot of different users you wanna manage, OpenVPN is really important for that particular use case. And the Data Channel Offload is substantially faster; you can just type in "OpenVPN DCO" and it'll bring you to the OpenVPN page. The pfSense connection is that the team at Netgate engineered this to work on BSD, because, as I kept mentioning, it works great on Linux, but it wasn't really something that had been implemented in BSD, and now it's going to be. So this is actually a really nice change that's coming there. Granted, it's only in pfSense Plus, not pfSense CE, but development is still going forward on all of that. So it's a novel feature. Now the next pfSense feature worth talking about, I think, is the fact that they have ZFS boot environments. Jay and I have talked a lot about ZFS, and these ZFS boot environments are gonna be a game changer. Once again, I know it's pfSense Plus, but it makes sense, because with boot environments you can now use ZFS snapshots to choose the instance you wanna boot from. Now, this is different than configuration backup. This is actually, in a way, similar to how boot environments work in TrueNAS.
But what this allows you to do is say, this boot environment I want to set bootable, or take this snapshot and say, I have several versions of pfSense and different configurations I can jump back and forth between, and just reboot back to a previous config. It also gives you a menu option when it first boots up, so you can say, all right, I messed something up horribly, an update went wrong (because it automatically does this for your updates going forward), and I wanna roll it back. It gives you the option right in the boot menu to pick one of your previous snapshot states, essentially the boot environments, to roll back to. So these are a couple of big things that I think are really cool. Yeah, they are. I think that's one of the best use cases for ZFS. Because when you think about your appliances, you think about the settings that you have, and if something goes wrong, you wanna roll back to a previous version, a previous setting. And you also wanna survive power failures; you don't want something to die mid-write or something like that. ZFS really helps with these types of things. And for me, at least, the value of ZFS depends on what you use it for. Just having ZFS for the sake of having it, that's okay, but it could be a waste; building it in like this, where it actually gives you the ability to roll back, that's a really good reason to use ZFS, in my opinion. Now, getting to ZFS, I wanna address this really quick: it's already the default with current versions of pfSense, not beta but current versions. If you install new, they all start with ZFS, whether you start with pfSense Plus or pfSense Community Edition. There's no path, though, unfortunately, to convert an in-place install, but good news: it's extremely easy to reinstall pfSense.
You grab that config file, pull it down, run through the reload process, and you can do a default next, yes, next, yes, and choose the options, choose the drive. It will default to ZFS. You can still go backwards to UFS, but I don't see any reason to do that. So you get the advantage of ZFS being the underlying file system for pfSense, meaning I can randomly pull the power as many times as I want without worrying about the system ending up in an unbootable state. That's great, and all you have to do to restore your pfSense is upload that same config file and everything will come back working: your proxy settings, your certificates, whatever reservations you had, firewall rules, every little piece of pfSense is stored within there, even things like Suricata, even the tuning you did for Suricata and all the different rule changes you made. All the little details come back from that config file. So it's actually not that big of a deal to reload pfSense. It's really only a big deal if you don't have a config file backed up, so definitely back that up. I have plenty of videos on my YouTube channel about different methodologies, including automatic backups you can do with pfSense, to constantly make sure that file is backed up. So if you wanna switch to ZFS now, before the beta release that adds some enhanced features, you can absolutely do that right away, or just wait till the new version's out, reload with the new version, and pop your config file in. So one question I have about this, that I just thought about, is what about the lower-power devices? At one point, and I don't know if Netgate still sells it, there was an appliance you could buy, a firewall that was good for gigabit but not much beyond that.
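Since that config backup is just XML, pulling a value back out of it is straightforward to script. Here's a minimal Python sketch; note the fragment below is a made-up, heavily simplified stand-in for illustration, not the real pfSense schema, which has far more sections:

```python
import xml.etree.ElementTree as ET

# Hypothetical, stripped-down stand-in for a pfSense config backup
config_xml = """
<pfsense>
  <system>
    <hostname>edge-fw</hostname>
    <domain>lab.local</domain>
  </system>
  <filter>
    <rule><descr>Allow LAN out</descr></rule>
    <rule><descr>Block guest to LAN</descr></rule>
  </filter>
</pfsense>
"""

root = ET.fromstring(config_xml)
print(root.findtext("system/hostname"))  # edge-fw
print(len(root.findall("filter/rule")))  # 2
```

Something like this is handy for sanity-checking a pile of automated backups, for instance confirming each file parses and carries the hostname you expect before you ever need it in an emergency.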
And I'm wondering, if you have a device that can get to gigabit but might not get there as fast, or be as quick as other things, or maybe doesn't have as much RAM or whatever, whether that might be a potential problem on those older devices. And it's worth noting, sorry folks, if you happen to own one of the ARM devices that are still out there, that is a spot where it's not going to work. If you are on an ARM device, the ZFS support isn't there, the performance isn't there. So there's the one exception to ZFS. I probably should have brought that up at the beginning. It's a really good point though, Jay. If you do have one of those devices, like the SG-1100 as an example, that has to stay on UFS. So there are a couple of exceptions. So even a brand-new install will go to UFS in that case, then? Yeah, there's not good ZFS support on ARM as I understand it; it just doesn't work well. Okay, yeah, that makes sense. If the installer diverts you to UFS because that's all you can conceivably support, then I suppose it's less of an impact, because you'll just keep using what you've been using. Otherwise, you have the hardware, and if it's a new install, like in my case, my hardware for pfSense is overkill. I don't think I've ever seen my firewall, no matter what I'm doing, and mind you, I'm doing 4K video and copying that over the network, I've never seen the CPU go above 2%. It's a ridiculous Core i7 that belongs in a gaming machine that's running pfSense, and that thing flies. I think in my case it'll handle it no problem, but then again, I'm very fortunate. Some people might not have as new of a device. Actually, mine's not new, but in this case, yeah, UFS would probably be a good failsafe. Yeah, a couple of side notes here, as I've seen these questions come up in the live stream.
When you do the config backup XML, the log files by default are not included. Make a business decision: do you need those log files? You can always grab them and pull them out; that data by default is not backed up with the config file. But generally speaking, you only keep a limited number of logs; unless you built an overkill pfSense like Jay, there's not a ton of storage on these boxes, so you're probably not keeping that many logs anyway. And I've always suggested, we've talked about Graylog before, and we have a whole episode about it. I'm pretty sure we did Graylog, right? I think we did, yeah. I definitely have a YouTube video on it with Linode; I launched one in Linode and one on my local network here, so I have a Graylog server wherever I have servers. I think you're the reason why I set it up. Yeah, so I would really recommend Graylog; you can also, we've talked about Synology, use Synology, or whatever log server you want, but I really recommend consolidating logs and exporting the logs from pfSense on a real-time basis. That way, if something happens to your pfSense, you always have the logs consolidated somewhere else. And I would say this for many other products: you want the logs sent somewhere. Having central logging makes your life easier when troubleshooting, because it's usually not troubleshooting one device, it's troubleshooting the connections between all the devices, and having logs in one place sometimes helps you pinpoint problems. That being said, it's up to you if you want to back up the logs. The final note: someone asked, what about Netgate hardware?
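To make the remote-logging idea concrete, here's a minimal Python sketch of shipping a syslog-style line over UDP, the same transport pfSense's remote logging option and most syslog/Graylog inputs accept. The loopback address, port, and message content are placeholders for this demo; a real setup would point at your log server:

```python
import socket

def send_syslog(message: str, host: str = "127.0.0.1", port: int = 5514) -> bytes:
    """Send one RFC 3164-style line over UDP and return the raw datagram."""
    # <134> = facility local0 (16 * 8) + severity informational (6)
    datagram = f"<134>pfsense filterlog: {message}".encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(datagram, (host, port))
    return datagram

# Tiny self-contained demo: listen on loopback, send, and read the line back
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))          # ephemeral port
_, demo_port = recv.getsockname()
send_syslog("block in on igb0", port=demo_port)
print(recv.recvfrom(1024)[0].decode())  # <134>pfsense filterlog: block in on igb0
recv.close()
```

UDP is fire-and-forget, which is exactly why you want the logs streamed off the firewall in real time: if the box dies, everything sent up to that moment is already sitting on the log server.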
The Netgate hardware, we use it a lot for the business use case of supporting Netgate, who supports the FreeBSD Foundation, who also supports pfSense, and because we need a completely reliable, repeatable thing that we can deploy at scale for our clients. We have so many pfSense boxes out in the field that we keep a lot of them, at least one in stock, so we can get something to a client if there's ever a failure. That's one of the reasons we use so much of the Netgate hardware. But obviously it's not required; as a matter of fact, with pfSense Plus, and this is the reason I brought this up, you can now convert your appliance to pfSense Plus even if you completely built it yourself, or grabbed one of the really popular little solid devices like the Protectli boxes and some of those; you can convert those all to pfSense Plus now. Wow, that's good to know, actually. Yeah, and you can do that in place, by the way; it only requires a reboot. So other than rebooting, it doesn't require a reload to convert to pfSense Plus. If you've decided you want to do pfSense Plus, it's just a registration with Netgate: you give them a serial, or I think it's a device install number, and they activate it. I went through the process, and I've talked about it in previous videos; they've made it really, really simple if you wanna do that. Now, a couple of final features that are coming, and I know this directly affects the homelab people. If you search for things like Xbox and UPnP and multiple Xboxes and problems, there are a lot of issues. And I haven't read through all the Jira tickets for the details, because while I know it's something people ask about, it's not something we consult on as often, so I haven't dived into it. They're fixing, to quote how Netgate worded it, "UPnP and multiple gaming systems."
This is, I know, a problem from some long forum posts I've read where people have more than one Xbox or PlayStation, and they're trying to get UPnP working to play online games at the same time; the same game may use the same ports, so they're doing some engineering to solve problems around that. I think that's very relevant in the homelab, because you're in a homelab, in a gaming lab. I don't know, how many gaming systems do you have, Jay? I have roughly somewhere between 30 and 40 different game systems, counting variations. But I mean, I'm a collector, so only the most recent of those are gonna be online capable. But yeah, that's interesting. I've never run into any problems playing games on my end, and there are multiple systems here. I have to imagine my son and I were playing PlayStation games simultaneously at some point, and I never, that's interesting, actually. I'm gonna have to look into that, because so far I've not had a problem. One of the things that I know is an issue, and there's a fix for it, it's pretty easy, is that the Nintendo Switch had a problem where you had to set up static-port outbound NAT. And it's because, for reasons unknown to me, other than that they didn't have the best network stack developers over at Nintendo, they had a hard time with port translations in certain games, not all games. And it's also weird, because these are modern times; they should understand this and do the connections in a way that doesn't have this problem, because we solved these problems not recently, but years ago. Yet here they are, and here are those problems. There are better ways to do the implementations, and I don't know why Nintendo was slow to do this, but so are some of the other gaming companies.
I never dived into exactly what they did wrong; I just know it from the forum posts. We have plenty of devices, for example phone systems, some of which have problems, but if you have a properly set up phone system, that problem's solved. You can have many phones all using the same ports, and because of the way they handle that, it actually works quite well. So this is a network engineering problem that has been solved, just not for some of the gaming systems. Well, you know, to be fair though, as much as I love Nintendo systems, Nintendo is pretty much the king of either doing it half-baked or just being behind the times. I mean, keep in mind that still, to this day, the emulation community is emulating Nintendo's games better than Nintendo themselves emulate their own games. So keeping that in mind, I could do an entire podcast episode, if I had a gaming one, that just rants on all the things Nintendo could do better, but I won't get into that. It's just par for the course, I guess. Yeah. And the last little feature we'll mention on pfSense, and I'm gonna work on a little demo video coming up soon: they have a way to grab multiple firewall rules and clone them to other interfaces. They added some more buttons at the bottom of the firewall rules page. This is actually really handy when you have larger installs and you wanna duplicate groups of rules. You can check boxes; let's say you're on one particular interface, you can check all the boxes for that network and say, copy them all over to another. You've been able to do it individually, one by one; now you can do it as a group. You can say, copy all of these to there. And this is actually kind of helpful if you split up a lot of different VLANs and you wanna copy rules between them. So I think that's, oh, I think that's where the changes are.
And we just had Christian McDonald, who is a developer over at Netgate, and I misspoke: ZFS works on the 1100 and 2100 just fine. ZFS works on ARM64. So I guess it's only 32-bit; I thought there was an ARM issue. So it works on ARM64; it doesn't work on ARM32, I'm assuming. I'll see if Christian responds. But Christian McDonald, check out his YouTube channel. I've referenced it many times and tweeted it many times in replies, because people ask, how do I do certain things with WireGuard? Christian does more than WireGuard, but he's also the person I first met because he was doing WireGuard for pfSense. He's quite the developer and works there at Netgate currently. That's where he started when he first began working on the project; he came into the fold and has helped push lots of the new cool features at Netgate forward. His more recent video dives deep into how they came up with the ZFS work. That's awesome. You know, unpopular opinion, but I love being fact-checked. I just love it. Me too. That's why, and I know who Christian McDonald is, I was thrilled to see him in here, because Christian absolutely is the authority I will bow to on all things pfSense when it comes to development, because he's literally the one developing it. When someone pops in like that and just gives us a correction, it's great, because in real time we're learning along with everyone listening, and that was a good example of that. So that's cool. Well, this makes it handy, because I wanted to work on the how-to-change-over-to-ZFS video, since you can't do an in-place upgrade, and at least I know enough people are looking for it; I was going to do a video on the topic. And now I know you can't do it with the SG-3100, because the SG-3100 is 32-bit ARM and ZFS does not work on 32-bit. So there's the full clarification: I thought it was ARM overall, but it's actually ARM32 it doesn't work on.
It works fine on ARM64. So thank you very much for the clarification, Christian, and thank you for the good work that you, and well, there's a whole team; it's not just Christian. Christian is just the one with the YouTube channel, so he's a little more visible, but thanks to him and the team over at Netgate for the work they do on pfSense. I really appreciate that. As an aside, though, it's just kind of amazing to me that in 2022, 32-bit is still a discussion topic. With the ARM hardware and things like that, I imagine it's just part of the slower dev cycles that happen in that world. That is true. Yeah, I guess when it comes to ARM, that's a different story. At least when it comes to x86, I've had people say, yeah, I'm running 32-bit. Why? Because my computer doesn't support anything else. What processor do you have? And I look it up: oh yeah, that supports 64-bit. We've had 64-bit CPUs on the desktop since 2004, but people love their 32-bit. But I guess with ARM, like you said, there's a different development cycle, so a case could be made. But apparently I learned today that ZFS doesn't work on 32-bit. So there you go. Yeah, that's an interesting thing there. That's pretty much what's going on in the pfSense world, but speaking of ZFS, let's talk about the TrueNAS world a little bit. TrueNAS: the 13 release represents something I've talked about many times. And I'm excited about this, don't get me wrong, SCALE is really cool, but SCALE is still not the performance one. And I'm gonna blame iXsystems a little bit here, because they call it an upgrade path, and it's technically lateral movement, not necessarily upward movement. You can take a TrueNAS 13 instance and convert it to, or as they word it, upgrade it to, TrueNAS SCALE.
That would give you the impression that TrueNAS SCALE is the replacement for TrueNAS CORE, but then again, now TrueNAS CORE 13 is out and they're still developing it. And by the way, that's a pretty good feat, because they do not have that many employees over at iXsystems, but they were able to pull off two major milestone releases of two, competing is not the right word, two parallel systems. They each have a different use case: SCALE is based on Debian, and CORE is based on FreeBSD. So having these two platforms going forward is really, you know how it is working on a dev team, Jay: we're starting another product, and we don't wanna lose support or sight of all the performance features we have in the current product. And you're like, no, that sounds unethical. There's a lot to be said about that, and especially given what you said, I don't know anything about the company or the employees, but if they're churning out software like this, I hope they're sleeping and not working overtime and disrupting their work-life balance, because that could be a real problem. I hope it's not; I'll give them the benefit of the doubt. It is very impressive, but work-life balance, especially in software engineering, is very, very important. We've seen this since the Atari days, when E.T. was developed in a really quick period of time and it was crap, and nowadays we still have software being pushed out the door too quickly. But that being said, it is impressive, and I love the work they're doing. I'm happy to see SCALE going forward and TrueNAS CORE going forward as well. I think at some point they'll probably converge, but for right now they are parallel products. Yeah, I don't know that they're gonna converge, because they actually diverged to create it, and it's the basing on BSD versus basing on Debian; those are your two major differences there. What they have done is modularize a lot of the middleware.
So as much as they are two different operating systems, there is a lot of reused code in the middle, for things like the UI and the interface. Now, it looks a little different, so I don't know what level of code reuse there is, but it's similar enough that from a navigation and UI standpoint they look pretty close. The nice thing is, the things they're doing in 13 are not major changes, so there's only a few things to talk about, but the most important one is performance. A lot of people have reported this; they've just done a lot of fine-tuning. So there's not a major, we-added-this-huge-new-feature thing, but the performance tuning is quite noticeable, though I haven't quantified it yet. I commented when I made the video about its release that everything feels faster, but I need to do some before-and-after benchmarks, and I have a few systems I haven't updated to 13 yet. I also want to address the release cycle process. People said, hey, 13's out, but it says not enterprise supported, and I'm like, yeah, it's not enterprise supported. And people are like, well, that means it's not out of beta. And I'm like, no, it's out of beta; it says community supported. And this is how TrueNAS overall is doing their life cycles now, whether it's SCALE or CORE: when they release a major version number, they let you know it's a release for general use, suitable for less complex deployments, as in not necessarily enterprise. They ask that the enterprise people wait till Update 1, or U1. Their naming scheme is actually pretty easy in TrueNAS CORE: it's U1, U2, U3. You have the major version number and then a U for the update version after it. And they usually want the general homelab people, the people who are probably listening to this show right now, to dive in and start testing it.
So I'm always jumping in right away. If you find a problem, you start reporting it; that way, by the time it comes around to some of our clients that have, oh, I don't know, a nice, huge, fully failover iXsystems NVMe array with 100-gig interconnects, and we have several clients like that, or clients with eight petabytes of storage, asking, is it ready for us to upgrade to TrueNAS 13, do we run into any problems, the issues have been shaken out. That's why they tell the enterprise users to wait till U1. Now, I've always trusted it, because you have the boot environments and can jump back and forth between them. You do the update, and if you find a problem with your scenario or use case, you just roll back the boot environment. They've made that extremely easy to do on both SCALE and CORE, so you can try it and just roll it back. So I don't think there's a problem trying it. And yeah, I would say it's worth it; if you're in a homelab, there's no reason not to. I was trying it at the release candidate stage and didn't have any problems, and noticed it seemed a bit faster, and now that it's at full release, no issues there. As for the other things in the release notes, I have a video where I covered them; there's a lot of little stuff, just UI updates. It's all that little polishing work that takes time, and you can't do those polishing updates when you have major problems to fix, because they're really low priority. So it's great that they spent a lot of time fixing lots of little things in there. Yeah, I would say too, Update 1, U1, Update 2 and so on, that's a pretty easy naming scheme, but I do think it's a missed opportunity to not use the album cover for The Joshua Tree when U2 comes out. Every time U2 comes up. Anyway, moving on. Yeah, yeah. So the next one is going to be SCALE Angelfish. The naming scheme there is a little different: they call it Angelfish, but it's 22.02.1.
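Since those two naming schemes came up, here's a small, purely illustrative Python sketch that parses both styles of version string, the CORE `13.0-U1` style and the SCALE `22.02.1` year-month style; the function names are made up for this example:

```python
import re

def parse_core_version(v: str):
    """Parse a CORE-style version like '13.0-U2' into (major, minor, update)."""
    m = re.fullmatch(r"(\d+)\.(\d+)(?:-U(\d+))?", v)
    if not m:
        raise ValueError(f"unrecognized CORE-style version: {v}")
    major, minor, update = m.groups()
    return int(major), int(minor), int(update or 0)  # no -U suffix means update 0

def parse_scale_version(v: str):
    """Parse a SCALE-style version like '22.02.1' into (year, month, patch)."""
    year, month, patch = (int(part) for part in v.split("."))
    return year, month, patch

print(parse_core_version("13.0-U2"))   # (13, 0, 2)
print(parse_core_version("13.0"))      # (13, 0, 0), the initial community release
print(parse_scale_version("22.02.1"))  # (22, 2, 1)
```

Tuples like these compare correctly with `<` and `>`, which is handy if you script a "wait for U1 before upgrading the important boxes" policy.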
They have a few new features in there, but it's mostly fixes, which is good, because, and everyone does this with such a new product, they did get it released, but there are a lot of little bugs in there. So they've got a lot of fixes, such as, I'm looking at them, bridge interface fixes. People were asking about this, and I really didn't have an answer for them, because of the way they're doing Docker; there's a lot of engineering that went into building it with a Docker backend versus the iocage jails you get in BSD. So there were people asking, well, how do I bridge these together? I'm like, I don't know, I haven't really dived in there. And obviously people talk about bugs, but how do these bugs get fixed? Well, you go there, you open a ticket, you say, I'm trying to get these configurations done. People open the tickets, they start working on them, and then these releases start fixing them. I didn't know, but apparently, people had commented, they did redo the NFSv4 ACL structure; the access control lists, I guess, were broken. People had mentioned it, but once again, I wasn't using it. So that's one of the fixes, if that was what you were holding off on. And what were the others? Disabled the Docker Compose binary, provide an indicator that the SED password was set, debug should show connected to TrueCommand. Yeah, there's a lot of things in here. Where's the other one? New features: updated icons, and Netdata is back; that's actually something people were missing from the BSD side. There were memory leaks, so they didn't have it in there. Oh, remove the disable-hardware-offload box from the web UI; that's where they removed it. And, I'm not sure what that one is. Sorry about that, I thought I had it in there. Oh, add NVMe support for SCALE, so they have the NVMe updates in there. So SCALE's coming along.
I'm gonna do some testing with it soon. We actually built two more servers at the office just for testing in the lab, but we were so busy that they got built almost a week ago and no one has logged into them yet. So that's kind of how it goes, right? So many things to do. So many things to do. And one problem that you still have, and this is a challenge that Jay still has, is Jay wants to test SCALE, but of note, if you're using the old-style encryption, sorry, not gonna happen. You still can't use the GELI encryption. I don't know if it's pronounced jelly, but it's the old BSD encryption. Jay's system has been in-place upgraded for years, since back in the old encryption days. I have one system I'm still using the old encryption on. I'm just too lazy to reload it. Like, I know exactly what needs to be done. I have enough storage, I could just dump the data somewhere else, move it back, reload it, and get that sorted out. But yeah, I haven't. Yeah, in my understanding, and it's been a while since we tried it on mine, you can't ZFS send either, correct? From a new one to an old one, or was that a problem? There are some bugs with ZFS send, because you can send from old to new, but you can't send from new to old: if you're using the old encryption, it doesn't support the new encryption. So there are some problems sending the encryption over because it doesn't have the extra parameters. When you use ZFS send, you're doing everything at a block level, so the destination has to have the support within the file system itself. If you're running the old encryption, there are some translation problems getting that to work. But taking and replicating from the old system to the new works, because, well, the spots to hold the data are all there. You can think of ZFS not just as a file system; in a way, it's almost like a database.
You have all these extra metadata things that are kind of behind the scenes. And because ZFS send is completely taking blocks from point A to land on a block system at point B, you have to have an alignment for all that data to go to. That makes sense. Yeah. That's the cool stuff with ZFS. Like I said, it's not dramatic, and that's why we didn't do a whole episode; it only took me a few minutes to go through all the features of both of their updates, but they're both being developed in parallel. The next one: XCP-ng. Now, they're working overtime on some cool stuff. One of the things they're working on, and I thought this was really cool, this came out of a live stream, is the beta they have right now of the ability to do full test restores, without having to actually restore. So you want to know that your backups are working, and you don't want to find out that there was a problem with your backups under duress, like when you're trying to restore a system. That happens, it causes panic attacks, and it's a real problem. But then how do you test them? Well, you go through your DR process of finding somewhere to restore them to, spinning them up, did they boot, confirming not only did we back them up, not only can we restore them, but they booted after we restored. That's your full DR test. But that takes time. What if you could automate that? Now, commercially, absolutely. You're probably saying, well, Tom, there are lots of large commercial companies that offer that as an automation, and you wouldn't be wrong. So I talked to, and this started as an idea from, the people over at Vates, the company that supports XCP-ng and Xen Orchestra. And they said, you know, we actually have these things called the Xen guest utilities.
We could spin a VM up, disconnected from the network, because you don't want a restored server connecting to the network; it's supposed to be in its own little self-contained world. But with no network interface, how do you test if it actually booted? Well, they just wait for the Xen guest utilities, whether they're in Linux or Windows, to talk to the API. So when they do a restore, they synthetically create the network interfaces but don't actually connect them, so the machine won't, you know, hunt for a network interface. But by booting up, the Xen guest utilities will actually go, here I am, and talk to the XCP-ng server, which in turn makes an API call that Xen Orchestra can read to go, hey, this backup, because it booted, absolutely must have worked. And then they can go and destroy it and give you a report. They've only got the beta where it starts it, but I think that's such a cool feature to build in. And by the way, it's completely available in the free open source version you compile yourself. One of the things I do in my demo videos, because they do have a paid product and paid support features as well, is I always make sure when I'm doing this testing, because this is back to being relevant to the home lab: can it work with the fully compiled one for the home users? And the answer is absolutely yes, this beta is available to you within there. And I don't know, I just thought that was a great thing overall to have in there. Because now you're talking about enterprise-level features that home lab users can have for their DR testing. And, you know, it makes your life that much easier to get an understanding of how real DR testing works in some of these environments. So obviously it's not a full 'did we see if we could spin everything up from scratch,' but I think it's a step further and a great way to do that.
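The restore-test loop described above is easy to reason about: restore to an isolated VM, wait for the guest agent to report in, tear down, report the result. Here's a minimal Python sketch of that control flow. To be clear, this is not Xen Orchestra's actual implementation; all three callables (`restore_fn`, `guest_agent_alive_fn`, `destroy_fn`) are hypothetical placeholders you'd wire up to your own hypervisor's API.

```python
import time

def verify_backup(restore_fn, guest_agent_alive_fn, destroy_fn,
                  timeout=300, poll=10):
    """Sketch of an automated restore test: restore a backup to a VM with
    no connected NICs, wait for the guest tools to report in, then clean up.
    Returns True if the restored VM booted within the timeout."""
    vm_id = restore_fn()  # restore the backup; NICs created but not plugged in
    booted = False
    deadline = time.time() + timeout
    while time.time() < deadline:
        if guest_agent_alive_fn(vm_id):  # guest utilities reached the API
            booted = True
            break
        time.sleep(poll)
    destroy_fn(vm_id)  # always tear the test VM down afterwards
    return booted
```

With stub functions plugged in, you can dry-run the logic before ever pointing it at a real hypervisor.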
Yeah, that's the type of thing that, especially in the enterprise, is especially annoying, because you know what the right thing to do is: you have to test the backups. We all know that, but we have deadlines and all these other things. So anything that can help us get to that point in a reliable, trustworthy way would be pretty cool. All these different things, like auditing documentation and passwords and users, and everything that goes into this. Thankfully, in the home lab, you don't really have turnover like that, so you get to save yourself from a lot of that. You still need to do some of that stuff, obviously test your backups and whatnot, but I think we have it a lot easier compared to organizations and what they have to deal with. Yeah, it's a pretty cool feature. Now they've also been working on, they've actually put a lot of time into, the backups here. And this is one of the things I like about using Xen Orchestra and XCP-ng together: it's a very complete system, because you're not as reliant on third parties. They've also added the delta backup selective VDI restore. "Xen Orchestra backup tools aim to be both resilient and flexible," I'm reading from their blog post, if you can't tell. "With this in mind, we have added the option to the delta backup restore operations. When restoring a VM from a delta backup job, you now have the possibility to exclude VDIs from the disk restore." And what that means is, if you have multiple disks attached to a single virtual machine, now you can do these partial restores of just the pieces you need. So if you have one of those instances where someone deleted something, but it's within one of the disks, you don't wanna do a full restore. Now you can do a partial restore of just one of the disks. They give you a lot of flexibility in the backups like this.
And I think it's one more thing going further to just make your life easier, because this happens sometimes when you're not using a separate storage server, like a ZFS-based TrueNAS system, where you could just roll back a snapshot or grab a file. Everything's in the VDI, and now your problem is, I can't just restore the whole VDI, that takes a lot of time. Now they have an option to do just a partial restore of one VDI attached to a whole VM. So this is more of the pushing forward they're doing with Xen Orchestra, and that's actually really cool to me. Yeah, this is pretty cool. And two more things to make your life easier: they now support exporting not just XVA, which is the Xen hypervisor format, but also OVA. So I can now build things inside of Xen Orchestra and push them out as OVA for people to import into VirtualBox or VMware, because they both support OVA import. It's an interesting perspective that they're taking the time to build in compatibility to export to another hypervisor. It's, so to speak, being inclusive, because usually you only make it easy to come in, and they do have OVA import, but now they can actually send it back out as an OVA if you want to send it over to another hypervisor. So I thought it was cool they took the time to do that. I don't know, it's kind of novel. Can you do that in Proxmox? I haven't actually tried moving formats like that. It is something that I think a lot of people ask for. Actually, it's not that surprising to me. It's like working in a company, for example, when I did that, there were situations where we had an issue on a VM we were troubleshooting, and somebody would offer to fix it or look into it. And then they would ask me, is there any way I can get a copy of that server and run it on VirtualBox so I could just poke around and try to reproduce the problem? And I know back then it wasn't very easy to do. So I could totally see a use case for that.
And I think in general, it's just gonna make people like XCP-ng more, because I feel like being that open is actually a good thing, and it makes them part of the virtualization community, if there is one, rather than being siloed. So I think that will definitely help. Yeah, I like it. And one more thing kind of related to that: when you're exporting them, it creates a one-time download web URL. So I was playing with this; I'm working on my bigger Getting Started with XCP-ng video, and there are a lot of cool features they've added. One of them is where we literally had to use this: I had to build a VM for a client. Now, good news is the client's using Xen Orchestra. We're using Xen Orchestra and XCP-ng. I can build it, and it creates a one-time download URL for the VM I built. Then you can import on the other end; you can go through an import via URL. So you can import your VMs via something like an OVA. So this is gonna be something interesting: if I set up an OVA on a website to download, you can actually drop that link in, and it will import, pull down, and run that VM right inside of the whole system. That obviously is kind of dangerous, just running VMs like that. But then again, it's a cool way to get a VM moved over. And because we VPNed into the client, I set up a VPN between my site and their site, and then we just said, here, stream this here, copy it in here. So we were able to export this across the VPN without tying the systems together, through XCP-ng. So there are just some really clever things you can do, and I like it. It's pretty cool. I was like, that's so novel, taking whole virtual machines, packaging them, shipping them out over a web URL, and then importing them back into the next machine. That's awesome. I mean, I think that sounds like the virtualization equivalent of, you know, wget or curl piped to Bash. It's really much like that, yeah.
Yeah, which is something we tell people: look at the script first before you run it, which is what you're supposed to do. But, you know, that's probably a good thing overall. If you have a trustworthy source, you can get a reference VM for something to check out. And I think it's really cool, especially if you have a product that you wanna test out, to have a reference VM blessed by the actual vendor of that product to get a feel for what it is you're evaluating. That's always great, I think. That's what I think is really cool about how that's done, because that's really what it's going to be for. So you want to have a trusted vendor, or it opens up the opportunity for people like me and Jay: if we wanted to create something, we could throw it up on a website, like, here's this tool we'd like you to test, and we packaged it with the operating system, with the config that we want. And we can set it so the next time it boots, it does certain configuration stuff, like a wizard, and then here's the whole OVA file. We know it's gonna work. There are no weird environment variables that you may run into; as long as you have a standard hypervisor running XCP-ng, you can grab this URL, download, and import all in one step. So I think that's really awesome. That is awesome. So let me think of the last thing that was in there, and it's the one thing people have been asking for in Xen Orchestra. This is where there's a rub, of course, where some people go, I load XCP-ng, but then I load another virtual machine for Xen Orchestra, and it doesn't seem seamless because you're building it like that. They are working on XO Lite, and XO Lite is slowly coming along.
It's gonna be a version of Xen Orchestra that's gonna be missing some of the backup functionality; they're just not able to put that in the Lite version. What they are able to do, though, is allow you to have this XO Lite running, so you won't need to run full Xen Orchestra to have a web interface to manage multiple Xen servers. So they're doing some engineering around being able to start and stop VMs, get things going, so you'll have a local web interface. Well, kind of local, because of the way it works. It can be downloaded locally, but the way they've designed it is kind of clever, so you never have to update it. You just have a one-liner that you add right now, and you can look up how to set this up in their beta. You throw it into the web server, and it actually pulls down the entire program into your browser each time you load the page, and it loads relatively quickly; it's not that big. Then it makes a local connection to the server that it's on and finishes running. So the version, every time you hit F5 and reload that page, is always the latest version of XO Lite. Now, there's gonna be a way in the future to pull it local so it doesn't have an external internet dependency, but for a lot of people that's fine, because having an internet dependency on something means it's always up to date. It's really interesting how they engineered it. Like I said, you'll be able to pull it local, but being able to have it pull like that. It's one more thing they're working on, but that's a little down the road; it's still in beta. If you wanna poke at it, they have a forum post in the XCP-ng forums where you can engage and discuss it.
So in my opinion, at first, when I started playing around with XCP-ng, when you showed it to me, I was kind of put off by the fact that the UI was separate, and a lot of people are of the same opinion. But since then I've come around, and I think I might actually prefer it, because one argument you can make is that if you have no web interface or management application running on your hypervisor, it's not wasting CPU cycles on something that's not specifically related to the task at hand, and then you as the administrator can control where that UI is. For example, you could have it as a VirtualBox VM on your laptop, and it's only running when you wanna make changes. Similar to how, with UniFi, you don't have to have a Cloud Key; you could just have the UniFi controller in a VM in VirtualBox all the same and just launch it anytime you wanna make network changes. Ultimately, I think I started really liking that idea, because it gives you the flexibility of where that web interface is going to run. You can run it from within XCP-ng, which is kind of weird, having a VM inside of XCP-ng that's the UI for XCP-ng. But in my opinion, it makes more sense to just run it on your computer, the one you use to manage your stack, and you have that flexibility. And it kind of seems like you still have that flexibility with the Lite version, because, if I understood you correctly, it's not actually installed locally; it's just run from the website, making a local connection. So I can argue you get pretty much the same benefit even with that. And then there's also the scalability of it, because we have some consulting clients with lots and lots of hosts. You only need one, because it's a one-to-many relationship: you can have many, many hosts, many, many pools. A pool is a grouping of hosts. And we have a client whose central location in Chicago manages their data centers in different cities.
Each data center location is a different pool, and one dashboard in Chicago sees, across VPN links, all the different data centers that they manage in different states and everything else. So they only have to worry about one instance of it, and that's one of the beauties of the way it works. And you're probably going, well, Tom, do they have enough backhaul to be able to do backups across there? That's actually where, when you start getting into the enterprise architecture things I didn't cover, they've done a lot of enhancements: they have a series of proxy tools that are built in. So the worker jobs get kicked off by lightweight proxies. You still manage it through one instance, but instead of talking to groups of servers in another data center, which of course has a bandwidth cost, it talks to the proxies, and the proxies handle the job. So it talks to the small, lightweight proxy that doesn't really have an interface; it's just an extension of the Xen Orchestra tool. It kicks off the proxy, and the proxy then handles all the work. All right, I'm gonna back up the VMs on this particular pool, or I'm gonna handle the load balancing on the pool. So the proxy workers keep working, but they take all their commands from Xen Orchestra. So you end up with this very centralized, the dream we all have in IT, single pane of glass for VM management across multiple geographically separate data centers. And hey, I've heard a lot of people pitch it; I've never seen it done as smoothly as XCP-ng does it, and open source, by the way, you can compile it all yourself. So it's pretty wild. I mean, I know VMware has stuff like this, don't get me wrong. This is not the only product I've seen that does this, but it's open source, and you can download it, compile it, and do it. And of course, the enterprise clients that we consult with buy support contracts for it, and they're more than happy.
They're like, yeah, good support, good product, no problem. Yeah, yeah, that totally works out, especially considering, you know, I like the fact that a lot of these companies are seeing home lab people as an asset rather than a tedious annoyance, because we find the bugs, we report the bugs, we complain about the bugs. So either we complain about them and/or we report them, but the projects get better. And also, if we have solutions like what they're offering, and we can compile them for free and run them, then we might be more likely to tell our employer, hey, you've got to check this out, I think it might be a good fit. And I think ultimately that's a win for the vendors. Yeah, I mean, we've had a lot of times where, it was funny, because we met some people, a pretty good-sized IT team that we were doing some consulting with, and they follow our YouTube channel and the Homelab Show. They set up a lab based on this, and they were actually shocked at just how well it worked. They're like, wow. That happened a while ago, and they've now moved all of this into production. So we've got some pretty big companies that use this as their hypervisor. Like I said, picture multiple data centers and you get some sense of the scale and scope of the company. And another one we consult with over in Europe has 2,100 virtual machines running, all managed by XCP-ng. So people ask, is it enterprise ready? I mean, 2,100 VMs, what do you think? Is that an enterprise-class setup? I would say so. I mean, it's definitely not your average home lab setup. If someone has that many running, let us know, but yeah, that would be an enterprise. But I do know that we have some above-average home lab people in here, so feel free, sometimes me and Jay get a kick out of it, you can tag us on Twitter if you have a cool picture of your home lab setup. We always get a kick out of seeing that.
Yeah, you know, people have interesting configurations and ideas, so I'll throw that out there. You can share that with us; we like seeing it. We are tech enthusiasts, and we may have the channel to share things on, but that doesn't mean we don't want to see other people's stuff too. So share things like that and tell us some of your more complicated configs. Sometimes it's interesting what people come up with. Me and Jay spend a lot of time overthinking it; me and Jay were hanging out last night just talking about how to overdo automation. Yeah, it's like, you know, you talk about how to solve a problem and you come up with a solution that has a lot of moving pieces, but it does work and you can't think of anything better, and then you mention it to someone else and they're like, well, why don't you just do X? Like our friend Tony we were hanging out with, in person, and I'm complaining about the fact that you can't do Wake-on-LAN with 10 gig, and he's like, well, just do it over one gig. I'm like, but I don't want to use one gig, I have a 10-gig card, I want to use that. He's like, no, no, no, don't put an IP address on it. Just have the one-gig card for Wake-on-LAN and don't actually use it for any traffic. I'm like, oh my God, I was going to overcomplicate the mess out of that, and he just came up off the top of his head with this simple answer. I'm like, wow, that's genius. Okay, that's what I'm going to be doing. You know, that's kind of how it goes. So I'll piggyback on that and just make it a challenge, but it's not really a challenge. Please let us know about your home lab, even if it's not a complex setup, but something just, you know, darn clever, like, why didn't we think of that? That's awesome. Just let us know about it. I'm not going to promise we'll mention anything. We may, we may not.
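Tony's trick of keeping a dedicated one-gig NIC just for Wake-on-LAN works because the magic packet is dead simple: six 0xFF bytes followed by the target MAC address repeated 16 times, broadcast over UDP. Here's a minimal Python sketch; the MAC address shown is a made-up example.

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by
    the 6-byte MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet over UDP (port 9 is a common convention)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# e.g. send_wol("aa:bb:cc:dd:ee:ff")
```

The target NIC only has to be powered and listening; it never needs an IP address, which is exactly why the unused one-gig port works.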
I don't know exactly what it's going to evolve into, but I want to see what people are doing. And I think that's just something that's so amazing, when you see people's racks, or maybe their bookshelf if they don't have a rack, because let's be honest, sometimes that's what we do. If you have something small, something huge, something that's like, oh my God, how many VMs do you have? Just please let us know and put it in the feedback form. That would make a Q&A a lot more interesting, in my opinion. Yes, it's always cool. And please fill out the Q&A form, because we need to do another Q&A episode. So if you have questions, things you want us to talk about, we save all those up. We see them filling up in the spreadsheet we use where all this gets collected. So absolutely throw those questions at us, along with products you might want us to cover, and we'll see if it's something we have time to dive into or talk about. We always like engaging with the community, because our goal is to help educate the community on all this home lab stuff. Another thing that I'll mention, and I think I've mentioned this before on the podcast, I have a couple of quick things myself, but one of them is the app called Pushover, which I'm pretty sure I've mentioned. But again, it's just something that's not going to fill a full episode, because what it does is give you notifications, and it does it very simply. But what I like about Pushover, a number of things I like about it, is that it's easy to implement and you can have all of your app notifications under one umbrella. So you don't have to worry about, you know, maybe one of your home lab devices is sending you alerts via email, maybe another one is sending you text messages, you know how it goes, just all these different things. You can have everything under one umbrella. You can create a different app for each one of your things.
So I have Nagios and all my other things going in there. The downside is that it's not self-hosted, so I wanna give you guys that disclaimer. It's an app you can download on Android or iOS, and then you basically get an email address where anything sent to that address shows up as a notification; there's an API as well. And I really, really like it. It's not free, but I don't really mind their pricing model, and I don't think you would mind it either. It's $5. No, not $5 a month or $5 a year. It's $5, that's it, period, done. Just that one $5 payment, and you never pay them anything again. So even though it's not self-hosted, it's still affordable, and it's actually kind of good, in my opinion, to have something that's not self-hosted for your notifications, because if you're self-hosting the thing that's supposed to notify you, then what if it breaks and you don't know about it? To be fair, it could break on their end too, which I'm hoping it doesn't. But so far, so good. What I was hoping for is, if anybody knows an alternative to Pushover that is self-hosted, I would like to know about that, because there might be a number of people in the audience thinking, well, that's great, but if it's not self-hosted, I don't really care. So it'd be nice to have an alternative for people that would rather have something self-hosted, which most of us would. So let us know if there's an alternative to Pushover, but for right now, I'm really liking it and I'm moving all of my things over to it. Not sponsored, not promoted, but pushover.net. I dropped the link, and we don't have any affiliation with them; we just like them. Zero affiliation. I've never talked to their people. I couldn't even tell you their people's names, but I really like it. And another... It's got an endorsement from Veronica Explains as well. So she likes it too. So yeah, we're in good company then.
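The Pushover API mentioned above is a plain HTTP POST of form-encoded fields (your app token, user key, and a message) to their messages endpoint, which is why it bolts onto almost anything in a lab. A standard-library-only Python sketch; the token and user key below are obviously placeholders, and you should check Pushover's API docs for the full field list.

```python
import urllib.parse
import urllib.request

PUSHOVER_URL = "https://api.pushover.net/1/messages.json"

def pushover_payload(token: str, user: str, message: str, title: str = "") -> bytes:
    """Build the form-encoded body for Pushover's message API."""
    fields = {"token": token, "user": user, "message": message}
    if title:
        fields["title"] = title
    return urllib.parse.urlencode(fields).encode()

def notify(token: str, user: str, message: str, title: str = "") -> bool:
    """POST the notification; returns True on HTTP 200."""
    req = urllib.request.Request(PUSHOVER_URL,
                                 data=pushover_payload(token, user, message, title))
    with urllib.request.urlopen(req) as resp:
        return resp.status == 200
```

Keeping the payload builder separate from the network call makes it easy to test your alerting glue without actually sending notifications.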
So another thing that I wanted to talk about on the podcast was Gitea, but again, we have an app that's easy to install, it does what it says it does, and it does it simply. That's it. We could probably fill, if we really tried, maybe five or seven minutes. Gitea, spelled G-I-T-E-A, so you combine the word git, like GitHub but it's not GitHub, and tea, like let's have a spot of tea. Gitea is a self-hostable GitHub alternative, a GitLab alternative, where you have an actual Git server. You don't need Gitea to have a Git server, because technically you could have no UI at all and just have Git installed on a server, but it's written in Go, it's very fast, and it's easy to install. I have a video that just launched on my channel this week that goes over how to set it up. And if anyone's looking for a way to have their repositories local, maybe you don't want it in GitHub or GitLab or something like that, I totally understand, then Gitea is probably a good way to go. To be fair, you can self-host GitLab too, but if I was going to self-host something, it would be Gitea, because the requirements are so few. So I reckon it's easier. Yeah, the setup is easier, and it's lightweight. GitLab, for example, I don't think you can run on an instance with one gig, because if you're going with a VPS provider like Linode or any of those, with GitLab you probably want four gigs of RAM, which is going to add up. But the cheapest instance you can typically get from a VPS provider is going to give you one gig of RAM, and Gitea will run in that. And it's basically however many users you have that's going to slow it down.
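Beyond the web UI, Gitea also exposes a REST API under `/api/v1`, so you can script things like repository creation with an access token. Here's a hedged sketch of building that request with the Python standard library; the base URL and token are made-up examples, and the exact option fields should be checked against your Gitea version's API documentation.

```python
import json
import urllib.request

def create_repo_request(base_url: str, token: str, name: str,
                        private: bool = True) -> urllib.request.Request:
    """Build a request for Gitea's create-repository endpoint
    (POST /api/v1/user/repos), authenticated with an access token."""
    body = json.dumps({"name": name, "private": private, "auto_init": True}).encode()
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/api/v1/user/repos",
        data=body,
        headers={"Authorization": f"token {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# e.g. urllib.request.urlopen(create_repo_request("https://git.example.lab", "TOKEN", "homelab-configs"))
```

This kind of thing is handy for automation, like bootstrapping a repo per project from a script instead of clicking through the UI.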
So if you're just using it in the home lab and you have yourself and maybe one other person, that's not going to be a problem. You could probably have, I'm guessing, 10 or more, and that's just a guess, before it starts to slow down. But you can run it for cheap, or just run it on your own hypervisor internally and never expose it. I do like it quite a bit. So I want to throw out a mention for Gitea; there's just not a lot to talk about to fill an episode, but if you're curious how to set it up, there's a video on my channel you can check out, and maybe get your own Git server. And you can host it on Linode if you want. Sure can, that's where I was running it, and it works really well. It's got all the features you would want. Obviously creating repositories, but adding keys and things like that, the usual suspects when you have a Git server for automation and so on. It may not be as full-featured as GitLab, but it has all the features I've ever wanted, so for me it totally fits the bill. And someone had commented, I have not tested this, but someone said ntfy is a great self-hosted Pushover alternative, so go ahead and make note of that, Jay. I am just opening up my browser tab right now, and I'm going to pop that in there so I can take a look at it later. It looks like it's, if I'm on the right site, let me just double-check, because you never know before I mention it. Yep, it's ntfy.sh, and looking at the screenshots, it really does look like an alternative to Pushover. So that was quick. I was expecting to wait until I found it in the feedback form, but still, let us know if you have another one. I like this so far; maybe I'll check this out too. Yeah, maybe that's another project video we'll do, how to get some of these things set up. We always try to figure out ways, because everything is just like, let's push it to the cloud, let's push it to subscriptions, and we get it, that's the push from a lot of products.
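For anyone evaluating ntfy as that self-hosted Pushover alternative: its publish model is even simpler, since the notification text is just the body of a POST (or PUT) to `https://ntfy.sh/<your-topic>`, or to your own self-hosted server, with optional headers like `Title` for metadata. A minimal Python sketch; the topic name is a made-up example.

```python
import urllib.request

def ntfy_request(topic: str, message: str, title: str = "",
                 server: str = "https://ntfy.sh") -> urllib.request.Request:
    """Build a publish request for an ntfy server: the notification text
    is simply the POST body, sent to https://<server>/<topic>."""
    headers = {"Title": title} if title else {}
    return urllib.request.Request(
        f"{server.rstrip('/')}/{topic}",
        data=message.encode(),
        headers=headers,
        method="POST",
    )

def publish(topic: str, message: str, title: str = "") -> bool:
    """Send the notification; returns True on HTTP 200."""
    with urllib.request.urlopen(ntfy_request(topic, message, title)) as resp:
        return resp.status == 200
```

Anyone subscribed to that topic in the ntfy app or browser gets the message, and pointing `server` at your own instance keeps it fully self-hosted.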
So I'm always interested when there are products that offer an alternative besides the cloud for certain things. And oddly, I can talk a little bit about it: Cisco's reached out to me, because they're actually working on some new product line stuff that doesn't have licensing on it, made for some of the smaller business stuff. Granted, it's still Cisco, which means it's going to come with a Cisco price tag, but it doesn't have a recurring license fee, is my understanding. I'm still investigating all of it, because I am ever skeptical of what limitations Cisco will place on something that doesn't have a license. They are more late to the party than Blockbuster was, trying to get into streaming at the last minute to save themselves. I mean, they are very last minute, but you know what, good for them for getting into it, because that's what they should be doing. So hopefully it's good and they do it right. Because some things, I get it: for continuous development, there are fees that go along with that, because you have to get the developers paid, so they have a fee attached. But maybe Cisco and some of the other companies go a little too far with everything needing a license fee. So yeah, finding a happy balance is hard. Finding things that don't kill you, where the answer isn't always just put it in the cloud; that's not where everything belongs, and we're all about self-hosting it. If someone asks me about some of these low-code, no-code things, I'm like, I'm trying to get people involved in tech, not pretend there's magic where, if you just pay a subscription fee, you can do very little, you can drag boxes around and build something. Someone still has to be the engineer that made the magic happen, and I think that's our audience here. Yeah, I think a great number of people in our audience are probably the people that are going to be disrupting things here pretty soon. And then I can't wait to hear in the news that someone created a new technology and disrupted the industry.
And they're like, yeah, I was watching or listening to the Homelab Show and I got this idea and I did that. I'm like, yes! One of our audience members disrupted something. So that'd be great. Yeah, someone from following my channel now works for Graylog. They loved the product, they got into it, and it got them into SIEM, it got them into a lot of stuff. Fast forward, they actually just got a job at Graylog, and I posted it on LinkedIn. I thought that was actually cool. So, hey, we love those success stories. We love seeing all the labs and other new ideas. Did you have anything else, Jay? No, that's all for today. All right, that's all for today. Send those questions in so we can get our next Q&A episode going, and thanks, everyone, for listening. Awesome, a great show.