Happy Sunday. And now that I've fixed a spelling: World Backup Day, or Word Backup Day. Back up your words or back up your world, whichever it is, I don't know. It just happens to also fall on Easter, so those of you that celebrate Easter, happy Easter as well. I believe it's Easter. I think so. Yeah, that's right, Easter. Think about that a second. The bunny holiday. Got my tea, got some people. Well, we'll hang out. This is probably not going to be a super long live stream. I figured, it's World Backup Day; I was working on some projects and didn't really want to take the time to make a video about this, so I thought I'd have some discussion around TrueNAS. There's a lot of commentary that I've gotten from people on it, and it'll probably be a secondary video, but I at least want to talk with all of you live to kind of flesh out my thoughts on this. And it's the concept of immutability. People ask about it a lot, but I'm realizing more and more people throw the term around without understanding its true meaning in the world of tech. Because there are things that are immutable. The most frequently quoted things that are immutable would probably be death and taxes. We are not sure, with our present technology here in 2024, how to escape death; therefore it is something that is immutable, or inevitable. But when it comes to data on systems, people think, and I get a lot of requests: where's that magic box that says don't ever let this data get deleted? And it's not that simple. It's not that simple even with your cloud storage providers when they offer immutability. It is a term thrown around in tech that needs to be a sentence, not a word. Let me explain why. Nothing in tech is immutable. Matter of fact, our whole problem, and why we have backups, is the fact that the stuff just breaks sometimes, and with data and the bits we're fighting against entropy.
We're fighting against the fact that threat actors exist and sometimes will get in there and do something. There are a lot of reasons we would love for things to be completely immutable. But the fact of the matter is, especially with technology, just because you've checked the box doesn't mean someone can't uncheck the box and delete it. The way I always look at immutability, and I might try to figure out if I'll come up with a visual to represent this: think about your data as it's created. If you want that data to land on a backup or storage server where it has immutability, the immutability has to come from a level above the permissions at which the data was created. So if user Tom creates the data, there has to be a higher user on the destination with more privileges than Tom that can say that data will have a retention policy of X. This is the way this works in your cloud systems that offer immutable backup storage. Take something S3-compatible like Backblaze: they have lifecycle rules. You can say all data on here cannot be deleted for 30 days. That way, the key that you create that goes into that bucket can create data, it can delete data, but technically on the back end there's always a 30-day copy. So you can say that's 30 days of immutability. But it's not truly immutable, because as the admin of my Backblaze account, I can purge things out of the Backblaze account. So technically it's only immutable in the sense that the key I created for the bucket may not have permissions to delete, based on the lifecycle policy I set. And it's kind of the same for TrueNAS, but it gets a little bit more fuzzy and complicated. Because I just did a video on ZFS replication, and there were a lot of people saying I shouldn't have used root. But I want to point something out: if you don't use root, the other option is to create a user. And I actually was playing around with this a bit to show people how it works.
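The cloud side of that can be sketched with S3-style Object Lock. This is a hedged example, not Backblaze-specific: `my-backups` is a hypothetical bucket, and exact behavior varies by provider. The point it illustrates is the layering I just described: a default retention stops the upload key from deleting early, while the account admin sits above that level and can still bypass it.

```shell
# Sketch only: 'my-backups' is a hypothetical bucket created with Object
# Lock enabled. A 30-day default retention means the application key that
# writes objects cannot delete them early...
aws s3api put-object-lock-configuration \
  --bucket my-backups \
  --object-lock-configuration \
  '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"GOVERNANCE","Days":30}}}'

# ...but the account admin is still a level above, and in GOVERNANCE mode
# can remove a version anyway -- "immutable" only relative to the lower key:
aws s3api delete-object --bucket my-backups --key backup.tar \
  --version-id EXAMPLEVERSION --bypass-governance-retention
```

So "immutable" here is really a sentence: immutable for 30 days, for this key, unless an admin with bypass rights says otherwise.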
So let me show you the user, which is called TomTest, that I have created. And I might do this in a separate video just to break it down. But one of the problems is with TomTest here, so let's pull up the system that actually sent some data to TomTest. We'll log into the other system. This is the same system I was using if you watched my video on setting up synchronization between TrueNAS servers. We have a data protection task, and that data protection task, if we look, is using the TomTest SSH connection. If we go to the credentials and we look at the backup credentials, there's TomTest. We have a remote host key I set up for these demo systems, and we have the SSH key pair we set up for these host systems. But the part that matters is this: here we have TomTest, and we have to allow sudo commands with no password, or Tom can't do this. So TomTest has to have a shell, and he has to be in the sudoers file. Therefore, the data that TomTest creates, and let's show the data set here, well, actually let's show the destination data. So we have, on dozer, test one, and I believe that should match over here because I think I already sent the data. If not, I can send it again. Yep, there it is. There's the test one data. The point I have is, if you look at who owns this data under permissions here, you'll notice that root owns it, even though Tom is the one who initiated it, because Tom's in the sudoers file. And being in the sudoers file means Tom has permission to delete other things on this server. That is kind of the challenge here. If you were to compromise a server and compromise the credentials set up for replication, even if it's not the root credentials, that user will still have the ability and the permissions to create and delete things on there. I've been trying to find a way around it, and it's kind of interesting. I don't know if conundrum is the right term, but it's an issue.
Let's just say the user having that ability is problematic. This is one of those things where there are a lot of comments, and I see people excited that they offer sudo. But sudo without a password means the user has essentially all the same permissions as root. Or, at least, you can actually limit it: you can select which commands can be run. That way they couldn't do a zfs destroy, which would be great, but they can still overwrite data with zfs. As a matter of fact, I don't even know if you could limit them to not being able to destroy data with ZFS, because if you have to allow certain zfs commands, how granular does it get? I'm going to be fleshing this out a little bit more, and if someone knows, please, let me just pull this up here. Because it's not in the documentation, and I couldn't find any answers in the forums that were clear on this, I will throw this email address up there for where to go: vlogthursday@lawrencesystems.com, the one I've put on here many times before. Oh, I should click the right button. I was like, how did they go there? But if you know exactly how to solve this problem, how to have a user that's able to replicate data but not able to purge anything off that server if the system was compromised, I'm looking for a good write-up on this, because I'd like to make a video about it. But my Google-fu has failed me, and the documentation just says to put them in sudoers, which gives them the permission to delete things. Hence that whole problem I talked about with immutability: it comes back to where you started, as in the user that creates the data. In this case, the user creating the data is called TomTest. Actually, TomTest was on this server because this was the destination. But yeah, we go to credentials, local users, there's TomTest. But as long as TomTest is in the sudoers file, TomTest also is going to have permissions to destroy things. And that's kind of the problem with it.
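On the "how granular does it get" question: sudoers can match a command plus its arguments, so in principle you can allow `zfs send` and `zfs receive` lines without allowing `zfs destroy`. A rough sketch of what such a rule could look like, with a hypothetical user and path (this is not something TrueNAS generates), and two big caveats in the comments:

```
# /etc/sudoers.d/replication -- illustrative only; user, path, and rules
# are hypothetical. sudoers matches the full command line, so subcommands
# CAN be listed:
tomtest ALL=(root) NOPASSWD: /usr/sbin/zfs send *, /usr/sbin/zfs receive *
#
# Caveat 1: argument wildcards in sudoers are notoriously loose -- '*' will
# happily match extra injected flags, so this is weaker than it looks.
# Caveat 2: 'zfs receive -F' alone can roll back and overwrite an existing
# dataset, so blocking 'zfs destroy' does not prevent data loss by itself.
```

So the mechanism exists, but whether it actually closes the hole is exactly the open question.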
That was just one of the first things I wanted to kind of get off my chest and talk about a little bit here. Like I said, if you know an answer, I'll leave it at this: vlogthursday@lawrencesystems.com. We're going to keep playing with other things. Now, my biggest issue with TrueNAS, and they have changed this by now, was the inability to have additional users for the web UI. Yeah, you can do that now. That actually is a fix, but that user still has all the same permissions. So right now I'm actually logged in as admin. If you notice on the screen here, you'll see that I have root, and then we have the admin user. I'm actually logged in as the admin user; you can see that up here at the top. But that doesn't really change much, other than, I mean, I could create a user called whatever I wanted, and root currently is actually disabled. So there's that. But once again, the admin has to have the same high level of permissions to do things. Now, what would probably be a more on-point complaint, though I don't know if it's a huge deal, is if you had multiple users and you could do role-based access control. But currently, whoever logs into the web UI is logging in to control all the things, as far as I know. There's not a way to do RBAC, or role-based access control, inside of TrueNAS. So they did switch it so you don't have to be root, but you have to have a high-level user that has essentially the same permissions, or it doesn't work. "Could you do a shadow copy of the backup server from the main server, then limit the main server's access to the backup server?" Not exactly, because that user has to have the permissions to be able to create and move data on the data sets. Even if you were to take it and have a write-only user, not read-only but write-only, then you would run out of space.
So I didn't see any way that made sense to me to do a write-only user. Like I said, I didn't come up with anything on this. I've seen this question asked in forums many times, and no one's had an answer exactly to it. They're getting closer, because they have sudoers and they have restrictions. So if we go here down to the user and we edit the user, which one? We want the TomTest one, here we go. Allow sudo commands with no password, allow all sudo commands, or you can specify which ones are allowed. So that's an option where you can specify, but the problem is the ones you need, the zfs commands for send and everything else, come from the same zfs binary as the zfs destroy command, so whoever had access to it could destroy your dataset. "The only way I could see it work would be to have a script to see if you own the data after it's backed up to ZFS." No, that actually won't work. The reason why is because this is all done with zfs commands, so you can destroy a ZFS dataset with the zfs command. It's not the same as just deleting the files, because you could stop them from running rm, for example; you could say only use the zfs command. The problem is I didn't see a way to restrict them to say don't use zfs destroy. That would be the problem. Not to mention another issue that you could run into, the data problem. Let's look at the datasets here. What datasets do we have? We have test one and TomTest. This TomTest data, here's... Oh, that's not the one I want. Your-important-data. Here's one right here. We have this your-important-data. Owned by root, it's already here. Let's see: even though he doesn't own this, can Tom from another server overwrite this data? Let's find out. So we've got 330 megabytes of data in there. Let's go over to here. Let's go ahead and add a task. This system, demo pool, some test data, different system. Actually, I'm going to do this as an advanced setup so I can do it as a one-time job. We want to do a push.
We're going to choose TomTest. Use sudo for ZFS. Netcat, that's fine. Some-test-data destination. Currently, he does not have permission for... I may have to set up another user. I think I deleted the permissions. Let me try and run this job. It will probably fail. Yeah. I deleted the keys. Actually, I can put the keys back. I made the keys; I can put them back. Let's put the keys back. When I'm done, I tear my demos down and then I build them back up, and I didn't build them back up before I started this. But I think... There we go. There's the private key. Well, you know what? I'll just build a new key pair. So let's just do that. Let me start this all from scratch. Oh, I can't delete this because I've got to delete this other thing first. This is a bug I found in a release candidate: it doesn't tell you why. It doesn't pop up a little thing that tells you the why. How did I like my interview with SpaceInvaderOne? I don't know when that goes up, but it was great. I absolutely had a great time. It was a lot of fun. Yeah. I will admit, rsync is a solution for this. If you set it up with rsync, you have the rsync job, versus having a snapshot run by a different user. So the users would have rsync access and could delete things via rsync, but the snapshots would save you. So yes, using rsync as a second job solves the problem. It's just not the way I'd like to do it, let's say. You can get real specific in the commands for sudo, but will that specificity... I'd have to maybe find someone. Maybe someone's got a write-up on this. Like, can it do zfs send but not do zfs destroy? That's the question of how specific a command it can match, and will it let me filter for that? Or is it just looking at the binary, the zfs binary, for sending that? Or is there a regex match for what commands can or can't be used? That's the question. I don't know if you can get more specific than the binary or the executables, but yeah. It's tricky. Like I said, this is not an easy thing here at all.
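The rsync workaround I just mentioned could look roughly like this, with hypothetical hosts, paths, and dataset names. The key is that the pushing account only ever touches files, while the snapshots and their rotation run as root on the destination, outside that account's reach:

```shell
# Source side: unprivileged account pushes files only (no zfs, no sudo).
rsync -a --delete /mnt/tank/data/ backupuser@backup-nas:/mnt/backup/data/

# Destination side, run as root (cron, or a TrueNAS periodic snapshot task):
# the pushing user can trash the live copy, but cannot touch these snapshots.
zfs snapshot backup/data@daily-$(date +%Y%m%d)
zfs destroy  backup/data@daily-$(date --date='30 days ago' +%Y%m%d)  # rotate (GNU date)
```

A compromised source can delete or encrypt the files over rsync, but the root-owned snapshots on the destination survive, which is the whole point.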
Let's create some backup credentials. Let's delete some backup credentials. Actually, can I just edit these and they'll recreate them? Probably not. That won't work. All right, we'll delete these. Tom testing. Actually, I've got to set a password for Mr. TomTest over here. Credentials. Local user. I mean, there's a password; I have since forgotten it. I knew it this morning when I was testing, but that was this morning. Oh, there we go. There's a password we can set. Save. User updated. It should be TomTest. Actually, no, this one needs to be admin, and this password, because this is to get into the TrueNAS. Then this password. This is username TomTest, generate new. There we go. Now it should work. See if I did this right. There we go. Reload the page. And we've got some key pairs. Awesome. We're halfway there. "For my threat model, consider the threat to the NAS separate from the threat from the client." Yeah, the problem is the command line. If you look at what it's sending, it's zfs send and all the parameters next to it to send a series of snapshots that match a certain pattern. I don't know if you can limit it to only that. That is kind of the question: can the command be limited to exclusively that? Have I received any ZimaBlades? I've not asked for any ZimaBlades. You know, I've kind of been avoiding some of the sponsor stuff. I know there's a big push for the UGREEN stuff, and I purposely didn't reply to any of their emails. The UGREEN stuff is like everywhere; they sent one out, but I wanted to reserve my own opinion and do what I want with it. Matter of fact, I want to try loading TrueNAS on one, which I'm positive you couldn't do if you did an agreement with them; you'd have a different view. I think they have rules. I'd have to talk to my friends about that and see what the rules are if you did an agreement with UGREEN.
Because I noticed no one's done that. But I don't know; this is common with companies: hey, you know, the agreement is we'll send you the product, we want you to demo it with our software, which is a fair assessment. But let's finish this thought process over here. So, data protection. Now what we're going to do here is, I have these data sets. We have some-test-data. Matter of fact, I'm going to delete these because I don't need them anymore. So let me delete the couple extra things on here and delete this. Oops. Yeah, delete that data set. So I have this some-test-data. But over here on this one, I have test one, TomTest, your-important-data. Now, your-important-data does not belong to Tom. By the way, this is a data set that already exists. So my goal, as a person who has taken over the source TrueNAS, is to destroy your-important-data on the destination one. So we see we have your-important-data and there's 330 megabytes of data in there, and this one has 650 megabytes. So what happens when I point it at an existing data set? Can we destroy it? We don't have permissions for it; we're not the owner of that one. We use the user TomTest. But let's first do this. We're going to go here to data protection. Let's just create a quick task: source, this system, some-test-data; different system, Tom testing. And we'll choose this. We'll call this one YouTube live. So this is us just making sure this works. I'm just creating a new, you know, send-this-data-over-here task. YouTube live demo one. Next. And I just need it to run once. Save. Replication has started. And all right, it finished. So we sent the data over to this system. Let's just go ahead and refresh the page here. And there's our 650 megs of YouTube live demo. And by the way, notice it's owned by root again over here. So yes, this is all TrueNAS Scale. Oh, good. So someone did load that on there. Cool.
I'm glad to know it works, because now I'm curious; if it's good budget hardware for TrueNAS, then it becomes more interesting to me. But let's go back over, because the next goal is to take this current existing data that we do not own and we did not create, called your-important-data. Can we overwrite this data? So let's go ahead and add another task. Source, destination, different system, Tom testing, some-test-data. We're going to write it over to the your-important-data. We'll call this one "overwrite your important data"; seems like a good task name. Next. Let's do another run-once job; I just need to send it over once. Go ahead and hit save. "Destination snapshots are not related to replicated snapshots, confirm." I would like to just go ahead and overwrite: replication overwrite. All right, so now it's running. Overwrite-your-important-data just overwrote that data. So before we refresh, we see the 330 megs, versus it should be 650. So let's go ahead and just refresh this page. And there we go. Whoops. We've deleted what was there and replaced it with something else. So the exact threat model that I was talking about is very real. If someone takes over your source TrueNAS, they could manipulate and change where your destination data is going, and that would be bad. And because you have to have a user in a sudoers file, it is allowed to overwrite any other data, even the data that user doesn't have access to or own on the destination TrueNAS. That is the problem I have. I don't see a way around that right now. "When using TrueNAS as target storage for VMs over NFS, is that what happens when I do a snapshot? My VMs will be backed up in a powered-on state?" That is correct. If you do a snapshot on a ZFS NFS share, they're in a powered-on state and you don't know what data in flight may have messed up. It's not the most ideal situation; that's not the ideal way to back them up. It can be good for coming back to a certain version.
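Under the hood, that overwrite is just forced replication. This is a hedged sketch, not the exact command TrueNAS generated; the names come from the demo and the flags are illustrative:

```shell
# Roughly the shape of a forced push replication (names from the demo,
# flags illustrative). The -F on the receiving side rolls the target
# dataset back and discards whatever isn't in the incoming stream -- which
# is how 330 MB of "your important data" becomes 650 MB of test data
# without anyone ever running 'zfs destroy':
zfs send -R demopool/some-test-data@snap1 | \
  ssh tomtest@backup-nas sudo zfs receive -F demopool/your-important-data
```

That's why restricting the destroy subcommand alone doesn't close this hole: the receive path can do the deleting for you.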
But you just have to know and be able to unwind those transactions and be able to do it. That's the tricky part of it all. But yeah, like I said, this is why there's no magic immutability in here. But here's where there is some immutability in TrueNAS, and I have an older video on this; I'll probably make a new one just so we have a current version. One of the things when you're setting up data in here, let me see, which one of these has a share in it? None of these. Do any of these? Well, let's just create a data set with a share on it. So we'll add a data set, some-SMB-share. If you create a data set with a share, and we'll make this SMB, just let it do all the default attributes here. And then we take this share and we manage snapshot tasks, and we'll add a snapshot task. So there's our some-SMB-share. We want to have a snapshot lifetime of, we'll say a week, and we want this to run like every hour, easy enough. Actually, we'll say I only want to keep one day, so if I forget this, it won't fill up or have some other problem that I've got to clean up. But now we can say every hour this thing takes a snapshot. This is where, when we go back over to the share... So let's go back over to our data set, and we can see this one here, and we can see its role is set up as a share. We manage the SMB share, we edit the SMB share, we go under advanced options, and let me make it big enough so you can read this. And right here: "Export ZFS snapshots as Shadow Copies for VSS clients", shadow copies. And it just links you to the Microsoft page for shadow copies. Now, this is something beautiful in TrueNAS. Even if you tie this to Active Directory, this will still work. The shares have snapshots, and the snapshots present as VSS shadow copies in Windows.
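For the curious, that checkbox corresponds, roughly, to Samba's `shadow_copy2` VFS module pointed at ZFS's hidden snapshot directory. The values below are illustrative; TrueNAS generates its own Samba config, so this is for understanding what's happening, not for hand-editing anything:

```
[some-smb-share]
    path = /mnt/demopool/some-smb-share
    vfs objects = shadow_copy2
    shadow:snapdir = .zfs/snapshot   ; ZFS's hidden per-dataset snapshot dir
    shadow:sort = desc               ; newest "Previous Version" listed first
    shadow:format = auto-%Y-%m-%d_%H-%M
```

The SMB client only ever sees the snapshots read-only through "Previous Versions"; creating and deleting the snapshots themselves stays with the NAS side, which is the permission separation that makes this useful against ransomware on the share.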
So if a threat actor gets hold of the Windows network and they attack those shares, they encrypt those shares, do the thing they do, you are able to go through the snapshots, which are immutable because they're not at the same permission level as the user creating the data. So the SMB shares are whatever user permission; it doesn't have to be a user in TrueNAS. If you tie it to Active Directory, it's whatever AD user. The snapshots are controlled by, essentially, the root user of the TrueNAS machine. And if you're smart, you're not making the root password of your TrueNAS the same as any root password on your Windows machines; you're not sharing passwords. You have a unique control-plane login for your TrueNAS, your root user for TrueNAS, or admin. You can use admin, which has those same high-level permissions on the TrueNAS; that will do it. So that should work fine for considering it immutable on TrueNAS when doing a Windows share. It's only the ZFS replication that I'm kind of stuck on. Things like this work beautifully in TrueNAS. Or, I did mention doing the S3 MinIO. You can make those immutable, because the user you create for the S3 buckets is not the same user as TrueNAS. So you can have a snapshot policy on your S3 bucket, which also will give you immutability on it. "Sudo rules can see the sub-command, but that does not protect against overwrite." Okay. "When using TrueNAS as target storage for VMs over NFS, what happens when I do..." Oh yeah, that's the one I answered. "I used your earlier video for my local share; snapshots are immutable as far as the client is concerned." Yes. "What about having scripted checks for destroy before running the command? In the sudo permissions, give access to the script, but not the zfs command." The problem is not just the destroy. Destroy is one command you can run, but as I noted, I didn't use the destroy command. All I did was overwrite the existing target, and it also deleted the data.
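On the "give sudo access to a script, not the zfs command" idea: a wrapper like the sketch below could sit in sudoers instead of the zfs binary. Everything here is hypothetical; this is not a TrueNAS feature, the subcommand policy is mine, and note that it has to reject `receive -F`, not just `destroy`, because of the overwrite problem I just demonstrated:

```shell
#!/bin/sh
# zfs-guard: hypothetical wrapper that sudoers would point at INSTEAD of
# /sbin/zfs, so a replication user can send/receive but not destroy or
# force-overwrite. Illustrative policy, not a TrueNAS feature.

validate_zfs_args() {
    sub="$1"
    case "$sub" in
        send|list|get|holds)            # outbound / read-only: allow
            ;;
        recv|receive)
            # block the force flag that lets a receive roll back and
            # overwrite an existing dataset
            for a in "$@"; do
                if [ "$a" = "-F" ]; then
                    echo "denied: receive -F" >&2
                    return 1
                fi
            done
            ;;
        *)
            echo "denied: zfs $sub" >&2  # destroy, rename, etc.
            return 1
            ;;
    esac
    return 0
}

# The real wrapper would then hand off to the actual binary:
# validate_zfs_args "$@" && exec /sbin/zfs "$@"
```

Sudoers would then grant `NOPASSWD:` on the wrapper only. Whether the TrueNAS replication UI can be pointed at a wrapper like this is exactly the open question I'm asking people to write in about.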
So if the threat actor creates an empty data set and then points it at all your full data sets and does a replication of an empty one to a full one, now they've also found a way to delete your data. "Any tips or recommendations for doing backup verification? Do I just nuke the main and see if I can bring it back?" Well, I pointed this out, and I've seen people ask this question a few times. So we go here, and I've got this way too zoomed in. There we go. If we look at the data sets on here, here's the some-test-data that I have. And let's go to the data protection, and we have these two tasks right here, the replication tasks. Let's spread them out. This one overwrote the data. I don't need this one now; let's delete it. But what about this one here? This one took and sent, and let's edit it real quick just to show you what it did. YouTube live demo. And it said push, and it sent some data to the YouTube live dataset on the destination. So let's look at the destination. And there's our YouTube live data. So how do we get that data back? That's actually really easy. There are two ways to do it. One is to just hit the restore, and it builds an inverted version of this task. It's really that simple. This is what I demoed in the video I did on replication. This creates, you know, a pull-back. We'll just call it "restore"; that's what we're doing, right? We're restoring. Where do you want to restore to? Well, let's go to here, demo pool, and we'll call this test-restore-YouTube-live. We're doing a YouTube live demo, so we're going to restore. We hit restore. We've created the task. We enable the task. We run it. And by the way, if you didn't notice here under description, this is a pull task; the other one is a push task. So you just say run now. Oh, I don't have permission to pull it back. That's funny, because that user doesn't have permission to write over here. Well, this is a weird problem.
I would actually need to set something up; the user doesn't have permission to bring the data back, but that user has permission to send the data there. Boy, that's strange. Maybe that's a way to do it: we set up a pull job instead of a push job. Huh. That's an interesting way to look at it, because it's trying to use that user to pull this data backwards. But essentially, the short answer is you just set up a pull job to pull the data back the other direction and you'll be able to pull it back over here. Or you could always go to that server. So if we were at this server, this is the one that has the data; we could also run a task that does the same thing, that pushes the data back from here. So you just go over here to data protection, where there are no replication tasks, but you'd add one. You'd point it to say, hey, take this system, destination, different system. You add the SSH connection, you run through the process, and you can push the data back to the other server. "Pull moves control to the target. This requires compromising yet another system; that should be good enough for accidents." Yeah. And if a threat actor can compromise the first system, it is likely they can also compromise the second one. Well, the problem really comes down to when you're over here in the system and you go to credentials, backup credentials, SSH key pairs. Please note, it's showing the private key over here. So if they have the private key, they can put that private key into SSH and send. By the way, something interesting: the private key is visible through the UI here, but it doesn't get saved to the disk. So kind of interesting that it doesn't get saved there. Yeah, I think the pull is probably the answer for how to do it. The downside: the pull just moves the problem somewhere else. If they get hold of the system doing the pulling, the system doing the pulling also has to have access in the other direction.
So you're just moving which system they have to compromise. If it's a push, and they compromise your source system that's pushing to a destination, then you have a problem. If they compromise the backup system, and it's pulling from the other system, they can go backwards the other way, because it has to have permission to grab the data. Yeah, because you can just grab the key pairs right out of here, I know. So basically, you really have to lock down the web interface and be very careful who accesses it, because this is where the keys to the kingdom are. This is where all your problems will be. We had a threat actor... well, we didn't; the client did. An indirect client, one that was consulting with us. I guess they're still my client if they're consulting with us. But we did find it interesting, because what the threat actor did was nothing so clever as trying any of these things. They just went and deleted their TrueNAS system. And yeah, they knew the backups were on the TrueNAS, and the client was using replication, so they actually had data from the replication. The threat actor did not go further and destroy that, but they did destroy the data sets and then encrypted a bunch of things. So definitely a big mess. "How is Scale doing in terms of apps? It seems like TrueCharts is not supported on the latest release of Scale. Will Scale give an option for easier container applications for homelab folks?" I don't know; I don't keep up much with the TrueCharts stuff. They've done a good job of adding a lot of other applications. Their application marketplace in here has gotten substantially better. So if we go over here to apps... make sure I'm on the latest. There is a, we'll say, TrueNAS category, maybe sort by app name. They've got a lot of good apps in here. I mean, they have 107 of them in here.
So they've done a good job, I think, of having a pretty solid collection of apps in here. But it really doesn't matter if there are 107 if the one app you need isn't in there. So, yes. "As in all things, you need to evaluate the threat and then the cost to remediate the threat." Yep, you are absolutely right. Oh yeah, they've greatly expanded it. And my understanding is TrueCharts fixed the incompatibility that came with the latest version. So they've updated it, they fixed it; they don't have the same problems as they did before. But yeah, there's just a great number of apps in here. My understanding, and don't quote me on this because I'm not using it: I saw a tweet or a forum post that they've now updated and they're ready for 24.04. By the way, 24.04 is not released yet; this is still a release candidate. So for TrueCharts to get ahead of it, if you will, and be able to support the Dragonfish RC1 before it's released, they got right on it. So I think that's great. "Do you think I could still sell a 2016 UniFi Mesh Pro AP to a client?" I'm going to say that feels like it's getting toward end of life if it's a 2016 model. Look up the end of life on it. I could not tell you that off the top of my head, but if you do some Googling, they do have some pages with the end-of-life dates on there. But yeah, overall... I think there's an update. The updates have gone really smooth lately too for these. I have some apps that break once in a while, but it's not been a big deal, because it's not that the app itself really broke; I just had to delete and reinstall it. And if you're setting the apps up properly and you're pointing them at a data directory, that completely doesn't matter. So go ahead and update this one, which means this one needs an update too. So I'll upgrade this one. That one's updating. Let's go over here to this one. I probably have a few apps that need updating. Yep.
Let's go ahead and update MinIO, see how that goes. Generally, it's not a big deal. I've even had them... I hit the rollback button and it worked fine. So one of the times an app did an update, I hit rollback, it rolled back, and then I checked it like a day later and it updated fine. We'll run an update on this one too. I could have checked a box to update all of them, but I did not. Actually, I need to check one more thing. That's upgrading. No, I'm not backing the Kickstarter. I'm probably going to buy one when it comes out. I'll wait and just purchase one if I think it's good enough. I turned them down; they wanted to send them to me and I turned them all down. "Is there a way to automate the config backup?" There are some forum posts on how to do it. I've never done it, and my reason why is the process and procedure I have: the config file is only important to me when I make changes. So whenever I make changes, when I'm done and I'm happy with the changes, like I know this is good, I just back it up. So it doesn't need to be constantly backed up, unless you're forgetful and you don't have a process. But in general, pfSense for example, really anything I'm configuring that has a config, this is just a general rule: I go through, make the changes, and when I'm done with said changes I say, great, am I happy? Does this work well? Sometimes, depending on the device, I may even have to reboot it to make sure the changes are all saved. And then I make sure I have a backup of those changes, and it goes to my backup place. So yeah, absolutely, there is a way to automate it; I've just never bothered to do it myself. This needs an update too. So I've got to update all those Linux ISOs, you know, that I make sure are always being well seeded. Yes, I am planning to review the Netgate 4200. I kind of sat on it for a while as I got busy, but we'll share a picture with everyone today. We've actually been deploying these in production.
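If you did want to script my "back it up when I'm done changing things" habit, it's trivial. A minimal sketch with hypothetical paths; for TrueNAS the real config export lives in the web UI, and for pfSense the config is an XML file you can download:

```shell
#!/bin/sh
# Minimal "I'm happy with the changes, grab a copy" helper.
# Paths are examples only.

backup_config() {
    src="$1"
    dest_dir="$2"
    mkdir -p "$dest_dir" || return 1
    # timestamp each copy so earlier known-good configs are kept
    stamp=$(date +%Y-%m-%d_%H%M%S)
    cp "$src" "$dest_dir/$(basename "$src").$stamp.bak"
}

# Example (hypothetical pfSense config path):
# backup_config /conf/config.xml /mnt/backups/pfsense
```

Run it once after each change you're happy with, and the timestamped copies give you a trail of known-good configs without any scheduled automation.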
I had to stop by and grab something from the office today, and yeah, there's another one. We've been deploying these in production to clients. So my review is not just your average review — this is a review where Tom has been using it for a couple of months, and we now have more of these deployed in production at clients, in use. By the way, all of them are great. So if you're wondering how it works: it works wonderfully. I've had no problems with it at all; it's really been solid. We'll share this tab. And if we go over here and look closer, you'll see this is the one at my studio here. It's also a Netgate 4200, actually in use here with multiple VPNs on it. I have a PIA VPN running and a WireGuard VPN running. It works great; it's been absolutely solid. I've got HAProxy running on it. Yeah, no complaints at all about it. So if you're just curious, do I like it? The answer would be yes, I do. I've had no problems with it. One day I'll get you deploying UDMs. Yeah, yeah, yeah — as soon as the UDM does all the things that I need. I mean, come on, UDMs aren't anywhere near getting a reverse proxy built in or anything else. I just need that flexibility for things. Best option for 10 gig? Maybe you mean pfSense — if you're asking about pfSense at 10 gig, I recommend the Netgate devices. The 8200 is awesome; I think it's great. And of course, self-building is relatively easy to do. What are your thoughts on the 4200 running Suricata? Well, my thoughts are — I mean, I chose to run Snort, so you see Snort running in here, down here. But you'll also notice that ntopng is running as well. So the answer is yes, the 4200 will run it. But — and this is always the big but — will it run it the way you want to run it? The challenge all the time is not that these things won't work; it's that they do create some extra load on the system. Do you have a thousand connections?
Or are you like me and only have maybe 50 connections? It can handle a thousand, but you're going to really start choking up ntopng, because it won't be able to keep up very well. What am I running as DNS on my pfSense? The — what do you call it — the built-in DNS service, the DNS Resolver. So the DNS Resolver is what I run for DNS on pfSense. Yay, they all updated. Yeah, the DNS Resolver. Here are all the packages running on this right now: ACME, arpwatch — that just comes installed — this AWS wizard is a plug-in, HAProxy, ntopng, OpenVPN, pfBlockerNG. By the way, with pfBlockerNG I'm only using geo blocking; I don't use it for DNS blocking. Then Traffic Totals, Snort, Tailscale. I have Tailscale and WireGuard — I have three VPNs running on here: WireGuard, Tailscale, and OpenVPN are all running on here. And as you can tell, I have a handful of interfaces on here. But yeah, it works great, no issues at all. So the 4200 is solid. For a business environment, I would probably go to the 8200 — that's what we have at the office. For business environments you want something a little beefier; the 8200 would be better. This is all running pfSense. This isn't TNSR; this is all pfSense. One day maybe I'll take a look at TNSR. It's not been high on my priority list because I don't really need it — I may have a couple of clients who could use it, but it hasn't been asked for by any of the clients we've seen. And pfSense is supported by Netgate, so it hasn't really been an issue. No, I haven't really done much with Tailscale traffic rules; Tailscale has been doing a good job. I've got to get back to playing with NetBird — that's higher on my list right now — and then I'll come back to Tailscale. But I like Tailscale a lot because Tailscale being integrated into pfSense just makes it really, really easy.
I just helped someone else solve a VPN problem with Tailscale because it was so easy for them. They messaged me: we're trying to set up these five locations, and here's all the routing stuff. They're all using pfSense, so I'm like, you could just solve all that with Tailscale. He's like, oh yeah — they had actually played with it before and kind of forgot about it. And I'm like, you can completely solve it with Tailscale. Have you begun deploying Wi-Fi 7 to your clients? What are you considering for doing so? Wi-Fi 7 is too new. The biggest challenge is density. Matter of fact, I'm going to a client tomorrow — hopefully we can film this one and do it on site. It's always density. Home users always want me to do a speed test, but a speed test isn't really that relevant. It's always a density and connectivity test; that's what people want. High-density connectivity, not speed. And Wi-Fi 7 — I mean, it offers 6 GHz now. That also means you're going to have to sell more access points. Not that 5 GHz and 6 GHz are substantially different from each other, but 5 and 6 are much shorter range than 2.4. We haven't seen a ton of it, I'll say. Thank you for watching — our T35s are coming up for replacement and we're really thinking about changing; you gave me the push I needed. There are a few different boxes out there that'll do 10 gig. 10 gig is not bad on there if you need it. We actually have not that many business clients with 10 gig on their pfSense setups; we don't run into it that much. We looked at TNSR when we were building out our data center to act as registrars. It performed very well in our testing, but ultimately we went with another router. What didn't you like about TNSR? Yeah, TNSR is supposed to be really fast because it has the vector packet processing (VPP) fancy stuff in it. I've read all the spec sheets on it; it does look really cool.
The problem is there are only so many hours in the day. That's what I run into. I want to do all the things at once — everything everywhere all at once — but it turns out you can have it all, you just can't have it all at once, and that's what happens with all the things. I was actually playing with some link testing tools today — I think I put them away already — and I'm like, I've got to review this; I've had it for three months. This is where I need a helper who can help me review things, and then it would be great because they could actually do some more of the testing. The testing also takes so long because I really take the time to validate things — hence the first 40 minutes of discussion we had about the level of validation for edge cases here on TrueNAS, making sure we understand how the threat model works for potential compromise of a system. I like to really dive into it. Is there anything to address regarding the changes in Let's Encrypt and ACME in pfSense? Not that I know of. I know about the changes, but I don't think they break anything in pfSense. I don't think they made any API changes that would break anything; they're just root-level changes. Trust me, I will know, and I will talk about it, because I use Let's Encrypt for things. I will certainly do a video if there is, and I'm friends with the people at Let's Encrypt — a friend of mine works there, one of the people who lives near me — so if I have a Let's Encrypt question, I can go right to the source, to the people who write the code, and ask them. For business phones right now, OIT. Speaking of threats: hope you haven't been too busy patching XZ. I made sure — or at least I thought I did, but it seems to be a popular question — I said it's pretty much only in bleeding edge, and I noted it's in Kali, which does run bleeding edge. Unless you're running bleeding edge, you're not likely to see it. So a lot of people were asking about it, but I'm kind of like, it's really not that much of an issue, because even if you're in...
Here's an example. I think this is running off testing. So let's look here. Shell. This is the beta of TrueNAS, and let's go to... you know what, let's do it differently. I'm going to share a different screen, because I hate trying to use the terminal inside of a web application; it just sucks. Window — that seems like the right window. I hope it's the right window. Let's make it big enough so everyone can see it. There we go. Yeah, they're using 5.4.1 inside of here. And if we look... and they're using... what is this? Bookworm. Isn't bookworm... yeah, bookworm is Debian stable right now. But I wanted to check, because if you notice, they pull from their own apt repositories — here are the apt repositories inside of here. So even the release candidate, the cutting edge of TrueNAS SCALE, is still running xz 5.4.1, and 5.6 is where the problem is. So it's really not a big deal — not really something to worry about. Also worth noting: can someone answer why I typed `strings`, then xz, then grep for the version, as opposed to just running the binary? Whoops. Isn't it... I guess there isn't... is it not in here? Where is it? Yeah, whatever, I'm not worried about that. My point is, it's really not much of a patching issue. Yeah — easy, doing strings. I see a lot of people telling people to invoke the command as opposed to doing strings, and I'm like, no, no, no. Oh, xz, not xv — okay, I'm just typo-ing. You're right, I completely typoed that. Nothing specifically; it came down to familiarity. It's different at our locations. We're an ISP, and Cisco is what we normally deploy for edge routers. Makes sense. Yeah, I had actually seen — I did the video, and I noticed people posting like, hey, just run `xz --version` and see. And I'm like, you know, if you have a potentially compromised binary, it could possibly lie about its version.
That is, you know, maybe they modified it to not report the proper version — or, you know, I don't know. It's not the best idea, if you're wondering. Okay, so bookworm is Debian 12. I think Sid is the only Debian version affected — it's unstable. Hold on, I have this note somewhere. The folks over at Wiz Security — so let's go ahead and share this — have been compiling this, which I think is great. I'll share it with all of you. Wiz is starting to make a list of things here; there's the link in the chat. So no Debian stable versions are affected, only unstable. The affected versions are 5.6.0 and 5.6.1. It looks like it's in Alpine, Arch, and Gentoo. Amazon Linux: not affected. Kali Linux images updated between March 26 and 29, and Fedora 41 and Fedora Rawhide. So it's not in a lot of places, but still interesting nonetheless. The whole attack — I'm waiting to do a debrief. People are wildly speculating and pointing fingers all the time; I'd rather wait until the dust settles. The most important thing is that we found it, we understand the method by which they were injecting the commands, and it has been stopped. The account — it looks like it was just one account that was related — has been suspended, and that's great for now. The immediate threat is done. Speculating doesn't help, but strong investigative work, putting controls in place, and seeing how this could have been handled better is definitely the way we want to go forward. I did like this as well — let me pull this tweet up, because, once again, back to the wildly speculating people, here's someone who actually took the time to say something good. I just like this quote: the way the campaign is falling apart under scrutiny is to be expected. They did not build a campaign to resist investigation; they built a campaign to avoid investigation, and they were successful. At no point in the campaign did they raise suspicion.
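The "use strings, don't run the binary" point above can be sketched concretely. `strings` only dumps printable text out of the file, so the suspect binary never executes; and once you have a version string, you can compare it against the two releases the Wiz write-up identifies as backdoored (5.6.0 and 5.6.1). The function names here are my own, and the grep pattern is a rough heuristic, not an official check:

```shell
# Read a version number out of a binary WITHOUT executing it.
# `strings` dumps printable runs from the file; we grab the first
# thing that looks like a 5.x.y version. Heuristic, not authoritative.
xz_version_from_strings() {
  bin=$1
  strings "$bin" | grep -Eo '5\.[0-9]+\.[0-9]+' | head -n1
}

# The two xz-utils releases known to carry the backdoor.
is_affected() {
  case $1 in
    5.6.0|5.6.1) return 0 ;;
    *)           return 1 ;;
  esac
}
```

Usage would be something like `is_affected "$(xz_version_from_strings /usr/bin/xz)" && echo "patch this box"`. A truly compromised binary could still embed a fake version string, so treat this as triage, not proof — package-manager metadata and checksums are the stronger signal.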
It was just bad luck. Yeah, that's a really good point, I think, made about this entire thing. It's definitely one of those situations where I'm glad we found it. It got found by luck — wow, one of those tripped-over-it type of things. It's definitely wild. Everyone's asking how many more of these are out there. I don't know. You always have to be checking; we always have to be very on guard about this. There always have to be levels of auditing. If I understand this correctly — and this is where I'm not speculating, I'm just making sure I'm recalling what I read — there was a point where it was failing a certain test and the developer said, don't run that test. That is what should have raised more flags, but it's also what got them further along: someone went ahead and said, yeah, you can just turn those tests off. Which is not what you want. Not at all what you want. Catching up on my messages. There we go. My process for pull requests is to review only the things that I expect to change and confirm those are what actually changed. What made this so fuzzy is that they were able to embed that secondary-stage loader, which also only pulled the extra data under certain circumstances. That's what made this such a tricky one; this is not an easy one. It was great levels — years — of obfuscation, which is what's wild. The attempts took a long time, the stacking of attempts to get to where they were. That made it really interesting. But yeah, it definitely set a lot more people thinking. The other part that sets a lot of people thinking is that all of this occurs, and it's so damaging to the technology industry, and then you're like, oh yeah, that's just some developer doing one thing. We keep coming back to the XKCD comic where the entire Internet is supported by some dude in Nebraska — some dev working somewhere, contributing code that's critical to everything, and that dev not getting much support.
These large companies that benefit from open source kind of need to take a step back and say, maybe we should audit this better, and maybe we should pay some of these people better and put more people on the things that are supporting this critical infrastructure. I don't remember — I think one of the developers does a couple of things. I don't know what one developer did, but one of them does have a website. What do they do? Oh, there are so many. Which, by the way — let me just drop this at you. This is what I'm looking at, and I'll share it. Man, there's been a lot more added. These are the links I sent. In fact, these are still ongoing discussions of people tracing everything out. This person has their website — I don't know what they do; they have this site here. This is one of the developers. The other one I don't know much about. But it's kind of cool: they're now tearing down the payload, which I think is pretty cool. I guess I'll wait — I'm not that good at reverse-engineering things; it's not my skill. I will read this work when it's done by the people who are working tirelessly at it. So many to-dos in here. And more people poking. Matter of fact, one of the graphics I've seen — people have been making some of these graphics, which I think is kind of cool — walks through the history of it. But I haven't validated that it's true; I've just seen the graphic and it looked cool. Hey, did I see that migrating VMs from VMware has gotten easier? I've used VMware, but I'm not exactly sure what's gotten easier about it. Bernie Robertson mentioned one of them had commits in the kernel in the past; they're scrambling to audit those commits now. Interesting. Professor: I try to keep a copy of the tools we need for operation in local repos and then build those for our deployments. It doesn't prevent all threats, but it helps. Yes. Yeah, supply chain compromise stuff is just huge, and it's definitely where the attacks are going.
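The "local repos, build from vetted copies" idea from chat can be sketched with a checksum gate: record a SHA-256 for each vendored artifact at the moment you vet it, and refuse to build if the file ever drifts. This is a minimal sketch of that pattern — the function name and filenames are placeholders, and `sha256sum` assumes GNU coreutils (macOS would use `shasum -a 256`):

```shell
# Refuse to use a vendored tarball unless it matches the checksum
# recorded when the copy was originally vetted.
verify_artifact() {
  tarball=$1
  want=$2                                   # sha256 recorded at vetting time
  got=$(sha256sum "$tarball" | awk '{print $1}')
  if [ "$got" = "$want" ]; then
    echo "ok: $tarball"
  else
    echo "MISMATCH: $tarball (got $got, want $want)" >&2
    return 1
  fi
}
```

It wouldn't have caught xz — the poisoned tarball was the upstream release, so its checksum was "correct" — but it does stop silent tampering between vetting and build, which is the part of the supply chain you control.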
We've gotten really good at a lot of other things, so the threat actors focus in on what they can get to, and, well, this is that place: how do we get to the supply chain? Man, there's a lot of discussion on there, a lot of reading. I'll wait until it's all done before I read all of that — that's how I feel about it. Fun stuff, fun stuff. Well, I think I'll wind this down here. This was lots of fun; glad everyone was here. So yeah — oh, you feel bad for the original XZ dev, who was burnt out and taking on other contributors. Yeah, yeah, yeah. Oh, the arm's fine; that all healed back up, so I'm good. Matter of fact, that's what I was working on today: I was playing with one of my motorcycles, adding some more stuff to it. The Honda — they really jam things in. That's where the battery is, man; they've got it really jammed in there. I was adding some more accessories to my Honda, and yeah, they don't make them easy to work on. They like to see how tight they can get it, I guess. I don't know. But all right, thanks everyone for joining. Awesome hanging out with everyone, and I'll see you next time. Thanks.