ZFS — I don't know if we're going to break it or not. That's the point of doing it live, to have fun with it. Oh, I should probably bring the microphone closer to me. Details. Details that matter to people who listen, if there's anyone listening — I see zero people right now. Seven, there we go, some people joined. This is random: I didn't feel like editing a video while I played, and I figured maybe I'll do these sometimes on Sundays, days where I sit and play with things because I'm curious what will happen if I do it, and this is kind of where I'm at. We're going to play with a ZFS pool. I already recorded the video, but you get on those side quests, as I call them, for all the things where you go, "that might be a neat idea," and start doing it. So why not do that live, and people can play along with me, ask questions, or poke at it with me at the same time. And I actually built out some slides about symmetrical VDEVs — there are very few slides, and they'll be in the video — so I have some basic explainer material. But obviously we want to play with the system itself, and we're going to start with destroying the pool. Because, yeah, I think I've already expanded it as far as it can expand, so that means I do an export, disconnect, destroy on the demo pool. And this is why you have me do it — who wants to destroy their own pool? Why not destroy Tom's pools in his lab and the things he has going on here? Way more fun to do it this way. The downside is you have to wait, just like I do, for the pool to go away while it's reconfiguring. This is not the most performance-oriented system I'm doing this on. It's my old ZFS system — or an old TrueNAS system, I should say — that I'm getting ready to retire. It's been spinning for a long time, and the drives are old, to say the least. A few of them have been spinning for something like seven years, so that makes a bit of a difference. But I wanted to try something here, because you can — not that you should, but you can — build things like this. We're going to go ahead and add a RAID Z VDEV with just a couple of drives, and then take a couple of other drives — let's do these ones here — and add another VDEV with just two drives. Oh, I need three; okay, three is the minimum to make this idea work. There we go. It lets me know this is a bad idea, and I want to know just how bad of an idea it is. What happens when you try this bad idea that people say no to — "you have to add them symmetrically with the same number of drives"? As far as I know, there aren't any data integrity problems; it just warns you about doing it. So now I've got unbalanced VDEVs in here. Good morning, good morning, and good morning. Morning means you're in the United States; if someone said good afternoon, they're obviously the European audience, so I can get hints as to where you are based on what you're saying here. All right, now we have this created. We'll line this up so we can see that we've got four drives in the first VDEV and only three drives over here. So let's go ahead and write some data to them. Oh, I've got to create a dataset first. I guess we'll do it through the UI — I can do it from the command line too, for anyone who didn't know — but we'll add a dataset and save. All right, now we can write this out, because now the path should match. Writing out some data... it seems to be fine.
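(For anyone following along from a shell instead of the UI, this is roughly what that unbalanced layout looks like with the ZFS command line tools. The pool name "demo" and the sd* device names are placeholders for whatever your system actually has; -f is what overrides the mismatched-VDEV warning the UI shows.)

    # create the pool with a four-drive RAID-Z1 VDEV (device names are examples)
    zpool create demo raidz sda sdb sdc sdd
    # add a second, narrower three-drive RAID-Z1 VDEV; -f overrides the
    # mismatched replication level warning about unbalanced VDEVs
    zpool add -f demo raidz sde sdf sdg
    # confirm the layout: one 4-wide and one 3-wide RAID-Z1 in the same pool
    zpool status demo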
You know, this is the "don't do it" argument people have, because you can create these unbalanced VDEVs. And I think if you're just trying to do things for capacity reasons, I don't think this is a problem. I'm also demoing at the same time here: if you didn't know, there's an algorithm ZFS uses to figure out all the VDEVs and how they're grouped, and it's distributing the data I'm writing based on percentage of size. So it's evenly doing it across each of these as it builds out the writes on here. So, almost. Hey, Michigan. Yep, it's Michigan for sure. New Hampshire, Ireland. Good time of day, Maine. And a good afternoon — all right, now we know who the UK people are. Well, I already know Sam Sheridan's in the UK. But, you know, you can build these and expand these out. Matter of fact, this is going to be running for 14 more minutes unless I stop it. "I have unbalanced VDEVs on my backup server — just slow, not unstable." Yeah, and that's the part I want to kind of hammer home to people: it might be slow, but it'll work. There's not really a problem with doing it this way — performance-wise maybe, but as far as the integrity of your data goes, yeah, it'll work. We just distributed the data across these. And by the way, even while it's doing all these writes and adding to the drives, you can actually go through here and expand it. Let's actually see what it's doing — an easy way to do this. It's not writing much because it's reading from memory now; it's laying out the file. So let's cancel out of that. Go to /mnt/demo/fio — there's all that data we wrote. Let's just go ahead and blow it away. Hey, why not? So that got blown away, and in a second these will update to say the capacity has changed, because there are no more files in here. So that'll change in a second. Let's add more drives. So let's go ahead and add some VDEVs. How should we add them? How many? Well, that's interesting. Oh, that's why — I have it sorted wrong. I'm like, where are all the drives? So I had these drives here, so we'll add a four. There we go. Oh, mixing different sizes? Yeah, why not — force that too. Let's just make this a messy hodgepodge of a machine. Yeah, yeah, yeah. So let's format those and we'll lay these out again. All the way from Poland. Well, look at this: we've got 23 terabytes of capacity out of mismatched drives. Let's expand this part down — there are all three of them, RAID Z1 VDEVs. This one has five, this one has four, and three... one, two, three, four, five. So yes, we've thoroughly mismatched all of this. So let's go ahead and write data to it now, and it'll still evenly distribute all the data amongst these as it lays out all the files. Actually, let's modify the fio job. rm * — yeah, delete those. iodepth 32, number of jobs 32 — why not more, 64? We can go 64. We'll make them bigger, 256... and make them two gig. I think that should work; I think I'm doing that right. Hey, cool, it's laying out the files. Oh yeah, now it's really heavy, heavy I/O write rates. "Travis learns about breaking stuff at the office." I think we broke something at the office playing with some RAID arrays, but this is why we play — so we can learn how to do these things. What else are we going to do on a Sunday morning? How am I today? I am well. I am breaking ZFS arrays and seeing what happens, because something's going to happen when you break this. I like the way it writes out all the data here. Let me expand this out a little more while it writes those files. There we go.
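(The write workload here is an fio job along these lines — the directory, block size, and job counts below are examples of the kind of thing I'm typing, not an exact transcript of the command, so treat it as a sketch.)

    # a random-write fio job against the pool's dataset (values are examples)
    fio --name=randwrite --directory=/mnt/demo/fio \
        --ioengine=libaio --rw=randwrite --bs=1M \
        --iodepth=32 --numjobs=64 --size=2G \
        --group_reporting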
You can see it's still evenly distributing across here. So it gives you the warnings when you set these up, but it's not going to hurt your integrity doing this. You can still take all these drives and set them all up on there. Now, I don't have physical access to this — well, right now I don't. I know where it's at: it's at my office and I'm in my studio. But the RAID Z VDEVs — one downside of this, and especially this one right here with only three drives: it's RAID Z1, so it can only take a single drive failure, and if any one of these VDEVs fails, the whole pool comes crashing down. Yeah, we'll talk about lessons learned at some point about what happens when you break things at the office, but Tom said to do it, so nobody's in trouble. It's just a matter of figuring out why it broke — it shouldn't have, it just did. We'll sort that out at a later date. But as far as integrity goes, you can have these mismatched VDEVs like this, lay out the files, and it'll still evenly distribute the data across here. Now, the way ZFS works is: because this one has a capacity of 13.6, this one has 10.9, and this one has 7.5 terabytes, it's using that information to lay out the data. It's going to give more data to the VDEVs that have more capacity, so that's just how it evens out. The reason it's a little offset for this bottom one here is because I didn't delete all the data ahead of time. If we deleted all this and got rid of all those random writes, it would fix and rebalance itself. So it's pretty simple, so to speak, in some ways, how they work. Yeah, the performance hit comes when they all start to fill up. Yes — then you have an even bigger performance hit, because they'll fill up by an uneven amount. So there's definitely that as a potential issue when they fill up. But if your goal was integrity, well, the integrity is there. That's the good part: you can maintain the integrity of your data. Now, a lot of people say — and we'll talk about that real quick. Right now we have like 23 terabytes of capacity doing it this way. What if we go here to demo, confirm, destroy the pool? "Made sense once I realized it, but I did not know that Synology does not like mixing drive types, like mechanical and SSD." Oh, okay. Good — you learned what mistake was made when we were playing at the office. Hey, morning, Cody. So yeah, surprise Sunday stream. "Never tried that before." And that's the thing — I always encourage people, and I try to do a lot of these demos so people can try different things. For example, let's try mirrors. This is what so many people recommend: mirrors. Obviously, if we set up RAID Z1 we don't have a lot of redundancy, but we do have 23 terabytes of storage when we set it up that way. But what if we went with mirrors, because so many people just build them all in mirrors? Okay, you can do that. So we go ahead here, we're going to add a mirror, and we'll go ahead and add the VDEV. I wish the repeat button worked for doing it this way, when you want to repeat layouts. Oops, I grabbed the wrong two drives — I need them in pairs. There we go, add the VDEV. Actually, we'll go ahead and hit add twice. Oh, well, it won't let me add it until I put more in. But we'll take these two drives here, put those in a mirror, add a data VDEV. This is the hardest part about mirrors: it's tedious to lay all of this out. So add a VDEV, data. We'll hold off on adding the last two.
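(A handy way to watch that proportional spread from the shell — this is a generic sketch, not what's on screen — is to look at per-VDEV allocation while the writes run:)

    # per-VDEV size/alloc/free; the ALLOC column on each raidz row shows the
    # bigger VDEVs taking proportionally more of the data as fio writes
    zpool list -v demo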
Well, I guess it doesn't matter, because you can expand them later. With mirrors we only get 16.3 terabytes of storage, so that's a disadvantage. But the advantage of the mirrors is that I can buy drives two at a time and add them, even though they're all different sizes. When we build this pool of mirrors, everything is just a simple mirror of two drives. This kills your capacity to some extent, and in some ways I don't know that you're that much better off than you are with RAID Z1, because if a drive fails, you have only one extra copy of that data. And if you buy drives at roughly the same time and one of them fails, there's a statistical probability that the other one may fail around the same time. It's hard to say — it's not an exact science — but sometimes it happens; sometimes drives fail in groups, in the groups in which they were purchased. Not a guarantee, but it's something to consider. And also, when you are resilvering a pair, well, now you have the other problem that you can only rebuild at the rate you can read the data off that one single remaining drive. So it may take you a little while to resilver as you add those in there. So here's our 16 terabytes of capacity. We'll look at the drive statuses — there are all those mirrors. Go back over to the command line, and there are all the mirrors. So then we'll go ahead and do the same thing. ZFS38 — I'm sitting crooked and it's hard to type. All right, now we've made another dataset for fio to write data to. Oh, I don't know if I want to write it out that big. I guess it doesn't matter — as long as we're writing data, we'll see what happens. It looks cool going across all the drives at once like this. So it's distributing the writes across them evenly — well, somewhat evenly; the drives are different capacities, so same thing as before: the larger drives can take more data, so these are getting a higher amount of data and these are getting a lesser amount. It's trying to evenly distribute all the data across them. "With two copies, if one gets corrupted, how does it know which copy is right?" It does do integrity checking on there. But yeah, you do have some challenges if the integrity checking doesn't have enough pieces of data to scrub it, so that is a concern. I'm not exactly sure how that works in mirrors — I know how it works with RAID Z. I'm not sure if you lose any integrity in a mirror. That's a good question, though; I'm uncertain of the answer, so I wouldn't say it with confidence. I assume it's like any time you have a mirror: it can only do so much. But if there's a mirror, the checksum checking should work. Because how do you know if data is bad? Well, ZFS is a CoW — copy-on-write — filesystem; I've got a whole video on that and how it does checksums. And if you have two copies of the data, you can compare each copy against the checksum that was recorded when the data was written. Essentially, the checksum was created to say, yes, this data still has the same integrity it was written with. So which copy is bad isn't really the question, because it's using the stored checksum itself to confirm which copy still matches. I'm pretty confident that's the way it works with mirrors — it is the way it works with RAID Z, like a Z1. But it's one of the reasons people say you don't get all the benefits of ZFS if you have a single drive, and there's some truth to that, because if you have a single drive, then you have nothing to compare against.
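(The repair mechanism being described is exactly what a scrub exercises. A minimal sketch of checking it by hand — the pool name is a placeholder:)

    # re-read every block and verify it against the checksum stored in its
    # parent block pointer; a bad copy on one side of a mirror gets rewritten
    # from the good side (or rebuilt from parity on RAID-Z)
    zpool scrub demo
    # the CKSUM column counts blocks that failed verification and were repaired
    zpool status -v demo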
Or, if you do find data corruption — because the checksums say there's data corruption — you have no way to repair it, because you only had one copy of it. But yeah, the mirror, other than losing capacity, is not a terrible way to do it. But I prefer Z2. Yeah, Z2 is what makes the most sense to me when you want better integrity and less risk. For example — and we'll jump over to this system over here, we'll log into my 45Drives box, which is a production system — these are all broken down into RAID Z2. What these are is: when you have a lot of drives, you lay them out and group them together. These are in groups of nine, because there are actually 27 drives in this system — it's a different story for a different day; there's not 30. There's a hot spare... well, it's not even a hot spare; I don't have it listed as a spare in here, but there's a spare drive in there. It's just not in the pool right now. But this allows me to get reasonable capacity without worrying too much about the problem of having a VDEV that is too wide. You can go too wide on a VDEV, and this is something that needs to be thought about as well, because if you have a lot of drives, you could lose some of your IOPS performance. If you just said, hey, I have 30 drives in my Storinator, let's use all 30 drives in one VDEV... I believe they've solved some of the problems with the write hole problem — the write hole being where you have to resilver a drive and it never really gets to finish resilvering, because there's constantly new data being added to the large, really wide VDEV. It seems pretty generally recommended that your VDEV shouldn't be much wider than 12 to 15 drives, depending on who you ask. I've got some articles linked on that. I don't know how much that's changed, though — when those articles were written versus the modern ZFS changes that have been made, whether there are any big differences — but it's pretty safe to say I don't think you're going to go wrong if you set them no wider than 12. So let's see. Oh, have people asked browser questions? I've got nothing on browser questions; I'm not the browser expert. I mean, I know about browsers, so there's that. Ooh, look at the usage on here. How much cache? Yeah, we're using a lot of cache on here. Here's a thing I don't have an answer to, but let's find out real quick. Go back over here, stop this, stop this. Let's destroy the pool again. Storage — whoops — export, disconnect, destroy. Demo, confirm, disconnect. Let's get rid of this. The part I'm actually curious about is: does the ZFS cache clear out when you destroy the pool? I don't know the answer to that question; I'm going to find out shortly, along with all of you. Maybe someone already knows the answer. As far as I knew... the details are hidden. Oh, there we go — successful. Let's go back to the dashboard. Hey, look, it does — it frees all the cache up. Interesting. Storage, create pool, demo. Does it do it evenly? The automatic layout just grabs the biggest drives and says use these. That's fine, we'll use those drives. What else did we want? What other VDEVs — hot spares, metadata, deduplication? Yeah, I don't have anything fast in here, so I'm not going to bother with those. So let's go ahead and just add these. Create, create pool. Go ahead and do that. Oh, I see what I'm doing — I wonder why I'm getting all these alerts. Got it. The system's sending me some different alerts here; not worried about them. All right, cool.
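(For context, a pool like that 27-drive production box is just several RAID-Z2 VDEVs in one pool. A rough sketch of what that layout looks like from the command line — device names are placeholders, not the actual production config:)

    # three 9-wide RAID-Z2 VDEVs in a single pool; each VDEV can lose any two
    # of its nine drives, and keeping VDEVs in that 9-12 wide range keeps
    # resilver times and IOPS reasonable
    zpool create tank \
      raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi \
      raidz2 sdj sdk sdl sdm sdn sdo sdp sdq sdr \
      raidz2 sds sdt sdu sdv sdw sdx sdy sdz sdaa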
Not too worried about all the alerts — I kind of expected them to come through from goofing with this and destroying everything. So, all right, now we've got this and we'll go ahead and add a dataset. Let's go and add fio again. Save. And let's do something a little different here. I don't want those files to be that big — that took forever. So instead of random write, let's do random read. We'll go back to 256 meg... we'll do 512, somewhere in between. Number of jobs, 16. All right, now this is going to be interesting too, because what we're going to do is watch this before we fill that up. We'll do this: go to the dashboard, and we see the cache is all blue right now. So let's go ahead and kick this off. It's got to write out the files first, which means nothing much happens with the cache; we slowly cache more and more as we do this, but when it gets to the read part, it's going to fill up right away. It's kind of interesting what happens when we get to that part. So, lay out the files, build them all out. What are the commands I'm typing? Oh, let me share them with you — I'll drop them in the stream here. This is the fio command. I think I've done a video on fio, and there are plenty of tutorials on how to use it, but here we go. Look at all the reads we're doing. Once the cache kicks in on the reads, I think it should cache them to the point where, once they're read into memory, you'll see the drive activity go away and this will get faster down here at the bottom. That's my theory as to what's going to happen. While we're waiting for the cache to fill up — actually, if I changed it to sequential reads, it would probably do it even faster — let's look at Reporting, go right to ZFS here. Yeah, there's the ARC size — it got destroyed and is climbing back — and our ARC hit ratios, and the demand data is going up. Let's see if it's able to meet that. Whoops — hey, there we go, we've now hit it. So it's still reading, it's reading right away here, but we're not seeing any drive activity, because it's pulling it all out of memory. It figured out the pattern even though it's set to random. Random reads, and it's filled it all up in cache. So we have 11 gigs of cache, and it's hitting all the right cache ratios for this. This is pretty slick how this works — this is one of the things that really makes ZFS so cool, the fact that it can do this so well. We'll go to ZFS. That read cache just absolutely takes over... while it refreshes, there's your hit ratio, with all the hits being the same. So it's just going to keep going: hey, it keeps asking for the same data, so I keep giving it the same data. That is the beauty of ZFS caching. "I wonder if the F1 team uses TrueNAS for their pits." I don't know. I really wonder. Maybe. Which Linux do I use? Yeah, Pop!_OS. Pretty much Pop!_OS is my go-to; it just works. "I currently have many Docker containers related to Plex, everything done DIY. I'm considering migrating to Unraid or TrueNAS — what would you suggest?" I suggest TrueNAS if you like the performance. The one downside of Unraid compared to using something ZFS-based is that Unraid can't match ZFS in performance. But if performance doesn't matter to you, then I don't think there's anything wrong with Unraid. I'm not very experienced with it, so I don't have a lot of opinions on it; I just know it's not as performance-oriented as this. So there's our ARC, still hitting. It's still running.
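(For the read phase, the job is the same idea with rw=randread — again a sketch with example values — and arcstat is a handy way to watch the ARC hit ratio from a shell instead of the Reporting page:)

    # random-read fio job; once the files fit in ARC, hits come from RAM and
    # the physical drives go quiet
    fio --name=randread --directory=/mnt/demo/fio \
        --ioengine=libaio --rw=randread --bs=1M \
        --iodepth=32 --numjobs=16 --size=512M \
        --group_reporting
    # one-second samples of ARC size, hits, misses, and hit percentage
    arcstat 1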
It'll run all this time, and it's just pulling everything out of memory. Every now and then I think something must expire, because it does a quick little read here — or something else is writing, because the system datasets write to the drives. As a matter of fact, even though it's not doing much heavy I/O, the system is still pushing it a little bit. Yep, pretty much a RAM disk — reads from RAM. But that's how the ZFS cache works on here. "So it looks like TrueNAS Core is the device under test, which is FreeBSD, and the desktop is Pop!_OS." Oh, yes, Pop!_OS. Pop!_OS is not a great name to American ears either, because it doesn't really make sense. I don't know why it's called Pop!_OS; I don't think it's a wonderful name. Maybe they'll change it one day. "I'll go with TrueNAS; performance is important." Awesome. Yeah, Unraid for backup — that's fine too; I don't see any problem with using Unraid for backup. Let's go ahead and stop this. What other TrueNAS things do you want to play with? I'll let this live stream go a little bit longer. I just wanted to play with this because this is what I was doing some testing on. It seems like there's something else I wanted to test, but I set up the live stream and forgot everything I wanted to test. I mostly just wanted to play with some of the expansion capacity, play with the cache, and the other stuff I can play with. Let's do this — go to... well, this will take a little while to load, actually. So if we go to Apps, choose the pool. "Pop!_OS — you just heard of that distro?" That's by the people over at System76, and they're awesome. They are great, so highly recommended. It's what I run on my system; it just works. Although I am really eyeing a laptop by them. System76 has... where's it at? Is it this one? Wait, I thought it was the Oryx Pro. Oh, I don't see it anymore. Which one of these has... isn't it the Oryx Pro? Maybe it's not, or maybe it's out of stock already. I really want an OLED laptop. Oh, it's right here at the top; I just needed to look for it. There we go. They sell laptops that are Linux friendly, and I'm really thinking about an OLED laptop because, yeah, OLED. The downside is, look at that price up there — it's a little pricey, but they make a good laptop and it's all Linux friendly. So, all right, let's play with Netdata. Let's install that real quick. Here's some Netdata. I can still see all the ZFS caching and everything else. I'll let this run in the background while we answer other questions. "What do I do? I have three USB drives connected in a RAID 5 config. These often report errors; scrubs don't work, or after a reboot it reports fine." So, the reason USB is unreliable, and why everyone tells you it's a bad idea, is because it can lead to data corruption. I don't think the current state of USB, at least — maybe there's a future where I'm wrong — can handle heavy I/O without causing some problems. That's essentially where the issue is: if you have heavy I/O over USB, it just seems to mess up. Now, could someone sit down and write drivers to possibly make that work, or tune the existing drivers to function better? Probably. I don't think anyone's going to, but I think they could. So that's essentially why you don't attach USB drives to your TrueNAS system. "That OLED's expensive." And I don't want a 13-inch laptop, so, yes. "Do you have any experience managing macOS?" No, I don't do macOS. Back to TrueNAS. "Can I/O devices be connected through to VMs on the TrueNAS?"
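(Netdata here is installed from the TrueNAS apps catalog, which runs it as a container. On a generic Docker host, the equivalent is roughly this — a sketch based on Netdata's usual container setup, not the exact app configuration used on stream:)

    # run Netdata in a container and expose its dashboard on port 19999
    docker run -d --name=netdata \
      -p 19999:19999 \
      -v /proc:/host/proc:ro \
      -v /sys:/host/sys:ro \
      --cap-add SYS_PTRACE \
      netdata/netdata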
There is some passthrough — I've not tested it a lot. You can do some passthrough with the virtualization; I've not really spent any time testing that. Let's see, is there ZFS information in here? Does it have it? Oh yeah, cool, perfect — we can actually see the ZFS information inside of Netdata. So let's go ahead and watch this from Netdata instead. We'll watch the cache in Netdata. Matter of fact, let's go to the very top here: disk write speeds. Actually, now I know another thing I wanted to test. So we're seeing a disk write speed of... 476. Oh, let's pause — we've got to jump to the... where are the disk writes? Disks. I think this is slow to update — it's probably ZFS playing around. Come on, where's our data? What is Netdata missing? It knows the writes. Which drive does it see it writing to? Interesting how it labels them. There we go — it sees the drives like this here: sda, those are the individual drives. Interesting. A few more questions in here. "Do you recommend hardware — CPU, motherboard, ECC RAM — for under 1K, for a ZFS server running a few Docker containers like NextCloud and a couple of Minecraft servers? Do you have recommended hardware?" Not really; I'm not the best budget hardware person. Used server hardware is probably your best bet. If you want to try to buy new, you'll probably exceed that price. If you go look for a used Supermicro server on eBay, you'll probably find something that does that. "Does Scale work in a VM?" Not a great idea — I don't recommend virtualizing TrueNAS. "Scale with containers and Kubernetes?" Yep. "Sorry if you discussed this — TrueNAS on any VM product?" Nope, this is all bare metal. Yeah, Travis is the Mac guy that we have on staff. Hey, awesome, good to see you here. I love helping all the homelab people and getting more people into stuff like this. There are tools to manage some of the Mac stuff, but it's not as smooth as some of the Windows management, and Apple themselves are getting more into the business management side of things too, so that's definitely a thing. But yeah, it shows our drives. Where does it show... there are the MD RAID disks. Huh. So it sees the disks. CPU... okay, there we go. It lists it right here under system devices and virtual block — let me zoom in a little bit, make it easier for people to see. All right, so now we can see memory paged data written. So we peaked out at 483, and now it's just reading, because it's dropping off as the cache hits now. So we go here to cache — there it is, writing it all out and reading it until the cache took over, and now it works. But the next question I'm wondering about is performance. So our disk reads were at 162, and somewhere in here the writes peaked at 522. How much does that change? Let's do this: these drives here — let's expand the pool some more. So we'll go here and expand the pool. What drives are left? Add VDEVs. All right, let's add the slowest drives, which are going to be these ones here, these old two-terabyte drives. So we'll go ahead and add them to the pool. Add VDEVs, confirm. Will this slow us down that much, or speed us up? All right, now we've done that. Kick this off again. See what Netdata says now. Hey, we're not quite getting that same write performance — I think those drives are slowing us down. Not much: 377. Oh no, there we go, it peaked — I'm wrong. Having those multiple VDEVs definitely brought our write speed up. So, 760. So it is spreading across those. So even though those old two-terabyte drives are slower...
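(Adding VDEVs to a live pool while the writes are running is the same zpool add as before; new writes start striping across the extra VDEV right away. A sketch, with placeholder device names:)

    # extend the existing pool with another RAID-Z1 VDEV of old 2 TB drives
    zpool add -f demo raidz sdh sdi sdj
    # watch per-VDEV write bandwidth in 5-second samples to compare
    # throughput before and after the expansion
    zpool iostat -v demo 5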
It definitely gave us a write performance boost across those VDEVs. So, yeah. Wow, there's a lot more capacity over here — we're only hitting right here, so now we're hitting that. So let's go ahead and add more. Why not add more, right? We don't need to finish this — Control-C. Clear the data back out over here. Let's add some more VDEVs — we'll add the last of these drives. Let's see what kind of performance this gives us. A little bit more of a performance boost on there, maybe. I don't know — this is why I play. This is Tom on a Sunday morning, playing, because why not? That's why we have all this hardware lying around. All right, kick it off. Go back over to Netdata. Netdata makes it so pretty to do all this. So here are our drives at the 528 peak, then here it's actually just reading a lot of data because of the cache, and here's the 944 peak write where we hit it. Let's see if it gets any further. Laying it all out. Yeah, everything is Tom's giant sandbox — that's why I have a lab, to play with all these things. Nope, I think we've hit the limit; I don't see anything higher in here than we saw at the end of the last one. Actually it's a little bit less — we hit a peak performance over here... or was it? There we go. I think that was kind of a fluke; it only hit it for a moment. But with this many drives we're still up there at 740, so we definitely got a boost, even though we're using three of them now. So, not bad. And here's that same thing: the loading of the ZFS cache, loading it up, and then the demand hits again. Efficiencies, breakdowns — I love Netdata for doing all that. I like that Netdata has so many things built in for being able to see what's going on. Netdata is just great — I've got a video dedicated to Netdata if you're curious about it. It's free, it's open source, and it's also set up as a Docker container inside here — you go to the apps, and I'm just running it as a container right here. Probably maxing out the CPU, no doubt. When you look at our CPU usage — yes, definitely maxing out the CPU here. You know, here's a curiosity. Let's go back over here. If we turn the number of jobs down — let's say we did four jobs, and we did 128 — let's see what happens there. Well, that maxed out the CPU, or... whoops. So we've got a much smaller layout, four jobs running. It writes really fast doing that. So did it write to the disks fast? It's so brief I think we have to zoom in. No, wrong button. Where's the zoom? Right here. Yeah, it still hits some peaks in terms of writes, and we're pinning the CPU again, but that's kind of expected. Did this get any faster? I didn't really pay attention to the stats here. It's reading pretty fast here, though. I usually run Phoronix to detail out the tests better — Phoronix gives you a much better view of this. If I were actually going to publish results and make this more scientific, per se, I would probably use the Phoronix Test Suite so it would have all the stats for each scenario laid out. "What issues would we have virtualizing TrueNAS? I'm curious, since my troubles have been so few and far between." The trouble you run into is whether or not you're able to pass through the HBA. If you're able to pass through the HBA, you generally shouldn't have too many troubles. But if the HBA does not pass through properly, you will have a bad time, and I've known people who've lost data. So it's kind of a thing like Jeff from Craft Computing.
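(If you do want repeatable, publishable numbers, the Phoronix Test Suite wraps fio and similar disk benchmarks with consistent reporting. A minimal sketch — assuming the suite is installed from your distro's repos and you accept its interactive prompts for the test options:)

    # install once (Debian/Ubuntu package name shown as an example), then run
    # the fio benchmark profile; results are collated per scenario
    apt install phoronix-test-suite
    phoronix-test-suite benchmark fio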
I have no problems with the way he virtualizes it, because Jeff understands how to virtualize things very, very well. He's got a ton of tutorials on doing passthrough. So as long as you get all the passthrough settings right, you should be good; but if you don't, you may have a lot of problems with it. And I've just seen too many people... see, my jadedness comes from how many people contact me for consulting, and unfortunately, when they've contacted me, it's because they've lost all their data — they've made a big mess of something after they set it up. So for the people who are asking if they should virtualize it, I say no. The people who don't need to ask are people like Jeff — he's not asking Tom if he should virtualize, because Jeff's been doing it for so long. "Is it possible to set up TrueNAS servers at a separate site and have them be in HA?" No. It doesn't work like that; that is not how TrueNAS works. TrueNAS is not the tool for high-availability load balancing between sites — that's not where it fits into the tooling for something like that. You can build high-availability apps and you can build load-balanced servers to manage high-availability applications, but you don't generally build them with... well, TrueNAS is not the interface for those apps. It can be done with Docker and Kubernetes and all the different container virtualization — there are methods to do it, but that's way off topic for today. "Passing through a PCIe card to TrueNAS should work." This is the problem. I remember there was a discussion somewhere — I'm trying to remember exactly where this was. It was a forum post where someone found an interesting problem where, even though you passed it through, the kernel would not work properly and it would cause data corruption. Even though you could do the passthrough, there was some type of bus locking that would happen that would cause random errors. Someone actually had to... I forget what they had to do; it was a while ago, about a year ago, that I read this, and they had to do some driver updates to fix it. It was a forum post I was just following. I'm in a lot of forums — I mostly read and don't always reply; I reply in my own forums all the time, but I spend a lot more time reading. So I've seen that, and Xavier, who's been on the channel before — he's a cybersecurity person — he lost some data from passing one through. He insisted on virtualizing it. It worked until one day it didn't; it kept introducing errors. We don't know why. We formatted and reloaded it, and the system worked fine bare metal, but there were errors that kept coming through when he had TrueNAS virtualized. Yeah, I don't know — I just don't like a NAS virtualized, same as a firewall, even when I can do it. It's one of those things that comes down to performance and things like that. I mean, it's the same with virtualizing firewalls: I do it for lab purposes, I don't usually do it for any commercial or production reasons. It's kind of neat, though — Wendell's got his ultimate home lab server he did, and his Forbidden Router. I like that he titled it the Forbidden Router because, yes, there are some issues. For example, when you have to patch your virtualization system — and it was funny, because someone commented on one of the live streams when we were talking about this and right away said, "yeah, I haven't patched because of this problem" — if you have to patch your virtualization system, that takes down your firewall.
Unless you have HA and you set it up so you can move the firewall over to another one of your nodes — that's fine, you can do that — the problem is: if the patch you run has a problem, how are you going to get back online to sort out the problem? That can be one of the challenges. Wendell actually addresses that in an interesting way with some passthrough stuff that he talked about. The other problem when you virtualize firewalls is making sure that you're using passthrough, because otherwise you may have trouble with the network cards and the loading and performance of them being virtualized. "TrueNAS says you shouldn't virtualize with at least three VDEVs and at least two DCH — I think there's an issue with metadata corruption." I've never heard that the number of VDEVs matters for virtualization. It's all about the passthrough of the card — the passthrough of the network card and of something like an HBA card. I don't know anything about a rule based on "you should have this many of that." "The main benefit of virtualization for me is that I have a console for whatever I'm running, so I can access it remotely." Yeah, I mean, you don't have to virtualize for that. SSH is how I manage all the Linux servers I use — and obviously for TrueNAS I use the web UI, so that makes it pretty simple. And I don't know, let's look here — maybe it's not an option for this. Advanced... nope. But I think we can do... what do we have that's virtualized and has passthrough options? Maybe this. I don't know, maybe I'll dive into the passthrough. I don't use it very often — I should say I don't use it at all. It's something that home users use, passing through things; it's rarely used in the enterprise space. I mean, there are exceptions — someone's going to point out, "but Tom, I've got this client that passes through these weird devices because this manufacturing thing has to have this card passed through." Yes, there are exceptions to everything, but the average enterprise setup doesn't have passthrough, because they want all the machines and all the nodes to be pretty similar to each other, so they can just move the virtual machines around. Somewhat of an exception might be when you have GPU cards, but they don't really use passthrough — there's VirtIO and there's virtual GPU passthrough — you'll just have symmetrical machines, so they're all the same, and those same resources are available on the other machines. "Should I notice a performance hit if I have a server running Ubuntu and Docker directly connected to my TrueNAS machine via 10 gig Mellanox?" As long as the drivers are supported for the card, it should be fine. But there might be some... I haven't really done much testing on the performance of TrueNAS with the Docker images; I don't know if the Docker images have any issues with performance when it comes to that. I can stop that from running — I think we've run through all the tests. rm *... I'll do it right: rm -rf *. Whoops. Delete. Yes. Bye-bye, data. All the data is gone. "The only good use of passthrough in enterprise that I see is stupid licensed USB dongles." Yeah, that's one of them — licensed USB dongles, weird cards that control newspaper printers, that's a thing. A machine that runs a laser serial-number creator that was made in the 90s and runs a virtualized version of Windows 95 — there's all kinds of strange stuff.
There are all these edge use cases, in manufacturing especially. Passthrough is always a word I dread to hear — reminds me of the X-gamers-1-CPU thing. You're just asking for issues. Yeah, it's just all about that. "For networking on TrueNAS — LACP, port channels, or VLANs?" It supports all those things. I have a video — not on Scale, but on TrueNAS — where you can build the different interface types. So if you want to do link aggregation, the link aggregation types — failover, LACP, or load balance — are all options in here. So that's supported. You can also — there are your VLANs: choose the parent interface, VLAN tag if needed, priority code, MTU, and build IP addresses. So you can build these on VLANs; that's all supported in TrueNAS. Static routes and all those other fun things. "VNC needs a desktop." I guess, yeah. I mean, I haven't done much video on this yet — this is what I've been playing with. This is one of my other TrueNAS boxes; I have a lot of TrueNAS, in case you didn't know. Got a lot of them. This is the one that does all my video, and I'm barely using any cache on it. But let's go over here to — not the apps — oh, Virtualization. Here we go. It boots up really fast; I'm impressed with how well all this works. Let's go ahead — this is what the virtualization looks like on TrueNAS Scale. And we're booting, loading drivers, and we're booted... I think. Almost, almost, almost there. Actually, if it gets stuck here — well, wait a second, because we'll see if it gets stuck here. Oh, let's see. I think it's stuck because I probably broke something. Let's find out. Let's go to the devices. What is the NIC attached to? This is a whole other bug. It's attached to that one. I think that's the right NIC address type. So let's go back over to... yeah, we've got to power it off. This is the buggy part of TrueNAS, and I covered this in my last review. So just power this thing off. Devices, edit — what's the other NIC I can choose? This one should work; I think that one works. Let me look at my networking here — what configurations do I have? Ah, yes. So this is where the silliness comes in. This IP address here, 172.16.16.205, and 172.16.16.5 — they're both on the same network, so they should be able to talk to each other. But this is the stupid part about the virtualization, and this is broken — Wendell ranted about this too — and I don't know why it's broken. It annoys me. So let's boot it up and play with this silly thing, because I don't know why it's like this. This is a bug that's persisted now for a couple of versions of TrueNAS Scale and makes the virtualization kind of painful. I mean, there's a workaround, which I'm going to demo right now. Hey, look, it's got its own IP address and everything. Let's go ahead and SSH into it. Oops — there we go. Tom's Ubuntu server, which is now at 172.16.16.17. And if we ping 172.16.16.5... can't get to it. What if we ping 205? Can we get to that? Can we get to 5? Is it pingable? No, it doesn't want to talk to that one today. Why? Why? It pings other things; it just has trouble with this. And I don't know why this is: if it comes out on an interface, it can't talk back to the interface it came out on, and I don't understand why it doesn't want to do that. I think it's just having trouble responding because it's attached to the same network. I found a workaround for this too, using bridges. I don't understand why this doesn't work. This is silly.
I should be able to talk to the interfaces, even the interface I'm attached to, like this one here. Host unreachable. It gets confused because right now — let's find it — this is enp2s0f4, and it's right here. And if we look at my virtualization, go to settings on it, and you will see devices — we go to the NIC and we hit edit — hey, it's attached to this interface right here. The fact that the VM is attached to it means it can't talk back to it, and that doesn't make any sense. It tries to dump it back out on the wire, onto the network, and loop back in. That is a bug in the virtualization that I don't know when they're going to fix, but I hope they fix it soon. Yes, that bug is a pain; I don't understand it. That's the hang-up with the virtualization. "What do you think about Linus Tech Tips' servers? Is it really as bad as some people in the comments say?" No — Linus creates a lot of excitement around it because it gets the audience engaged. It's not terrible, what he's doing. But I've not watched every Linus video and looked at everything Linus does to really dive into how good his systems and skills are. He just kind of does things in a bit of a necessity-driven way. He's also got different use cases than the average enterprise customer. So I don't know that they're all bad, but I've never really spent time examining everything Linus does in great detail. "There's also a bug where you can't revert to DHCP if you manually assign an address." Oh, that's interesting. That's weird. "Do you think Pop!_OS is stable enough to run server-grade software on, or should I stick to Ubuntu or Debian?" I don't know why you would run a server on Pop!_OS. I would use Ubuntu for a server, because Ubuntu Server is different from Ubuntu Desktop. So I run Ubuntu Server or Debian for my server stuff, but my desktop is Pop!_OS. But yeah, Ubuntu is fine — that's even what I've loaded right here for my virtualization; it's just Ubuntu 24.04, an Ubuntu server. So Ubuntu Server works fine; I don't really have any problem with that. I've gotten really used to using it. I was always a Debian person and I still prefer Debian, but I have no problem with Ubuntu Server — the documentation's good, the support's good, and I don't really have any hang-ups using it. So it definitely seems to work well. "LTT is pretty much a see-what-sticks approach." Yeah, a little bit. "LTT lost a bit of credibility here when they used SFP+ RJ45-to-Cat-6 instead of DAC for 50." Like I said, I've not examined everything Linus did, and I'm not saying Linus didn't do some silly things here and there — there's a lot of silliness. But then again, it's also more fun to watch people do silly things than a boring, perfectly set up system sometimes. Oh man. All right, I'll give this five more minutes, because then I'm going to do some other things. So, five more minutes of rapid-fire questions about TrueNAS — and I guess Linux, since people are going down that road. But then again, TrueNAS Scale is based on Linux, so I think we're all in the same boat here. But this — my video editing one — works fine. Oh, and by the way, what failed? Interesting — why did this fail? Oh, you know, one of my... I should probably do something about that. Why did this fail? "The world runs on Linux." Yes, it does. So this thing's up and running — why did it not... huh. Strange.
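(The bridge workaround I'm referring to: instead of attaching the VM's NIC directly to the physical interface, you put the host IP on a bridge and attach both the physical NIC and the VM to that bridge, so host-to-guest traffic hairpins locally. On TrueNAS Scale you build this from the network interfaces screen; conceptually, on a plain Linux box, it amounts to something like this — the interface names and address are placeholders:)

    # create a bridge and enslave the physical NIC to it
    ip link add br0 type bridge
    ip link set enp2s0f4 master br0
    # move the host IP off the NIC and onto the bridge
    ip addr flush dev enp2s0f4
    ip addr add 172.16.16.205/24 dev br0
    ip link set br0 up
    # then point the VM's NIC device at br0 instead of the physical interface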
So let's edit this job and see what happens. What is the number of retries for failed replication? Oh, I see — I was probably deleting some stuff, and now it has a problem because of this setting right here: if the destination system has snapshots that don't have any data in common with the source, it can't destroy them — it needed to destroy some of the old ones, and I didn't give it the ability to do so. Save. Let's kick this off now. Yes, let's run this replication task. Hey, it's running. Now it's going to sort out the couple of snapshots that are missing. I have a lot of TrueNAS, because — well, a lot of people will say don't run RAID Z1 in production. Sorry, I need the capacity. So we have a RAID Z1 in production, but it replicates hourly to another TrueNAS. If two drives were to fail, I would lose data, but that data is also replicated every hour by a replication task, so it's not something I'm too worried about. There we go. I should probably fix the other one too; it probably has the same thing. I'm going to do new videos on this, but by the way, I want to point out something. This system here is super slow because it's an Intel Atom. Look that up, guys — the Intel Atom CE13; actually, let's look it up on CPU mark. Oh yes, this is one screamin' CPU I have here. This is a TrueNAS that I'm going to leave with only eight gigs of RAM, but 26 terabytes of storage. I think there's still 26 — there's a lot of storage left on this. Anyways, this is my FreeBSD one, and you can go from your TrueNAS Scale storage and replicate to your TrueNAS Core storage. That's all this is: a duplicate backup of a bunch of data. And maybe it has some torrents on there, you know, because you've got to seed them Linux ISOs. What else do we have in here? Let's see — any other questions? Power efficiency? Yes, for what it does — it just holds on to data. That's all this other TrueNAS server does: it takes data and holds on to it for me. How much is used? Right now there's about 26 terabytes free; you've got 30 gig here. Seems like there should be more data on this. Oh, there was — it failed, that's why there's not more data. So it's getting more data sent to it; that's sorting itself out right now. There's not much in the syncing data. "I'm guessing you aren't de-duping on that." No, you're not de-duping anything on this Atom — we're happy it boots, eventually. All right, it's just not fast. But look at this — I mean, how do you go wrong with only eight and a half watts of power? That's the advantage it has. It actually is fairly responsive, despite having two threads; it's quiet, doesn't do much. This is also RAID Z1, but the data's replicated a third time offsite. So these two are on site, and then there's another offsite replication going on. This one's at least fast. So, how much data should be in there? It should have 824 gigs, and at present we only have 22 gigs — it's sending it all right now. Yeah, it's re-syncing. I didn't have a box checked, I was playing with snapshots, and I got a notice; I just didn't do anything about the notice until the live stream — I was like, I'll fix that later. But it's replicating now. There we go, sending data. All right. "RAID Z1 plus RAID Z1 offsite equals RAID Z1... Z1-ish." Sure, but the offsite is a Z2 — it goes to one of our larger Z2 arrays at the office. Yeah, the power of a dim light bulb — we'll go with that. "8 watts — can't even light up a room with that."
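(Under the hood, a scheduled replication task like this boils down to snapshots plus zfs send and receive. A rough sketch with made-up dataset names and hostnames — the -F on the receive side is what handles the situation above, discarding destination snapshots that no longer share data with the source:)

    # take the source snapshot the task will send
    zfs snapshot -r tank/data@hourly-0800
    # incremental replication stream since the last common snapshot, piped
    # over SSH; -F lets the destination roll back / discard snapshots with
    # no data in common with the source
    zfs send -R -i tank/data@hourly-0700 tank/data@hourly-0800 \
      | ssh backup-nas zfs recv -F backuppool/data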
Ah, it's LED. It's okay. "Tom, running a 16-terabyte Scale system with a Pentium and 8 gig of RAM — seems to run okay." Yeah, you can do that. "Maybe they thought it was possible to run clients at any given site and then have a load balancer effectively serve as a bridge for HA." Yeah, that's a misconception of how all that works. Yes, you can go learn about HA configs — go read about them, you can do them — but probably not in the way you think. Ah, fun stuff. All right, well, I'll let this do its thing for a while; I'm going to go do other things. Thank you all for joining me this morning while I played. I should do this more often on Sundays, playing with this stuff. You know, for those of you that made it this far: what are good time zones? Someone said I should be in the time zone of my customers for my live stream, and I said, my customers — or really the people who watch my channel, not just customers, anyone who wants to learn about stuff — cross a lot of time zones. So I don't know if morning's better or afternoon's better. I don't mind doing extras besides my VLOG Thursday live streams, because they're fun; I like talking about this stuff. So, EST — yeah, I'm in EST, so that makes sense. Best time, any day; Sunday, fun day; Sundays are a good time; central time. This is really early for the people on the California side of the United States — they're a few hours behind — and this is afternoon for all the people towards Europe as you go across those time zones. So nonetheless, at 1640, we're there. Awesome. All right, well, I'm going to wind this down. Let me know — send comments, DM me on Twitter, tag me on Twitter. I'm always interested in talking about when's a good time to do any of this stuff. So let me know, and thank you all for joining. Hit me up in the forums or wherever you find me on the socials. Thanks.