Welcome to a Sunday livestream talking about TrueNAS and the Cobia release. Normally I cover these releases as videos, but this is not exactly a review, and I spent too much time recovering from what broke when I upgraded one of my systems to it (not all of them). I want to start with some warnings about whether you should upgrade, because that's usually one of the reasons I do these right away, and this is the release that got me. That's fine, though. I opened some bug reports, found problems, and then found more problems, but they may not affect you at all. If they do, I want to cover that before I get into the release notes, because people want to know: will it break things? It might, and that's not what you want to hear when you're setting up a new NAS.

I didn't experience any data loss; I only lost the time it took to recover, and that's an important distinction. This comes up a lot with TrueNAS SCALE versus TrueNAS Core. I really like Core for being amazingly stable, while SCALE, because it's still in heavy development, has bumps in the road. Those bumps seem to center around the applications, so if you depend on the apps, that can be a problem because they change constantly. If you depend on it as a NAS, its true, base purpose, I've not experienced any data loss, but this particular upgrade did break my shares.

Let me get right to the point, because I know some people don't want to watch a whole livestream to find out whether they should upgrade. If you have set your datasets up like me, with the base dataset encrypted and some unencrypted datasets underneath, that's where the problem comes in. If you have unencrypted datasets nested under an encrypted dataset, you're going to have a bad time: there are bugs with this in both applications and file shares. I didn't even have to file a bug report on the main one because someone found it before me; I just experienced it when I upgraded. It's a strange problem, but officially, TrueNAS would prefer you did not encrypt the base dataset. Don't worry, that doesn't mean you can't use encryption. It's just a different methodology: in the old days you had to encrypt the base dataset to get encryption underneath, and now you don't.

Let's switch over to another system that I did upgrade to the latest version, 23.10 Cobia. Here you have a dataset that is unencrypted, and look, there are encrypted datasets underneath it. That scenario is fine. The opposite is not: unencrypted datasets under an encrypted dataset will break things in the Cobia release, and you don't want that setup. So that's the first thing, only a few minutes into this video: ask yourself whether that scenario applies to you. Interestingly, the applications seem to work fine, even the TrueCharts ones, which surprised me. I thought it would break TrueCharts, but it didn't; the apps on my system worked fine.
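To make the layout concrete, here's a minimal sketch (hypothetical pool and dataset names) of the arrangement that works for me on Cobia versus the one that broke my shares; it just wraps the standard zfs CLI in Python:

```python
import subprocess

def zfs(*args: str) -> None:
    """Thin wrapper around the zfs CLI."""
    subprocess.run(["zfs", *args], check=True)

# Supported arrangement: unencrypted parent with encrypted children underneath.
zfs("create", "tank/data")                              # plain parent dataset
zfs("create", "-o", "encryption=on",
    "-o", "keyformat=passphrase", "tank/data/secure")   # encrypted child

# The arrangement that bit me on Cobia: an encrypted parent with an
# unencrypted child nested under it (sharing breaks on that child).
# zfs("create", "-o", "encryption=off", "tank/encrypted-root/plain")
```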
The downside is that if you upgrade and then have to roll back, it's a tedious process that I didn't feel like doing, so I just blew away all my apps, the whole app folder. I tried rolling it back, it didn't work, and I don't know why. I probably could have spent more time on it, but I didn't feel like it, so I stopped right there and decided I could just blow the applications away. I've talked about this before: when you install apps, you really should be using host path storage, not PVC storage, even though PVC seems to be the default. Point each app at a host path so that when you reinstall the app, its data and settings are all sitting in that host path. For me, deleting the apps was tedious because I had to delete the couple of apps I use and reinstall them, but as soon as I pointed them back at their host paths, all their configuration came back and the downtime was pretty minimal. You can revert the system itself pretty easily, provided you didn't upgrade the pool, so that wasn't a big deal. I recovered reasonably fast, but I spent more time than I should have trying to fix the problem before realizing there isn't a fix, just that bug where an encrypted base dataset needs every dataset underneath it to be encrypted or they won't share. They still show up, and like I said, there's no data loss, just no ability to share them.

What does this mean for SCALE's broader reliability in production? Can it be trusted enough to sell? I still sell Core, and the reason comes down to this: most businesses don't need the apps, Core doesn't have the apps, so it doesn't matter. Core also has no tuning issues with the ARC size. I've got a separate video about SCALE and how it handles ARC sizing, and that doesn't appear to be fixed in Cobia either, as far as I'm aware. I have to do some further testing, which is one reason this is a livestream: I haven't fully vetted it, because I need to swap a server. I'm going to grab another server from my office and bring it back to my studio to finally put this into production, and I also wanted to interact with those of you who have questions.

Now, the good news is there's a lot of good stuff in here. But before we get to the release notes, there's something they changed for Cobia that was announced back in April when they were working on it and wasn't really clarified in the release notes, so we have to go back to it. By the way, these pages are all linked below the video, so you can go read through all of this; the challenge of putting this together was how much time I spent reading and compiling the data into one place. This is the whole write-up on the host path validation setting, which I talked about in my TrueNAS SCALE 22.12 Bluefin videos. They don't have that global setting anymore in Cobia; they changed the warning to a per-app warning.
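As a rough illustration of the host path approach (hypothetical pool and dataset names, adjust to your own layout), the idea is to give each app its own dataset and point the app's storage at that mount point instead of PVC storage:

```python
import subprocess

APPS_ROOT = "tank/apps"   # hypothetical parent dataset for app data

def prep_app_dataset(app_name: str) -> str:
    """Create a per-app dataset and return the host path to use in the app's storage config."""
    dataset = f"{APPS_ROOT}/{app_name}"
    subprocess.run(["zfs", "create", "-p", dataset], check=True)
    return f"/mnt/{dataset}"

# Point the app at this path when installing; reinstalling the app later
# just means pointing it back at the same directory.
print(prep_app_dataset("syncthing"))   # -> /mnt/tank/apps/syncthing
```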
So in Cobia you get a different type of warning, and it's per app, which kind of leaves it up to you instead of making you check that global box. The old checkbox was weird, because it used to say they weren't offering support if you checked it, yet if you watched a TrueNAS SCALE tutorial for some of these setups, their own instructions told you to check it. So you were checking a box acknowledging an unsupported configuration that their instructions required, and you couldn't use the application without doing it, which reads like "we don't support our own application." Like I said, I've had some aggravation with the way they deploy apps, but I think it's getting a lot better; as they keep rewriting it, they keep improving it.

Back to the production question someone asked: would I sell this to a client for production use of the apps? No, I would not. Home users, people who like to tinker, that's the audience for this. There may be a future when it's more stable, and I do feel like certain apps are stable. Syncthing is one. If someone needed Syncthing, I would do that in production; it's the one app we actually will use in production for a client, I use it in production myself, and I think Syncthing is amazing. They've done some really cool integration with it, and now that this is released I'm going to do an updated Syncthing video, because in this release they've added support for syncing not just the files but also the extended attributes, so you can finally sync permissions. It's all integrated now. I tested it, thought it was pretty cool, and I need to build a bigger demo with all the permissions, but it works: real-time synchronization of a share, including the extended file attributes. So definitely some work has been done.

Let's jump over to the release notes now, because the next part I wanted to cover is what actually changed in this version. (Someone's angry at the new login screen; I can't find a reason to be angry at a login screen.) There's more than one set of release notes: there's the blog-post version and then the more detailed release notes. Let's go through the blog post, because the big item is right at the top, and that's dRAID. Before you get too excited about dRAID and resilvering being faster, and a lot of people seem excited about this, I want to note that they changed the RAID layout screen a bit, and it warns you about the minimum drive count (ten disks for dRAID1). I believe the write-ups say the same; I left a few of those linked below so you can read more about dRAID. It's going to get its own dedicated video because there's a lot to talk about. It's a complicated topic, and there are storage-efficiency trade-offs and other issues. It's a form of ZFS layout meant for large, hundred-plus-drive systems, so if you don't have a hundred-plus drives it may not affect you, but it's still really cool. There's also ZFS block cloning for SMB file copies; I'm interested in how that's going to be implemented. Not a lot of information about it yet, but cool.
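On the Syncthing extended-attributes point from a moment ago, here's a hedged sketch of flipping those options on a folder through Syncthing's REST API. The host, port, API key, and folder ID are placeholders, and the field names (sendXattrs, syncXattrs, sendOwnership, syncOwnership) should be verified against GET /rest/config/folders on your own instance and version:

```python
import requests

SYNCTHING = "http://nas.example.lan:8384"          # hypothetical Syncthing GUI address
HEADERS = {"X-API-Key": "REPLACE_WITH_API_KEY"}    # from Actions > Settings in the GUI
FOLDER_ID = "media"                                # hypothetical folder ID

# Enable ownership and extended-attribute syncing for one folder.
patch = {
    "sendOwnership": True,
    "syncOwnership": True,
    "sendXattrs": True,
    "syncXattrs": True,
}
resp = requests.patch(f"{SYNCTHING}/rest/config/folders/{FOLDER_ID}",
                      json=patch, headers=HEADERS)
resp.raise_for_status()
print("folder updated:", resp.status_code)
```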
They added OpenZFS 2.2, Linux kernel 6.1, more hardware support, NVIDIA 535.54.03 driver updates, and improved apps UI and storage pool UI. I think they keep polishing it, so I guess we can go with "improved." I'm not thrilled that they keep reworking the apps, because it looks a lot different now. We're going to go in there, and I think they did a good job with it, but I'll let the audience decide whether it's actually better.

Now, the Syncthing part is also a little confusing. They talk about Syncthing being usable with SMB and NFS shares and so on, but they also mention SMB file sync without giving you any details. There's a forum post where people are asking for more, and it doesn't appear to be in the documentation yet, so I'm not exactly sure when they're going to give out the details of how that works. It's not documented right now.

Fast file copy with the new ZFS block cloning capability: copies of SMB and NFS files and directories can be accelerated 10x or more. When a directory is copied from one dataset to another, only the metadata is copied; it's treated like a snapshot and the data remains in place. This accelerates the file copies needed for an admin to rearrange data without waiting hours for the copy. This is really cool. I just need some documentation to make sure it gets implemented properly, of exactly how to do this. Essentially, when you move data between different locations, different datasets, each dataset is somewhat self-contained, so normally the data actually gets duplicated. Fast file copy is basically like a deduplication: it just uses a pointer reference when you copy something you want in two locations, as long as it's the same file. So it's pretty cool that it's in there, but I want to see exactly how it's implemented to make sure I set it up right.

Back down here: protocol services and security updates. There's a good reason in general to update, because Samba had a few security issues and some enhancements, so you're going to want to get to this version fairly soon. iSCSI improvements, including ALUA support; I haven't tested what's enhanced there, but once I get one of these systems fully finished and do the review of TrueNAS SCALE Cobia, I'll benchmark the same hardware under each version. That's something people are going to want to know: does it perform better? For each major version I've taken the time to benchmark for performance differences with iSCSI and NFS shares especially, and we can probably test SMB as well, but the usual ones for people using it as a storage target for virtual machines are iSCSI and NFS, and those are the ones I focus on testing.

There's also another point release coming: the final point release for Bluefin, which will include security patches and things like that. I think it's really cool that they're still supporting it, but it's a marker, a demarcation of end of life for that branch. All right, let me jump in and answer some questions here.
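Since the fast-copy feature rides on OpenZFS 2.2 block cloning, which the kernel exposes through copy_file_range(), here's a small sketch of a copy done that way in Python. Whether a given copy actually becomes a metadata-only clone depends on the pool feature being enabled and both files living on the same pool, so treat this as an assumption-laden illustration rather than the exact mechanism TrueNAS uses for SMB:

```python
import os

def clone_copy(src: str, dst: str) -> None:
    """Copy src to dst via copy_file_range(); on an OpenZFS 2.2 pool with block
    cloning enabled, a same-pool copy can complete as a metadata-only clone."""
    with open(src, "rb") as fsrc, open(dst, "wb") as fdst:
        remaining = os.fstat(fsrc.fileno()).st_size
        while remaining > 0:
            moved = os.copy_file_range(fsrc.fileno(), fdst.fileno(), remaining)
            if moved == 0:          # nothing left to copy (or kernel fell back)
                break
            remaining -= moved

clone_copy("/mnt/tank/media/big.iso", "/mnt/tank/archive/big.iso")  # hypothetical paths
```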
Question: do you think Ceph is a better solution for a multi-node software-defined storage setup, or perhaps MinIO, and would you recommend it for stability across eight or more all-flash nodes? If you have eight servers and you want to combine them into a cluster: I know TrueNAS is going with Gluster, but my heart's not in it. From everything I have learned between Gluster and Ceph, and I've spent a whole lot more time on Ceph, I think Ceph is the better solution. Everyone I've talked to in the storage space seems to believe that; I haven't had anyone tell me they thought Gluster would outperform Ceph. Now, Ceph is very complex and there's a lot to getting it set up, but if you have eight storage nodes and you'd like them synced across, I think Ceph is a good choice. It just comes with the complexity that Ceph comes with. There's a learning curve, it's not set-and-forget, it's not easy, and it's not built into TrueNAS at all. To my knowledge, you're not going to have any support for Ceph inside TrueNAS; it's a separate thing altogether.

No, Core is not going away. Matter of fact, they noted that these things are going to make their way into Core as well. I love Core because it's so stable. They're doing fewer updates to Core, and that's awesome; that's what you want when you're using it as a storage target. I want fewer updates. I don't want something that keeps changing on me, and I don't want something that's just going to break on me. I want a storage server to just do storage, and that's it. That's one of the reasons I really like the TrueNAS Core builds. Some people complain in their homelab, "but Tom, they're not getting a bunch of updates," and I'm like, wonderful. I have very large-scale clients running TrueNAS Core, and they don't need updates; they just need the giant cluster of servers talking to it to work perfectly and be very stable. If you're looking for the latest and greatest apps, though, that's where most of the problems are.

Now let's jump over to one of the systems that's installed with this and log in (I still don't understand the people angry about the login screen). We'll start with the storage dashboard and run down the list; they didn't make it look that much different. Here is the storage: I have one pool called Flashy and one called Rusty. Rusty has eight spinning drives, eight wide, set up in a RAIDZ2, and Flashy is a single RAIDZ1, four wide, using SSDs. You're shocked by the names, I'm sure, but Rusty and Flashy are working fine. I've been dumping data on here, about 13 terabytes, since I just rebuilt and reloaded this box to put it through its paces, so there's not too much on this particular system. As far as the datasets go, there's the encrypted one and the unencrypted one. I did a demo of Syncthing, I played with iSCSI, and I ran through all the tests on this particular system to make sure everything worked. We'll talk about dRAID at the end; I'm excited for the large-scale customers, but if you don't have 100-plus drives, dRAID may not be for you, and it's not the more efficient way to store data at small scale. Now, Rusty just has some backups that I keep on here.
I have some backups of videos and such on there, about 13 terabytes of things. I also tested cross-compatibility: I was sending data back and forth, specifically replicating everything from my Core system onto this one, with no problems. Backups work fine doing ZFS replication between them.

Something I want to note that's missing, though: they did finally deprecate rsync as a service. If you go to Services, you'll notice the page is a lot shorter because rsync is gone. They also killed S3, and I have a problem with them killing S3, well, sort of, and we'll get to the applications next because that's where the problem really occurs.

So we go over to Apps, and this is the new layout. I've got it zoomed in a little; let me set it to 100% so you can see how it looks when you're not zoomed, and then I'll zoom back in to make it easier for all of you to see. When you go to Apps, it starts with Discover Apps and gives you this list, but you're probably thinking, that's not 98 apps, and you'd be right. I don't like the way they did the filtering. It's kind of small here, but it says "view all new and updated apps" and then "view all media apps"; it gives you the top ones and then you have to click into each view. The other option is to sort by app name, and yes, by the way, this is like an eight-point font. Even I almost have to zoom in to see where it says "app name"; it's just tiny how they set this up, and this is at 100%, your normal view. But when you choose to view by app name, there you go, now we can see all the apps. You can always use the search at the top, which is great, because if you're trying to find a specific thing like Tailscale, it filters and finds it really fast. They've done a nice job if you know what you're looking for; you can just punch it in. And of course, if you just want to know what's available, these are all the TrueNAS-supported apps.

Now, I did make one minor change under Manage Catalogs: I enabled the enterprise catalog. I don't really know why it's not checked by default, so I've got both community and enterprise. The reason I did that is the Syncthing video coming up: if you check the enterprise box, you end up with two versions of Syncthing, one in the charts train and one in the enterprise train. I'm not sure why they don't just use the enterprise one everywhere; it's simply a newer version that has the extended attributes support. So that part is a mystery to me, why they keep two. Maybe there's a reason, but it seems like you'd want the one with all the features all the time. I don't see the downside: it's still free, it's still Syncthing, it's still open source. If someone knows, I'd love to hear a comment.

Back to the problem, though. First, let's try this one, and I don't know if this is a problem or not: the rsync daemon. You can install the rsync daemon app if you still want to use rsync; they rebuilt it as an app. I think that's fine for those of you who say, "hey, I really have to have this." I haven't done much with setting up the modules, but it looks pretty straightforward.
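For the ZFS replication I mentioned running between the Core box and this one, a minimal command-line equivalent (hypothetical dataset, snapshot, and host names) looks like this: take a snapshot, then pipe zfs send over SSH into zfs receive on the other side:

```python
import subprocess

SNAP = "tank/media@manual-2023-10-29"   # hypothetical snapshot name
subprocess.run(["zfs", "snapshot", "-r", SNAP], check=True)

# Pipe a recursive send over SSH into a receive on the backup box.
send = subprocess.Popen(["zfs", "send", "-R", SNAP], stdout=subprocess.PIPE)
subprocess.run(["ssh", "backup-nas", "zfs", "recv", "-F", "backup/media"],
               stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```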
Maybe, if there's demand, I'll do an rsyncd tutorial on how to set that app up. Now, the reason they got rid of it makes sense: it's not secure. You're transferring data back and forth unencrypted, and while that's less of a concern on a dedicated storage network where you can lock it down by IP, it is still plain, unencrypted rsync traffic between two systems. The biggest reason to use it is usually something like a Synology or another NAS that will talk rsync but won't do ZFS replication to a system like this, because it doesn't use ZFS; if you're looking for cross-compatibility, an rsync module gets your data from point A to point B. rsync is nowhere near as efficient as ZFS replication, and the other option is Syncthing, which does communicate securely and can be loaded on lots of different NAS systems. So if I had to sync data that way, I'm less likely to use the rsync daemon and more likely to use something like Syncthing.

Now let's go back over to Discover, and this is where I have a problem. They've decided you can use MinIO as the S3 replacement. With MinIO you've got two versions, the enterprise train and the charts train, and oddly the charts train seems newer, but either way there's an issue I've been looking for a good write-up on and haven't found: I don't see a way to put a certificate on it, even in this newest version. The problem with not putting a certificate on an S3 endpoint is that you'll get errors from a lot of clients that expect HTTPS. Even with a self-signed certificate you can usually work around that, but you are required to put something, a certificate, in front of it, and that creates an extra problem: now I have to find some way to terminate TLS in front of this. I don't know why they don't let you choose even a built-in self-signed certificate or an imported certificate for self-hosting S3. It's completely doable with standalone MinIO, because I've used it. I've set it up on Linux boxes so you can tie your storage to an S3-compatible system, since some clients have use cases for that; Veeam is an easy example. Build out a Linux system, throw ZFS on there, build out your RAID, throw MinIO on it, put a certificate in, and away you go. That's what's missing here, and I don't know why. They got rid of the built-in S3 service, which did have a certificate option, so why drop it and not add it to this app? Some of the other charts you load do get certificates, and that's awesome; I like having certificates, it makes my life easier. Even with Nextcloud you can choose a self-signed cert. I don't know why that's missing from here. If someone knows, I'd love to hear a comment. I've seen people ask, and it's a dead end in the forums: they ask "how do I put a cert in" and nobody answers, because there isn't really a way to do it that I'm aware of. There are workarounds; sometimes you can find services that use S3 but don't expect HTTPS, but that's not the majority of them, at least from my testing. If I'm wrong, let me know.

Let me catch up on some questions here, because that's most of what I have. Just so we're clear, that was the whole run-through I've done so far with this.
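To show why the missing certificate option matters, here's a sketch of what an S3 client expects: boto3 talking to a hypothetical MinIO endpoint over HTTPS, with `verify` pointed at the CA for a self-signed cert (endpoint, keys, and CA path are all placeholders):

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://minio.lab.example:9000",   # hypothetical MinIO endpoint
    aws_access_key_id="minioadmin",
    aws_secret_access_key="REPLACE_ME",
    verify="/etc/ssl/certs/lab-ca.pem",              # CA bundle for the self-signed cert
)

# Without a trusted (or explicitly provided) certificate, most S3 clients
# fail TLS verification right here.
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```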
Now, as far as questions go, let's see here. "If you have enough drives for dRAID, you don't need Tom's videos anymore." Maybe. "Oh, the new UI asks for 2FA separately, which is a waste of time." I don't have 2FA turned on. "Do you think SCALE is ready for customer deployment? How would you rate the project in late 2023?" I still choose Core, because most of my business clients, as I said a little earlier, just want it for storage, not for the apps, and I would not use the apps in production. "I have over a hundred drives and I don't have the language down for TrueNAS SCALE; still in the discovery phase for what I'd like to do with these drives. Looking at Lustre now." Okay. "I thought they were going to tear the apps out of Cobia because it's so clunky and collided with a Docker/Portainer instance." Not at all; I'm not aware of that at all. You know, it would almost make more sense to me if TrueNAS had somehow partnered with Portainer. It already exists, it's built out, it has all the functionality they're trying to reproduce. Instead they tried to reinvent the wheel, and I don't understand putting that much labor into customizing it and making a complicated system, yet this is where so much of their time has been spent. At least from my understanding, I don't get it either.

"If you need S3 storage, you probably should be running Ceph or MinIO rather than just a few disks on a NAS." That's not true at all; I completely disagree with you, because there are so many use cases for S3 storage precisely because it's common. Matter of fact, it's a really easy way to back up a dozen different Synology NASes scattered all over the place to one location running S3 on a TrueNAS server, and we are doing things like that. There are also plenty of developers building something internal: we have a client whose internal lab runs against S3, I believe, because of the way they write storage. In production it goes out to Amazon S3; in development it's all internal. So there's actually a lot of use case for that.

"This is not production, but we want stability because we're going to prototype alternatives and all sorts of things on it and make it available to others. The goal is to scale it out, start small and grow large, with the ability to lose a whole node." Yeah, Ceph is probably the way to go for that if you're looking for large-scale, scale-out storage, and 45Drives has a whole playlist on all the things you can do with Ceph. Highly recommend it.

"Maybe you can add a cert to MinIO via environment variables." I think you probably could, but no one's documented it very well, so I don't want to guess my way through it. "I feel like they're fighting an uphill battle with apps, trying to repackage every Docker service with a custom wrapper; a plain TrueNAS UI would make more sense." Yeah, I agree with that completely. It would just make sense: you're trying to customize something, so just use Portainer and stop trying to customize. Portainer for some reason calls them stacks, and I'm not a Portainer expert, but I think in most instances you can pretty much copy and paste your Docker Compose file into it and it just works.
That's not an option here in TrueNAS SCALE and Cobia. "There is an rsync task option over on the Data Protection tab, but it looks like the module is set up differently; is that something I missed?" Let's drop in here and look real quick. Smart... oh, rsync tasks. It is an rsync task, but it connects to a remote host. So you're able to add a task that connects to a host, meaning this box acts as the rsync client, not the rsync server; you add an rsync task pointing at another server, and that other server runs the daemon side. That's how I perceive this working, anyway. "Actually, the feature I wish they'd build into TrueNAS is VFIO support, specifically for NICs. I had to write a custom CLI to manage attached VF NICs on TrueNAS." Not sure why you need that, but okay.

Now, the next thing they changed, and I don't think this is a bad change, just a bit of a learning curve, is pool creation, and I think they kind of had to change it because of dRAID. So let's talk about how you create a pool and what's different: Storage, Create Pool. I'll make it a little bigger, and we'll name it "youtube-pool-demo." They give you a warning, and you can choose encryption, but they're letting you know: obviously back up the key, but you don't need to encrypt the pool, because you can encrypt datasets later. And as I mentioned at the very beginning, encrypting the pool can cause you problems. We'll go ahead and hit Next and choose the layout: stripe, mirror, RAIDZ, and this is where we have our dRAID. For dRAID, I believe somewhere along here... yeah, here we go, change this to that, and there's the minimum: they tell you right here that dRAID1 needs a minimum of 10 disks. And what do they say for dRAID2? "Not enough drives." That's what they say. They're smart about it: as you change the layout, it tells you pretty quickly over on the side what it's doing. We have four drives in this system, so we're just going to go with a traditional RAIDZ here; I don't have enough drives to really do a dRAID layout, and I'm going to do a separate video on dRAID and its complexities, but there are plenty of links down below to dRAID write-ups. We choose the disks, and it only lets me choose a width that fits, so you might wonder what the layout is. With the four drives, it set the number of VDEVs to one automatically, and it's telling you right here how it's laying it out: one RAIDZ1, four wide. The information is laid out well; I'd say it's a generally easy way to set up a new pool. Something of note here: if you scroll down, yeah, there we go, there's manual disk selection, which is kind of cool too, because it lets you drag and drop disks to build out the layout. So I think they've done a nice job. I need to set this up and test it on bigger hardware; we have a 45Drives XL60 sitting at the office that's not ready for the client yet (or the client isn't ready for it, something like that). Either way, it's at the office, and I may try loading TrueNAS SCALE on it and playing with 60-drive layouts, because then I can do some more demos. I'll probably use that server for some of the dRAID demos and for talking about the different layouts.
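For anyone curious what a dRAID layout looks like outside the wizard, here's a hedged CLI-level sketch with made-up disk names: a 12-disk dRAID2 with 8 data disks per redundancy group and 2 distributed spares. Double-check the draid spec syntax against the zpoolconcepts man page before using it on anything real:

```python
import subprocess

# Hypothetical 12-disk example using the draid<parity>:<data>d:<children>c:<spares>s form.
# (children - spares) must be a multiple of (data + parity): 12 - 2 = 10 = 8 + 2.
disks = [f"/dev/sd{letter}" for letter in "abcdefghijkl"]
subprocess.run(
    ["zpool", "create", "bigpool", "draid2:8d:12c:2s", *disks],
    check=True,
)
```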
Once we've chosen our RAIDZ in the wizard, I like what happens next. Hit Next and it brings you to the log VDEV layout: stripe or mirror, what size, and so on. Do we need one? Nope, not worried about it; I don't have any spare drives in here. Then, what are we going to do for a cache drive? No problem, we can choose the same way. Hit Next again and you can choose a metadata VDEV option, Next again for a dedup VDEV option, Next again to review, and then create the pool. "Contents will be erased." Yeah, I'm okay with that. And there's our pool creation. So they've changed it a little, but I think they've changed it for the better. It's more of a wizard format, and I think it's a little easier to understand if you're new. It's a little confusing if you're looking for the old menu, but if I were a brand new person, I'd say, okay, this is easy enough to do.

"Is there a top-out speed for streaming MP3s or MP4s in the four-to-five-minute range? I have a massive collection and I'm trying to figure out what might be best for streaming." Small files like that, MP3s and typical music files, you can probably stream quite a few of concurrently. Most people are streaming something much more intense, like 4K video, and you can do quite a few of those on there too.

"Can't wait to easily add drives as needed, similar to Unraid." Well, you are going to have to wait a long time for that. It's not going to be as simple as people may hope. I know iXsystems specifically is sponsoring the code for RAIDZ expansion in ZFS, but there are a lot of complexities involved. They had targeted it before, and I believe you can find an old blog post where they targeted the Cobia release, but it is not something they advertise in Cobia. I think ZFS expansion is still a long way off just because of the technical challenges that come with it.

"Would it make more sense to just add an SSD and a separate pool for the ix-applications, to avoid the inconsistency with encrypted pools, since we can still use host path with an encrypted pool?" You can if you want; I don't think it's a big deal. Just build the pool unencrypted and the problem's gone, because you can encrypt anything you want underneath it. So I can start with an unencrypted pool and then have encrypted things underneath; that's the example I gave with this particular server. If we log back into it and go to the datasets: this one's encrypted, this one's not (I'll zoom in to make it a little easier to see). No problem, this works. They're both SMB shares, and matter of fact, the encrypted one has a dataset being used by Syncthing and SMB. I started that demo playing around with it, and it works fine. It's not something you need to do, just something you can do, but why not just stick with the regular unencrypted pool and life is good.

"How would you go about exporting a SCALE VM to bare metal?" I don't know, really just whatever backup software you have; it's not going to be anything built into here. You would take that VM, back it up, and restore it with whatever backup tool, Clonezilla for example. Very few people ever go from VM to bare metal.
It's almost always the opposite; bare metal to VM is the more common request. "They also changed something: the CPU temp reads about 10 higher compared to Bluefin." Interesting. "Isn't small files better with mirrors?" That kind of depends, and it depends on how wide the VDEVs are. If you have a lot of drives, you break them out into a series of VDEVs. For example, with 30 drives you could break them into groups of ten, so you have three VDEVs of ten, or, for even faster reads, break them into groups of five and you've got six VDEVs. Those can really boost your read rates for streaming. The downside of using mirrors is storage efficiency at some point. Yes, you can use a whole lot of mirrors, and I've seen people recommend it because it makes ZFS easier to expand if you're doing everything in mirrors, but you're doing it at the sacrifice of storage efficiency, so you need more drives to get the same capacity. So there's that as a downside.

Someone said they might be ready for the major release of OpenZFS; I assume you're talking about the RAIDZ expandability possibility. "Hey Tom, after I saw TrueNAS on a Supermicro board, I tried installing a dual 10-gig Realtek card." Okay. "Agreed, RAIDZ expansion is very complex; under the hood, blocks are not easily expanded, so what we are getting is good for emergencies, but better planning is really the better idea." Yeah. And if you are planning, and this is what we've always done with clients, you plan for symmetrical VDEV expansion. If I'm selling them, say, a 45Drives XL60 (a good example: 60 slots), and I'm doing things symmetrically with VDEVs that are ten wide, I sell them 20 drives, because now they can expand ten-drive blocks at a time and keep adding VDEVs. The downside is there's no automatic VDEV rebalancing; that only comes with moving data back and forth. But it's at least a solution to expand them out, an option you can plan for. I have a video on how to expand ZFS where I explain what symmetrical VDEVs are, and if you have that as part of your plan from the beginning, awesome, you can then expand ZFS.

"Can you expand on your thoughts on a ZimaBlade for a NAS solution? I have roughly 75 terabytes ... all music and 1080p video." Well, you're in luck, because if you didn't notice, we'll go back to the dashboard here: this box is running on an Intel Celeron N3450. Let me pull up a picture real quick, because... there we go, that's the thumbnail. I'm running a ZimaBoard with the expansion. My review of the ZimaBoard was running TrueNAS SCALE Bluefin; it's now running TrueNAS SCALE Cobia.

"Off topic: what does the rsync daemon offer over just sending it with SSH?" SSH has a cap on speed because you're encapsulating everything in SSH; the nice thing about that encapsulation is you're getting security from it. Raw rsync between two servers is faster, but you're not getting that security. "Wendell mentioned SCALE's way of implementing Docker is not the best. Can you ELI5 what he meant? He recommends running Docker in a VM." Yeah, that's the apps problem. The way TrueNAS SCALE works is they're repackaging things, and we'll pull up the apps here.
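Circling back to the mirrors-versus-RAIDZ question from a moment ago, here's the rough capacity math for a hypothetical 30 x 10 TB shelf (ignoring slop space and padding) that shows the efficiency trade-off:

```python
drive_tb, drives = 10, 30   # hypothetical drive size and count

mirrors    = (drives // 2)  * 1 * drive_tb          # 15 two-way mirrors -> 150 TB (50%)
raidz2_10w = (drives // 10) * (10 - 2) * drive_tb   # 3 x RAIDZ2, 10 wide -> 240 TB (80%)
raidz1_5w  = (drives // 5)  * (5 - 1)  * drive_tb   # 6 x RAIDZ1, 5 wide  -> 240 TB (80%)

print(f"mirrors: {mirrors} TB, 3x RAIDZ2-10: {raidz2_10w} TB, 6x RAIDZ1-5: {raidz1_5w} TB")
```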
These apps they ship are actually running in Docker, but it's their Docker. If I go over to Discover Apps and find something like WG Easy, that's technically a Docker app, but it's not normal Docker; the way they implemented it, it's deployed through Kubernetes on the back end, which is why you can't just drop a normal Docker app in. That's one of the reasons Wendell's suggestion was to use something like Docker and Portainer on a virtual machine: build a VM, have it run Docker, and use Portainer to manage Docker. That's just a simple scenario. It ties into the discussion we had a little bit ago, where people were asking why they're trying so hard to reinvent something that exists, and I agree with that statement. I kind of don't get the effort being put into rebuilding something when other solutions already exist. And by the way, we talk a lot about Portainer because it's open source and a cool little project, but Docker isn't that hard to manage on its own; Portainer just puts a web interface on it to make it easier, and it's not the only tool that does this. TrueNAS probably could have partnered with some of the other projects, I think, but this is my opinion. I don't run the project, so maybe I'm missing some context they have that I'm unaware of about why they wanted to do the integration the way they're doing it. Maybe they just wanted a turnkey solution that says TrueNAS top to bottom, and as this matures, maybe it becomes one of those things they sell to businesses and push from that standpoint, all branded as TrueNAS, which is pretty cool.

"Have you ever seen a server reboot instead of shutting down based on the NIC?" I don't know; not usually. That's usually a hardware problem. "You can actually fire up Docker images." You can, but it's not simple, and it's not as simple as actually using Docker; that's the thing. Yes, you can import custom images, but then you're doing something that's also not managed very well once you import them. It's kind of hacky. "You think I'm dropping frames? You hear audio cutting out?" Not sure about that.

Now, the last couple of things I'll talk about are over in the TrueNAS, that's right, not forum, blog posts, and these are linked down below. This is the ZFS dRAID primer. "To play devil's advocate, a lot of the middleware for TrueNAS SCALE is heavily involved in Kubernetes, so my guess is it's mostly a 'well, since we're already here' decision." But the apps are the only thing you need Kubernetes for. Matter of fact, one of the things that failed on my system was Kubernetes, because of the permissions problems, and everything else kept working: the shares work fine without Kubernetes, iSCSI works without Kubernetes, NFS works without Kubernetes. Kubernetes is only for the apps, so it's less "we're already here" and more "let's bolt this all together." All right, I see everyone says the audio is good. The dRAID primer talks a lot about layouts and how it works, and as they say, it's showing a lot of drives. There's a lot of write-up in there and I'm not going to spend a lot of time on it. There are actually two dRAID write-ups; there's this one, and I think the other one has some interactive demos in it.
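Going back to the Docker-in-a-VM suggestion, the Portainer side of it is just the stock run command from Portainer's own docs on a VM that already has Docker installed, wrapped here in Python only to keep one language across the examples:

```python
import subprocess

# Stock Portainer CE deployment (see Portainer's install docs for the current tag).
subprocess.run([
    "docker", "run", "-d",
    "--name", "portainer", "--restart", "always",
    "-p", "9443:9443",
    "-v", "/var/run/docker.sock:/var/run/docker.sock",
    "-v", "portainer_data:/data",
    "portainer/portainer-ce:latest",
], check=True)
# Then browse to https://<vm-ip>:9443 and paste Compose files in as "stacks".
```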
Once again, those dRAID write-ups are already linked in the description, and they show you all the different ways dRAID works, plus dRAID considerations. They make particular note that dRAID does not work well if you have a lot of small files; if your workload has a lot of small writes, it may not be as space-efficient as a full standard RAIDZ setup. So there are some interesting trade-offs. I actually think the visualizations they put in there are kind of cool; you can see how the layouts change, rows versus columns and so on. Like I said, I've got to spend more time with it to really understand and articulate it, but it's kind of niche. This is not something for home users. I mean, I'm not going to say there's no home user with a hundred drives in a rack, but that's the rarer home user, and there are definitely people on r/DataHoarder with large-scale systems that would benefit from this.

"Why use TrueNAS over SnapRAID? Just wondering." What is SnapRAID? I guess that's my first question, and thank you for the donation. I'll have to figure out what SnapRAID is first. "SnapRAID is a backup program for disk arrays; it stores parity information and can recover from up to six disk failures." I'm going to go with: SnapRAID sounds interesting, but I'm pretty sure it doesn't have the engineering behind it that ZFS does. I'm on the page reading about SnapRAID now; never heard of it before. Snapshots, runs on Linux and BSD... I don't know how it performs. So SnapRAID is one of the available non-standard RAID solutions for disk arrays, but I don't know enough about it to really give you a concise answer. I will tell you about ZFS, though. I think it's Michael Lucas who has said this a couple of times; he's got a book called ZFS Mastery, and he's also got SSH Mastery and SNMP Mastery, kind of a theme with his books, really good technical books, and Michael Lucas has been on the channel before. I believe he calls ZFS the billion-dollar file system because of the amount of development time that has gone into it. There is a major amount of development that goes into ZFS. It is one of the most robust file systems out there, one of the most large-scale, in-use, well-documented file systems out there, and despite all of its complexity, it is beautiful in how well it does at storing things. Btrfs is kind of a runner-up to it, but ZFS is really the king of storage right now. So I'm not trying to throw shade at SnapRAID, and it could just be me with too much ZFS in my head, but I don't know how it really compares.

"Does TrueNAS SCALE have a proper GUI for Fibre Channel assignment and LUN zoning?" I've never seen anyone connect TrueNAS SCALE to Fibre Channel. We've got clients who have TrueNAS Core with Fibre Channel, but it was already set up, so I don't have the answer to that; I've never set one up. When we get the disk shelves and things like that from iXsystems, it's all tied together, so we don't have to worry about that.

"Sounds like a niche RAID project. Would you trust it with your clients?" Yeah, that's the whole thing. I'm very confident in ZFS's ability to keep data safe, in its integrity, and in dealing with a degraded drive and things like that.
ZFS is awesome. I've actually learned a lot about Ceph, and I think Ceph is also awesome, so I won't say there's not another solution out there, but Ceph is a far more complicated solution than a single ZFS system. There are also some performance challenges with Ceph: if you have something that needs a lot of small writes, it is hard to get Ceph to perform as well as ZFS unless you ramp up the hardware a lot. It's not that you can't build a performant Ceph system; it's whether you have the budget to build a performant Ceph system. This is just a reality of distributed, node-based storage. One of the things I really recommend is talking to the people at 45Drives; they cover this a lot in their Ceph videos, what you have to build in order to get a performant cluster. They're like, oh yeah, the minimum is a four-node cluster, and on networking, they said they'll work with 10 gig, but they want to see 25 gig or even 100 gig on what Ceph refers to as the private network, with a 10 gig front end. So there's a lot required if you want the performance out of it. It's not that you can't get it, it's not that it won't perform; it's that you need a system that can handle that performance, because of the OSDs. Each OSD is essentially the storage daemon that runs, roughly one-to-one with hard drives. Every hard drive in the cluster gets an OSD daemon running on it, and all those OSD daemons can talk to all the other OSD daemons in the cluster. So if you have one node with 50 drives, there are 50 daemons running; if you have another node with 50 drives, those 50 daemons can talk to those 50 daemons; add another node and now we have 150 that can all talk to each other, keeping each other in sync. That has a performance cost. You need a fast CPU, and it has a cost in networking fabric: the fabric has to be fast enough that when I send a write to a node, it can replicate between the nodes, sync, and report back that it's committed. That's just how the storage back end works, so you can see there's a scaling problem if you don't have fast enough hardware for something like Ceph. And by the way, these challenges are pretty specific to any distributed storage. Gluster will do the same thing; it may have different mechanisms for how it distributes the storage, but any time you have a multi-host storage system, the distribution of that storage is where the challenges come in, both in processor and in networking fabric, getting the data distributed to all of them so they're all in quorum and that data exists on all nodes.

Think about this in terms of the video I did on how ZFS atomic writes work (I've got a ZFS-explained video). There's a mechanism by which ZFS writes to the drive: I send data to the drive, and it's got to write to the drives, plural, to all the VDEVs involved. The atomic transaction system is a complete way to know that the data you sent has committed, and once it's committed, it sends the acknowledgment back to whatever told it to write: yes, I have committed to all the drives.
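To put rough numbers on the OSD fan-out I just described, here's a back-of-the-envelope sketch for a hypothetical three-node, 50-drives-per-node cluster with 3x replication:

```python
nodes, drives_per_node, replica_size = 3, 50, 3   # hypothetical cluster

osds = nodes * drives_per_node        # one OSD daemon per drive -> 150 daemons
acks_per_write = replica_size         # each client write is committed on 3 OSDs before it's acknowledged
peer_links = osds * (osds - 1) // 2   # worst-case OSD-to-OSD peering mesh -> 11,175 links

print(f"{osds} OSDs, {acks_per_write} copies per write, up to {peer_links} peer links")
```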
Back to ZFS: that's the sync on-and-off option you'll see. It's obviously faster if you don't wait for the sync, but of course that means you could, in theory, lose some data if the drives failed while that write hadn't been completed. That's the sync setting you see in ZFS; you can turn sync off. There's no turning sync off for something like Ceph: when you write to Ceph, it waits until all the hosts have committed the data, and only then does it release and tell you the data has been committed. That's fine when you're writing a couple of files. It is a scaling problem when you have a database with small writes and you're operating at scale. It's not that it can't be done; you just have to ramp up how fast those commits happen. So this is a broader topic I'm trying to figure out how to fit into a video, and maybe I'll just do a dedicated video on distributed storage, centered around Ceph because that's where I have the most knowledge after doing the training, though I'll probably still do it with the people at 45Drives.

HA is already available if you buy from iXsystems; they already have high-availability ZFS. It works, and it's something you can absolutely do. They have a dual-controller system: two motherboards connected to one ZFS array, so the motherboard itself can fail. The M50 that iXsystems sells is an example of this: you can have a complete motherboard failure and, boom, it fails over automatically. What you can't have is high-availability multi-node with ZFS alone; that's not how it works. That's where they're gluing Gluster on top of it with ZFS on the back end, and this is where I don't think that's the best approach. That's my opinion, and mine is just one voice in the choir of voices here. My understanding of Gluster and my understanding of Ceph (and, as I said, I have more knowledge of Ceph) is that Ceph, which uses a very different mechanism than Gluster, is way better. I'm getting this opinion from two different sources who work at enterprise-level storage, who have deployed both solutions and tell me Ceph is the better one, but there could be someone out there who really likes it, because the people at iXsystems are quite smart when it comes to storage.

"What's the difference with Lustre?" I mean, I've never tested Lustre. Is it open source? Because all the really big places seem to like Ceph; the hyperscalers like Ceph. Ceph is used at companies like, I believe, Facebook, and because I've had access to some really interesting clients, I know some of the biggest in the Fortune 100 have large, large amounts of Ceph storage. I've not seen any enterprise storage on Lustre. I'm not saying it doesn't exist; I'm just saying that in the world I work in and the places I've seen, I've not seen Lustre. So I'm assuming Ceph is better. I don't know, I don't know enough about it. I mean, they have a page, and I'm assuming this is their page; I'd actually say their page is a little lackluster, it kind of reminds me of an old page. At least it's open source. But I don't know Lustre versus Ceph; I don't even know Ceph well enough to really build an awesome system. Nonetheless, you're the first person to ever tell me Lustre is better. Most of the people I know lean toward Ceph, and granted, I work a lot with 45Drives and they are really good at Ceph.
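And on the ZFS sync setting mentioned at the start of this bit, it's literally a per-dataset property. A quick sketch with a hypothetical dataset name: leaving it at "standard" keeps synchronous-write guarantees, while "disabled" acknowledges writes before the data is on stable storage.

```python
import subprocess

DATASET = "tank/vmstore"   # hypothetical dataset

# Trade a crash-loss window for speed (set it back to "standard" to undo).
subprocess.run(["zfs", "set", "sync=disabled", DATASET], check=True)
subprocess.run(["zfs", "get", "sync", DATASET], check=True)
```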
I'm going to go with 45Drives on this, because I know the scale that company operates at and who their clients are: the number of Hollywood production companies running on large-scale Ceph, for example. I think it's a really well-tested-out-in-the-field type of system. Maybe Lustre is kind of niche, I don't know; you said it's even harder than Ceph, and Ceph is already far more complicated than something like ZFS. So yeah, maybe still niche.

I'll end the stream here because I don't want to take forever, and I'm not going to answer the non-TrueNAS questions; I wanted to keep the focus on TrueNAS for anyone watching the stream to get more educated on this. All the links to things I talked about are in the description, which means they will also be in my forums, which is a great place to have a discussion on this. I will probably do some benchmarks and a more full review, but I'm going to use Cobia a little longer, because it's such a big change, before I rest on a verdict.

Oh, I'll mention at least one more of the bugs I found. I don't think this is in my notes... yeah, maybe not; I lost the link to the bug I posted, but I think I tweeted about it for anyone curious. I found a bug in syslog: remote syslog export isn't working in Cobia. I don't know how many people really use that. I export all my logs to a central log server, that being Graylog, but if you export to a log server, there's an extra curly bracket that ends up in the config file. I filed a bug report, and it'll be fixed in a point release. That might be my final verdict for anyone wondering: wait for the first point release of Cobia, because that and a few other fixes are going to be in there. I'm going to go ahead and start running it; I just need to reload the system so I don't have an encrypted pool, and I need to go grab another server to dump all the data I have at a faster rate. I actually have all my data backed up to a slow server, and it just takes me all day to move the data back and forth. It's all my videos; it's not critical. So I'll do a rebuild and then focus on running Cobia.

"Have you tried to migrate from Bluefin to Cobia via config file?" No, I just did an in-place upgrade. So I think that's it. All right, I think I've answered enough questions. Thank you all for joining. Links are down below to check things out. Check out the TrueNAS forums too, not just my forums: I have links to their announcement and, of course, all the comments people have there. That's the place to ask and to find bug reports; that's where I started with my bug report and then filed it over on their Jira. "You had to use a tunable to initiate shutdown." Not a problem I ran into; I've been able to shut these down without a problem, so I don't know when that problem occurred for you, but it doesn't happen for me. Then again, I don't shut the servers down very often. All right, that's the last question I'm answering. Thanks, everyone.