Tom here from Lawrence Systems. It is June of 2022 and there's a new version of TrueNAS Scale. Version 22.02.2 has been released with a whole lot of improvements and bug fixes, and we'll be covering some of that. I also took the time to take a system and do an in-place upgrade from TrueNAS Core 13 to TrueNAS Scale 22.02.2. So, in place, after I set it all up, same hardware, and then ran a series of benchmarks. The last time I did these benchmarks people said they weren't thorough enough, so I also threw in some iSCSI benchmarks and a few different file sets so we can better understand the performance differences over NFS and iSCSI between Core and Scale, and where Scale still falls behind a little bit. Before we get into the details of this video, let's first take a moment. Are you an individual or company looking for support on a network engineering, storage, or virtualization project? Is your company or internal IT team looking for someone to proactively monitor your systems' security or offer strategic guidance to keep your IT systems operating smoothly? Not only would we love to help consult on your project, we also offer fully managed or co-managed IT service plans for businesses in need of IT administration, or IT teams in need of additional support. With our expert install team, we can also assist you with all of your structured cabling and Wi-Fi planning projects. If any of this piques your interest, fill out our Hire Us form at lawrencesystems.com so we can start crafting a solution that works for you. If you're not interested in hiring us but you're looking for other ways to support this channel, there are affiliate links down below to get deals and discounts on products and services we talk about on this channel. And now back to our content. I'll be leaving links to everything down below, and here is the TrueNAS Scale announcement in their forums: "We are pleased to announce the release of TrueNAS Scale 22.02.2 Angelfish."
This new release has several bug fixes, improvements, and new features. With this release, the enclosure view is now available for the following TrueNAS platforms. I have one of the Mini 3s, so I'm excited to eventually try it on mine. I might start moving things over to Scale, just because for that particular workload I want to play with some of the Docker things on that server. And you know, it's pretty cool. I don't run any jails on it currently because, as I've said before, you cannot convert jails directly over from TrueNAS Core, which is built on BSD, to the TrueNAS Scale Docker platform. Now, I wanted to make a quick comment on this. Someone said pip, the Python package installer, was missing, and they opened a Jira ticket on it, but please note it was closed because you're not supposed to be using some of this. This is where some of the confusion comes in with TrueNAS Scale, and I think it's something that should be noted. This is designed to be an appliance that you load and set up through their UI, not the command line. I bring it up because there are a lot of people that start, well, monkeying with it a little bit. If you're going to go through all of that and try to manually do a lot of things on this particular system as opposed to using the UI, you're going to potentially break things, and the same thing goes for TrueNAS Core. It's just designed to be an appliance where their UI takes care of all of it and gives you a good interface for that. If you were going to run things manually, you may as well build your own bare-metal server and set up whatever tools you want, so you have more control. Scale is not a full Linux distribution; it is very customized. So if you start monkeying with things down below, so to speak, at the command-line level, you could potentially cause a lot of problems that the interface may not understand.
Now, the ZFS commands still work from the command line to control things within ZFS that may not be in the UI, if you have certain advanced use cases. But when you start messing with installing extra libraries, or, even though it is based on Debian, you start running apt-get updates, you can get in trouble. I know I've broken a couple of installs being curious about what would happen. So it sounds like they're starting to lock things down, like not being able to use pip to install whatever Python packages you like. Now, here are the release notes, and they have all the enclosures listed and lots of these little improvements on here. Some new features in here as well: expose info about Gluster network interfaces. I'm going to talk a little bit about that because it's the more common question. And of course, here's the list of the current bugs, of which there are actually quite a few. Yeah, there are, well, a whole lot of bugs still in the system. By the way, if you're an enthusiast like me testing it and you find more bugs, please report them; this is how the engineers get working on them. But the big important thing, of course, is the way this works in terms of Gluster and the tying together of different file systems, so you can have a clustered SMB share, a clustered file system. A lot of people were thinking there's just some magic where you log into a couple of TrueNAS Scale systems, tell them to talk to each other nicely, and then they do so. But that's not exactly how they've implemented it. The current implementation in TrueNAS Scale, and how you get the scale-out architecture, is all facilitated via TrueCommand. I don't know if they plan to allow it to occur locally on each system, but the current implementation as I understand it, and the way going forward, is always going to be using TrueCommand. TrueCommand is offered as a service or as a Docker image, with a license fee based on the number of drives.
I've not taken the time to really dive in and review it. It's pretty cool for managing multiple machines, both Core machines and Scale machines, and it adds some of that functionality to build out your cluster management. As for whether or not this will be exposed directly so people can just log in and do it: they do have some relatively inexpensive TrueCommand offerings you can sign up for trials on, but it is a separate management tool that orchestrates the control, enables the functionality, and ties Gluster together with SMB and all the other features they're working on. That's part of their roadmap as I understand it right now. So a lot of people ask, "Well, can you do a demo on it?" And I'm like, well, I'll do one in the future; I'll build a few Scale systems, tie them together, and do some testing with TrueCommand. I just wanted to offer that clarification: that's the way they appear to be going forward with it, from everything I've read so far in the documentation. So while they do talk a lot about the clustering, I just wanted to make sure it's clear how the clustering works. Now for everyone's favorite part: how did you do the benchmarks? Really simple. It was all on the same exact platform and system; the only thing we did was the in-place upgrade. This is not a high-performance system. The hypervisor is Xen, and the storage targets we chose were NFS and iSCSI, so we tested both. This was just a Ryzen 5 system we have that runs Xen, and it's not a particularly performance-oriented system. But these are the benchmarks we have on here, and we'll scroll down. The most important thing is that everything was the same other than the software change. We didn't have to rename the shares; the in-place upgrade worked perfectly fine, and the same NFS share and iSCSI share kept working. So no problems there.
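If you want to run the same style of test against your own NAS, fio can drive it from a job file. The exact job parameters used in the video aren't given in this transcript, so this is just a minimal sketch under my own assumptions; the mount point, file size, and runtime are placeholders you'd adjust.

```ini
; Hypothetical fio job file approximating the tests described:
; sequential 1M plus random 4K/8K read and write against a file on
; the NFS share (point `directory` at the iSCSI-backed mount to test
; that path instead).
[global]
ioengine=libaio
direct=1
size=4g
runtime=60
time_based
directory=/mnt/truenas-test   ; assumed mount point

[seq-read-1m]
rw=read
bs=1m

[rand-read-4k]
stonewall   ; wait for the previous job to finish
rw=randread
bs=4k

[rand-write-4k]
stonewall
rw=randwrite
bs=4k

[rand-write-8k]
stonewall
rw=randwrite
bs=8k
```

Run it with `fio jobfile.fio`; without the `stonewall` lines fio would run all the jobs at once, and the per-job IOPS and bandwidth lines in its output are the kind of numbers the charts in the video compare.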
Now when you look at the block size, and I'll leave a link to this so you can dive into the details more than I'll cover here: at a block size of one meg, no problem. In the FIO tests they're actually really close to each other. So here's Scale NFS and Scale iSCSI; actually, the NFS was just slightly faster. Ignore the fact that I have a typo: I typed ZFS instead of NFS, so wherever you see "TrueNAS Core 13 ZFS" I meant to type "TrueNAS Core 13 NFS," and likewise for iSCSI. But they're all really neck and neck here for a lot of these, whether it's IOPS or just raw data speeds. At the smaller 4K and 8K block sizes, though, it just gets stomped: the performance is roughly half, right here on the 4K. This was a 4K random read. Let's scroll down a little further. Even at 8K, we're still looking at about half the performance. So if you have a workload on a VM target that has a lot of small writes, well, these are the results you'll get; it's just not quite as performant. Back up here at the one meg again, Scale actually outperformed Core. TrueNAS Scale NFS was just slightly faster, not substantially, not the big swing we saw on the other ones, but a little bit. Then if we look at the random write, wow, we drop even further. Scale NFS seems to have a really hard time writing small 4K blocks; iSCSI fared a little bit better. And even for iSCSI at small 8K blocks, TrueNAS Core iSCSI worked a little bit better than Scale. You can see the different speed differences, but I'll leave links to all of this so you can stare at the numbers. It's just on that small write performance that they falter a little. Once you get to about 128K they come closer to being in there, but they're not the same until you hit roughly the one meg mark.
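One thing that helps when staring at those charts: bandwidth and IOPS are two views of the same number, since throughput is just IOPS times block size. A quick sketch (the function name is mine, not anything from the video or from fio):

```python
def throughput_mib_s(iops: float, block_size_bytes: int) -> float:
    """Convert an IOPS figure at a given block size to MiB/s."""
    return iops * block_size_bytes / (1024 ** 2)

# The same 500 IOPS moves a very different amount of data:
print(throughput_mib_s(500, 1024 * 1024))  # 1M blocks -> 500.0 MiB/s
print(throughput_mib_s(500, 4096))         # 4K blocks -> ~1.95 MiB/s
```

That's why the small-block results are where a VM storage target gets hurt: lots of tiny writes means the pool has to sustain very high IOPS just to reach modest throughput.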
So take that for what it's worth, if using it as a storage target for virtual machines over NFS or iSCSI is something that's important to you. Now, this is the system I did the actual testing on. It is also an AMD Ryzen; it's just a Ryzen 3 2200. But the important part, as I said, is it's the same system that ran Core and then ran Scale. And we've tried this with other systems before and ran into the same issues where it was a little bit slower, so it doesn't appear to be specific to Ryzen; we tried the same on Intel and got very similar benchmarks, where we've seen it a little bit slower on the 4K write performance. Now, if we go over here to the apps, which is of course what a lot of people are looking at, there are all the cool things you can run in Docker. That's a really solid use case for this, because the applications available for Docker versus the jail and iocage system in Core and BSD are, well, substantially different. And it makes a lot of sense throwing your applications directly on your storage, especially if you have something that's either storage-intensive or just does a lot of writes to storage, because now you're removing that barrier of "I need the application to talk to the machine." If it's on the machine, it can talk to it at a much higher level. So I haven't really done any tests there. If people encouraged me enough, I could possibly take the time to run a Phoronix performance test in an iocage jail and the same Phoronix build inside of Docker. It's low on my priority list, but maybe one day I'll test it to see how the direct I/O performance differs. If you're using it as a storage target over NFS or iSCSI, well, those other benchmarks are more relevant to that. But I did try installing applications here. I installed Netdata, which worked; Nextcloud worked perfectly fine too. I had no problem going through the settings and setting up the storage on there, and I feel it's pretty stable. It went really well.
I pointed it at a specific dataset for Nextcloud. So I'll probably do some tutorials coming up on it, because I actually found it relatively easy to use when I did this. It deployed faster than it did before, or at least it felt faster. Now, a few people commented about the hangups they had had with the interface, and I didn't seem to have any of those like before. There's an update available; let's see how fast it can update. We'll go ahead and tell it to do that, click the upgrade, and see how that goes. It says the upgrade went fine, and it seems to be working, so good news on that. So, my overall feelings are: I love seeing the progress on TrueNAS Scale. I'm finally thinking about moving one of my systems over to it, especially since it's going to be my video editing system. I can probably live with a little bit of performance loss because I'm mostly moving larger files, not a bunch of small files, since it's my video server, my TrueNAS Mini. So I'll probably do some testing on there to get myself more familiar with it, so I can eventually create tutorials. I know a lot of people have been asking for more tutorials on TrueNAS Scale, and it's been on my to-do list to start making them; I just need to really start using it. The majority of the videos I do have a lot more to do with the consulting work and the enterprise work that we do, where, especially lately, we're doing a ton of virtualization consulting, with TrueNAS being one of our favorite servers, especially for high-end systems, including, full disclosure, we're an iXsystems reseller, some of their high-end models that you see me review on this channel. So definitely that's where I spend a lot more of my time. But I'm fascinated by and love the catalog of different simple-to-deploy Docker images that are available, which, well, looks pretty cool and looks like a great reason to start doing everything right inside of one system.
There's a great efficiency in having a server where, as I said, the storage and the tools I need to use are right there on it; especially, you know, if you have any type of logging server or anything like that, that would be pretty cool to run directly on the TrueNAS Scale system. So I'm looking forward to the future innovations. I'm also looking forward to probably having to fill out a few bug reports, because I like helping progress things. It's kind of fun hunting them down and saying, "All right, I found this issue, and this is how you get around it." So I encourage everyone to do this for testing. In case someone asks or wants to point it out down below: I don't know if I would run the Docker side of things in a production environment. But I would consider the data itself, and the integrity thereof from ZFS, to be good, so I trust it with my data. I'm not as sure about the rest, because I'm much less familiar myself with Kubernetes, and until there's some better documentation for how they implemented it and how they put it all together, troubleshooting seems to be a little bit vague. I've looked through the forums and seen plenty of unanswered questions where people have had problems with it, and there's not a ton of documentation available yet. But hopefully I'll be part of the people making that documentation; I have no problem doing that once I understand it, and teaching everyone else how to use it as well. So get out there, help with the project if you can, participate in the forums, and answer some of those questions if you're someone who is familiar with all of it. In the meantime, thank you all for watching. If you want to have a more in-depth discussion, hop over to my forums, or just leave your comments down below on why you love or hate TrueNAS Scale. Thanks, and thank you for making it all the way to the end of this video. If you've enjoyed the content, please give us a thumbs up.
If you would like to see more content from this channel, hit the subscribe button and the bell icon. If you'd like to hire us for a project, head over to lawrencesystems.com and click the Hire Us button right at the top. To help this channel out in other ways, there's a join button here for YouTube and a Patreon page where your support is greatly appreciated. For deals, discounts, and offers, check out our affiliate links in the description of all of our videos, including a link to our shirt store, where we have a wide variety of shirts that we sell and designs come out, well, randomly, so check back frequently. And finally, our forums: forums.lawrencesystems.com is where you can have a more in-depth discussion about this video and other tech topics covered on this channel. Thanks again for watching, and we look forward to hearing from you.