Tom here from Lawrence Systems, and I've been migrating from, well, one primary system. I've already got my lab system behind me set up on TrueNAS 12 from a fresh install, and I've done a couple of other videos testing TrueNAS 12 on different systems with fresh installs, and that's gone well. But I decided to take my existing FreeNAS system, the one I'm editing this video on right now, and upgrade it to TrueNAS. So I want to talk about the migration process: what works and what doesn't. And if you're watching this video, I'll save you the suspense: yes, it did work, or I wouldn't be able to edit this video. This is actually the server that handles my video files and a few other backup functions around the office, and so far it has worked really well. I'm going to talk about the details, enhancements, and new features, but before we jump into all that: if you'd like to learn more about me and my company, head over to LawrenceSystems.com. If you'd like to hire us for a short project, there's a hire-us button right at the top. If you'd like to help keep this channel sponsor-free, and thank you to everyone who already has, there is a join button here on YouTube and a Patreon page; your support is greatly appreciated. If you're looking for deals or discounts on products and services we offer on this channel, check out the affiliate links down in the description of all of our videos, including a link to our shirt store. We have a wide variety of shirts, and new designs come out, well, randomly, so check back frequently. And finally, our forums at forums.lawrencesystems.com are where you can have a more in-depth discussion about this video and other tech topics you've seen on this channel. Now back to our content.

So here's my TrueNAS Core system, and it's running TrueNAS Core version 12 beta 2. How do you get here? Well, let's start over here: we can check for updates. This is an existing FreeNAS system.
You can see it's on the stable train, and we just move it over to the beta train. Currently, as of August 13th, 2020, beta 2 is what you'll end up downloading if you switch to the beta train. If you want to go all the way to the latest build, you can use the nightlies, and nightlies are, well, kind of like they sound: nightly builds compiled from whatever code was completed that day. So that's going to be a little different; I'd recommend, unless you're feeling really adventurous, just doing the beta, which is what I'm using right now.

Now, other things, like interoperability between these systems: we'll go over to Tasks and look at the replication tasks. I have them set up purposely to go both ways, in the sense that one is a push from this server, which is running FreeNAS 11.3, and it lands on the TrueNAS server. I also have, if we go here to Tasks and then Replication, push tasks on the other side that push the other way, and both of these are working perfectly fine. I didn't have any issues pushing data back and forth between the two servers, so the nice thing is interoperability seems to be there, no issues with that.

The jails did migrate over; that really wasn't a problem. I don't run a lot of jails other than Syncthing in my production environment, so that seemed to work. I had a couple of custom-built jails, and they all started and seem to be functioning perfectly fine.
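Under the hood, those replication tasks are ZFS snapshot send/receive streams, which is why mixing FreeNAS 11.3 and TrueNAS 12 works as long as the pool features on both sides line up. As a rough sketch of what a push task does behind the GUI (the pool, dataset, snapshot, and host names here are made up for illustration):

```shell
# Illustrative only: roughly what a push replication task does under
# the hood. "tank/videos" and "backup-nas" are hypothetical names.

# Take a point-in-time snapshot on the source (FreeNAS 11.3) side
zfs snapshot tank/videos@auto-2020-08-13

# Send it to the TrueNAS 12 box; -i sends an incremental stream
# relative to the previous snapshot that both sides already have
zfs send -i tank/videos@auto-2020-08-12 tank/videos@auto-2020-08-13 | \
  ssh backup-nas zfs receive -F backuppool/videos
```

The GUI handles the snapshot naming, retention, and SSH keys for you; the point is just that the wire format is a ZFS stream, so old and new versions interoperate as long as the pool on the newer side hasn't enabled feature flags the older side doesn't understand.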
So that didn't really cause any issues. I will start, though, with one minor issue that I didn't see a fix for, and that's this right here: legacy encryption. What that means is the pool is still using the legacy method of encryption, and I don't get the per-dataset encryption options on it. The solution I've seen suggested in the forums was to just go ahead and rebuild: move all the data off, recreate the pool fresh, and then move all the data back. That's obviously going to take a little bit of time, and I'll probably do it before this goes into production. I don't mind the legacy encryption, and I currently don't need per-dataset encryption; I think it's a cool feature, but for me, I just encrypt the entire pool, because it's not like there are some pieces I want left out. Still, it is a neat option they're adding, being able to set encryption on a per-dataset basis as opposed to a per-pool basis.

Now, as far as upgrading the pool itself, it's pretty straightforward, but this is something I warn you about if you decide to do it, and they state it right here: the pool upgrade is a one-way street, meaning that if you change your mind, you cannot go back to earlier versions of ZFS. What that means is, if you go from, for example, FreeNAS 11.3 to TrueNAS 12 like I did, TrueNAS 12 has an updated version of ZFS, so you can upgrade the pool. You can keep operating the pool in legacy mode in terms of feature sets, because new versions of ZFS in the operating system can read and write old versions of ZFS with certain limitations, and for the most part, being a version behind is not usually a big deal. But once you add those new feature flags to an existing pool and then want to move back, say, reload the older FreeNAS version on the boot drive, well, now you've got a problem, because the older version won't recognize that pool; it has feature flags that belong to the new version. So that's a caveat any time you're upgrading a FreeNAS version. Outside of that, if you're not worried, and I wasn't, go ahead and upgrade. It will not upgrade the encryption, though, because that's a setup issue; while it does upgrade the feature sets to the new OpenZFS 2.0, you don't get the new encryption.

Now, what's different and what has changed? Well, I've talked before about TrueNAS Core and the conversion of all the code, and I'll also mention the release schedule again right here. TrueNAS 12 unifies FreeNAS and TrueNAS into a single image, and this is going to mean faster features, better testing, and so on. Their release schedule is slightly different because TrueNAS is designed more for the enterprise market, with support provided by iX Systems, as was the case with the previous TrueNAS. They now have TrueNAS Enterprise and TrueNAS Core, the open-source one; it's still open source, but with Enterprise you get paid support. Like I said, I have a separate video breaking down what all the details mean there.

What's really important is how the release cycle is going to go. Currently we're at beta 2; June 30th is when they released the beta, we're now in August, and September is when they expect to release the release candidate. What you see as release candidate 1 is more like what a release was for FreeNAS in terms of stability, so your business users might want to wait until release before they upgrade, but generally speaking, home users can go with the release candidate and be dealing with a really stable, mature system. So it's a little bit skewed: even though they're calling it beta, it's a lot more like what the release candidate used to be; when they get to the release candidate, it's going to be like a FreeNAS release; and when they get to the actual release, it's going to be more like the bug-fix releases, which are the U series. Every release after that, like U1, U2, is a minor update just to fix bugs, with no feature changes; how many there are depends on how many bugs they find. And bugs get fixed based on you reporting them, not complaining about them on Twitter or posting a comment down here that it sucks because something doesn't work. You actually have to go to their ticketing system, find the bug, report the bug, discuss it in the forums. That's how bugs get fixed, not by complaining in all caps that this stupid thing doesn't work.

Anyway, I'm getting off topic. Let's talk about some of the details here for TrueNAS 12 beta, because I'm excited about a lot of these, and it's one of the reasons I want to put it on all the systems I use: I can do all kinds of benchmarks on my test systems, but it's really not the same as using it daily. It does sound like I'm going to have to rebuild the pool, but that's not too big of a deal either; that's that much more fun.
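Going back to that one-way pool upgrade for a moment: the feature-flag state is visible from the shell before you commit to anything. A rough sketch, assuming a hypothetical pool named tank:

```shell
# Illustrative only; "tank" is a hypothetical pool name.

# List feature flags and their state: "disabled", "enabled", or
# "active". Anything "active" must be understood by whichever OS
# tries to import the pool.
zpool get all tank | grep feature@

# Pool status also notes when newer feature flags are available
zpool status tank

# The actual one-way step: enables all feature flags the running
# ZFS supports. After this, older FreeNAS releases can no longer
# import the pool.
zpool upgrade tank
```

Checking the feature list first is a reasonable habit, since it tells you exactly which flags a downgrade would trip over.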
I'll rebuild the pool, maybe before it hits full release, because I want to really test out some of these new improvements. Now, you don't have to rebuild the pool for some of them; some are just enhancements to the way the system works. They have improvements for multiple CPUs in the system: there's a need to manage non-uniform memory access, or NUMA, and they've done a better job of how it's assigned. They're really tweaking performance at the hardware level, because multi-processor, multi-core systems are becoming really popular; systems with multiple processors and lots of cores are really inexpensive to buy used on eBay, and we've got an offer code for Tech Supply Direct if you're looking for one with a warranty. That's actually who supplied the R730 that's behind me; I've done a video on it, and it's what I'm using right now for a lot of my beta testing and some of the upcoming tests we'll be talking about related to these new features. You can get these systems so inexpensively now, comparatively speaking. There's just a lot of old server hardware out there: with so many data centers needing cutting-edge gear all the time, they're pulling out these servers. I don't know, it just seems like there's a lot more of it on the market at some really good prices, so getting hold of this equipment is relatively easy, and of course you can take advantage of everything that works on it. You do have to find ones compatible with FreeNAS, but that's a different topic for a different day.

ZFS metadata on flash: special SSD VDEVs can be used for metadata acceleration. This is all part of the new OpenZFS 2.0, where they're adding the ability to tune things further, which is obviously going to create more to analyze. I did a video on performance tuning for ZFS, which is an art form unto itself: figuring out what the compromises are for how you want to configure it and what the benefits are. Now they're adding more variables to the mix, which means, one, I'll have to do a new video, and I'll probably pull in some iX Systems people for a discussion, because there are a lot more tuning options, and a lot of what we're going to talk about comes down to that.

With the ZFS fusion pools, a special SSD VDEV can also be used for database workloads, and the I/O record size is configured on a per-dataset basis: users can accelerate database datasets by configuring a higher I/O size. Once again, this is more performance tuning. When you talk about tuning something like a database workload, you're usually using FreeNAS or TrueNAS in an enterprise environment not just for one thing but for multiple things, especially on a very large server. So now, instead of creating "this is a pool for this, and this is a pool for that, where I tune the settings," you can tune on a per-dataset basis, which makes it a little easier to manage. I like that as a feature, and it's part of the fusion pool system they have.

Persistent L2ARC is kind of interesting. This is the flash-based read cache, and it is typically cleared on a controller reboot or failover. For smaller systems, with less than 1 TB of L2ARC, that can be fine; for large systems with something like 10 terabytes of L2ARC, it could take hours or even days to rehydrate the cache. The persistent L2ARC option avoids clearing the cache, allowing performance-sensitive systems to get back up and running at full speed. This is interesting because normally the cache just gets cleared and rebuilds on the fly: someone requests a file, it goes into the cache, and if you have slow spinning storage and nice speedy flash, great, it starts pulling from there, and because it rebuilds automatically, you don't think about it. But obviously, when you're talking about 10 terabytes of cache on flash, you want it to populate faster. So that's a neat feature.

ZFS asynchronous DMU and CoW: in the ZFS data management unit (DMU) and the copy-on-write (CoW) algorithm, these operations were previously implemented in a synchronous manner, which required one transaction to wait until another transaction completed. iX Systems contributed the conversion of these algorithms to an asynchronous approach in OpenZFS 2.0, which reduces the amount of wait time and increases parallelism, and an added benefit is that fewer disk I/Os are needed for sequential writes; this increases drive efficiency and reduces latency under heavy workloads. This gets down to some of the fundamental problems you have with a bunch of drives, and it's why I've mentioned before that people turn off sync writes (sync=disabled) on NFS shares to get better performance out of them: if you're synchronously writing data that could be written in parallel, there's obviously a cost, because I/O wait time is one of the killers. People look a lot at a single hard drive and it's all about transfer speed, but when you get into enterprise environments, it's all about the IOPS: how many operations you have occurring and how long the wait is before they can occur. You can end up with a lot of latency because the system is queuing up all the writes and they have to be written sequentially instead of in parallel. So they're working a lot on that, and I'm excited; this is going to bring some real enhancements, and it's one of the reasons I may rebuild my pool sooner rather than later, because I want to dive deeper, test this, and see what the differences are.

ZFS checksum vectorization: ZFS protects data by writing a checksum into the metadata for each block of data that's written, and this is going to use vector functions that are built into certain Intel processors. I don't have a lot of details on this; I don't know specifically which extensions on the Intel processors it uses, or whether those are available on AMD, but the vectorization should be a more efficient way to handle those checksums. When you get petabytes of data on a server, you need to make sure you don't have bit rot, and all these checksums are what keep bit rot at bay; there are a lot of files on these systems, and we deal with some companies we've consulted for that have a lot of data.

ZFS record size increases: they're modifying the range of record sizes you can set, which once again feeds into the tuning options for how you tune the system.

ZFS asynchronous TRIM: OpenZFS 2.0 includes asynchronous, automatic, and manual TRIM capabilities. A manual TRIM can be scheduled overnight or for the weekend to preserve performance during business hours.
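From the shell, the manual and automatic TRIM variants look roughly like this (tank is a hypothetical pool name):

```shell
# Illustrative only; "tank" is a hypothetical pool name.

# Kick off a manual TRIM across the pool's solid-state devices,
# e.g. from a job scheduled overnight or on weekends
zpool trim tank

# Watch the progress of a running TRIM
zpool status -t tank

# Or let ZFS issue TRIMs continuously as space is freed
zpool set autotrim=on tank
```

The scheduled-manual approach trades a burst of background I/O at a quiet time for zero TRIM overhead during business hours, which is why it's called out for performance-sensitive systems.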
This isn't needed for spinning drives; it's SSDs and NVMe drives, any type of solid-state storage, that need TRIM. I'll leave a link to an article over at Ars Technica where they talk a little more about this; one of the long-standing complaints about ZFS on Linux was its lack of TRIM. Now, what we have here is ZFS on BSD, but it's all OpenZFS 2.0, and what the article is really highlighting is that after several years of untrimmed use, SSDs can degrade to a third of their performance or less. That was a problem in the legacy ZFS on Linux; I don't believe it affected ZFS on the FreeBSD platform, because that implementation was a little more mature. But now that we're moving to 2.0, these things are all being solved. I'll leave the link so you can read through it, but take it with the fact that it was written in 2019, a little over a year ago, so a lot of this has since been fixed; I'm referencing it just for people wondering about TRIM.

Faster ZFS boot: this is obviously an issue when you have a lot of drives that need to be enumerated. My example machine over here, you can kind of see it in the background, and I've done a video on it: that R730xd has a lot of drives in it. When you have a lot of drives, there's a pause while the system enumerates them all, gets them set up, and imports the pool. They've moved to a more parallel way of importing, so it boots faster.

Now, iSCSI reads: this is where I'm going to have to redo my iSCSI-versus-NFS tests for virtualization storage, because they've enhanced this quite a bit on the read side and in the iSCSI tuning. Right now they're saying that, with the right hardware, over 1 million IOPS and 15 gigabytes per second can be achieved. Those are pretty impressive numbers, and of course the IOPS figure goes back to something I said matters a lot: you don't usually have a lot of sustained transfers so much as a quantity of transfers, and that's what can choke a drive system. So being able to hit 1 million IOPS is really impressive.

They've increased SMB client speeds, so you're going to get faster Server Message Block performance, and they've increased the number of clients that can connect. This is interesting to me because, once again, I'm editing this video on a Linux system connected via SMB to a FreeNAS box which runs on BSD. So we're emulating the Microsoft protocol on two different platforms, because it's still an efficient way to communicate, and of course we have other systems running Windows that need to talk to it too; it's just a popular implementation. So once again they've brought us up to a newer version of Samba with more enhancements.

And NFS, like I said with the speed comparison between NFS and iSCSI for virtualization: they've also updated performance on NFS, so it's not like they forgot about it; it has gotten an update as well. That's going to be great, because these protocols are frequently used for SAN devices and storage networks, so NFS and iSCSI are both really important, and I'm glad to see the enhancements on both.

Multiple NVDIMMs: each NVDIMM can be assigned as a write intent log for a different pool. This is not the same as a write cache, and I'm not going to spend a lot of time on it, but it's part of how the copy-on-write file system works: the intent data has to be written somewhere fast before it can be spread across the pool. This is where that synchronous-write problem comes in, and the solution is to buy some NVDIMMs. Any time you use a SLOG, it has to be faster than the pool it's supporting, or it doesn't really provide much value. That's why we're moving to NVDIMMs, which are based on DDR4 SDRAM, and these things are really fast. Of course, there are more enhancements now because we're always pushing the limits: first we used SSD caches; now SSDs are cheap, so we have pools of SSDs, which meant the next step was NVMe caches; but with SSDs so fast, we may soon have pools of NVMe drives, so we need something even faster to handle the write intents for those. And to be clear, it's caching the intent data for the write, not the entire piece of data. That's where multiple NVDIMMs come in, to push speed even further. Impressive. I love pushing the limits of storage and seeing where this is going, and TrueNAS really is on the cutting edge of all of it.

Updated PCIe interconnect for HA systems: I've talked about this before, like when I did the review of the M50.
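The SLOG setup itself is just a log VDEV on the pool; whether it's an SSD, an NVMe drive, or an NVDIMM is only a question of which device you point at. A rough sketch (the pool, dataset, and device names are made up):

```shell
# Illustrative only; "tank" and the device names are hypothetical.

# Attach fast devices as a dedicated SLOG (separate intent log);
# mirroring it means a single log-device failure can't lose
# in-flight synchronous writes
zpool add tank log mirror nvd0 nvd1

# The SLOG only matters for synchronous writes; per dataset you
# can inspect or control that behavior
zfs get sync tank/vms
```

This is also why the "faster than the pool it supports" rule matters: every synchronous write lands on the log device first, so a slow SLOG just moves the bottleneck rather than removing it.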
I mean how it fails over: having those interconnects tuned is important, because with a high-availability system you want all your storage to fail over seamlessly and smoothly, and there are more enhancements there. All these performance improvements, plus advances in processor performance, contribute to the ability to build and support larger systems, beyond 10 petabytes in size, which is just impressive. It's really great that they're doing all this. Like I said, I think we're going to see changes coming faster, and the demand for high-performance storage is only increasing: all these different cloud infrastructure systems have to be based on something in the back end when you get down to the fundamental levels, and ZFS is super popular for this, with TrueNAS and iX Systems at the forefront of a lot of the technology.

They're also getting all the documentation updated and built out, which is part of the same convergence: once they merge to one single documentation base, it's going to be so much easier for the teams working on development to have a single place to do all the updates, versus having to make nuanced changes for the differences between the two products. I have a whole video on why they're converging, but obviously you can see that with all these extra features, putting it all in one place, with one set of documentation, makes a real difference. If you get TrueNAS Core, you get all the bells, whistles, and features we've talked about here. But if you want enterprise support because you're a business and you say, "We can't just use some open-source product here; we need a support agreement and an SLA," well, they still have that. It's the same software: you drop a key in the license file and the software converts to the supported version, essentially, when you buy the hardware from them fully specced out. There's a little more nuance to it, but the concept's the same; it's the same program, and you can buy support for it if that's something you need in your environment. For you, the home user, or you, the person who like me is a big open-source advocate and says, "I just want to use the open-source one with all those features," go ahead, because they're all available in there. And like I said, I have a video where I compare and break the two down.

So I'm going to keep testing, and of course now I'm excited to have more speed testing to do. Maybe what I'll do is run a series of tests before I upgrade the pool, and then try some of the different fusion pool options. It's pretty in-depth to do all this, but it's going to be fun to play with. Maybe, if I get really ambitious, I'll load FreeNAS 11 and TrueNAS 12 and do comparisons on the same system: run a series of benchmarks, reload, and run the same series again. These are things I really want to do all the time, and I know people ask me to do them; it's a matter of finding the time, because each of these benchmarks has a sunk time cost: build it out, set it up, load it, tear it down, build it out, set it up, load it again, then do the comparison. The video part is actually one of the easier parts; doing all the comparisons is where the time goes.

Well, that's just me talking about a lot of the stuff that I use, so I'm excited looking at the progress. I'm daring enough to do this on my one production system, and maybe I'll move the other one over too, just so I can get more in-depth testing. The other one's doing some VM work; this one just does beta VM work, so all my lab stuff goes on the server that I'm also doing some editing on. It's not really every-day production, though, and I don't know if it's ready for every-day production until it at least gets to the release candidate. But, you know, maybe I'm feeling daring, and hey, why not? Problems are, you know, fun to have sometimes, because they become intriguing troubleshooting and of course turn into bug reports, so we can make this thing better. For those of you just waiting for this to hit release candidate: it's the people who take the time to test that get you there. This is community-driven open source, and that community part is not to be forgotten. It's not just the developers; it's community feedback on all your unique use cases that makes the developers go, "Wow, I never thought to use it that way. That's clever, but I understand why it needs to be fixed." That all comes from community contributions.

Oh, plugins: they worked. I can't remember if I mentioned it earlier in the video or not, but yes, my plugins migrated. I have very few of them, though, so more testing will be needed there. I can't guarantee that if you hit the upgrade button all your plugins will just migrate if you have a lot configured; that might be a different story, but that's another topic for another day. Thanks.
And thank you for making it to the end of the video. If you liked this video, please give it a thumbs up. If you'd like to see more content from the channel, hit the subscribe button, and hit the bell icon if you'd like YouTube to notify you when new videos come out. If you'd like to hire us, head over to lawrencesystems.com, fill out our contact page, and let us know what we can help you with and what projects you'd like us to work on together. If you want to carry on the discussion, head over to forums.lawrencesystems.com, where we can carry on the discussion about this video, other videos, or other tech topics in general; even suggestions for new videos are accepted right there on our forums, which are free. Also, if you'd like to help the channel in other ways, head over to our affiliate page; we have a lot of great tech offers for you. And once again, thanks for watching, and see you next time.