Tom here from Lawrence Systems, and we're going to talk about TrueNAS Scale 22.12 hypervisor features. I want to cover the features, what it can do and what it can't do, because it's not always an easy question to answer: can I replace my Proxmox or XCP-ng or insert-name-of-your-favorite-hypervisor platform with TrueNAS Scale so I can consolidate everything into one device? There are your NAS and storage needs, maybe the application needs you have for TrueNAS Scale, and then your hypervisor needs. And the question comes down to: do your needs exceed what TrueNAS Scale can do? So I want to cover all those needs and break them down.

Now, I'm doing this on February 6, 2023, and I'm completely aware there's a release just around the corner, an update to TrueNAS Scale, but I looked through the notes and didn't see anything in that release that was going to bring any big changes to the hypervisor, so I went ahead and did this video. This video is in two parts: we're going to cover what it can and can't do, essentially a features list, and then we'll jump over to a demo at this timestamp here. I've tried to time index everything down below to make it easy to get to the part where you want to start.

Now, TrueNAS Scale uses KVM, and that means the VirtIO drivers, which work fine if you want to load Windows, Linux, or FreeBSD. There's no need to load them in Linux, of course; that's all natively built in. And it's easy enough to get the VirtIO drivers going in Windows: you can install Windows first, then install the drivers later.

CPU cores, settings, and CPU customization: you can actually define the number of cores and the number of CPUs in there. So you've got some granularity of control, plus a little bit of CPU customization if you have such a need, which is great.

You can upload an ISO. This makes building new VMs easy: grab the ISO, upload it, and you don't have to create any shares, jump through any hoops, or go to the command line and copy anything in. Those options are there if you want them, but they're not needed; right through the web interface, you can just upload it.

It does have passthrough, supporting GPU, PCI, and USB. It also has the ability to turn off the virtual display adapter, which is nice if you do GPU passthrough and don't want extra displays attached to the system, for whatever dedicated reason you're setting it up. That is a feature.

The other one is using zvols and ZFS. Yes, it does, but it's a little more complicated, and we'll cover that in the demo. It does have the ability to use them for backups, but not in the normal backup way you might get with other more mature, more full-featured hypervisors, and that's where the nuance comes in. Essentially, you can absolutely clone the zvol (the hard drive, the virtual disk image of the system), but that doesn't mean you're cloning all the metadata around it, such as the CPU customization and GPU passthrough you set up. So it does kind of have a backup, but outside of using ZFS replication, it's not exactly a backup.

And the same thing with the snapshots. Yes, it has ZFS snapshots, but it's only ZFS snapshots, and it's not that it can't be done; it's just a little different from how you may be used to doing it, because you have to go to the snapshot menu and set it up yourself. And once again, it's not snapshotting any of the metadata, just the drive data. But that's why we have a demo later.
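Since it's just ZFS under the hood, that manual snapshot flow boils down to standard ZFS commands. A minimal sketch, assuming hypothetical pool and zvol names (and remember, TrueNAS is built as an appliance, so the web UI is the supported way to do this):

```sh
# Hypothetical pool/zvol names; the UI snapshot menu is the supported path.
# Snapshot just the VM's zvol (drive data only, no VM metadata):
zfs snapshot tank/virtual-disk/myvm@before-changes

# Stop the VM first, then roll the zvol back to that point in time:
zfs rollback tank/virtual-disk/myvm@before-changes
```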
Now, the networking. Yes, of course it has networking, but by design the VMs can't reach the host. And I think this is really strange. I'll leave a link to the forum post, because, well, there are lots of them about it, but there's a forum post where this is discussed. And I don't think it's going to be fixed, especially because it appears to be suggested that people like me and Wendell from Level1Techs are just reaching for views by complaining about this, because professional infrastructure design shouldn't be done this way. And I mean, I do like views, I'll completely admit that. But the other side of it is, I see a lot of people in the community asking for this. I also think it should be a simple checkbox: if they want to default it to off, fine, but give us a checkbox so we can turn it on and talk to the host. Yes, that would be great.

Now, from a design standpoint, I don't disagree with them: hey, you probably shouldn't have your VMs able to talk to the management interface of the hypervisor. But this is a NAS that they added a hypervisor to, to build TrueNAS Scale. It's a NAS first, and the hypervisor is kind of an extra, secondary feature. Matter of fact, as iXsystems resellers and people who service these, we rarely see enterprise use of the hypervisors built into TrueNAS Core or TrueNAS Scale; pretty much it's always a separate box. So from an architectural design standpoint, it makes sense. But when you have a NAS, and you'd like to run something on that NAS in the hypervisor they took the time to build, it seems like high-speed talking to it makes a lot of sense without looping out of the networking and coming back in, which is another way you can do it. It just seems like it would be a better idea to have that in there, because I feel it's a NAS first; it's kind of in the name, you know, TrueNAS, that also happens to have a hypervisor. But that is my soapbox; I'll get on it real quick there and state it. Let me know in the comments what you think of that feature, let them know in the forums, and maybe they'll listen to the community and make it easier. I've seen some talk about it, and hey, maybe it gets done if enough people would like to see it done.

Now for the missing features. HA is missing. There isn't any way to do HA, load balancing, or even shared storage connections between multiple servers. Matter of fact, there's no way to even group these servers into a single management interface. That's not anywhere I've seen on any roadmap; maybe in the far distant future, but nothing I've noticed. If you're looking to cluster a bunch of servers together, that exists with Gluster for file sharing, but nothing like it is on the roadmap, at least that I've seen, for their hypervisor system. So if you have that need, wanting to fail over to another server or use shared storage, which is what would allow you to do this: no. Even though you can use iSCSI on other hypervisors and have TrueNAS be the target, TrueNAS itself can't connect to an iSCSI target; zvols are the only option for the hypervisor's drive storage, and there's no way to move one other than replicating it over.

This also goes back to migrations. Could you migrate by doing a replication of that zvol to another TrueNAS server? Yes. But once again, the metadata is not going to come over, so you'd have to rebuild the VM on the other server and point it at the zvol after you did the replication. So yes, it can be done, but it's a manual process, so it's kind of a yes-and-no answer. And if you migrate back and forth a lot and want some high availability, those features just go out the door.
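To give you an idea of what that manual migration looks like, here's a minimal sketch using plain ZFS replication from a root shell. The pool names, zvol name, and the truenas2 hostname are all hypothetical, and in practice you'd normally set this up as a replication task in the UI instead:

```sh
# Hypothetical names. Snapshot the stopped VM's zvol, then send the
# whole thing to a second TrueNAS box over SSH:
zfs snapshot tank/virtual-disk/myvm@migrate
zfs send tank/virtual-disk/myvm@migrate | ssh root@truenas2 zfs recv tank2/virtual-disk/myvm

# On the target you still have to create a new VM by hand and attach
# this zvol to it; none of the CPU, memory, or device settings come over.
```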
Firewall settings and IP restrictions, such as ACLs: if you have a lot of advanced needs for that, it's a little tricky and not natively supported. You'd have to go to the command line and try to add certain IP restrictions and things like that. That's a question that does come up if you want to focus on making sure things are very locked down and secure with your hypervisor, so a VM can't go rogue and assign itself different IPs. That's a common use case when you get to more advanced setups, and it's not an option at all that I've seen in the TrueNAS hypervisor. Obviously it could be done, because once again, under the hood it's KVM, but don't monkey around under the hood unless you really know what you're doing, because this is built to be an appliance.

Granular user controls are the last one I'll mention. If this is a multi-user environment where you want multiple people managing it, once again, there's no per-user control, which also means there's no logging of which user did what. So it's great for a home lab environment, but it doesn't fit your needs once you start getting into a larger environment where you want to log what each tech or sysadmin did to make changes to any of these NAS systems.

So those are some pretty big gaps, but they may not matter to you at all. You may just want to run something close to your NAS, and it does fit the need for that. So once again, it comes down to your needs. Let's jump into the demo and actually show what it looks like.

We're going to start here at the dashboard of my TrueNAS Scale 22.12. We're going to go to Virtualization and walk through adding a system. Guest operating system: Windows, Linux, or BSD; we'll go with Linux. We'll call this test_linux_yt. Now, the description is freeform, but up here in the name you're limited to underscores, or pushing all the characters together, or letters and numbers. You don't want special characters here, and you can't put dashes in; it does filter for that. So we call it test_linux_yt. System clock; boot method UEFI, though we do have the option for legacy if needed. Start on boot: we're going to tell it no, I don't really need it to start when the system boots. Display device can be VNC or SPICE. We'll go next.

We'll actually scroll back up here. Virtual CPUs defaults to one CPU; maybe we want to change it to four virtual CPUs, with three cores, maybe four cores. It's up to you, just don't exceed what your system has. And under here you get a little more fine-grained control if you need it: the CPU set, to specify the logical cores a VM is allowed to use, so some more advanced settings. Then the CPU model, if you need to specify that it's a 486, a Pentium, a Pentium II, a Pentium Pro, and so on. There are quite a few options in there, which is kind of cool. I haven't really messed around with it much, but it's there if you need it. Now to specify RAM: 512 is probably not enough, so let's go ahead and type 4, space, G, which it then fills in as gigabytes for us. And we actually have some advanced NUMA options if you need them. We go here to next.

Create a new disk image, or use an existing one. If you use existing, it's going to look for any existing zvols it can find; we'll choose, of course, create new. And we want to choose a zvol location. I have a dataset called virtual disk: you just go and create a dataset, and then you can nest zvols under it. As a matter of fact, you can nest zvols anywhere you like; I could put them under lab work, but just for my own sanity, I like to create a virtual disk dataset and then nest all of my zvols in there. We'll actually show that in a moment.
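For reference, what the wizard is doing on the storage side is just creating a zvol nested under a dataset. A rough CLI equivalent, with hypothetical pool and dataset names:

```sh
# Hypothetical pool/dataset names; the VM wizard creates the zvol for you.
# A parent dataset to keep VM disks organized:
zfs create tank/virtual-disk

# An 80 GiB zvol for the VM's disk (add -s for sparse/thin allocation):
zfs create -V 80G tank/virtual-disk/test_linux_yt
```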
Maybe you want 80 gigs allocated to this disk, but you can specify as needed. Go ahead and hit next.

Adapter type: we'll choose Intel, but you do have the option of VirtIO. If you're booting Windows, Intel will work perfectly fine and you'll have network access on that system without extra drivers. Then this is the name of the NIC the VM attaches to. You'll have to know that these are going to be different on your system versus mine, so you have to know which one you want; it doesn't really give you any clues here. You can go to the networking section and figure out which one is attached to your network, or which interface you want it attached to, but this can be changed later. Next.

Installation media: you can upload an installer image, which is great, and choose a location to save the ISO to. Or, as I'll do here, point it at an existing one: all the way under this virtual disk dataset, I already created a folder called ISO storage. As a matter of fact, we'll expand this out, and we can attach one of the bunch of ISOs I've already uploaded there. Hit next.

GPU: ensure display device. This is what I was mentioning, where it's going to make sure the VM has a display device, but maybe you want to uncheck that and pass through a GPU. I don't have an extra GPU in this system beyond the system GPU, so it gives me an error. So we'll just leave it at ensure display device, confirm the options, and it goes ahead and builds it when we hit save.

And now we have this test_linux_yt VM. We won't do anything with it, like actually installing the OS, right now; let's instead show you one that's already working, which is my Ubuntu one. You do have these options right here if you want a VM to start when the machine boots up. I leave it off, but it is a nice feature. One thing that's kind of missing again: even when this is booted up, this is all the information it really gives you. It doesn't give me any detailed, granular information; even if I look at the devices, it won't tell me what IP address it got or anything like that. But if we do go here and expand this out, we can take a look at the display and watch it boot. Well, it already booted, but we can reboot it again. We have this little control bar right here where we can show the extra keys if we need to send Ctrl+Alt+Delete, and do that right there, which sends the keys you're looking for. And we can watch this system boot, which just brings up the UEFI boot screen. So, pretty straightforward: if you've seen the noVNC console before, that's what a VM is bound to by default when you build it. Easy enough to see the display, and I actually found it to work quite well.

All right, so let's power this system off: we'll go ahead and hit stop, close, and the VM is stopped. What if we wanted to clone it? We'll call the clone yt_clone. They've done a good job with cloning, because if I want to essentially fork it into a clone, it leaves the original intact and thin provisions the clone. So the clone here can start up and run, which we'll go ahead and do, and it's going to have all the same settings as the original; it did copy all that metadata. And if you notice, the clone is going to be on VNC port 5905, versus the original here on 5900. This is actually how it handles multiple consoles, by incrementing the VNC port as you go, so you can have multiple VNC sessions open, because you're just hitting different ports.
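Under the hood, that fork is just a ZFS snapshot plus a clone, which is why it's thin. A minimal sketch with hypothetical names:

```sh
# Hypothetical names. A ZFS clone starts from a snapshot and only
# consumes space for blocks that diverge from its origin:
zfs snapshot tank/virtual-disk/ubuntu@clone-base
zfs clone tank/virtual-disk/ubuntu@clone-base tank/virtual-disk/yt_clone

# The origin property shows the parentage, and USED stays tiny at first:
zfs list -o name,used,origin -r tank/virtual-disk
```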
Now let's talk about where that data lives really quick. We go here to Datasets, and you see the virtual disk dataset down here, and then we can see the test_linux_yt zvol we created. By default, it's provisioned thick, so we have thick allocation on each new VM created. But this clone right here is only using about three megabytes out of the 38 gig, because clones are all thin provisioned. This allows you to have sparse provisioning, and your clones don't take up any more space than what they change. It's all done under the hood with ZFS; nothing you really have to do.

Now let's go back over to Virtualization. I could attach to this clone, but we're actually just going to stop it, because I wanted to show the devices here now that we've already created it. Yes, we can go back here and add different devices, such as a CD-ROM, NIC, disk, a raw file, or PCI passthrough, which is different from GPU passthrough; it'll list your PCI devices in the pull-down, though there's not much extra in my system for me to attach. We have the USB passthrough device, and once again, we can add that. Or we can go back and add another display, and it has the options in there.

Now let's go back over to Virtualization and delete this. One nice thing is, when you go to delete the virtual machine, it makes you type in the full name, so that's yt_clone here. Once you get the name right, it will let you delete it.

The last thing I want to talk about is snapshots. You may have noticed they seem to be missing from here: there's the clone option, but no option to snapshot. But yes, it does have the ability to do this. We're going to go back over to our datasets, go down here, and click on this one, for example the test_linux_yt we made. Let's say I want to do a snapshot: we go to Data Protection and click Create Snapshot. Then we give it a name, say tom_testing, and now we have a snapshot of it; it says snapshot created successfully.

Where this gets a little confusing is that you then have to go over to Data Protection, then to the snapshots, and it's going to pull up all the snapshots for the whole system. In order for me to revert to this particular one, I can roll it back, as in go back to this particular version. But when you're looking at it from Virtualization, as I said, if I make any changes here, it's not doing anything with the metadata; it's just reverting that particular zvol back to its state. So yes, it works; yes, it's a snapshot; but it's not listed in Virtualization, so you're not aware of it there. You have to go over to the snapshots themselves under Data Protection to actually see and administer it, or even to delete it. You can also build snapshot tasks so that snapshots are taken regularly, or replication tasks so they're backed up to another location. But we'll just go ahead and delete it: confirm we want to delete this one, hit yes, and now that particular snapshot is gone.
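Those scheduled tasks are the supported way to handle ongoing protection, but for a sense of what they boil down to, here's a minimal sketch with hypothetical names, using an incremental send for the recurring backup:

```sh
# Hypothetical names; on a real system, build these as Data Protection
# snapshot and replication tasks instead of running them by hand.
zfs snapshot tank/virtual-disk/test_linux_yt@daily-2023-02-06

# Incremental send: only blocks changed since the previous snapshot:
zfs send -i @daily-2023-02-05 tank/virtual-disk/test_linux_yt@daily-2023-02-06 \
  | ssh root@backup-nas zfs recv backup/vms/test_linux_yt

# And deleting a snapshot from the CLI:
zfs destroy tank/virtual-disk/test_linux_yt@tom_testing
```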
Hopefully I've left you with enough information to make a decision on whether or not the TrueNAS Scale hypervisor is right for you. If I haven't, or you have questions, leave them down in the comments below. From a stability standpoint, I've been running a bunch of VMs in it for several months and haven't really had any problems at all. The cloning is a little strange, but being able to clone a VM and delete the clone when I want to test something is convenient enough, versus snapshotting and rolling back, where there are a few different menus I have to go through to get to it. So I like the clone feature being right there; it would be nice if they integrated the snapshot feature, but maybe we'll have that in a future version. Nonetheless, if you want to have a more in-depth discussion, head over to my forums, or if you want to talk about TrueNAS Scale in more depth with the people over at iXsystems, head to their forums; they're a great place to read and learn more about the hypervisor and some of the ins and outs of how TrueNAS Scale works. I love hearing from you, so let me know your thoughts in the comments below and in the forums. And thanks.