Tom here from Lawrence Systems, and I've talked a lot about storage. I've talked about Synology, I've talked about TrueNAS, and you'll find playlists down below covering both of those topics, even a video comparing Synology and TrueNAS to help you decide which one may be the best fit for what you want to do. But this video is a little different, because I want to talk about how to integrate storage into your network at a general, high level, and about proper implementations of storage. That's the topic we're going to dive into today, so let's jump into it.

Now, the first rule is: don't route your storage. This is a pain point. I believe the purpose a lot of people have in mind is to create firewall rules that only allow specific devices to access the NAS. That's a noble idea, but routing is not the best way to implement it. I have videos on how to properly implement it. With Synology, you build firewall rules tied to each interface, so you only expose certain services on certain interfaces; that video is linked down below. In TrueNAS, you do it by binding services to each interface, and even though that video is for TrueNAS Core, it works with TrueNAS Scale as well.

The idea is you always want your storage on the same subnet as whatever is connecting to it. Whether it's users connecting to file shares or a server connecting over iSCSI or NFS, you want them on the same directly connected subnet. The videos linked down below cover how to secure it while still providing direct access to the devices you want on the networks you want. Now, let's start with the basics.
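On TrueNAS, the SMB service is Samba under the hood, so "binding a service to an interface" looks roughly like the following Samba settings. This is just a sketch to show the concept; the IP addresses are made-up examples, and on an actual TrueNAS or Synology box you would set the bind addresses through the service's own UI rather than editing smb.conf by hand:

```ini
[global]
    ; Only answer SMB requests on loopback and the storage-side interface,
    ; so the share is never exposed on other (routed) networks.
    bind interfaces only = yes
    interfaces = lo 10.0.20.5/24
```

The same pattern applies to the other services (NFS, iSCSI): each one listens only on the interface that faces the network it's meant to serve.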
And when I say one subnet, same network, that does not mean the entire business network is flat; we're just keeping the storage on the same network as the computers that communicate with it. Let's say that's a Windows server running Active Directory and acting as a file server. This is a pretty common small business setup, and if they don't have many needs beyond this, it works perfectly fine. It's common sense, and it's mostly set up this way: one subnet the computers connect to, to talk to the Windows server and get the files they need, all controlled through Active Directory.

Now, as some businesses grow, especially those doing data analytics, video editing, or other file-intensive work, they may need a NAS. A NAS storage device presenting standard Windows file shares, such as the Synology or TrueNAS systems I've talked about a lot, can join Active Directory through that Windows server. So maybe the main business and office documents still reside on the Windows file server as they normally would, and you've added in this NAS. It seems obvious, but once again, we want to keep this NAS on the same network. Let's say we've moved to 10 gig so files move back and forth faster, or even faster than that; you still want everything on the same subnet so you're not routing storage traffic, and the NAS can perform. You can also do your backups on the NAS, because backing up an entire Windows server can be a bit harder, while a NAS has a lot of backup functionality built in, like snapshots. This works really well.

But what if we go to the next level of complexity, which is having all of your Windows servers virtualized? Maybe we have several of them, still running Active Directory. This is the time to consider not having Windows do any of the file services.
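To make the "same subnet" rule concrete, here's a small Python sketch (the addresses are made-up examples) that checks whether two hosts would talk directly over the switch or would have to go through a router:

```python
import ipaddress

def same_subnet(host_a: str, host_b: str, prefix: int) -> bool:
    """True if both hosts fall inside the same /prefix network."""
    net = ipaddress.ip_network(f"{host_a}/{prefix}", strict=False)
    return ipaddress.ip_address(host_b) in net

# A workstation and a NAS on one 10-gig storage subnet: direct, switched traffic.
print(same_subnet("10.0.20.5", "10.0.20.50", 24))   # True
# The same NAS reached from another subnet: this traffic would be routed.
print(same_subnet("10.0.20.5", "10.0.30.50", 24))   # False
```

The whole point of the rule above is to keep that second case from happening for storage traffic.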
Now, the reason for this is that once you virtualize Windows, it relies on a large virtual disk that has to scale with the storage demand, and that becomes very cumbersome once you're talking about terabytes of storage. When you have to migrate that large virtual disk to another hypervisor, or snapshot and back it up, it's large and unwieldy. That's where the problem begins; it's not the ideal way to design your storage. I've even seen people hit the limit on how big a single virtual disk can be and start chaining more disks together into a virtual RAID. That's a performance problem, it's not a good idea, and if any of it fails, it can be quite the headache to recover.

The more ideal design: you can still use your NAS and SAN storage to provide VM storage to the hypervisor, but ideally you also use the NAS's own file share functionality to connect users to the data they need, instead of having Windows act as the file server. Windows then just does the things Windows needs to do, like controlling user authentication and AD permissions. You'd have two separate networks for this: a storage network that the hypervisor uses to store the VMs and all those functions, and then, when it comes to actually delivering the file shares, they sit on the same subnet as the network users.

Now, for those of you wondering what happens if you really need Windows because you want to stay completely in the Windows ecosystem, there actually is a way to do that. The NAS or SAN has a storage network that only talks to the hypervisor, and then within the VM, either on that storage network or on another storage network you create, you present iSCSI to Windows. This is a really common setup.
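As a sketch of that separation, an NFS export for VM storage on a Linux-based NAS can be restricted so only the hypervisor's storage subnet can reach it. The pool path and subnet here are made-up examples, and on TrueNAS or Synology you'd configure this through the UI rather than editing the file directly:

```
# /etc/exports on the NAS: the VM datastore is reachable
# only from the hypervisor-facing storage subnet.
/mnt/tank/vmstore   10.0.50.0/24(rw,sync,no_subtree_check)
```

User-facing file shares would then be served on a different interface, on the users' own subnet, with no route between the two.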
So now your Windows VM stays a small, standard size, if you will. It doesn't have some huge virtual disk; instead, maybe we've attached 16 terabytes of iSCSI storage to Windows. Now we're using Windows as Active Directory and a file server, and that Windows server has multiple network connections: one, as fast a connection as possible, leads to the NAS or SAN for the iSCSI presentation, and a separate virtual network interface attached to that Windows server serves the network users. And of course, with iSCSI you can grow that on the fly. If 16 terabytes isn't enough, maybe you expand your NAS or SAN storage to 20, 30, 40, et cetera, terabytes, and as it expands, the iSCSI link is still just there. You're controlling everything in Windows, not in a separate server, but if you have to back up that hypervisor, the storage is separate.

This also gives you the advantage of having the NAS or SAN do its own snapshots of that large volume, versus trying to snapshot very large files in the hypervisor. There's generally a more efficient method on the NAS for snapshotting at large scale than there is in the hypervisor; it's just better at managing the deltas between snapshots because it's doing it at the block level. In the case of Synology with Btrfs, or TrueNAS with ZFS, those block-level snapshots are very fast and efficient and won't bog down the hypervisor. So this is also a good way to design your storage: presenting iSCSI to Windows.

Now, this concept applies just as well to Docker containers or other virtual machines that may be running Linux or FreeBSD. I talked about this in my Graylog video: you don't want to stick all of your storage inside the hypervisor as a large attached virtual disk.
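On the Windows side, attaching that iSCSI storage is a couple of initiator commands. This is only a sketch; the portal address and IQN are made-up examples, and in practice you'd also initialize and format the new disk in Disk Management afterward:

```powershell
# Reach the SAN over the dedicated storage interface (hypothetical address).
New-IscsiTargetPortal -TargetPortalAddress "10.0.50.10"

# Connect persistently (hypothetical IQN) so the disk survives reboots.
Connect-IscsiTarget -NodeAddress "iqn.2011-03.example.org:filestore" -IsPersistent $true
```

When you later grow the LUN on the NAS or SAN side, Windows just sees a bigger disk and you extend the volume; the VM itself never changes size.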
Graylog is an easy example because with a lot of logs you have a lot of storage needs, especially in a large-scale system. So ideally, once again, iSCSI and NFS are your two most popular protocols for doing this. Your NAS or SAN may hold the virtual disks for the OS, but the data is then presented over iSCSI or NFS from that same SAN, and once again, you build multiple network interfaces on that particular virtual machine. Docker has a slightly different way of handling it, but Docker can handle NFS mounts. All of your images can then stay really small, and they're very replaceable at that point, because the most important thing is your data. So this same concept applies. I have it presented here as one subnet to network users, but you could replace "network users" with the internet if you're hosting applications. This is an ideal way to set things up, especially if you have multiple hypervisor hosts and you want a common place for all of the data to live. No matter which hypervisor you replace, it's no big deal: you can swap out the OS, and all of your data always remains on the NAS or SAN storage as a separate entity.

Now, one question I didn't answer in the slides is: what about using a VPN for file shares? The reason I didn't put that in there is that I have a separate video talking about the challenges and latency problems caused by VPNs with file shares, and why your file shares are slow over a VPN. There are a lot of factors involved; it has much less to do with the speed of the connection and much more to do with its latency. You'll find that video linked down below. You'll also find a video if you're interested in Synology versus TrueNAS; this question comes up quite a bit. I made a 2023 comparison, and if I've got a newer one, that updated version will be linked down below, depending on when you're watching this video.
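For the Docker case, a compose file can declare an NFS-backed volume so the container's data lives on the NAS while the image stays small and disposable. The server address, export path, and volume name here are made-up examples:

```yaml
# Docker's built-in "local" driver can mount NFS directly.
volumes:
  graylog_data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=10.0.20.10,rw,nfsvers=4"
      device: ":/mnt/tank/graylog"
```

If the host dies, you recreate the container anywhere that can reach the NAS and point it at the same volume; the data never lived on the host.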
That way it can help you choose which NAS is right for you, or you can come to the conclusion I did: I need both. Both are great. Like and subscribe if you want to see more content from this channel. Hit me up on whatever socials are available when you're watching this video over at lawrencesystems.com, and I'll see you over in the forums. Thanks.