The S3 API is a storage feature that Amazon has provided for a long time, but it's also an open protocol, so other companies can emulate it, and that includes TrueNAS and FreeNAS. There's documentation for it, and we're going to dive deeper into it here. S3 is object storage offered by major cloud providers, including Amazon Web Services, and it's well suited to storing unstructured data like multimedia, video files, photos, and other big data. TrueNAS and FreeNAS run MinIO object storage as a native service, allowing NAS storage to act as a storage target with standard S3 APIs. This is a really great feature, and the use cases come up frequently because there's a lot of software out there already written to take advantage of S3 storage. The protocol is already written for you; it's a robust, well-documented feature set, and a lot of popular backup utilities, including Veeam and MSP360, support it. We're going to play with a little open source tool called Duplicati, but they all support S3 as one of their targets for where to send the backups. So having your FreeNAS act as an S3 target makes a lot of sense. Now, just to be clear, this is not backing up a FreeNAS to S3; this is the FreeNAS being the S3 target. Whatever software you want to use, you point it at the FreeNAS instead of at an Amazon S3 bucket or one of the other companies that emulate one, so you have control over your storage. We'll cover some of the details and ins and outs of it. But first, if you'd like to learn more about me or my company, head over to lawrencesystems.com. If you'd like to hire us for a project, there's a Hire Us button right at the top. If you want to support this channel in other ways, there are affiliate links down below to get you deals and discounts on products and services we talk about on this channel, including a link to our Patreon if you'd like to become a Patreon supporter.
We also have a swag store where you can get shirts and other items; what's available changes from time to time, so go ahead and check that out frequently. And finally, our forums: if you'd like to have a more in-depth discussion about this video, make suggestions for new videos, or just reach out, say hi, and talk tech, our forums are a great place for that. All right, now back to the content. The platform we're working on is FreeNAS 11.3-U3.1, just so you know what version this is all on, in case anything looks different if you're watching this sometime in the future, or you haven't updated your system and you're going, hey, I don't have those features. Now, MinIO — "min-I-O," I think that's how you pronounce it — is high-performance, Kubernetes-native object storage. This is the tool on the back end: a fully open source, S3-compatible API server, essentially built right into FreeNAS. I want to do a video on it at some point, because it's actually really impressive. It goes beyond basic S3 and has a ton of other features — not just setting it up as an S3 API, but the ability to set up multiple servers and tie them together into distributed object storage, creating large storage pools cumulatively through the system. It's a really impressive system, it's completely free, and it works on more than just FreeBSD; it also works on Windows and on Linux. So it's actually kind of cool that you can set up a storage server with it — not to get too far off topic, but go to min.io and you can learn a lot more about those products. They've saved you all that trouble, though, and built it right in here to FreeNAS.
And while Amazon S3 seems really reasonable if you start looking at the pricing and all the features they offer for this type of storage, you will find that when you start storing upwards of 700 or 800 terabytes — and we have customers with that level of storage — they may want to run it in their own data centers on their own TrueNAS or FreeNAS systems. That's kind of the point of setting some of this up, like I said. And for reference, I will leave a link to duplicati.com; this is just the software we're going to use to demo how the S3 bucket works. They have some write-ups and details over there, and guides for things that go way beyond the scope of this particular topic. So let's start with the basics: how do we set this up? We're going to go over to Storage, then Pools, and create a place to put it. I have a couple of projects going on with MinIO — like I said, those will be some future videos — but for now, let's just add a dataset. What are we going to call it? I don't know, S3 YouTube? Call it whatever you want. S3 YouTube it is, because I'm going to delete this later. And please note, that's it: I didn't add anything, no special permissions, no special configuration, because the S3 setup takes care of that, and it's really straightforward to do. We're going to go over to Services, S3, and click the little configure icon. Leave it bound to whatever IP address you want; we have several here, but we're going to leave it bound to all of them. Port: I'm going to leave it at the default of 9000, because that's perfectly fine. And you can tell I've already tested this before. Then we choose the directory, and here's that S3 YouTube dataset. Now, the reason I chose a new one is to show you what happens when you choose a dataset. One, it will delete everything in that particular dataset, so don't choose an existing one with mixed storage, because it's about to blow that away.
Second, it takes care of all the permissions for you, so I don't have to do a second part where, oh yeah, you've got to go change all the permissions — they thought of that for you. When you do that, it sets the permissions, and all it did was set the user and group owner to minio. Pretty straightforward when you go back and look at the permissions. Next, the access key: create whatever access key you want, but create a reasonably good one. Understand that when you create this, they link to the AWS documentation, which dives into the details of how you should handle these secrets: don't provide your access key to a third party, know your canonical user ID, et cetera. There are important considerations here — there's no multi-factor authentication, and you're generally using this over a web transport — so you want a really high-entropy secret access key that you don't hand out to anyone. That's the important part. And use something rational for the access key when you're setting this up. For the secret key, we'll use something completely irrational: I put in demo1234. That's a terrible idea, but this is a YouTube demo and I wanted something I could type. Normally your secret keys are going to look something like this and be particularly long; that's the security you want in there. Also, when you're doing this over the internet as a transport layer: when they ask you for the certificate down here at the bottom, if you don't choose a certificate, it will work without encryption. That's bad, because anyone — especially if you're doing this over a web transport — would be able to grab that information. And I believe the credentials are just encoded in base64, which means without an encrypted transport layer, someone could just pull that out and reverse it. So don't skimp on things like that; make sure you're choosing a certificate. And a final note on this: self-signed certificates.
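As an aside, if you'd rather script that key generation than type one by hand, a high-entropy pair is easy to produce with Python's standard library. This is just a sketch: the 20- and 40-character lengths mimic the shape of AWS-style credentials, not anything MinIO requires.

```python
import secrets
import string

def make_s3_credentials(access_len=20, secret_len=40):
    """Generate a random access key / secret key pair.

    The lengths and alphabets here just imitate the look of AWS-style
    credentials (20-char uppercase/digit access key ID, 40-char secret);
    MinIO accepts arbitrary strings, so these are sensible defaults only.
    """
    access_alphabet = string.ascii_uppercase + string.digits
    secret_alphabet = string.ascii_letters + string.digits + "+/"
    # secrets.choice draws from the OS CSPRNG, unlike random.choice
    access_key = "".join(secrets.choice(access_alphabet) for _ in range(access_len))
    secret_key = "".join(secrets.choice(secret_alphabet) for _ in range(secret_len))
    return access_key, secret_key

access_key, secret_key = make_s3_credentials()
print(access_key)
print(secret_key)
```

Paste the resulting pair into the FreeNAS S3 service form instead of anything memorable like demo1234.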
That option is there by default: a self-signed TLS certificate that I created for this FreeNAS instance. You may want to load a non-self-signed certificate instead, because not all backup software, protocols, or systems will allow a self-signed certificate. Specifically, when we go through the demo with Duplicati, you'll notice it fails unless we check the option to allow a self-signed certificate. This is true for a lot of different backup software; some may not even have the option to accept self-signed certificates, because they're expecting to be pointed at an Amazon bucket that's going to have one of Amazon's certificates. I just wanted to get all those things out of the way. So once we've done this, we've chosen the location — the dataset we want — we've set this terrible access key and secret of youtube and demo1234, we've left it at port 9000, and there's Enable Browser: I'm going to leave that on, and I'll show you why. We hit save, and then we just turn the service on. Pretty simple there. And if you want it on at startup, well, you just check the start-on-boot box. Let's go back over to Storage, Pools, and look at this S3 YouTube dataset and its permissions. Sorry that my face might be in the way here; it's just the three dots — click on them and hit Edit Permissions. You can see it took care of setting the permissions for us: user and group minio. Or min.io — I keep wanting to call it mini IO. You don't need to touch these; don't mess with them, or you're going to break things. Now that we've done that, we're going to open a duplicate browser tab, just out of convenience, and go to the server's address on port 9000. This is the Enable Browser feature that I turned on — and, once again, a self-signed certificate warning. Log in with the access key youtube and secret key demo1234.
Now, what they've done is build in — well, allow you to turn on, I should say — the MinIO browser, which is just a way to see or create buckets and view some of the data in them. It's a quick way to verify everything's working and to see the objects within. So we're going to hit the plus over here and create a bucket, and we'll just call this bucket Tom. Now, buckets have naming restrictions: you can't use special characters or even uppercase characters, and if we try, it's going to fail. So we go back over to Create Bucket and call it tom, all lowercase; enter. All right, now we've got the tom bucket, and if you had a lot of buckets, there's a search option — the browser is pretty basic. Let's upload something to that particular bucket. Once again, we're in the bucket, so we hit the plus, hit upload, and — this video looks funny — we'll grab this cat video and throw it into the bucket right here. Now it's in a bucket, but what does that mean? Well, you may have seen this; a lot of companies do this when they're sharing a link. Let's say we had this open to the public on port 9000 and we wanted to share this link with someone. We can actually create a link that expires in one hour and hit copy link. We'll real quickly open an incognito window, so we're not logged in, and it loads this particular video from the object storage. So it's kind of neat that they built in the browser; it definitely makes things a little easier. And for reasons unknown, this video doesn't want to play — probably the format it's in, it doesn't like.
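Those bucket-naming restrictions can also be checked programmatically. Here's a small Python sketch of the core S3/MinIO rules — lowercase letters, digits, dots, and hyphens; 3 to 63 characters; starting and ending with a letter or digit. The full AWS spec has a few extra edge cases (for example, forbidding names shaped like IP addresses) that this simplified check skips.

```python
import re

# Simplified S3/MinIO bucket-name rules: 3-63 chars, lowercase letters,
# digits, dots, and hyphens only, starting and ending with a letter or
# digit. (Real AWS rules also reject IP-address-shaped names, etc.)
BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    return bool(BUCKET_RE.match(name))

print(is_valid_bucket_name("Tom"))   # False: uppercase is rejected
print(is_valid_bucket_name("tom"))   # True: lowercase works
```

This is exactly why "Tom" fails in the MinIO browser while "tom" goes through.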
But if we actually look at the link itself, you can see what's in it — specifically, the algorithm: AWS4-HMAC-SHA256. You can see the entirety of it is passed in the URL to be able to grab that file. By the way, this is one more reason you want all of this encrypted: you just passed all of that information in the URL to get to this particular file, so hopefully you wanted this file to be public. But if we go down to the base of the URL — even though its base was cat-something-something right here — and delete everything after it (shift-end, delete), we don't get access to it; it just brings us back to the login screen. So those are some of the features the browser offers, and you can refer to the MinIO documentation to dive deeper into it. So now we've got this created, and we've got a bucket created — kind of the basics. Cool, that's neat, but what about using it as my target for storage? Well, that's where Duplicati comes in. Other tools like Veeam and MSP360 also support S3 object storage, but Duplicati was just easy: it's open source, free to play with, and I don't have to sign up or register for anything to use it. So I loaded it on one of my lab servers and figured I'd give an easy demo of pointing it at a bucket target. We're going to add a backup, and it's got a nice little wizard here — specifically, I'm using the Duplicati 2.0 beta. We'll name it YouTube demo and turn on encryption. I like that they have a generate option, but side note: if you do use Duplicati, it warns you to back up that password. It'll generate one, but if you don't back it up, all your data is lost, because everything is encrypted with it. So let's go over to next, and for storage type, we're going to say S3 compatible.
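For the curious, that AWS4-HMAC-SHA256 parameter in the shared link refers to AWS Signature Version 4: the server never puts your secret key in the URL, only a signature computed with a key derived from it. Here's a minimal stdlib Python sketch of the documented key-derivation chain — the secret, date, and region below are placeholders for illustration, not what MinIO actually used for our link.

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date: str, region: str,
                      service: str = "s3") -> bytes:
    """Derive an AWS Signature Version 4 signing key.

    This is the chained HMAC-SHA256 derivation behind the
    X-Amz-Algorithm=AWS4-HMAC-SHA256 parameter in presigned URLs:
    secret -> date key -> region key -> service key -> signing key.
    The raw secret never appears in the request itself.
    """
    def _hmac(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

# Placeholder inputs, just to show the derivation is deterministic.
key = sigv4_signing_key("demo1234", "20200101", "us-east-1")
print(key.hex())
```

The actual presigned URL then carries a signature made with that derived key over the request details plus an expiry, which is why trimming the URL back to its base drops you at the login screen instead of the file.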
Matter of fact, Duplicati — and not only Duplicati — can use the Amazon AWS SDK, but they also have an SDK for MinIO, which is really great. So, bucket name: youtube demo. Bucket region and storage class are there if you want to go a little further, but we also need to check Use SSL and specify a custom server URL. Now, what is our custom server URL? Pretty easy: it's this right here. So go there — whoops — go here and paste it in: 192.168.3.8:9000. And we said Use SSL, so that should work, right? We test the connection. Oh, you must fill in an ID: youtube, and the key, demo1234. Test connection — no, failed to connect. MinIO replied: unsuccessful response from server, a trust failure. That's what I was talking about earlier, and it's one of those things you're going to run into: we have to choose "accept any SSL certificate," as in accept our self-signed certificate. Makes sense. Test connection; do you want to create this bucket now? Yeah, let's create that bucket. Connection worked, awesome. Next. I like that they still call it My Documents even though this is technically the root folder, but we're just going to back up the My Documents, or root, folder here. You could select the whole computer and see all the options — it's a neat little program, by the way, and it's free. Schedule: run backups? Sure, why not; it's got a scheduling option. Keep all backups? Yeah, that's fine; we're not going to play with any of the details. Have you stored that passphrase successfully? Yeah, I definitely did — have it committed to memory. That's a great place for that. And run now: starting backup, waiting, uploading, and away we go. We'll run it a couple more times. Okay, now that we've run Duplicati a few times, we can go back over to the MinIO browser, reload the page, and you can see all the different data Duplicati created in here. And here are the buckets — which apparently I misspelled. That's okay; the misspelled buckets are in here.
And here's the tom bucket we created, with the cat video that didn't load, and then here's all the rest, so we can see all the data. Now, here's where there could be a little bit of a problem. Before you think this is the answer to not paying a bunch of fees and hosting your own S3, please note that on these buckets I can set policies, but there are no per-bucket permission options. What I mean by that is there's only one access key and secret key, shared across everything. Let's say, for example, we wanted to back up all of my servers using this built-in FreeNAS MinIO setup, just like we did here, and I want to create a bucket for each one. If a single endpoint is compromised, that access key and secret key combination is on it — in this case, we put it in Duplicati. So if someone were to extract those keys, they would then be able to peruse all the buckets, look at everything in them, and even potentially delete or change data within those buckets. Now, you should always be encrypting data at rest, so provided you use a different password per client — we're using Duplicati as the example here, but many backup programs have backup password options that encrypt all the data at rest — that's great in terms of an attacker not being able to view or exfiltrate the data, but it still gives them read/write access to it. So that is one shortcoming of doing this inside of FreeNAS as we set it up here. If you're, like me, an IT service provider and you wanted to roll your own storage for those backups, and you use a tool such as Veeam and go, hey, I'm going to use Veeam to grab all these different clients and back them all up — well, if you're using the same key combination over and over again and one client gets compromised, then potentially someone could make a mess of all your backups. And I know you could snapshot the dataset to help recover, but you can see the problem there.
Now, to mitigate this — and it goes beyond the scope of this particular talk — the way you would do it is to load MinIO into a jail. For each jail you would create a different access and secret key, and each jail would have its own storage; now you've done proper segmentation to create a properly secured backup solution you can host yourself. And if a particular client is compromised, well, I highly recommend, even just on the off chance they got it, that you go ahead and re-roll the key. This also saves you trouble: when you re-roll the key, you don't have to re-roll it everywhere you put it, because hopefully you've only used that access key combination for that particular client. Now, the granularity depends on your security policies. Do you want to create an access key and a storage jail for every single client? Maybe — it is possible, and it could be scripted. Like I said, that goes beyond the scope of this video; think about what level of security you want. But generally speaking, in the scope of this particular video: yes, if you're using the built-in S3 storage, there's only that one key combination. So while it's a great solution, think about the security implications before you deploy it at scale, or use the approach I just suggested of individual clients with individual keys. Just something to think about. I didn't want to end this video without leaving that on the table and bringing up the security concerns so people are thinking about them, because unfortunately — and this is not just a FreeNAS issue — when we've covered different security topics, we constantly see companies using the same bucket credentials over and over again, maybe throughout an entire IoT deployment.
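To make that per-client idea concrete, here's a toy Python sketch of just the credential bookkeeping: one unique access/secret pair per client, where re-rolling one client's keys leaves everyone else's alone. The jail provisioning itself is FreeNAS-specific and out of scope, so this only models the key side, and every name here is made up for illustration.

```python
import json
import secrets

def provision_client(clients: dict, client_name: str) -> dict:
    """Issue (or rotate) a unique access/secret key pair for one client.

    `clients` maps client name -> credentials. Calling this again for an
    existing name re-rolls only that client's keys, which is the whole
    point of per-client segmentation: one compromise, one rotation.
    """
    creds = {
        "access_key": secrets.token_hex(10),      # 20 hex chars
        "secret_key": secrets.token_urlsafe(30),  # ~40 URL-safe chars
    }
    clients[client_name] = creds
    return creds

clients = {}
provision_client(clients, "client-a")
provision_client(clients, "client-b")

# client-a is compromised: rotate only client-a's keys.
old_secret = clients["client-a"]["secret_key"]
provision_client(clients, "client-a")
assert clients["client-a"]["secret_key"] != old_secret

print(json.dumps({name: c["access_key"] for name, c in clients.items()}, indent=2))
```

In practice you'd feed each pair into its own MinIO jail's configuration; the dictionary is just a stand-in for whatever secrets store you actually use.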
And if anyone reverse engineers one of those devices — I've covered this before in the news, and we've covered it in our How They Got Hacked series — they compromise one device and go, oh look, the bucket credentials are shared; now we can get to everyone who ever stored something in that bucket, because every device has the same ones. So don't worry, it's not really a limitation of FreeNAS; it's more of a security consideration that seems to get overlooked very frequently. So, final thoughts on this as a project. I don't know if Duplicati needs its own video, but it's a pretty neat product. MinIO, on the other hand, I will probably do a more in-depth video on sometime in the future, because it's a really cool tool for creating these S3-compatible systems, and it has scalability functions well beyond the basic usage we talked about here. I encourage you to take a look at it. Once again, it's open source, so you can stand it up yourself: stand it up on FreeNAS, stand it up in a FreeNAS jail, stand it up on a Debian server or the Linux server of your choice. It even has options for running on Windows, which is really cool, and it has options for tying all those instances together. So that's something down the road I'll probably get around to doing a video on, but it's a really cool product, and I really like that it's built into FreeNAS and pretty easy to deploy. All right, thanks. And thank you for making it to the end of the video. If you liked this video, please give it a thumbs up. If you'd like to see more content from the channel, hit the subscribe button, and hit the bell icon if you'd like YouTube to notify you when new videos come out. If you'd like to hire us, head over to lawrencesystems.com, go to our contact page, and let us know what we can help you with and what projects you'd like us to work on together.
If you want to carry on the discussion, head over to forums.lawrencesystems.com, where we can keep talking about this video, other videos, or tech topics in general — even suggestions for new videos are accepted right there on our forums, which are free. Also, if you'd like to help the channel in other ways, head over to our affiliate page; we have a lot of great tech offers for you. And once again, thanks for watching, and see you next time.