And I would like to introduce you to Kris Moore, who works at iXsystems, mostly on the PC-BSD side of things. He's going to talk about ZFS and all the bright new tools they have in both PC-BSD and FreeBSD.

Well, thank you guys. I appreciate you being here. This is kind of an iX double header, I guess, back-to-back. That's pretty cool. Anyway, I appreciate getting to talk to you out here in Bulgaria. It's already been a fun trip, a little sleep deprived, but we'll see if we can get through this briskly, since I have a fair number of slides and I do want to leave some time for questions and answers at the end. I'm not really particular about how I give a talk, so if you have a question or something, feel free to try and get my attention; I may try and answer it on the spot if I can. I'm okay with that.

All right, so today we're going to be talking about a lot of ZFS: snapshots, replication, boot environments, and how ZFS utilities are changing what we do in PC-BSD, and in FreeBSD if you choose to use those utilities. As Olivia said, my name is Kris Moore. I'm the founder of the PC-BSD project, so my day job is working on this kind of technology and workstation-related stuff.

First of all, a question I got a lot when we made this shift was: why are you guys moving to ZFS? Why wouldn't we? We began to look at some of the reasons why we'd want to. First of all, the benefits now greatly outweigh the drawbacks for workstation usage. UFS was great; it ran its course and did what it needed to do. But when ZFS arrived, it brought with it a whole new class of things we could do on a desktop, or even my laptop, which weren't possible before. Also, last year with PC-BSD 9.2 we went 64-bit only. We dropped the 32-bit release, which annoyed the 20 users out there still running on 32-bit boxes, but it just made sense for us. In the same way, moving to ZFS only lets us focus our development effort on one file system, without always thinking: oh, how can I write this for UFS and provide the same functionality? Oh, I need to disable 90% of this utility because it won't run on UFS. And, of course, this allowed us to begin developing a new class of utilities and methods that are all possible thanks to the great ZFS file system. For which, thank you, Pawel. We did an interview with him yesterday for the BSD Now show, talking about his experience porting ZFS over, so watch for that in a future episode. It's always fascinating to hear.

So, boot environments are what we're going to start with today. First of all, show of hands: how many people have heard of boot environments and know what they are? Okay, so about 60%, I'd say. Good deal. A brief overview of them, then. They first originated in Solaris 10, and they're essentially a method of using ZFS snapshots and clones to create an instant, bootable backup of your operating system. What would you use this for? Most commonly, before you do something quote-unquote dangerous: updating kernels, applying some risky patch you want to test out, updating your world. In the desktop case, we actually include your packages in with that, too, because on a desktop it can be just as annoying to upgrade and find out that X doesn't work anymore. So we try to include that in a boot environment as well.
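To make that concrete, under the hood a boot environment is roughly this pair of ZFS operations; this is a conceptual sketch with made-up pool and snapshot names, not the exact commands our tooling runs:

    # snapshot the dataset holding the OS, then clone the snapshot into a
    # new, independently bootable dataset
    zfs snapshot tank/ROOT/default@pre-upgrade
    zfs clone tank/ROOT/default@pre-upgrade tank/ROOT/pre-upgrade
    # the clone shares blocks with its origin, so it's instant and nearly
    # free until the two diverge

The boot loader then just needs a menu entry pointing at the clone.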
So you always have something to roll back to should it fail.

How do we do this in PC-BSD? At the moment, we're doing it with GRUB, which performs a direct boot of the FreeBSD kernel using its kFreeBSD feature. We create a special ZFS dataset layout that enables this type of usage. And then, if you've noticed in the ports tree, there's a beadm utility that lets you manage your boot environments from the command line, and we kind of splatted a GUI on top of that to make it easy for workstation users.

Of course, this being a BSD conference, the first thing people ask is: why in the world would you go to GRUB? What are you doing? Well, we didn't really have a choice. First of all, it ties very easily into the beadm command and provides menus and submenus for boot environments. I'll show you some pictures in a moment, but in a nice graphical way you can browse to old snapshots and say: I want to boot that one. Or boot that one in safe mode, or boot that one with some option turned off. GRUB makes that real easy to configure. It's also what Solaris uses, and if they went that route as well, I assume they had good reasons. And it's 2014; it was nice for a desktop to have an actual graphical boot loader that didn't look like 1994 rocking all over again.

Another side benefit we got was much faster loading of the kernel and modules. I don't know what disk mode it accesses them in, but it loads them a heck of a lot faster, which was nice. And recently, this last year, we were able to add full disk encryption using GELI and a single zpool, which GRUB allowed us to do too. I mean, who knew? I was really shocked when I found out. I think someone emailed in: did you know GRUB can do GELI? No, what? When did this happen? It was just something buried in the code there. We went and took a look, and sure enough, it had GELI support. So now we can do boot environments on a fully encrypted disk, with no unencrypted kernel or /boot or separate zpool or any of that nonsense you'd normally have to deal with.

Another question we get is: can I still use the BSD loader? Yes. If you grab a PC-BSD install disk, we give you the option to use the older BSD loader. However, you're going to miss out on the automatic integration with boot environments, because it doesn't do that at this time. And then, of course, the follow-up: would we ever switch back? That's part of why I come to these things and go to the dev summit, to find out what's going on. As the BSD loader grows some of these features, I'll be the first one to say: yep, we're ready to jump ship and go back over to the BSD loader.

So how do we do this magic? Well, we just start with a special ZFS layout. When we do the system installation (I love the term tank, and that's what it was in the original docs, so that's what I always call it in our installer), we create our zpool as tank or whatever, and then we create a special ROOT dataset, all uppercase, and within that your boot environments will be listed. The first dataset, called default, is what gets snapshotted and cloned to create new boot environments. That's your starting place: default as your original.
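Laid out as datasets, that looks something like this (a hypothetical listing; the second boot environment is just an example):

    tank                     the pool itself
    tank/ROOT                container for all boot environments
    tank/ROOT/default        the initial boot environment, your starting place
    tank/ROOT/pre-upgrade    a second BE, cloned from default before an update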
Not to say you can't remove it later, if at some point you corrupt that one or destroy it, or you've just moved to a new snapshot for whatever reason. I don't know if Allan Jude is in here, but on his laptop he has 11-CURRENT in one of his boot environments and 10 in another and toggles back and forth; it's just a really handy way to do that.

One thing I will note, though: any additional datasets you create in ZFS are not included in the boot environment. That's important, because I don't want my home directory, for example, to be snapshotted and included in a boot environment. If I want to go to an 11 kernel or something else and boot into different environments, I expect my home data to be fresh and current and what I left it with. The way we do that is we create the /usr and /var datasets with the canmount=off flag, so that we can create child datasets under them while still preserving things like /usr/bin, /usr/sbin, /usr/lib, and other parts of the system on the initial default dataset.

So, on a typical PC-BSD installation, this is what it's mostly going to look like. Of course, you can customize this during the install, so your mileage may vary here, but in this case I have all my external datasets: I keep my home directories out of the boot environments, I keep my jails out, ports, source, and then some various /var directories by default. Again, if you have particular needs, you can customize that and say: I want /data to be its own thing and not included in the boot environment either.

So how do I manage these? Well, it's pretty simple. If the command line is your thing, you just use the beadm utility, and I'll show in a minute how you do that. We've done some work with the port maintainers and customized beadm so that when you create or remove boot environments, it automatically updates the GRUB configuration, and your menus get populated with whatever environment you just added or removed. Also, during package updates, GRUB may be re-stamped as well: when I roll out a new version of GRUB to the PC-BSD users, we may re-stamp it on the disk, because you want what's in the boot environment to be consistent with the actual boot loader.

Usage is pretty simple. beadm list shows you what boot environments you have; on a fresh box, I have my default. In this case, I'm just going to create a new one; we'll call it newbe, for lack of a better word. It goes ahead and creates a new GRUB config and applies the theme, because GRUB does graphics and you can theme it any way you like. And when I list again, I have newbe, which you can see is only using 672K. It's pretty small. It's a snapshot and clone, so it only grows as the contents change on disk, and it's super fast to make.

If you're on a desktop or laptop or workstation, of course, we have a graphical utility for the exact same thing. You can always fire up the utility in our control panel and just add boot environments as you see fit. And then we have some basic GRUB configuration within the GUI as well. So if you don't feel like reading man pages and figuring out how GRUB syntax works, you can come in here and say: ooh, but I also have Windows on another disk. And you could add entries for that here, and it'll handle all the config for you.
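On the command line, that session amounts to something like the following; the boot environment name is just an example:

    # list existing boot environments
    beadm list
    # create a new one; on PC-BSD this also regenerates the GRUB menu
    beadm create newbe
    # if an upgrade goes bad, mark the backup as the one to boot next
    beadm activate newbe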
So, how do you boot it? When you first install PC-BSD or TrueOS, which is our server version, by default we're not going to show the GRUB menu. If you only have one boot environment, there's really nothing to see other than some safe mode options. So it'll display a message saying hold down left shift if you want to see the menu; otherwise, I think it has a two-second countdown, and it'll just go right into boot. Once BEs are created, though, and you start having, say, six to choose from, the menu will begin to appear by default, with, I think, a five-second countdown, so you can pick an alternate one if you need to boot into something different. Of course, by default it's going to boot the first entry, which is probably the one you're always going to be on, unless you have to roll back because you screwed up default somehow or a package upgrade went errant.

So that's what it looks like. You boot it up, you get nice graphics. We did some pretty artwork and themes, and it just looks nice. And there's no reason you couldn't tweak that to use the various daemon backgrounds that are floating around the net, either.

GRUB customization, though. Since I know this is a FreeBSD crowd, I'm not sure how familiar you all are with GRUB, but we get the question: how can I make GRUB boot other options? Most GRUB customization can be done via knobs in the defaults file, which on our systems lives at /usr/local/etc/default/grub. Any time you make a change there, you just need to recreate the config file, and that command is pretty simple: grub-mkconfig -o, and then you give it the path to your on-disk GRUB configuration. You can use that to prototype, too. If you just want to test without changing anything on the system, point it at a temp file, and then go inspect that by hand to see if it did what you expected. And if you take a look at /boot/grub/grub.cfg, it contains the boot-up script, which is mostly shell-style syntax, so it's pretty easy to hack on and add new menu entries to. I personally really like it, because it is so simple, and it's not dealing with Forth, which, hey, what do you know? I appreciate that.

Some of the common options people customize: whether you want the hidden timeout (that's the option I was telling you about, where it doesn't show the menu unless you hold left shift), and the timeout for that; which boot environment or menu entry to boot by default, so if you're triple-booting and want the fifth entry in the list, that's where you'd set it when you create your config; the default timeout before it auto-boots; and, of course, themes, and you can even do fonts. So if you don't like the font we chose, more power to you. I suck at picking fonts, or so I'm always told: oh, I'd rather have this. Well, go for it, I don't mind.

[Audience question: is there a way to set the entry for just the next boot, like nextboot on FreeBSD?] Yes, there is a way to do that in that defaults file; I just don't have the option in my slides.
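As a rough sketch of those knobs, here's what the defaults file and the regeneration step might look like; the values are illustrative, and you should check the comments in the file itself for the exact knob names your GRUB version supports:

    # /usr/local/etc/default/grub (illustrative values)
    GRUB_HIDDEN_TIMEOUT=2    # hide the menu; hold left shift to show it
    GRUB_DEFAULT=0           # which menu entry to boot by default
    GRUB_TIMEOUT=5           # seconds before the default entry auto-boots

    # regenerate the live config, or point -o at a temp file to prototype
    grub-mkconfig -o /boot/grub/grub.cfg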
Now, adding menu entries. The grub-mkconfig command is actually a shell script; if you take a look at it, it's not very complex. What it does is execute a series of shell scripts, which on a FreeBSD system are located in /usr/local/etc/grub.d. There are some default files in this directory, ordered by number, and lower numbers get executed first. First of all, these files may be overwritten by a package upgrade, so don't tweak them; you may lose your changes as soon as you do a package upgrade. But you can add new files into this directory. So say you want to add custom menu entries, again, for Linux, Windows, any other operating system, or any set of options you want to boot with: you can just splat a new shell file in there that echoes out five lines of syntax, and that gets included in your config automatically. Additionally, you can put a file at /boot/grub/custom.cfg, and that'll be sourced and appended to the end of the config automatically as well. So you've got a couple of locations for your boot setup.

So what does that look like? For the uninitiated, this is how we do a boot environment entry on PC-BSD. GRUB supports ZFS, obviously, otherwise we couldn't do this. We load a couple of GRUB modules, ZFS in this case. We search for the zpool, which is tank1 on my laptop. Then we use the kfreebsd command in GRUB to boot the kernel, and you'll notice the path has /ROOT/default in it in this case: the boot environment dataset. If you look at the menu, you might have eight or nine blocks like this, all pointing at different datasets, /ROOT/backup-as-of-some-date or whatever. It then loads modules: we load the zpool.cache, and then we can set vfs.root.mountfrom. That's kind of the magic there, telling it which boot environment to mount at the mount root prompt. And then we can load ELF modules; in this case it's a laptop, so I'm loading nvidia, ZFS, VirtualBox, et cetera. One caveat I will mention here: GRUB doesn't do automatic dependency detection for modules. So if you load something that needs three other modules, you need to list those as well; it's not going to grab them for you automatically. Otherwise, when you boot up, the kernel will say: I tried to load this, but it's missing something. That's the only real gotcha you have to pay attention to. And then, of course, we can set kernel environment variables: take your pick, whatever you need to set, you can set it right there in GRUB, and away you go. That's pretty much a fully functional menu entry; I tried to show a fairly real-world example of what it looks like on my laptop.
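Based on that description, a hedged reconstruction of such a menu entry might look like the following; the pool name, dataset, and module list are illustrative, not copied from the slide:

    menuentry "PC-BSD (default BE)" {
      insmod zfs                       # teach GRUB to read the pool
      search --no-floppy -s -l tank1   # find the zpool by its label
      # boot the kernel straight out of the boot environment dataset
      kfreebsd /ROOT/default/@/boot/kernel/kernel
      # GRUB won't resolve module dependencies, so list each one
      kfreebsd_module_elf /ROOT/default/@/boot/kernel/opensolaris.ko
      kfreebsd_module_elf /ROOT/default/@/boot/kernel/zfs.ko
      kfreebsd_module /ROOT/default/@/boot/zfs/zpool.cache type=/boot/zfs/zpool.cache
      # kernel environment: which boot environment to mount as root
      set kFreeBSD.vfs.root.mountfrom=zfs:tank1/ROOT/default
    }

Swap /ROOT/default for another dataset and you have the menu block for a different boot environment.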
Okay, so that's kind of what boot environments do, but now we're going to take a look at the second part of the talk, which is called Life Preserver. It builds on top of all of this, and it's in PC-BSD and FreeBSD. So what is the utility? Well, it's really two things: a command-line and a graphical front end to scheduling ZFS snapshots and pruning, scheduling ZFS send and receive replication, and zpool monitoring. And better yet, the feature I use it the most for: bare metal restore, using the PC-BSD install media. We'll get into that here in just a moment.

So how do I get these utilities? First of all, they're included out of the box on any PC-BSD install, 10.0 and later, or TrueOS, which is the server version on the PC-BSD media. They're also in FreeBSD ports and packages, under sysutils/pcbsd-utils, which is all the command-line stuff; if you're not running X11, go for that one. And then, of course, if you want all our GUI utilities, we have those in ports as well. People ask me how up to date those are: we try to update them quarterly. PC-BSD has moved to a quarterly update system, so we get all our newest stuff out about every three months, and that's when I update the port as well. I try not to have to do it in between, because a lot of the time we're doing work that's not quite stable, and I don't want to put that in the ports tree until I know it's good enough for what we'd put out to our users.

Okay, so scheduling snapshots. To get started with a snapshot schedule, you can pretty much just use a single lpreserver command. You give it a pool, or a dataset if you're not into backing up or snapshotting your entire pool; then you tell it to start, and when: daily at 22:00 in my case; and then the number 10 is how many days' worth of snapshots to keep, auto-pruning the rest. You can replace that schedule with different options: daily at whatever hour, hourly, every 30 minutes, 10 minutes, five minutes, or the auto mode, which I'll describe here in a second.

But what does that command do? It just creates a cron entry, no magic, nothing super fancy. You can edit it by hand if you want; it's just a nice front end that parses it all out and figures out what to put in your crontab. When that script runs, though, here's pretty much what it does. It first confirms that the pool and dataset are there and accessible, so obviously it doesn't bomb out. It then creates a new snapshot, recursively by default. It'll then selectively auto-prune old snapshots past the threshold, when it's time to do that. It can also send out notification emails, if you've enabled those, and start auto-replication, if you've enabled that.

As I said before, though, the auto mode is something new we added about six months ago, and it's kind of special. It does a tiered schedule: it creates snapshots every five minutes, which are kept for an hour. An hourly snapshot is kept for 24 hours. Then a daily snapshot (I guess the oldest one, at midnight or whatever) is kept for a month, and a monthly snapshot for a year. It's a pretty convenient setup on my laptop in particular, not necessarily for boot environments, but for the case of: oh, my wife deleted a document. Oh, I deleted that three weeks ago. Oh, crap, okay, let me go see if we have an old snapshot. Oh, there it is, in a snapshot from a couple of months ago, no problem. A pretty handy way to do that.
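In command form, that scheduling setup looks something like this; the subcommand and argument order are from my memory of the lpreserver documentation, so double-check against lpreserver help before relying on them:

    # snapshot the whole pool daily at 22:00, keep 10 days, prune the rest
    lpreserver cronsnap tank1 start daily@22 10

    # or use the tiered auto mode: 5-minute snapshots kept for an hour,
    # hourly kept for a day, daily kept for a month, monthly kept for a year
    lpreserver cronsnap tank1 start auto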
So of course we have a graphical utility for all of this as well, written in Qt, pretty standard. When you first start it up, you don't have any pools being managed, so it's kind of grayed out. Then you say: I want to manage a pool now. By default the GUI works on whole pools, because we like to back up the whole system, not just particular datasets; again, this is for desktop users, and we're trying to make it as simple as possible. I'm going to manage tank in this case, and it just walks through and asks you the same questions. When do you want to schedule your snapshots? It'll also ask in days or in total snapshots, and it'll do the math to figure out what that number should be at the end. And then the last thing it asks is: do you want to replicate? And replication is pretty cool.

We can scan the network, and by default we can replicate to any system over SSH that supports ZFS receive. The scan looks for systems running SSH on your network, and in this case, on my home network, it detects FreeNAS; so there's my FreeNAS Mini, and I can replicate to that box. Hostname and username it figures out; then the SSH port, and of course the remote dataset you're going to replicate to on that box, which you'll need to specify. Then the frequency: you can replicate whenever a snapshot is created, which is going to be best for a daily scenario. Or, if you're doing snapshots every five minutes, you may not want to try and replicate every five minutes, because that could take longer than the snapshot interval and get a little messy; so you may just want to say do it daily, when nobody's on the computer and the network's not congested.

In addition to all this, Life Preserver has a daemon that runs and keeps track of your zpool disk space. If we hit 75% (I've been told that once you get close to that number, ZFS gets a tad bit cranky), it can auto-prune old snapshots, and it'll send out a warning like: hey, we had to remove this snapshot, you were starting to get low on disk space on your pool, you may want to take a look at that. You can turn that on or off.

Email notification is very simple to set up. You just use the lpreserver set command, which sets a lot of options; you pass it an email address, and it uses the mail command by default, so you need to make sure that's configured to get out from your machine. And with additional commands you can say: I want a disk usage warning at a certain percent, so when my disk hits 90%, please warn me, because maybe I'm not monitoring it every day. That's saved my bacon, because I run on a 240 GB SSD, and all it takes is a few VMs running on there; next thing you know, I'm at 85% and starting to get a little full.

You can also set the email options: what frequency of emails do you want? You can get an email for everything it does, which I do by default, because I just want to know it did a snapshot and it replicated. Or you can say: just notify me if something went wrong. Warnings are more of a: not necessarily a failure, but hey, we noticed something here, you may want to check into it. Errors are things like: we tried to replicate and hey, your backup server is not online, or we couldn't connect, or the ZFS send and receive failed for some reason. Again, very helpful, depending on how much information you want to get from the utility.

A recent update, from about last week (and I apologize, I forget the guy's name, I don't know if he's even here): somebody sent in patches to schedule ZFS scrubs in the same manner, with all the same warnings and emails, saying hey, we fired off a scrub this month, or this week, or whatever, here are your results. That'll be included in PC-BSD 10.1, which will coincide with FreeBSD 10.1 later this fall, and then of course I'll update the port.
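For illustration, the notification setup described a moment ago is a few one-liners along these lines; the option names are reconstructed from my memory of the handbook, so treat them as assumptions and verify with lpreserver help:

    # where reports get sent; uses the mail command under the hood
    lpreserver set email admin@example.com
    # ALL mails every snapshot and replication; WARN or ERROR narrow it down
    lpreserver set emailopts ALL
    # warn when the pool passes 85% used
    lpreserver set duwarn 85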
So what about replication? This is where it gets into the fun stuff, for me anyway. Once snapshots are enabled, that's great, but what happens if your system gets wiped out? Replication is kind of the answer to that. Replication, of course, you can set up to run automatically or at a specific interval, and as I said, it runs over SSH by default. We may add some new options for that in the future; we're working on it. But it uses ZFS send and receive, and it requires that the target system have a supported ZFS version: it needs to be capable of receiving whatever ZFS version your client is on. In my case, again, our FreeNAS Mini is FreeBSD 9.2-based or something, but I back up my 10 boxes to it just fine, because FreeNAS pulls in a lot of ZFS features way ahead of time and backports them.

But starting a replication is pretty easy. First of all, you're going to need to create a ZFS dataset, and then, of course, an SSH user on the remote box. I really don't like SSH as root, for obvious reasons; that just doesn't make sense to me. But ZFS makes this easy, because you can grant that user permissions on the dataset you created for receiving your backups. So you'll run a command something like the delegation I'll sketch in a moment (this is in our documentation as well): these are the options we want this user to have on this particular dataset, and that's where I'm going to replicate to. And then starting the replication, again, is just a single command from the command line. We add a replication task with host, user, port, what we're replicating (in this case, my entire pool), the remote dataset (whatever my remote pool is, /backups), and the sync mode, which is either do it whenever I create a snapshot, or some schedule, you know, 2 a.m., whatever.

So what does the replication do? Again, it creates a cron entry, as I showed earlier for the snapshots, something similar there. It then checks the zpool and makes sure everything's kosher before we begin replication. We set a backup:lpreserver ZFS property to keep track of when the last replication was done; we parse that and figure out, okay, this has never been replicated before, so we need to do a full replication stream and back up everything. If that flag is set, we can look and figure out: oh, this was the last snapshot replicated, so let's do an incremental stream instead, and it'll manage that. It then kicks off the ZFS send and receive commands, waits a while depending on how much data you're moving, and after a successful send, it marks that backup:lpreserver property with the last snapshot that was replicated, saying: okay, we've taken care of everything up to this point, and we can come back to whatever needs to be done later.

We also do a little extra magic in the replication: we build a complete list of zpool and dataset properties, because what we found when doing bare metal restores is that some properties may not get sent over properly and need to be set manually at install time. So we create a list of them, so we know: oh, he had these options specifically set. Mount points are a good example, because we don't mount anything on the remote end. This list is then also saved off to the remote replication box, just stored in the home directory on my FreeNAS system. And then lastly it checks: okay, do I need to notify the user that we did this? And it'll send results and log files.

So, of course, with any backup, the first time you do it, it's going to take a long time, because it's sending everything. During the replication, we still allow new snapshots to be created; a running replication is not going to stop new snapshots from getting created. It just disables the auto-pruning until we have successfully replicated the old snapshots, so we're not deleting things while they're being sent across.
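Here's that setup sketched out, with a hypothetical user, host, and dataset. The zfs allow permission list is roughly what our documentation suggests; the replicate command's argument order is from memory, so verify it before use:

    # on the backup box: create a landing dataset and delegate just enough
    zfs create tank/backups
    zfs allow -u backupuser create,receive,mount,userprop,destroy,send,hold tank/backups

    # on the client: replicate the whole pool whenever a snapshot is taken
    lpreserver replicate add mynas.example.com backupuser 22 tank1 tank/backups sync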
One note, though: if replication fails, it may be required to reinitialize the remote side, and we have a command for that. That may go away in the future, when ZFS send and receive grow a resume mode, which we're greatly looking forward to, because that'll fix the issue where you have a partially replicated system, and you'll be able to pick up where you left off.

So, getting my stuff back. Backups are great, but if you can't retrieve them, they're no good, right? Once your snapshots are being created, there are several ways you can revert or restore files. Via the command line, if you want to revert an entire dataset, you can of course use the revertsnap command with the dataset and the snapshot name, which will be some kind of auto-dated name created by Life Preserver. And, this being a ZFS feature, you can just browse the hidden .zfs/snapshot directory and view the state of all the files at the time each snapshot was created.

We also have the ability, via the GUI, to browse snapshot data and scroll backwards in time, which is really cool. When I fire up the GUI on a system that's got Life Preserver running on it, I see a really quick status of my system: when the last snapshot was done, a green last-snapshot indicator, and it was successful. Life is good there. But where it gets fun is I can now pick a dataset. In my case, I'm going to say: look at my home directory, and, oh, that source I didn't commit a couple of weeks ago and accidentally just nuked, it would be really nice to get that back. So I can just browse my home directory, find the directory where I thought it was, and then just scroll back in time and see the state of those files at whatever time each snapshot was created, which could be all the way back to a year if I'm using the auto mode. Handy. And then you just click and say: restore that, and it'll plop it into your current environment, no problem.
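In command form, those two recovery paths look roughly like this; the snapshot name is just an example of the auto-dated format, and the paths are hypothetical:

    # roll an entire dataset back to a snapshot
    lpreserver revertsnap tank1/usr/home/kris auto-2014-08-30-22-00-00

    # or cherry-pick files out of the hidden ZFS snapshot directory
    ls /usr/home/kris/.zfs/snapshot/auto-2014-08-30-22-00-00/
    cp /usr/home/kris/.zfs/snapshot/auto-2014-08-30-22-00-00/project.c ~/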
So how about bare metal restores? This is actually what I'm mostly using mine for at the house. I have about seven workstations, some for the kids, some for the wife, and a couple for me. And, you know, hardware fails, things get old, it's time to upgrade. So a cool thing we really wanted to have in the installer was the ability to do a bare metal restore from your replication sitting on the network: why can't I just pull that data back onto a new machine? At the moment, I'll mention, this is only in the graphical installer; we do have a text installer now, and we're going to add it to that as well. It also lets me change and adjust zpool options. So while I'm replicating all the data back, on my workstation, for example, I went from spinning disks to an SSD, and I wanted to change a few things when I did the restore. I get all my data back, and I can tweak the zpool options to make it run a little better on an SSD.

So what does that look like? When you boot the graphical installer for PC-BSD, instead of desktop or server, you'll have a third option, which is just restore from a backup you've made. That'll fire off the wizard, where you point it at whatever system it is that has your replication. Username, password, all of that gets prompted for. On my FreeNAS Mini, you can back up multiple systems to it; you don't have to do one box per system, and it keeps track of them by the client's hostname. So when I connect to my FreeNAS Mini, I go: oh cool, I've got four systems already replicated there. Which one do I want to restore today? They're all available via the same account I created.

At that point, it's going to take you to the confirmation screen for the disk selection. It lets you know the datasets (you're not going to adjust those), but here's the disk, swap, all of that. And of course you can click customize at this point, and that's where it gets fun: you can go through and say, okay, I'm an advanced user, let me tinker with some settings, right? So I'm going to go through and say: okay, this is the disk I want to use. Oh, I really like GPT. Let's do the force-4K trick, because I'd like my zpool to have a 4K block size, since I'm installing to an SSD. The boot loader I can leave as it is, or come up with something new, whatever; you can do that. A cool thing, too: if you're doing a restore, you don't have to go back to the same type of zpool. One of the servers I had was running on two disks in a mirror, but then I got four disks in there, and I was like: oh, let's make a new mirror with four disks now. Or maybe I want to switch it to RAID-Z, or whatever, and restore all the same data back onto it. No big deal; it'll just handle that. And on the last screen, it's going to gray out all the datasets, because again, those aren't something you get to set, but you can adjust the swap size; maybe my new system needs more or less swap, and you can adjust that right there.

How does this work, though? The PC-BSD installer uses the pc-sysinstall installation backend, which I've given talks on in previous years. We added support for a ZFS restore option to pc-sysinstall, which basically says: I'm going to be pulling all my data from a ZFS send and receive stream instead of from the install media's files. It's going to use SSH as the transfer agent, and from that point on, it uses pretty much the normal installation syntax for doing disk configuration. Again, all fully automated; the GUI is just writing a config. So an example pc-sysinstall config has the install mode set to ZFS restore, some information about SSH (where your system is, which key you're going to connect with), the properties file we pull back to set the original dataset properties, and the remote dataset we're pulling from. How many is that? Eight options? Seven? It's really pretty minimal to script a restore from start to finish.
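The relevant slice of such a config might look like the fragment below. The key names are reconstructed from my memory of the pc-sysinstall documentation and the values are made up, so treat the whole thing as illustrative:

    # hypothetical pc-sysinstall fragment for a bare metal ZFS restore
    installMode=zfsrestore
    sshHost=192.168.1.50
    sshPort=22
    sshUser=backupuser
    sshKey=/root/restore-key
    zfsProps=.lp-props-tank1
    zfsRemoteDataset=tank/backups/mylaptop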
Again, as I mentioned too: ZFS mirroring. During the system installation, maybe you set up with a single disk, or RAID-Z. Life Preserver has a kind of not-so-documented feature (it's not in the GUI, I don't believe) where you can attach a new disk with Life Preserver. Now, attaching a disk is really not hard to do with ZFS; that's not the special part. What this utility does, though, when you run it and say, okay, I've got a new disk: it first wipes the disk. It creates a partition layout matching the other disks in your pool, GPT or MBR. It then makes that disk bootable with GRUB, handling all those steps of stamping GRUB and making sure the config is on it, so it's ready to go. It's literally one command. It then inserts the disk into the zpool, ZFS happily goes and resilvers it, and when that's done, you're ready to rock and roll. There's no more to it than that.

So that's pretty much the end of what I have prepared. I did have a reference here to my old pc-sysinstall talk, if you want to see how our fully automated, scriptable installer works, and of course these slides are available at the following URL on slideshare.net as well. At this point, I'll be happy to answer any questions if I can, and tell you I can't if I can't. Any questions, guys? Yes?

[Question: can you rename the dataset when you replicate?] Oh, no, probably not. We don't support that at this time, but we would like to. It's hard. Send patches! I would love to have it; that would be fun. Oh, we have a microphone for questions.

[Question:] So when you have /var as its own separate dataset, is the package database then global? No. What we do is create a /var dataset, but we set it to canmount=off, so it's actually part of the boot environment. But then we create /var/log as its own dataset, so that your logs persist across boot environments, because maybe you want to see why the previous boot environment failed to start for some reason.

Any other questions, guys? Well, cool. Thank you guys so much for coming. I really appreciate it.