Welcome to the Home Lab Show, episode 55. Now, Butter FS, B-Tree FS, or what do we call it? I say Butter FS because that seems to be what the majority call it, but it's either that, B-Tree FS, or Better FS; I've heard that too. I'll let anyone who wants to throw some comments in and debate how to pronounce it; I'm all in on that. But whatever you call it, we're going to be talking about B-T-R-F-S. Those are the initials. And if I occasionally say "the Butter FS file system," yes, I know it's redundant, like saying ATM machine. We'll just get over it. And when Red Hat decided not to include Butter FS, it's like, I can't believe it's not Butter FS. We're going to butter you up for this one. We'll try to throw a few puns in, but I'll apologize in advance. Feel free to hate us in the comments if you need to. If I slip up, I'm not doing it on purpose; I'm just saying it's going to happen. Anyway, before we dive into Butter FS, we want to thank a sponsor of the show, and that is Linode. Linode has been a sponsor since the beginning. If you're listening to this as a podcast and you downloaded it, you literally downloaded it from a Linode server that we maintain, running WordPress and all kinds of fun stuff. Any project you want, you can run over on Linode. Many of the projects we talk about here on the Home Lab Show are a great fit there. If you don't want to run something on your own servers, or maybe it's just better public-facing and not sitting in your home lab, it's not a bad idea to throw it over on Linode. They have a lot of one-click packages, essentially pre-built things you can deploy. If you'd like to get started, the link is down below, and the offer code is "the home lab show."
So that's the offer code for Linode to get you started, and we thank them for being a sponsor. All right. As people in the chat are saying: when all else fails, step back and make a pun. Yes, absolutely. So it's finally time to talk about Butter FS, and this is something I've been wanting to do for a while. People that know me, and probably everybody at this point, know that I'm constantly complaining about my backlog, but I'm getting caught up on it, and the fact that the Butter FS video has been filmed and actually exists in the editing queue is testament to that. Since I've been looking at it anyway, I thought it'd be a great topic for the podcast. First and foremost, I'll let you know that I'm obviously not an expert yet. This is kind of a snapshot, pun intended, of my knowledge of Butter FS at this point: I know enough to make a video about its foundational concepts, and that's what we're doing today. This is a foundational episode. We're going to lay the groundwork, so that in the future, whenever we refer to Butter FS again, we have episode 55 of the podcast to point people back to. Yes. It's interesting because it came after ZFS, so it's got an interesting beginning. And I'm not going to say an end; it's still being developed, though I don't know how active development still is. Jay probably knows a little more about that. But like ZFS, it is a copy-on-write file system. The mechanics are slightly different, but the concept is the same. I have a video where I call ZFS a cow; I dive into the intricacies of how that copy-on-write works and, of course, leave you more reading so you can dive deeper. And this is an important concept, because these file systems, with their integrity checking, are essentially more advanced than your average file system like ext4, ext3, or NTFS.
There are a lot of advanced features in there, and that's why these file systems get so much attention. And Butter FS does have a few features ZFS doesn't necessarily have. I think Wendell from Level1Techs did a couple of videos on it as well; I know he's mentioned it before. He's been one of my knowledge sources, because Jay and I were hanging out with him a couple of weeks ago, and he's dug into it. He's a pretty knowledgeable person about file systems. Oh, he knows so much. It's like impostor syndrome: no matter how good I get, he's that much better. But I think it's awesome to have people who are more awesome, because otherwise, what else are you going to strive for? When it comes to Butter FS, I'll talk a little bit about the history, because I think it's important to understand the stigma. I normally don't like to talk about the politics or controversy around things, but with this I kind of have to, because if anyone listening gets excited and checks it out and then sees some negative comments about it, that's going to be confusing. So I definitely want to address that. I'm not going to cover all the features, but I'll talk about my understanding of everything so far, and in the future we'll bring it back up whenever there's something new to share. Butter FS is such a huge topic that we could probably do a number of these episodes and still not cover everything, so we're going to stick to the basics today. And if anybody notices that I misspeak about anything, please write in and let me know, because I'm learning along with everyone else. I'm not going to claim I'm perfect; I'm absolutely far from that. But let's go ahead and talk about Butter FS. I think where I'm going to start is a little bit of the history.
But first of all, I'm kind of weirded out by file systems nowadays, and I think that's a consequence of when I started in the industry, because I remember FAT versus FAT16, FAT16 versus FAT32, and NTFS. Back then, file systems were basically just a construct to define how data is written to a disk, and at their core that's what they are today as well. I don't mean to oversimplify: just because file systems back then didn't have all these extra features doesn't mean they were easy to develop. File systems are notoriously hard to develop, even the most basic of them. So in my mindset, seeing a file system nowadays that handles snapshots, has built-in RAID capabilities and scrubbing and all these other things, it's almost like a file system plus a bunch of other stuff. When you talk about ZFS or Butter FS, it has the one job, but it also does ten or fifteen other things. I think the term nowadays is "modern file system"; that's what it's referred to, and I think that's where Butter FS came from. So, my understanding, based on my notes: it was developed by an individual named Chris Mason, who at the time worked for Oracle. A lot of people think Butter FS is an Oracle project. It's not. He did work for Oracle, but Oracle didn't, as far as I understand, claim ownership of Butter FS. They might have helped or given him resources; I'm not really sure how extensive their involvement was. But the idea was to bring modern features to a Linux file system. Nothing I've read calls out ZFS as the inspiration for Chris Mason wanting to do this, but I think you can pretty much connect the dots, Oracle being involved with the Unix world and ZFS being a Unix file system, BSD or whatever. Now OpenZFS is on Linux, but back then we didn't have that.
And it's not like you couldn't get ZFS on Linux back then; you probably could shoehorn it in, but nobody did because of the licensing. ZFS, as an aside, is this really funny mic-drop moment for BSD people. Any time you have a Linux person and a BSD person arguing about which one is better, they'll argue for a while until ultimately the BSD person plays the ZFS card, and then the argument's won, because the Linux person has to concede: okay, you have ZFS, and that's great. Up until recently, we Linux people didn't have anything that really compared. The best we had was ext4 on top of LVM, the Logical Volume Manager, on top of MD RAID. We had these different technologies we would stack, and we could get some feature overlap with ZFS, but it's still not ZFS. So the idea back when Butter FS was being developed was to give all these modern features to a Linux file system. My understanding is Chris was also involved with ReiserFS. We're not going to get into that one for obvious reasons, and if it's not so obvious, you can Google it, but brace yourself. That aside, he moved on to Butter FS and wanted to build these features into it, and that laid out the inspiration for why this thing exists. Yeah, it's interesting, because it may not have existed if it weren't for the controversies of putting ZFS on Linux. That's one of the things that makes it interesting. The community, the people involved, developed it with full knowledge of ZFS, ZFS being ZFS. They did take some of the great ideas of ZFS to build it; as Jay said, maybe there's not a direct lineage you can draw, but good ideas in the open source community keep getting used, because they're good ideas.
The semantics of how a copy-on-write file system works are an important step forward when it comes to managing files and guaranteeing their integrity. Yeah. And when it comes to the Linux community, this was a long time ago, but if I remember correctly, back before Butter FS was in the Linux kernel, while it was first being actively developed (it's still developed, I mean when it was first coming out), there was excitement around it: yeah, we would love to have those features, that'd be really cool. But then something happened that a lot of people didn't account for, which is that all of a sudden there was a lot of hate against Butter FS. You had some people telling you to use it, it's great, and other people saying avoid it at all costs, it'll eat your data. So even though the reason for it to exist is a great reason, a stigma came in, and I feel like it's done a lot of damage. I want to talk about this because people are going to run into it when they Google it anyway. Why is it that Butter FS usage, popularity, and acceptance are so divided? Just to set the stage here: we have Fedora and SUSE shipping Butter FS as the default file system. So a really good counterargument, if anyone decides they hate Butter FS because of the toxicity and opposition in average communities nowadays, is: well, Fedora and SUSE ship it by default. How bad can it be if they feel it's rock solid enough to be the default file system in production distributions? That's a good point. But then it's even more confusing, because Red Hat totally purged it from their distribution.
It was in Red Hat 7, I believe, and if I remember correctly, 8 was when they decided to purge it, because Red Hat wanted to go a completely different direction. So you'd think, well, Fedora is closely related to Red Hat, so they're going to drop it too. But actually, Fedora is doubling down on Butter FS while Red Hat is doubling down on not using it. So you can't even use the distro argument. What the heck is happening here? Let's take a moment to explain why I think it received some of the stigma. I'm not going to use this person's name, because it's not about people or throwing anyone under the bus, and the individual I'm talking about is a great person who's done a lot for the Red Hat community. But a long time ago, there was a Linux podcast that no longer exists today, and a host on that podcast basically went on a rant for multiple episodes, very angry about the fact that Butter FS quote-unquote ate his data. I feel like that was the beginning of the stigma, because there was lots of excitement around it, and then: wait a minute, this person's data was eaten by it? Okay, we probably don't want to use this, do we? And that perpetuated. But in this case, in my opinion, it's just that there are certain things you have to know about Butter FS before you use it. If you go in making assumptions, you're going to have a bad time, and you shouldn't implement something while making assumptions or without understanding how it works. So I feel like that's done unnecessary damage. And to make matters worse, reputation in the Linux community is almost eternal. Think about it: people still complain about Ubuntu because of the Amazon thing it had going on back in the day, where search results from Amazon were integrated into the app search. I don't even know how long ago Canonical stripped that out of Ubuntu.
But to this day, people make the claim that Ubuntu is stealing your personal information, which was never true, and they haven't had that feature in a long time. Reputation is eternal. And I kind of feel like when Butter FS was integrated into the Linux kernel, which generally means it's ready to go, the community got it and then bashed it, because a lot of people didn't understand it. That causes damage. At the same time, some of the complaints are actually valid; there are some rough edges we're going to talk about. But when it comes to the stigma against Butter FS, I always say: try it on a test system. That's why we have these things. Don't just decide not to use it because someone else doesn't like it. There's a reason we have virtual machines, test instances, and things like that. Give it a shot and try it. By the end of this episode, you're going to understand more about the use cases, edge cases, and quirks, so you'll go in with an understanding of what to expect rather than going in impulsively. Now, this is worth noting as well. I did a little reading to see if I could find where people had mentioned their hate for Butter FS, and I'd seen people on YouTube as recently as 2018 still saying they didn't think Butter FS was ready for production. But there's a little distinction I want to make, because I see people commenting in the live stream here about Synology using Butter FS. I've actually talked to the Synology engineers, and there's a reason they're using Butter FS: because it works, and it works well. Synology has a reputation to maintain; if you're going to put data on the NAS, that data had better be safe. Now, about their choice of Butter FS, and this is where the nuances and details matter: what they're doing is using mdadm, the Linux RAID utility, to build the array of drives.
They're not having Butter FS directly control the drives. That's an important distinction. You still get all the cool benefits of Butter FS, but now you're back to layering, like Jay and I mentioned earlier: yeah, we've got LVM, we've got these things, and we stack them together. But that's not necessarily a bad thing, because stacking also means you can make a simpler choice and not need the entire stack; you can use only a component of it. Compare that to ZFS, where you have to build out the vdevs and you have less flexibility once you set that design in place, because the vdevs have to be built out symmetrically. It's not that you can't expand the file system, but there are a lot of rules around it. And by the way, a rather famous tech YouTuber managed to lose quite a bit of data using ZFS. Never underestimate people's ability to deploy things improperly. We're looking at you, Linus Tech Tips, losing a major system because the integrity checking wasn't actually turned on. So any file system, back to Jay's point, can be run badly and bite someone. The good news is Linus didn't bash ZFS over it; he realized, as is often the case, that they probably should have set it up properly, and maybe should have contacted someone like me or Wendell, who do these file system setups all the time. At least they were self-aware, but not all technicians are. More technicians are going to lean towards "it's not my fault for configuring it improperly, it's the system's fault for letting me configure things improperly." Yeah. So, as an aside, because these things are going to come up when we talk about RAID and whether or not it's supported well and all these other things, I'm going to take a somewhat controversial opinion. I think a lot of people will probably agree; I know at least you will, but we'll see.
I think it's important, at least to me: I feel that all hard drives are temporary storage. I'm not talking about temporary in terms of volatile versus non-volatile storage, which is whether data can reasonably be expected to still be on the device after you power it off. Obviously, what you have in RAM is gone when RAM loses power, but if you save something on a hard disk, the point of a hard disk is for the data to be there when you come back for it. But all hard drives will fail. It's not a matter of if; it's always a matter of when. Every single hard drive in existence will ultimately fail. That's the truth. And if you are relying on your file system alone to ensure that your data is not lost, then in 100% of cases, the problem is that person, period. Their mindset is absolutely the problem. Now, don't get me wrong. I'm not saying we should accept a crappy product, or buy a super cheap hard drive known to fail because it's temporary anyway. We still care about these things. But if you're relying on one technology to make or break whether your data survives, then the problem can never be that solution you're using; your mindset is the issue. We need the mentality that we need backups. We need a data recovery plan, a disaster prevention plan. Even home lab people need that, because we don't want to lose our family photos. If we have them on a Butter FS file system and it gets eaten or something, and that's the only place we have them, that's a problem. Keeping that in mind, later in the episode we'll talk about the individual quirks, because there are, again, some legitimate issues with Butter FS. I'm not going to say it's perfect. But before we get to that, we should probably define what Butter FS is.
I mean, yes, we've mentioned it's a file system, and it is, but it's like a file system on steroids, because it has all these features that, until recently, file systems generally didn't have. Like we were talking about earlier, you'd implement LVM, MD RAID, or something like that, which, I guess, we're still doing, aren't we? But Butter FS has these modern features built in, and it's a copy-on-write file system. I understand what copy-on-write means, but I'll tell you, when I was creating the video, I had a hell of a time trying to put it into words. So, Tom, if you want to take a stab at copy-on-write and what it means, go for it; otherwise I'll try. So, copy-on-write: this is where I dove in with that video calling ZFS a cow. They refer to it as an atomic write. All the data that's going to be committed to the file system is collected first. It's kind of a linking and unlinking thing, if you want to think about how it works. The data gets put together, we've figured out where we're going to put it on the disk, and maybe we're replacing an existing file. We drop that data on the disk. Now, when does the new version of the file get placed? Well, copy-on-write means that through the whole process, while the drives are spinning and the data is passing through memory and the processor, there comes a point where it has to be committed to the drive. But until it's committed and checksum-verified, it's not actually committed to the drive. This is the unwinding you get with a copy-on-write file system, and how you can prevent losses: if a file is being written and the write doesn't complete, you could have an integrity problem, but the last known good version is always there until that final piece of the commit.
This is why it doesn't need to do your standard check disk, like fsck, where the superblocks and everything else get checked on an ext4 system, or the chkdsk that runs on an NTFS system. It's all done as part of the copy-on-write; it's referred to as an atomic function because it has to be complete. It has to have all the integrity in place on the drive, verified, before it can move on. It's just a really cool feature. It adds a lot of complexity to the way the file system is handled, but it gives you a high level of integrity, making sure the data is actually there. And that's why I decided to have you explain it rather than explain it myself. So yeah, that's what Butter FS is: a copy-on-write file system. What's interesting, as an aside, is that you can disable the copy-on-write behavior if you want to. I don't know why you would want to do that yet; at this point in my research, I haven't found a reason. But can you do that on ZFS? I don't see a good reason to, but can you disable copy-on-write? Because I kind of feel like that might be a thing that's different about Butter FS, that you can turn it on or off even though it's on by default. I don't think so. It's the underlying fundamental of how ZFS works. There are all kinds of fun, quirky features you can probably find in ZFS, but I don't think there's any way to turn off the copy-on-write. Though in a way you do when, for example, you turn off syncing, because syncing is what verifies the copy-on-write commit. This is where the debate comes in about whether you have sync turned on for NFS shares: you can get better performance by turning it off, but all you're doing is lying; it will eventually do the copy-on-write anyway.
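The atomic-commit process described above can be sketched as a toy model. To be clear, this is not real Butter FS internals (the real thing works with B-trees and extents on disk); it's just an illustrative Python sketch of the idea that new data lands in fresh blocks, and the "live" pointer only flips after the checksum verifies:

```python
import hashlib

class CowStore:
    """Toy copy-on-write store: data is never overwritten in place.
    A commit writes new blocks, verifies a checksum, then flips a pointer."""

    def __init__(self):
        self.blocks = {}   # block_id -> bytes (append-only, never rewritten)
        self.live = {}     # filename -> (block_id, checksum): the committed view
        self._next = 0

    def write(self, name, data, fail_before_commit=False):
        # 1. Write the new version to a fresh block; the old block is untouched.
        block_id = self._next
        self._next += 1
        self.blocks[block_id] = data
        checksum = hashlib.sha256(data).hexdigest()
        # 2. Simulate a crash after writing the data but before the commit.
        if fail_before_commit:
            return False  # pointer never flipped; the old version is still live
        # 3. Verify integrity, then atomically flip the live pointer (the commit).
        assert hashlib.sha256(self.blocks[block_id]).hexdigest() == checksum
        self.live[name] = (block_id, checksum)
        return True

    def read(self, name):
        block_id, checksum = self.live[name]
        data = self.blocks[block_id]
        # Integrity check on every read, like the built-in checksumming.
        if hashlib.sha256(data).hexdigest() != checksum:
            raise IOError("checksum mismatch: corrupted block")
        return data

store = CowStore()
store.write("notes.txt", b"version 1")
# A crash mid-rewrite leaves the last known good version intact:
store.write("notes.txt", b"version 2", fail_before_commit=True)
print(store.read("notes.txt"))  # b'version 1'
store.write("notes.txt", b"version 2")
print(store.read("notes.txt"))  # b'version 2'
```

The key property is the one Tom describes: a failure at any point before the pointer flip leaves the last known good version untouched, which is why a copy-on-write file system can skip the traditional fsck-style pass after a crash.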
You're just lying to the upstream commit when NFS sends it: oh yeah, we committed it. Whether or not you lie is whether or not that sync box is checked. That makes sense. So I'm going to assume we can't, but I hadn't heard of it before. So Butter FS is a copy-on-write file system, regardless of the technicalities; we'll call it that. Additional features it adds are things like snapshots, for example. You have checksums, so it can scrub, and you can create what they call subvolumes, which are kind of like partitions but also not, which we'll get to. There are a lot of features in Butter FS you can use, and snapshots are definitely one of my favorites. Now, before we go any further on the feature set: if you're going to consider using Butter FS, you have to check the status page in the Butter FS documentation, because it gives you a list of the features and how stable they're considered to be. For example, quota support is listed as "mostly OK." That's literally what it says for status. If you look at that, you get a general idea of what you can and can't use. One thing to knock out straight out of the gate here is RAID, because like ZFS, Butter FS can directly handle the drives and the RAID. The general consensus is not to use RAID directly from Butter FS. You can, and some RAID levels are more stable than others, but also the definition of each RAID type in Butter FS is different. RAID 1 in Butter FS versus RAID 1 in anything else: it's not completely the same thing. There are a lot of differences, but I'm not going to go into that, because generally you should just avoid it right now, until you hear otherwise. And that could be one of the reasons people ultimately lose data: maybe they're using its RAID.
Another reason is free space, which I'll also get to. So, stay away from the RAID. I'm not going to talk much about the RAID features, because that's not something I decided to look into at this point in time, but I will later. We're going to just set that aside. Now, another thing to keep in mind is checking free disk space on your Butter FS implementation with the standard tools, which is such an easy assumption to make, right? Even the first time I set up an XFS file system, I used the df command. I just figured it was the command to use, and it is. So going into Butter FS, someone might use the df command and take it as gospel as far as whether they have free space. But the problem is that these Linux free-space tools, like du and df, aren't really taking everything into account when it comes to Butter FS. Where there are snapshots and other things involved, they're not going to show you that. You could literally run into a situation where df says you have plenty of free space, and then all of a sudden you can't write data to your volume anymore. At that point you might be losing information, and on that particular podcast years back, that's exactly what happened: he put the data on his Butter FS volume, used the df command, had free space according to it, but he didn't, and the data was wiped out. Now, if a person looks at the documentation, and I'm not trying to tell people to RTFM here, but there are some important things you have to know from it. One of which is that Butter FS has its own free-space commands that you use to find out how much space is actually available. Do not use df; do not use du.
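The commands in question are `btrfs filesystem df` and `btrfs filesystem usage`, which break space down in a way plain `df` can't. As a rough illustration of why naive accounting misleads, here's a deliberately oversimplified toy model in Python. This is not how Butter FS actually accounts for space (chunk allocation and extent sharing are far more involved); it only shows the shape of the problem: old data pinned by snapshots still occupies real disk, even though a df-style "total minus live data" number ignores it.

```python
class ToyVolume:
    """Toy model of why df-style free-space numbers can mislead on a
    snapshotting, copy-on-write file system. (Illustrative only.)"""

    def __init__(self, total_gib):
        self.total = total_gib
        self.data_used = 0      # GiB of currently-live file data
        self.snapshot_refs = 0  # GiB of old extents kept alive by snapshots

    def naive_free(self):
        # What a df-style view might suggest: total minus live data.
        return self.total - self.data_used

    def actual_writable(self):
        # Snapshot-pinned extents still occupy real space on disk.
        return self.total - self.data_used - self.snapshot_refs

fs = ToyVolume(total_gib=100)
fs.data_used = 40
fs.snapshot_refs = 55  # old versions of files, held alive by snapshots
print(fs.naive_free())       # 60: looks like plenty of room
print(fs.actual_writable())  # 5: writes will fail far sooner than that
```

Same disk, two very different answers, which is exactly the trap described above: trust the file system's own reporting tools, not the generic ones.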
Now, that being said, what I didn't have time to research is whether any of the distros have patched the df and du commands to take those things into account. They might have done that by now; I don't think so, but they could have. I think that's necessary, but depending on your distribution, it might not be the case, and probably isn't. So it's better to just use the Butter FS commands for interrogating the free space, because you'll get a more realistic answer. That's the first thing to keep in mind. Yeah, those little details. I see other people commenting, even before you got to that part, Jay, saying: don't use the standard file system commands, Butter FS has its own. This is where you can get into trouble: you haven't taken the time to research, but you start implementing something more complex like this, and then the bad reputation goes, well, it broke everything, because there wasn't good documentation and good tutorials that told me exactly how to do it. And how much blame, Jay, would you place on a distro that leaves the df command alone, with no warning, and doesn't modify it to recognize Butter FS? That's an interesting question, because some distributions actually pride themselves on not modifying the upstream code; they want to present it to you the way the upstream developers intended. Arch Linux is an example of this. Often somebody might have a problem with an update, and it's not really a problem in the normal sense, because on Arch Linux you're getting it as the developers delivered it, for better or worse. Yes, they test things to a point, but they're also not going to put their own fixes into things. So some distributions are that way. Ubuntu is the complete opposite.
They patch the heck out of things constantly, to the point where we have Franken-GNOME and Franken-kernels: we can have a kernel of a certain version with features from two or three versions ahead, so it's not really the same Linux kernel anymore. So it really depends on the policies of the distribution. I would like to see them patch the commands, and maybe some of them have. Now, thinking about Synology: I didn't actually know they used Butter FS until a year or two ago, and I've never had an issue on Synology with the available space reading being inaccurate. So obviously, since they implement this, they must have figured that part out, because when they sell a NAS to somebody, it's understood that you want a turnkey solution. When you see that you have a terabyte free, you actually have a terabyte free; it's not lying to you. That's really important, and it's key to Synology's product. But as far as distributions are concerned, as we're going to talk about, the implementation of Butter FS is all over the place, so it's kind of a hard question to answer. Yeah, and it should be noted, Synology is not running a standard Linux distribution at all. Their engineers have customized it deeply, so there's a ton of customization, which is good and bad. Good for the smoothness you get with Synology; bad for people like me and Jay who want to tinker with things and then go, hey, it didn't behave exactly how I expected with something like rsync. Right. We won't get off topic on that, but it is worth noting: Butter FS sounds like it works really well on Synology, but that's because they integrated it into the complete ecosystem they have, and they keep very strict control over how they implement things.
And to be fair, I don't feel like it's the user's responsibility, unless the user is testing things and contributing patches or filing bug reports upstream to help out. Technically, you could just create an alias, or maybe a function in your .bashrc, that uses a different command any time you run df on a Butter FS volume, because you can override that. But at what level do you go? If you go to that level, and the distributions don't... anyway, just use the tools that come with Butter FS and leave it as simple as that. I think you'll be fine for now. And if in the future the commands are patched across the board, we'll mention it. We'll say, today's the day; from now on you can actually use the df command and the du command and things like that. We'll let you know, and if we don't, it's probably not the case. So, when it comes to features, my favorite is the ability to snapshot. I've been a very strong lover of snapshots everywhere, and LVM is one place you get them. LVM is something a lot of people use, and a lot of people don't even realize that LVM itself has snapshot capability. You can create a snapshot much in the same way you can with ZFS. You can mount it separately to restore a file. You could take a snapshot of your system before you run updates with LVM and then roll back if something doesn't work out. Historically, that's how I've used Arch Linux. I update weekly, so I'll create an LVM snapshot, update, and if something's broken, revert the snapshot, wait two or three days, try again, and eventually everything's fine. That's the best way I've found to run Arch Linux. So snapshots are great.
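Conceptually, the reason snapshots are cheap on a copy-on-write system is that a snapshot copies references, not data. Here's a toy Python sketch of the update-then-roll-back workflow described above (purely illustrative; on a real system this would be something like `lvcreate --snapshot` and `lvconvert --merge` for LVM, or `btrfs subvolume snapshot` on Butter FS):

```python
class SnapshotFS:
    """Toy model of copy-on-write snapshots: a snapshot is a cheap copy of
    the reference table, not of the underlying data."""

    def __init__(self):
        self.files = {}      # name -> content (stands in for a block tree)
        self.snapshots = {}  # tag -> saved reference table

    def snapshot(self, tag):
        # Cost is proportional to the table, not the data it points at.
        self.snapshots[tag] = dict(self.files)

    def rollback(self, tag):
        # Restore the saved references; later changes simply fall away.
        self.files = dict(self.snapshots[tag])

fs = SnapshotFS()
fs.files["pacman.conf"] = "pre-update config"
fs.snapshot("before-update")
fs.files["pacman.conf"] = "broken by the update"
fs.files["leftover.so"] = "stray file the update scattered around"
fs.rollback("before-update")
print(fs.files)  # {'pacman.conf': 'pre-update config'}
```

Notice that the rollback also erases the stray file the update left behind, which is exactly the "like you never installed it" property discussed next.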
But having snapshots built into the file system is really awesome, because it's not like we have an "EXT4 make snapshot" command, right? We don't, because it doesn't exist. The EXT4 file system can't do that, but ButterFS can take snapshots. So immediately that opens up a world of possibilities, because maybe you want to test some software that you're not really sure is gonna work out. And I don't care what the operating system is, it's always the case that when you install something, it's leaving files all over your hard drive. At least with snapshots, you could make it so it's like you never even installed it in the first place, which is great. A friend of mine that loves snapshots was telling me that Comcast came into his house to install internet, and the tech was demanding to run the CD, at the time, to install their software, saying, well, this is required to activate the connection. My friend says, no, it's not, because it isn't, and that's a lie. But the tech insisted on it. So he gave him a laptop, had a virtual machine running on there, had a snapshot, and let him install the software. The tech said, oh, great, it's activated, check this out. Then he reverted the snapshot and the Comcast software was gone. The tech didn't like that very much, but snapshots are great, and having them in ButterFS is awesome too. And I've had a lot of time to play with that. I like it a lot, but there's gonna be some edge cases there too. So there's a lot of features, I think that's the point of most of what I'm saying here. You know, you have your scrubbing, you have its ability to handle RAID, even though you should avoid some of those RAID levels. And snapshots are built in. So there's a very good reason to consider using ButterFS. But let's get into some of the differences when you compare ButterFS against other file systems. Now, first of all, ButterFS is built into the Linux kernel.
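As a sketch of that install-and-revert trick on ButterFS, assuming your data lives in a subvolume named home under a top-level volume mounted at /mnt/butter (both names are made up for the example):

```shell
# Snapshot the subvolume before installing the questionable software:
sudo btrfs subvolume snapshot /mnt/butter/home /mnt/butter/home_preinstall

# ...install and test the software...

# To pretend it never happened, swap the snapshot in for the live subvolume:
sudo mv /mnt/butter/home /mnt/butter/home_broken
sudo mv /mnt/butter/home_preinstall /mnt/butter/home
sudo btrfs subvolume delete /mnt/butter/home_broken
```

Because of copy-on-write, taking the snapshot is nearly instant and costs almost no space until files start to diverge.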
I mentioned that earlier, but what's interesting about this, and this is the same for other file systems as well, is that you have to have the userspace utilities installed to manage your ButterFS volume. Even though it's built into the Linux kernel, that just means the Linux kernel understands what a ButterFS volume is; you still have to install the btrfs-progs package on your distribution. I don't know if it's ever gonna be under a different name, but I've always seen it as that so far. And that gives you the ability to run a command like mkfs.btrfs, just like you would normally run mkfs.ext4 to make an EXT4 file system. With that package installed, you have the btrfs command, which you use to manage it, and then you also have the mkfs.btrfs command to create the volume. So, at least when you start out, it's pretty much the Linux way; that's how we format things in Linux. Now, after that, that's when things start to become a little strange, because look at partitions, which are something we've had forever now. I don't even remember a time we didn't have partitions on our drives, and they represent real boundaries. I mean, if you think about it, you can have a situation where you have a server where /var/log gets full because an application goes crazy, but if you only have one partition, then that could fill up your entire disk, and then the server just falls over, because Linux doesn't really handle full hard drives very well. So what do you do in that situation? Well, you create a partition for the thing that you don't want to take over the whole disk. So /var/log could be its own partition. You give it like 20 gigabytes or something, and it can never go beyond that. That's a boundary; that's what a partition is. But when you look at subvolumes on a ButterFS system, they're not partitions. They're treated like partitions.
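A minimal sketch of that workflow, assuming a spare disk that shows up as /dev/sdb (double-check the device name with lsblk before formatting anything):

```shell
# The userspace tools usually ship as btrfs-progs:
sudo apt install btrfs-progs   # dnf/pacman/zypper on other distributions

# Create the file system, mount it, and carve out a subvolume:
sudo mkfs.btrfs /dev/sdb
sudo mount /dev/sdb /mnt
sudo btrfs subvolume create /mnt/home
sudo btrfs subvolume list /mnt
```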
You can mount them, but they're not partitions, and they have no boundary. This is the first thing to keep in mind here. So say you have a 500 gig SSD, hypothetically, and let's just say you create a subvolume for the root file system and a separate subvolume for home. If you check the free space of your root file system, it's gonna say you have 500 gigs. You check the home subvolume, and it's going to say that it's 500 gigs too; that's how much you have. The boundary is the disk, even when you create subvolumes. And this is something that's gonna be weird to a lot of people, and it's weird to me. And that might be the same in ZFS, but since I use ZFS mostly from the TrueNAS console, I'm not really sure. Is that the case in ZFS, or do you have a boundary with the datasets? Yeah, there's the ability to set different parameters, because the parameters in ZFS are all set on a per-dataset basis, from the ACLs on down. That's how you customize everything in there. You can even set compression levels on a per-dataset basis as well. So I guess it essentially works in much the same way. Yeah, so that's not that different then. Now, that doesn't mean that with subvolumes you can't have a boundary. It just means that when you create a subvolume, there is no boundary by default. You can apply a boundary to it. So you could say, for example, I don't want my home subvolume to ever go beyond 100 gig. You could do that. You can make your root file system stay at 30, whatever you wanna do. But the problem here is... actually, before I say the problem, the intended solution is quotas. So the idea is you apply a quota to the subvolume: this subvolume can't go beyond X. So now you have that limit, but you don't have that limit by default. Whereas if you create a partition that's 30 gig, it's a 30 gig partition. Yes, you can resize it. There's ways to do that.
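You can see the no-boundary behavior for yourself. df reports the whole disk for every subvolume, while the ButterFS-native commands break the allocation down properly (the paths here assume the example volume is mounted at /mnt):

```shell
# Generic tool: every subvolume reports the full disk size.
df -h /mnt

# ButterFS-aware view of what is actually allocated and free:
sudo btrfs filesystem usage /mnt
sudo btrfs filesystem df /mnt
```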
But until you resize it, there's that hard limit. It can't go past that. By default, ButterFS has no limit. Now here's where we run into the first problem, and I already mentioned this earlier: quotas are considered only mostly okay. So you have a solution to add a boundary, but you should not use that solution, because that particular feature is not considered ready for prime time. So basically, yeah, you can't add quotas. I mean, you can, but you shouldn't. My understanding is that snapshots are what really complicate the quota feature, but you'll run into things like this. So don't use quotas. You have subvolumes. Just for quick clarification, quotas in ZFS are also done on a per-dataset basis. Yep. And so I think in that sense it's pretty much the same idea, except in ZFS it apparently works; I don't really hear anybody complaining about that. So that's just an aside on partitions and, in ButterFS, subvolumes, which are its best equivalent of that. Not the same thing, but it is what it is. So already we have some interesting asides here. Another thing that's interesting is the varying levels at which ButterFS is implemented by default. If you were to install Fedora today, you have ButterFS by default. In fact, Fedora 33 was the first one, to my knowledge, that actually included ButterFS by default. If you look at the file system, and this will be in the video that I'm doing, you'll see how they implemented it, okay? So basically you have three standard partitions on the default installation. I'm not even talking about ButterFS at this point; at the hard drive level they're creating, you know, sda1, sda2, sda3, probably with different names nowadays on NVMe drives, but you have three partitions. The first two partitions are going to be for the boot process, EXT4, okay? So you have that. The third partition is going to be the ButterFS partition.
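For completeness, this is roughly what applying a quota looks like, with the caveat from the discussion above: qgroups have known rough edges, especially combined with lots of snapshots, so treat this as a demonstration rather than a recommendation. The mount point and subvolume name are placeholders:

```shell
# Quotas are off by default; enabling them turns on qgroup tracking:
sudo btrfs quota enable /mnt

# Cap the example "home" subvolume at 100 GiB:
sudo btrfs qgroup limit 100G /mnt/home

# Review usage and limits per qgroup:
sudo btrfs qgroup show -re /mnt
```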
So they contain it on its own partition, and they do not allow it to take over the disk. And that's basically how it is nowadays: you just use it on a partition. Now, that means you lose some features, yes, but you can already use ButterFS snapshots on that partition, which is great. And that gives you that capability. So you do get to utilize ButterFS in Fedora, but it's a very limited implementation. And going back to the history thing, the mindset of Fedora is that even though the mothership Red Hat, which is directly involved with Fedora, went away from ButterFS, Fedora feels like if they get away from ButterFS too, then there's no hope for Red Hat to ever reconsider. Fedora feels like they have to set the example and show how great it is, and that'll maybe make Red Hat change their mind. So we have a basic implementation right now, but in the future they'll probably build on that, I assume. One of the things that you can do with ButterFS that Fedora is not doing, to my knowledge, is have the distribution automatically create a ButterFS snapshot when you update it. Now, that is so cool when a distribution does this, because if you install your updates through GNOME Software or whatever... well, it's not cool if you have a problem, but if you do, it's awesome to be able to select a different version on the GRUB boot menu to go back to the way things were. And my understanding, I'm pretty sure, is that SUSE implements that; Fedora does not. So Fedora is not creating snapshots automatically, to my knowledge. I saw no evidence of this when I was testing. So if you wanna create a snapshot in Fedora, you do that manually. There's no utility in Fedora to handle ButterFS graphically. But SUSE has a tool, I believe it's called Snapper, that allows you to do this. And of course, you could download that and install it on Fedora as well.
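Snapper is a command-line tool at heart (openSUSE also wires it into YaST and the GRUB menu); a hedged sketch of driving it manually looks like this:

```shell
# Create a configuration for the root file system (done once):
sudo snapper -c root create-config /

# Take a snapshot before doing something risky:
sudo snapper -c root create --description "before updates"

# List snapshots, then undo the changes between two of them:
sudo snapper -c root list
sudo snapper -c root undochange 1..2
```

openSUSE's default setup creates pre/post snapshot pairs around every zypper transaction automatically, which is the behavior being praised here.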
So we do have utilities, but they're just not installed by default on Fedora. I think SUSE has it by default, but I'll have to dive more into SUSE to find out. A curiosity I have is, will it come to the Ubuntu and Pop!_OS worlds? I wonder. So with Ubuntu, it's confusing to me, because when I was working with my publisher to start the new edition of the Ubuntu Server book, which I can talk openly about now because it has been announced, so it is open knowledge that I'm doing this right now... when we were deliberating over what to include in the book, ZFS is a very common topic with Ubuntu, but considering that the desktop version has it and the server version does not by default, and the book is for the server version, we don't cover it, because Canonical themselves don't even put that in the server installer. So yeah, there are some differences there. Will Canonical go to ButterFS? I don't know. I mean, we have OpenZFS now, and I'm almost wondering if OpenZFS is just gonna make it all the much harder for ButterFS to even be implemented. But when it comes to Pop!_OS, yeah, they're built on Ubuntu, but they're not beholden to Ubuntu. They would ditch Ubuntu tomorrow if they felt there was a good reason to do that, because they're always testing things and other alternatives. And they might go with ButterFS. I mean, I've been talking to them a lot, saying, could you at least give me LVM through the installer or something, just so I can benefit from snapshots? And they're keen on the idea, but I don't think they've settled on a direction yet. So I would say Pop!_OS would probably do that before Ubuntu does. Yeah, and one of the reasons I like Pop!_OS is that by default they have an installer that lets you set up disk encryption without having to bolt that on or do any extra work. That's part of their install process.
So I'm not sure how well ButterFS handles boot unlocking in the same way, because they've done a nice job integrating that into Pop!_OS. And that's not something I'm as familiar with in ButterFS, how it handles some of that encryption. Yeah, I'm not sure either. I'm thinking that it's just LUKS encryption, and you just format the LUKS volume as ButterFS; ButterFS itself isn't doing the encryption. The way you basically do it now is, for now, you use the standard Linux tools like mdraid, and you can even use LVM with ButterFS, at least until the features stabilize, which I hope they do. There seems to be a renewed interest in ButterFS lately, and I'm really liking that a lot, because it seems to me like more people are thinking about it than before. And the more people that are checking it out, the more people there are in that community to talk about it, and by being a part of that community, they can help make it better, because it's a great file system. It's just not something you go into impulsively, making assumptions like being able to use df. You go into it understanding the basics. If nothing else, you could just install Fedora and you're using ButterFS right then and there. And you could argue that they're tuning it and setting it up in a very stable way. I don't think I've ever heard of anyone complaining about Fedora and ButterFS. It seems to be working just fine; it's never failed me. So if you wanted a turnkey solution that has ButterFS built in, you could go with Fedora or SUSE, you could definitely do that. But we'll just have to see how the rest of it goes, because there are so many more features than this. And for me, I think snapshots are probably one of the best reasons to implement ButterFS.
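That layering, LUKS underneath with ButterFS formatted on top of the mapped device, looks roughly like this. The device /dev/sdb and the mapper name cryptbutter are placeholders, and luksFormat destroys everything on the device:

```shell
# Set up LUKS encryption on the raw device and open it:
sudo cryptsetup luksFormat /dev/sdb
sudo cryptsetup open /dev/sdb cryptbutter

# ButterFS goes on the decrypted mapper device, not the raw disk:
sudo mkfs.btrfs /dev/mapper/cryptbutter
sudo mount /dev/mapper/cryptbutter /mnt
```

The file system never sees the ciphertext; as far as ButterFS is concerned, /dev/mapper/cryptbutter is just an ordinary block device.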
I feel like snapshots are better in ButterFS than they are in LVM, because in LVM, my understanding is that if you take a snapshot and you're not careful, you can lose the snapshot, because it grows beyond the space you allocated to it. Snapshots are temporary anyway, but especially with LVM, they recommend you get rid of that snapshot after you're done testing whatever it is you're testing. Don't leave it hanging around. That's not to say it's different in ButterFS, but I think it's a lot more stable in that regard, because it's part of the file system. It's built right into the file system, and there's something to be said for built in. And I want to comment that we have quite the brain trust here in our live stream, who have commented that ButterFS works great with LUKS, and we have a Joshua Lee posting in here who says he's actually one of the ButterFS contributors. Oh, wow, it's so great to have him in there. So maybe he can answer how he thinks it's going, because we asked, is it actively developed? I didn't really look at any stats; it seemed to me there's renewed interest in it. And I have to say... I don't feel I'm biased... actually, I think I might be biased about this a little bit, because I love ZFS. I think it's awesome, okay? But I would really like to have a native, in-kernel Linux competitor. I'm not saying that I want ButterFS to defeat ZFS, but what I would like to see is for the choice between ZFS and ButterFS to be a hard one. Like, someone eventually has to say, they're both great, I think they're both awesome, I'm having a really hard time choosing between the two. That would be awesome, because I really like having that choice. And I want to see ButterFS succeed. And I do kind of feel like ButterFS is a victim of an unfair reputation that it didn't deserve, because people are using it just because it's in the Linux kernel.
They're assuming everything is at feature parity with ZFS, which has never been claimed, in my opinion. And then they judge it unfairly. If you take it for what it is, I think ButterFS is absolutely something that we should try out, especially in the home lab. I mean, how many times have you tried a new piece of software, maybe mucked something up, or uninstalled it and there are bits and pieces left over from things you had installed five years ago? How cool would it be to just take a snapshot and then revert back? For home lab people, I would argue that ButterFS might even be more useful than in the enterprise. Yeah, and Josh also commented back in terms of who's contributing to it. Apparently the top contributors are openSUSE, Facebook and Amazon; those are your top contributors. So it is actively developed. And as someone else mentioned, openSUSE... we don't talk about it a lot here, and you'd be right, it's not the most popular distribution in the United States. It's not, but I love it, actually. And it's something that I wanna cover more, because the more I use any of the SUSE versions, you know, we have openSUSE Leap and Tumbleweed, and then you have the enterprise equivalent, they're doing a really good job as well. So hopefully that also gets more popular, and I think it probably is going to, considering the controversy we just got over with CentOS that some people are still suffering from. So we have the right stewards, from what it sounds like, and what I wanna see is these rough edges get worked out, and people to be excited about developing it and contributing to it. And I don't care what toxicity or opposition you run into in the Linux community, I don't feel like the answer is to completely abandon something.
Sure, you might not put your most important server on it, but at least keep looking at it, keep using it, testing it out, submitting bugs and wish lists, be a part of the community, keep the conversation going, and you might actually help turn everything around. I think that's what's needed, and the more people get involved, the better. Yeah, I've seen someone else mention, too, there's a note here, and the words they use are: please mention that you definitely need to run a balance after a scrub, it might leave blocks corrupted. These are some of those things that need to be better documented in ButterFS. I didn't find all of this information, maybe because I haven't looked at the right resource, and me and Jay have had to actually dive into it a little bit. So if someone does have some very specific resources they think are really good on ButterFS, hey, feel free to tag us on Twitter, DM us, or hit our forum; there are places you can contact us. It doesn't have to be Twitter, but however you wanna reach out to throw information our way, I'm on LinkedIn as well and places like that. But yeah, we are interested in learning more, and that's why we did this podcast on this topic, and why Jay has a video he's working on that he'll have released pretty soon about ButterFS as well. We like to see development of it, and I think it does have some play in the market. And obviously Synology, they've bet their company on it, essentially, with all the integration that they have with it. But it is important to know the nuances of how to set it up and maintain it, because TrueNAS has made ZFS easy, since it has all the maintenance and everything built in, and it's often people's first introduction to ZFS. And now the team over at 45Drives has all their tooling around it with the Houston project that they're working on, which I've covered.
But besides Synology, we haven't mentioned anyone else... I mean, yes, we know Facebook uses it and we know openSUSE has it in there, but there haven't been any big NAS companies built on it with all those maintenance scripts built in, and it would be interesting if someone did that. So that's kind of our thoughts on it. Yeah, and to piggyback off of that, I don't know if there's a way to shoehorn Pop!_OS on top of ButterFS. I mean, technically there's often nothing stopping you from opening up something like GParted on a live CD, partitioning your disk, and then going into your distro's installer, even one that doesn't normally offer it, and saying, hey, use this partition. Whether it can do that depends on whether it has at least some capability of understanding ButterFS and whether the required packages are there. But if that's possible, I feel like ButterFS is perfectly usable for me and many other people as well. The thing is, and I think you're the same way, the laptop in front of me right now, if it did go south and I lost everything on the hard drive, I don't really care, because the installation is automated with Ansible. I could just execute one command, walk away, come back, and then have Syncthing re-sync all my data back to the computer. I don't care if I lose information on my computers. I literally couldn't possibly care less. So I think that's the right way to do it, because again, it's not your file system's job alone to keep your data safe. Yes, we need our file systems to be stable. I'm not saying we should use something alpha that will delete itself every day, but within reason. And I think ButterFS is perfectly stable as long as you're following the other best practices that I'd argue you really should be following anyway. Yeah, and actually we'll scroll back to Josh Lee's comment as well.
ButterFS auto-balances; you don't have to run a balance after a scrub, which it does automatically for any RAID 1/10 profile. So clearly we need more documentation, because we actually have some conflicting information in here. So I think that's important. And I just accidentally clicked on his comment... that's pretty cool, for the people watching live, we can click on a comment. And apparently someone has a guide out there: Will Mutschler, and I apologize for mangling the last name, has an excellent guide on installing Pop!_OS with ButterFS. So yes, I think I know what I'm gonna be doing later, because that would be a pretty cool video, to give that person credit and say, look what I did because of that blog post, and then give it a whirl. Now, you said soon for my ButterFS video. To be fair, with my editing queue right now, soon is kind of a crazy definition. I'm gonna try to get it out soon. It's probably gonna end up being around two weeks, hopefully not longer given my backlog, but it did get filmed, so it is happening. So definitely look forward to that. Most of the information I gave in the video is exactly what I've already given you in this podcast, but in that video I will also show off manually creating a ButterFS volume on a system that is not installed on ButterFS, where I add another block storage device and create a ButterFS file system on it. So there's still reason to watch the video, if just for that part. Yeah, so nonetheless, me and Jay have made ourselves accessible. So if you have books or information you wanna throw at us, that's how we learn. That's part of the fun we have being content creators; we actually enjoy hearing back from you. My only comment usually is, my DMs are not for tech support, but they're definitely for saying hello.
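For reference, the scrub and balance operations being debated in the comments are both one-liners (shown here on a volume mounted at /mnt; whether and when you need a manual balance is exactly the documentation gap being discussed):

```shell
# Verify checksums and repair from a good copy where redundancy exists:
sudo btrfs scrub start /mnt
sudo btrfs scrub status /mnt

# Rewrite and redistribute allocated chunks across the devices:
sudo btrfs balance start /mnt
sudo btrfs balance status /mnt
```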
They're definitely for sending me book references or technical references that might be interesting, so we can dive more in depth, because me and Jay spend a lot of time reading to bring you all this content, to make sure we understand these things so we can share the knowledge. Also, I wanna give a shout out to Chris Titus for his video on ZFS... and ButterFS. ButterFS, see, I keep doing that. So yeah, I guess maybe I'm getting tired. Basically, after I wrote the script for my video, I watched his, and I'm gonna probably put a link to his video in mine anyway, just to kind of fact check myself, because I like to see if someone else has a contrary opinion, and it was pretty much the same. But his video is a lot more hands-on than mine, so I would say watch both. Yeah, if you look for ButterFS videos, I think there are very, very few of them, Chris Titus having made one of the few. So he's easy enough to find. It was really hard to find information on this. All right, we've reached the end of the show. I don't think you have anything else in your notes, so we made it to the end, right? Well, I have 15 pages of notes. Literally, I'm not making this up, but I think that's about all the podcast can handle for now. The foundation has been laid, so I could always dust off the notes at a later date and talk about some other aspects of it if I feel the need to do so. Yeah, I see people even debating some things in there. And we see a lot of debate and conflicting information. Sometimes that's because we need more documentation consolidated on things. So, I don't know if I'll do a video on ButterFS outside of my Synology videos, but I'll certainly refer back to Jay's, because people do ask me about it. And if someone knows a NAS system, because maybe one exists that's been built on ButterFS that's just lesser known, let me know. That'd be interesting as well.
And if you have an awesome ButterFS implementation you'd like to let us know about, send us a note, let us know what you've done with it. Yeah, I mean, I don't know that Facebook has done any public disclosure and deep dive into how they implement things. Actually, Facebook is part of, and this is gonna sound odd to people, the open hardware initiative when it comes to servers and things like that. There are actually a lot of large companies, the hyperscalers as they're referred to, your Facebooks, Amazons, Googles, AWS... they all know that the value is the data, not necessarily the hardware. So they actually talk a lot about how they build things, and they open up a lot about the architecture of things, to an extent. And that's because they know by sharing that knowledge amongst each other they can get better at slurping up your data, because that's their ultimate goal. Their goal is the data, not building the fastest server. A lot of people won't realize that, for example, the Zstandard compression, which is also used in ButterFS, to make it relatable here, was developed at Facebook. Because Facebook had a problem: all that data they collected on us, they had to compress it better. Yeah, it's amazing how much faster something develops when a business has a financial need for it. Things happen a lot faster, unfortunately, but we need to get the community revved up to at least parallel that, if not exceed it. Yeah. All right. Well, thank you everyone for joining. And awesome, thank you, contributor Joshua Lee. He even said he'd be happy to help with the video. So yeah, connect, and Jay's definitely all ears, because he's the one making the video. So awesome. Thank you very much, and see you guys next week. See you.