So, there was some talk about removing file systems from the kernel, and also these unprivileged security issues with mounting file systems in user namespaces, that sort of thing. My idea would be that we run a kernel in a VM just for mounting a file system, and then expose it via fuse. That way we could run an old kernel which still supports all of the file systems we want to remove. And I see what you're saying: you pass through from the VM as a fuse mount for any old file systems that we remove, for users that might still need them. Christian, are you in here? Oh, yeah. Okay, good. I just want you to hear what I'm going to say. I think that we are extremely conservative when it comes to removing file systems when we shouldn't be. I definitely agree with Linus's point of view that we shouldn't be breaking user space, but the NTFS thing is a really good example of this: if it's been in tree for like three months and the people behind it just disappeared, delete it. If somebody wants it, they come back and they do the whole thing, and then they promise that they're actually going to stick around. I think ReiserFS is a different beast, but for a lot of these other file systems, I think we should be extremely, extremely aggressive about deleting things we just don't care about anymore, because if there are one or two users, great, that's what stable kernels are for. They can use those, or they can buy RHEL, or they can do literally anything else; but for the upstream kernel, if there's no maintainer and nobody's using it, let's delete it. I don't understand why we have to keep all this code around. But there could still be some file system image lying around that somebody wants to mount. And you can't use the old kernel, because the GCC it needed is gone, so you can't compile it anymore.
But if we still have a certain kernel which we maintain with all these file systems in it, and it still compiles — at least the file systems still compile with a recent GCC — then that would, I guess, solve that issue. But I'm not sure; that's still some maintenance. I wonder — so, for the file systems that were just merged and haven't been upstream for long, I think that would make sense. The only thing we would need is to basically ask Linus, at the maintainer summit or wherever: hey, how would you feel about switching to this type of policy? But in general, I think it makes sense, because it's really a burden on the mailing list, a burden on the other maintainers, if patches and syzbot reports just keep accumulating for file systems that have only been there for a very short time. And instead of a VM, UML might be a better option — UML, User-Mode Linux. Though I'm not sure UML is still working in the older kernels you're talking about; it might be, it's a thought. The other thing is mounting arbitrary file system images without any checking — the user wants to mount vfat or whatever — that's another use case for this. So I think that if someone was willing to implement some sort of run-UML setup, or some other sort of kernel in a VM, whatever, to ease the fear that there might be potential user pain, it may make it easier for Linus to agree to remove a file system with extreme prejudice. There may be disagreement in this room among people of good will about whether or not that is, strictly speaking, necessary, and we could have that debate.
I do think that UML may be an interesting path forward, because UML works well enough for KUnit; I suspect UML hasn't completely bit-rotted, at least for x86, because that's what KUnit uses. And that might be an interesting way of doing it: we somehow have to feed the block IO requests into UML, or whatever kernel we use, and then feed the results back out through some sort of virtio into the fuse mount on the host kernel. So there's real technical work there. I think it'd be really, really cool if it was done, just so that we don't have to have that discussion about whether it's realistic for people to build an LTS kernel from two years ago if that's the only way they can get a particular file system. Because the reality is I can't build — I think it's 4.15 or 4.14 — there are older kernels that are not buildable with GCC anymore, so yeah, that's a concern. Whether or not that should stop us from removing a file system — and I agree that ReiserFS is probably a little bit different from NTFS3 in what the decision matrix should be — but if you think you can actually do the fuse thing, I think that'd be really cool, just from a technology perspective, and it short-circuits that entire argument. Yeah, so that's the... Just for a second: you've mentioned UML being not entirely bit-rotted on x86 — just to check, it never existed outside of x86. The last time I heard this argument come up was several years ago, when we were here in Vancouver for the Plumbers Conference, and we were talking about accessing files on... file systems we'd find on USB drives we picked up off the street, and the solution offered then was: why not start up a VM and export the unknown file system via a network file system like SMB or NFS? You don't have to deal with fuse or write any new code; you just mount the file system in the VM and export it.
Unfortunately, a USB drive picked up on the street might do rather unpleasant things to you simply when you stick it in, without any... So the other thing that I'm getting at here is that we use the specter of the unknown user to avoid making this decision all of the time. And I'm quite tired of using arbitrary... arbitrary fears with no data backing them. So I think ReiserFS, again, is a clear exception, because there are real users — there are enterprise users that are still on ReiserFS. Great, we don't delete that one. But we can easily go look at the SLES or openSUSE or Fedora configs and see what file systems are actually turned on, and delete everything that's not turned on, because chances are there are no users. Of course, if a user complains about something, then we need to have a conversation, but we constantly use the specter of some unknown user, and we pretend like there's no other option. Okay, we can't build 4.14, but we can build 6.2. So we delete something right now, and we find out that there's a user. Then we can decide: okay, do we care about that user? Because they have 6.2 and literally many other options. And I do agree with you, Ted, that this is cool technology, being able to export things and solve this problem in general. But at the same time, I am so tired of "well, there might be a user" — yeah, okay, there are lots of other options. I mean, this is a slightly different scenario, of course, but we do it with architectures, right? At regular intervals we say, let's remove this architecture. And then users pop up and tell us, no, you can't. But that's how we got rid of a bunch of architectures over the years. And it's kind of similar, right? At some point the CPU stops being produced, but there is still some university with two machines in their basement that they whip out for a Christmas party. Or so they can use an old kernel.
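The "go look at the distro configs" step is easy to mechanize. Here is a minimal sketch, assuming a plain kernel `.config` as input; the helper name and the sample text are invented for illustration, not taken from any real tool:

```python
# Sketch: find which file systems a distro kernel config actually enables.
# CONFIG_*_FS options set to y or m are built; everything else is a
# candidate for the "chances are there are no users" list.
import re

def enabled_filesystems(config_text):
    """Return the set of CONFIG_*_FS options set to y or m."""
    pat = re.compile(r"^(CONFIG_\w+_FS)=(y|m)$", re.M)
    return {name for name, _ in pat.findall(config_text)}

# A tiny made-up config fragment; on a real system you would read
# something like /boot/config-$(uname -r) instead.
sample = """\
CONFIG_EXT4_FS=y
CONFIG_BTRFS_FS=m
# CONFIG_MINIX_FS is not set
CONFIG_XFS_FS=m
"""

print(sorted(enabled_filesystems(sample)))
# -> ['CONFIG_BTRFS_FS', 'CONFIG_EXT4_FS', 'CONFIG_XFS_FS']
```

Running this over the configs shipped by SLES, openSUSE, Fedora, and so on, and diffing the union of those sets against what lives in `fs/`, would give exactly the data-backed candidate list the discussion is asking for.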
There is one slight difference you need to consider, which is whether the file system is usable as a root device. If it's not — like NTFS — I don't think anybody will care; you just want access to your data. But the problem with root-device file systems is that you can't upgrade once the file system is gone. So you have to re-image your system, which is considered a really nasty operation by some people. I mean, yes, but that's again, I think, the argument of the invisible user, the hidden user: who's going to run a ReiserFS root file system and then upgrade to a btrfs root file system? Yeah, so actually, I was deprecating the ReiserFS stuff about a year ago. And I got a couple of users complaining, hey, I'm still using it. But basically all two of them were kind of fine with the statement: okay, but we will remove it in two years — and do you know that you are running a code base with so many security bugs that you probably don't want to think about it? And they said, okay, yeah, I thought that since it is in the kernel, it must still be fine and maintained. So, okay, thanks for the notice, I am going to switch to something else. So all these users, when they are given sufficiently advance notice, are willing to transition their machines to something else. So I believe the path forward is really to make loud noise — this code base is really bad, just use something else — and give them better suggestions that they can use. So with ReiserFS, I'm still kind of confident that we can follow the deprecation notice and remove it in about one year — or however long we set it, but in about one year I believe we should be able to remove it. I think ultimately, though, it would be useful for us to get a clear statement from Linus. So I think that probably is the next step, whether we do it via email or we say this is something we want to queue up for the maintainer summit. Because I was against NTFS3 going in, but it wasn't up to me.
And it's like, okay, it's in, I'll step back, have some popcorn, and watch. But I have absolutely no objections if NTFS3 were ejected tomorrow. If having this cool technology would make it easier for Linus to agree to remove it, I'm all for it, because I never wanted NTFS3 in the kernel in the first place. But again, it's not up to me, right? Ultimately, I think we need to escalate this to Linus. And if we're willing to have a sense of the file system developers in the room that we should be more aggressive about removing deprecated file systems, like we remove deprecated architectures, I'll certainly sign on the dotted line for that. Yeah, I think this is what I'm — go ahead, Darrick. So I was talking with Kent about an hour ago about this. And I was thinking, if there was some way for me to pass iomap mappings through fuse or something like that, why can't we just get rid of XFS from the kernel? And my other stupid end-of-the-day thought was: could we have an fs/staging? Yeah, let's put things in there and kick them out in six months if they're not making progress and are generating too many syzbot reports. It has been tried. It was in drivers/staging, and it was Lustre. And it stuck around — I'll skip the adjectives I would like to apply to it, but put it that way: it's something that sticks to the boot. It stuck there for quite a while, and it was hard to get it out of the kernel. Right, and I think this is the thing: as a community, we've all been kind of willing to accept that, okay, whatever, it's not that big of a deal. But we're finding out that it is a big deal. And I think that if, as a community — file system maintainers, VFS maintainers — we go to Linus and say, listen, we would like to be a lot more aggressive about removing stuff, I think Linus would be like, okay, that's reasonable.
And I think that we just need to be unified in this idea: okay, there is a real cost to carrying this crap around; let's be a lot more aggressive about evicting it when it's clearly getting in our way. Because it makes us look bad. Yeah, it makes us look bad. First, on staging: if you have a file system staging area, you will have staging used as an entry path for file systems as well, and you've lost control of what file systems go in. And second, to Ted's point: I really don't think you want to persuade Linus to make a statement. I think you want to take a statement to Linus and tell him, this is it. That's exactly what I'm saying: I don't think Linus cares too — I mean, he does. But I think, in this particular case, he would defer to all of us. And if all of us come to him and say we want to get rid of stuff as quickly as possible, and be a lot more aggressive about when we bring things in, and there's a clear failure of maintainership — NTFS3 — or there are clearly no users, or if there are users we simply don't care, because it's too old, it's unmaintained, and it's causing us problems: let's get rid of it. And I think that Linus would be okay with that. Eric's okay with it. I mean, it's also with the understanding that you send the pull request to remove that code and you might get reverted — but so what? It happens all the time. I think it would make sense. Chuck just asked for a short list. I think JFS, ROMFS, RAMFS maybe, HFS. RAMFS, you're kidding me. Not RAMFS — the compressed one, CRAMFS, sorry. What else — God, now I'm blanking. You know, there are some 65 of them in the config. Yeah, there's a bunch. So NTFS — I mean, I can sit down and look at the list, but we know what we use regularly, right? Btrfs, ext4, XFS, NFS, CIFS, zonefs, F2FS. And you're looking for something. No, I just saw — yeah, it's maintained. There are strange folks who like AFS.
AFS — we actually did get rid of VxFS. The Veritas one, yeah, the Veritas one, sweet. So I feel like we probably have, what, 10 or 12 that are actively maintained and actively used, and the rest we can get rid of. How much maintenance do you need for MinixFS, for example? How much maintenance does it take to keep, say, MinixFS alive? I have no idea. I think the price is usually paid by... It's not a great burden, from what I can tell. Until the syzbot report comes along and you really have to — yeah. For you it might be different, I think, because you're so experienced that for you it might be obvious how to maintain them. But the thing is, if we have syzbot reports for these file systems coming in, who is actually fixing those reports and bugs, apart from maybe you, and me if I happen to know what's going on? They just accumulate if you look into the syzbot instance. I don't know — in my experience, syzbot reports often are not random garbage. So I think the other thing is, we're at a point where we're making major API changes, and propagating them across all the different file systems is becoming a huge pain in the ass — the new mount API, iomap. There are big changes that we are halfway through and likely won't have somebody to do for all these other file systems. So we're not only carrying around these dead file systems, we're also carrying around all of this dead code and all these dead interfaces that new developers are going to come upon and be like, I don't know which of these I'm supposed to use. So there's real value in cleaning these up, because we can then also clean up all the dead interfaces we don't need anymore. I was going to refer back to something earlier — you know, maybe syzbot is the answer.
If the syzbot reports are piling up for a file system and there's no particular motion on many of them, maybe that's how we know it's time to get rid of that particular file system. I was wondering: how many of the buffer-head file systems can we get rid of, just to simplify buffer heads? If the answer is all of them, we can just get rid of buffer heads. I suspect we can't get rid of ISO 9660, maybe. But if that's the only one left... James says it's an archiving problem, and that the archivists get annoyed, because they store things on DVDs that they were promised the data would rest on for years. You get to it 50 years later and they can't find anybody who can read it. So if ISO 9660 still works and it's not a burden, just leave it, because somebody will complain by the time all of us are dead and there'll be no one to fix it. Sounds like the laser-disc problem they had. Yeah, but okay. I'll also note, though, that for a number of these file systems — in particular ISO 9660, that's a classic — why isn't there a fuse file system for the damn thing, right? Because we don't care about performance for ISO 9660. I agree. So Ted will write the fuse file system and you can remove it. Well, yeah. I mean, does it even have to be a file system? ISO 9660 is an archive format. Why is it not simply handled like a tar format? If you use a tar-like tool to extract an ISO 9660 image onto your current file system, isn't that perfect? Yeah, so there is a fuse ISO 9660 implementation already. I don't know how well it works, but it exists. Does it work for root? Existing doesn't mean it works. So yeah. It exists — good enough. Does it work as a root file system? Yeah. Fuse makes that hard, though. Yeah. The other thing maybe to consider here is that it's not just syzbot reports. There are a number of these file systems that actually still have xfstests support, and every once in a while I'll run xfstests on some of these ancient file systems just for grins.
And it's like, gee, that's nice — there are over 150 different failures. I ran it on ReiserFS as part of my — because I had this insane idea of running xfstests against every file system that is runnable via xfstests; I bugged Amir about this on WhatsApp a lot. I ran it and it was just a complete mess: kernel crashes, everything failed, and from run to run you'd get different failures. It's really not great. Yeah, the last time I tried running UDF through xfstests, the kernel crashed. So yeah, I know. Look, the thing is, if you're doing a VFS change, or folio changes, for example, and you're making a change to a file system that you cannot test, then it's kind of wrong. It's kind of better to remove it. It's not very practical, but that's basically the correct answer for development: it's not good to change code that you cannot test. Are things like ISO 9660, these read-only file systems, easier to convert to iomap, because you don't have to worry about all the writeback stuff, all the synchronization stuff? Can you just, instead of using buffer heads, kmalloc a buffer and read into it? Actually, what makes read-only file systems particularly good from an iomap conversion point of view is that you don't have to understand what get_block with the create flag set to true does. You just have to understand what get_block does in terms of getting a block, rather than actually trying to allocate new blocks, which is what I've always found hardest about trying to convert something to iomap. I think the biggest problem is something like ext2 — and thank you, Jan, for volunteering to take that on — where you actually do still want to preserve the read-write nature of it. Yeah, the one thing I was going to say about ext2 is that ext2 being well maintained as a simple file system is one of the best arguments for why we don't need to keep MinixFS, right?
The original reason we would point at MinixFS was: okay, you're writing a simple file system, MinixFS is what you should look at. I think these days, if we want a simple file system that people should look at, we should point them at ext2, and that's one of the reasons I'm really glad that it's going to iomap — it's a well-maintained, simple file system using the modern interfaces, right? Now, MinixFS may have sentimental value because it was the first file system in Linux, blah, blah, blah — that's not a technical argument, and I don't care. We can't even run Minix binaries anymore, I think, because we took a.out out, so that's a sentimental attachment only. But yeah, we can talk to Linus about it; I would be all in favor of taking out Minix. It probably isn't that bad to maintain, but I don't think it has a reason to exist anymore, right? And this is sort of the — there are real folks dual-booting. Second, Al. Al, can you repeat that? There are folks who are dual-booting Minix and Linux, strangely enough, because there had been, for real, patches supporting the new version of the file system, Minix 3, that went into the tree; it doesn't seem to be particularly obnoxious, so somebody cares about that. It had been some years ago, I don't remember when, but strangely enough, it's not embedded. Yeah, unfortunately, there is actually an active Minix 3 community. I think it's pretty small, and I didn't even realize that they were actually interested in doing Linux file exchange, but whatever. I mean, we don't need to get hung up about this specific file system; if we find out that we need to keep it, then that's perfectly fine. This really isn't a story about ripping out the file systems that a lot of people use.
It's a story about trying to get rid of legacy code, and if anybody complains and has valid complaints, then we'll happily revert it or leave it in. That's at least how I would see it. Can we actually reduce some of the read-write file systems for really old legacy formats to read-only and simplify them a lot? I mean, if that's an intermediate option — I'd prefer to just remove them, obviously, but see, this is the problem, right? There are still things like the mount API changes; iomap, okay, is easier in this case, but what's the next API change that suddenly is a pain? And additionally, if we have no way to test it, and we're all running around updating APIs, and we simply break the file system because we can't test it — that's not great either. I'll note that for architectures, that has actually been a feature: we accidentally break an architecture, and then we point out that we broke it three years ago and no one stepped forward to complain, and that makes it a whole lot easier to remove the architecture. And also, do we remove things that we don't even have an mkfs for? Oh, I wouldn't — that, I feel... For example, mkfs for UFS is available, just not packaged. Not packaged, of course — the distro defaults don't carry it. It doesn't work for things like ISO file systems, because they have cdrecord, which does exactly that — namely, creating an ISO — without being an mkfs. Yeah, I think that's sort of a special case. I have no idea whether there is anyone using the QNX6 file system for interchange with QNX, or BFS, right? I'm pretty sure ADFS probably doesn't have a whole lot of users, because OSF/1 is kind of dead, dead, dead. But, you know.
While you're at it, look at the partition schemes, please — if you kill the file systems, you might as well kill the partition schemes. Yes. Okay, I think we're all in agreement here, and we're just going around in circles, so let's wrap this one up. Does anybody else have any other lightning talks? I know, Chuck, you're still waiting. Are you good with who's in the room now? Yeah, okay, cool. You're up. This talk is for Christian and Al and Andrew Morton. Where did Christian go? Nope — you moved. The issue is tmpfs stable directory cookies. People have heard me preach about this before on the mailing list. The issue is that when you delete or create a file in a tmpfs directory, the offsets change. There's a sort of simple cursor scheme that allows applications to find their way around with getdents, at least when they're accessing tmpfs directories locally; but over NFS that doesn't work, because every read of the directory on the NFS server opens it and then closes it, which throws away the cursor. My proposed solution was to replace the cursor mechanism with an xarray-based mechanism for finding dentries based on their offset: every file gets an offset that stays the same for as long as the directory lives, and for tmpfs that's as long as the system is booted. There's been some grumbling about doing this — it looks maybe more complex than people have the stomach for — so I'm looking for feedback about what might be an alternate approach that is simpler and performs well. Another suggestion was that, instead of changing tmpfs, we should actually plumb all of this into libfs, which is why I'm kind of looking at Christian. We can put it into libfs, but... go ahead. We can put it into libfs, but there is no real difference, because tmpfs is mostly a user of libfs functions, with some distinctions.
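The stable-offset scheme described above can be illustrated with a toy model. This is only a sketch of the invariant it is meant to provide — offsets handed out at create time and never renumbered by unrelated creates or deletes — not the actual kernel patches; all names below are invented:

```python
# Toy model of an offset-to-entry map with stable directory cookies
# (illustrative only; the real mechanism would use the kernel xarray).
import itertools

class Directory:
    """Each entry gets a unique offset at creation; offsets are never
    reused or shifted, so a reader can resume at any saved offset even
    after unrelated files have been created or deleted."""
    def __init__(self):
        self._next = itertools.count(2)  # 0 and 1 reserved for "." / ".."
        self._entries = {}               # offset -> name

    def create(self, name):
        off = next(self._next)
        self._entries[off] = name
        return off

    def delete(self, name):
        for off, n in list(self._entries.items()):
            if n == name:
                del self._entries[off]

    def read(self, start_offset=0):
        """Like getdents: (offset, name) pairs at or after start_offset."""
        return sorted((o, n) for o, n in self._entries.items()
                      if o >= start_offset)

d = Directory()
oa, ob, oc = d.create("a"), d.create("b"), d.create("c")
d.delete("b")                      # deleting "b" must not renumber "c"
assert d.read(oc) == [(oc, "c")]   # resuming at c's saved offset still works
```

The point of the invariant is that an NFS READDIR can resume from a bare offset with no per-open cursor, because the offset-to-entry mapping survives opens, closes, creates, and deletes.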
It uses a lot of the simple_* calls, but there are certain ones — tmpfs has its own rename, and that could get a little sticky — but tmpfs uses the simple fs readdir and llseek. Yep. Is there another comment? You want something like an xarray in the directory, referencing the children of interest? Yes, that's exactly what I did. Yeah, and it works well. There are some performance regressions I have to look into, but I seem to have addressed all of the functional problems with it — maybe not all of the political problems with it. Sounds fine to me, and it makes sense to put it into libfs, I think, even though I'm not super experienced in this area. But what did the tmpfs maintainers say? Which is probably you, right? Hugh didn't comment on it. Andrew suggested it was a little complicated-looking, and it is file-system-specific code — maybe it's a little outside the wheelhouse of the MM folks, and maybe they would rather see it done in libfs as well, just because then it becomes not their problem. Okay, just a sec. So, okay, I have probably not looked at your patches — would you remind me of the subject line you used? One question is whether they fit. Yeah, I don't remember the exact subject line, but I can send it to you once we're done talking here. Okay. In principle, I don't see any great problems. I don't know if it's worth it, but the same logic as for regular file systems is used, unfortunately, by some synthetic file systems, if I remember correctly. Yes. Tmpfs doesn't have to deal with random changes of directory contents coming from hell knows what source; it's all from user space. So I would need to actually take a look at horrors like — what was it that uses libfs? There are completely insane specialist file systems using that. Figuring out when and how they change the directory is not something I would wish upon anyone. Right, like debugfs.
So, maybe it's better to — I have no problem with that being in libfs, just not replacing the existing variant. One idea there was to actually add a different version of readdir and directory llseek just for the xarray version, and then have the weirder file systems continue to use the simple one that's already in there. How do you get from the inode to the xarray? The xarray is file-system-specific. Yes, it is. So how would those library helpers deal with that? I'm not sure. What we have there right now can be used directly as instances of the getdents operations — okay, iterate and llseek. If it becomes a helper function that should be called by that file system's instance of llseek for directories, given some kind of callback or something like that, that's a bit of additional headache. It's really — the devil is in the details. I would need to take a good look at your patch set and see how widely it can be used without excessive headache. Yeah, fair enough. My patch set is currently shmem-specific; I haven't actually done anything with libfs itself, but we could think about that offline. Thank you. I obviously have no problem with that. All right, great. Does anyone have any other lightning topics they'd like to discuss? Excellent. Oh, really. I tried to sneak this into the MM track, but Michael gave the slot to Andrew instead. I want to reduce the amount of support we have for highmem in the kernel. When highmem was originally introduced, we were talking about x86 servers with eight gigs, or theoretically up to 64 gigs of memory, though I don't think that ever worked. So we're talking about 32-bit x86 systems. These days, I don't think anybody sells a 32-bit x86 system anymore, and the 32-bit x86 kernel is almost unmaintained. So really, the only 32-bit systems we actually care about, I think, are ARM-based, and even then it's really hard to buy a 32-bit ARM system with an amount of memory that causes you to actually need highmem. Intel does not still sell Quark.
Quark was an experiment, and it failed. I have one; I've never used it. Just for the record, Fedora just recently disabled 32-bit ARM altogether, in Fedora 38. Yeah, I mean, for a while you've been able to use a 32-bit user space with a 64-bit kernel, and I'm not proposing doing anything about that. I'm really talking about systems that can only run a 32-bit kernel. I don't propose entirely killing highmem, because if you have more than, I think, about 800 megabytes of memory, you do still need it, and you can buy an ARM router or something with a gigabyte of RAM, and it would be nice to still use the page cache and still use anonymous memory. So I'm talking about keeping those, but I do want to start removing support for things like page tables in highmem, because that actually gives us a bit of software simplification. I want to stop keeping directories in highmem, and other file system metadata — just get that crap out of highmem, because it's just complicating things and it's not helping any users. James, you had a little thing. One curiosity question: a lot of the highmem architectures also support a 4G/4G split; it's just supposed to be slower. I think all of them do, in which case, can we not just disable highmem as a performance optimization and keep the 4G/4G split? I don't think ARM supports a 4G/4G split. I don't know. I mean, as I recall, the 4G/4G split on x86 was incredibly slow — not just a little bit slow; benchmarks ran half as fast. It was really, really bad. Well, yes, but people do still buy routers with a gig of RAM, right? And less than a gig of RAM. So, am I hearing any objections to starting this work? Now, for btrfs, we went through and ripped out the highmem support for our metadata.
So I don't think you're going to find too much argument here, because for us it drastically simplified things. And if anybody else still does it — I don't think you're going to find a lot of resistance, right, because it sucks to do all the kmap stuff. Yeah, yeah. I mean, specifically for folios, because you can't kmap an entire folio, and it just makes everything more complicated. And it's like, why are we straining so hard at something that matters to so, so few people? Yeah, I mean, we did it because it was just silly, right? So I don't think you're going to find a lot of people who will complain loudly about it. I'm all in favor of it. I'm seeing a lot of nodding. Okay, thank you. All right, any last-minute lightning talks? Perfect. All right, this one ran a lot longer than I expected. We're going to go out here and take a group photo, and then we have dinner at 6:30 at — I forgot the name of the restaurant. Steamworks, thank you. It's right down the road over here. It's on the back of your badge too, and there's a Google Maps link in the schedule. I cannot stress enough: 6:30. So find something else to do for a while; go drop your bags off. It's really close, so you can walk to it. But first, out here for the photo. Thanks, everybody.