These are trivial for me to report. They're trivial for NTFS to report. And some file systems obviously support these, but have private interfaces for them. But the ones that seem most relevant, even though there may be others, are these: is this an important file that needs extra integrity checking? I know there have been talks about btrfs patches in the past where you'd mark a file, rather than a whole file system, as needing enhanced checksums. And then there's the reverse: these are temp files, we don't need any integrity checking. These are rarely used in the Windows case, but they're trivial for me to report. And if other file systems have a concept of integrity checking, maybe there are particular files you want to mark. Is that a useful flag? It's like five lines of code for me to add, but is it something somebody else would find useful?

The other example that jumps out at me in this cloud world is offline. I think other operating systems all have a concept, and I think most network protocols have a concept, that you could have a file locally on your laptop, but it's really offline and saved in AWS, or saved in Azure, or saved in Google Cloud. So don't read that file, right? If you want to read the file, it may take you five minutes to download it. So there's an offline flag that's a hint. I don't know if you guys use Macs, but on a Mac, they will read the beginning of a file to bring up its icon. That's a really bad idea if a file has an offline attribute, because it could take a long, long time. So it's trivial for me to report, it's like five lines of code, but is there any other file system that might have a concept of an offline file? Now in Windows, they have four additional flags relating to offline that you can use to explain how offline it is and where it's offline, but I think just that flag alone would have value. So I wanted to toss out the idea: are there any file systems that support offline files? I mean, NFS and SMB, right? But from our perspective, it's like two lines of code to take an attribute that comes off the network protocol and stick it in an attribute flag. So it's trivial for us, but are there any local file systems? NTFS could do this trivially.

So anyway: offline, and there's more detail about offline flags you can add if you want. And second, the integrity checking: marking a file as wanting whatever the file system can do on integrity checking, or the reverse, this is just a temp scratch file, I don't care, don't bother. So those were examples, and there may be other flags you can think of, but I've been thinking it's been a few years since we've extended the statx flags. Let's extend some more. If you have ideas, let's talk, let's convince Dave to work it.

Any other lightning talk topics that anybody wanted to throw out here? No? Perfect, I've got one. I'm going to keep myself to five minutes because this could go all day, so let me reset the timer. The roles of maintainers. We're going to have a longer talk about this, about maintainer burnout and how we support our maintainers and that sort of thing. Yeah, this week. It's tomorrow, I want to say. It was Darrick Wong's slot; he is taking a mental health week, so it is apropos that he was going to talk about this. I will talk about it in his stead and hopefully do it justice for him. But I kind of wanted to talk about how and what maintainers should be doing, right?
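Going back to the statx flags above for a moment: here is a minimal userspace sketch of checking stx_attributes, assuming glibc 2.28+ for the statx() wrapper. The existing attribute bits shown are real; the offline and integrity bits are placeholder values for the proposal being floated here, not upstream flags.

```python
import ctypes
import os
import struct

# Constants from <linux/stat.h> / <fcntl.h>
AT_FDCWD = -100
AT_STATX_SYNC_AS_STAT = 0x0000
STATX_BASIC_STATS = 0x000007FF

# Real, existing stx_attributes bits:
STATX_ATTR_COMPRESSED = 0x00000004
STATX_ATTR_ENCRYPTED = 0x00000800
STATX_ATTR_VERITY = 0x00100000
# Hypothetical bits for the flags discussed above -- the values are
# placeholders, nothing upstream defines these:
STATX_ATTR_OFFLINE = 1 << 40           # "don't casually read me" hint
STATX_ATTR_INTEGRITY_WANTED = 1 << 41  # "give me extra checksumming"

libc = ctypes.CDLL(None, use_errno=True)

def stx_attributes(path: str) -> int:
    """Return the stx_attributes word for path via statx(2)."""
    buf = ctypes.create_string_buffer(256)  # struct statx is 256 bytes
    ret = libc.statx(AT_FDCWD, path.encode(), AT_STATX_SYNC_AS_STAT,
                     STATX_BASIC_STATS, buf)
    if ret != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
    # stx_attributes is a __u64 at offset 8, after stx_mask and stx_blksize
    return struct.unpack_from("Q", buf.raw, 8)[0]

attrs = stx_attributes("/some/file")  # path is just an example
if attrs & STATX_ATTR_OFFLINE:
    print("file is offline; reading it may stall for a long time")
```

The point of the proposal is exactly this cheapness: a caller like a file manager checks one bit before deciding whether to read file contents for a preview.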
So historically, maintainers are the catch-all. They take the patches, they do the review, they do the testing, they do a bunch of different things. And this is, I think, why we tend to burn them out so quickly: they do everything. And then you have the people who are writing code and submitting it, and they get frustrated because their code doesn't go anywhere, because we've got overworked maintainers doing a bunch of things. A lot of different subsystems have addressed this in different ways, rotating maintainers through and that sort of thing. I want to step back and rethink how we maintain things in general, and hold our maintainers accountable to this. Not just to punish them when they're not doing their job, but to help make sure that they are properly supported.

I think the VFS is a really good example of this. It is just standard operating procedure for file system developers to send VFS patches out, and if Viro doesn't say anything, we send them through our tree. And this ambiguity of: did Viro see it? Did he not? Did he see it and just not care, and therefore it's the green light to go? That kind of ambiguity makes it hard to do work on the VFS layer. And this is an important subsystem, right?

And then on the other side of it, you have: how do you deal with maintainers that do everything, and how can we support them? In btrfs land, and I know BPF does this too, there are a lot of patches and there's a lot of review. It's not up to the maintainer to do that patch review, it's up to the developers. We have it codified in our practices that we rotate the patch review responsibility. I know BPF does it because I copied it from them. So the Meta guys all go through it: one week I do the patch review, the next week Omar does it, the next week the next guy, and so on and so forth. And that way David Sterba, the official btrfs maintainer, can just sit there and be like, okay, this was reviewed, I can merge it.

And then on top of that is testing. I know that it is really, really frustrating to try to merge stuff when you test it locally, it merges into the tree, and then the maintainer tests it in a different way, or you didn't cover all the standard ways to test it, and your patch gets kicked out. One of the things that we're trying to do with btrfs is continuous integration testing, so that as soon as your patch is merged, that night it gets the full suite of tests run against it. That way you can test locally and make sure it doesn't blow up, but you don't have to worry about all of the random different things that we test. As soon as it gets merged it gets tested, and if it fails, we kick it out. And it's not a big deal, nobody gets mad, it's continuous integration testing, and it makes the maintainer's job a lot simpler, because he doesn't have to look at something and be like, wow, this is really hard, I hope we don't break anything, and delay merging it. No, he can be like, okay, this looks cool, we can merge it, it gets tested, and if it fails, great, we kick it out, report it to the patch submitter, they can change things, and it closes the loop a lot faster.
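A rough sketch of the shape of that nightly loop, not btrfs's actual CI: the runner script, results format, and mail setup here are all placeholder assumptions.

```python
import json
import smtplib
import subprocess
from email.message import EmailMessage

# Hypothetical runner and results layout -- how the suite is invoked and
# where results land are assumptions, not btrfs's real scripts.
TEST_CMD = ["./run_fstests.sh", "--group", "auto"]
RESULTS = "results.json"

def run_suite() -> dict:
    """Run the full test suite against tonight's merge, parse results."""
    subprocess.run(TEST_CMD, check=False)
    with open(RESULTS) as f:
        return json.load(f)  # e.g. {"generic/013": "pass", ...}

def report_failure(patch: dict, failures: list[str]) -> None:
    """Close the loop: tell the submitter, not the maintainer."""
    msg = EmailMessage()
    msg["To"] = patch["author"]
    msg["Subject"] = f"CI: {patch['subject']} kicked out of for-next"
    msg.set_content("These tests regressed:\n" + "\n".join(failures))
    with smtplib.SMTP("localhost") as s:
        s.send_message(msg)

def nightly(patches_merged_today: list[dict]) -> None:
    results = run_suite()
    failures = [t for t, r in results.items() if r == "fail"]
    if failures:
        # Kick out today's batch and report; nobody gets mad, it's CI.
        for patch in patches_merged_today:
            report_failure(patch, failures)
```

The design point is that the maintainer never has to hold a patch out of fear: merging is cheap because unmerging plus an automatic report back to the submitter is also cheap.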
And I think this is a really good way to offload the responsibility of the maintainer, because in the end, what I want from the maintainer is to make the decision when there's a conflict, when there are competing ideas from multiple developers in the same community. That's where I want the maintainer to be spending their energy, because the maintainer has the most context about what's going on, since they're the ones merging the code. I would love to see us move away from the maintainer that just sits there and merges stuff and doesn't get involved, toward one who is more actively involved in not only setting the direction but helping developers come to a common understanding. And I'm done, so yeah, lots of stuff, but I'm out of time. Does anybody else have anything else they want to talk about, or any comments?

So as a maintainer, many of these frustrations every maintainer has, but the number one thing that I think about is this: when Ted gave that example of how to automate xfstests for ext4, every single developer in the room had some benefit they could draw from it. From my perspective, what I keep thinking is: I can mirror trees to GitHub or GitLab or whatever, and there's got to be somebody smarter who's figured out how to do actions inside GitHub or GitLab to run sparse, to run checkpatch, to run these things. Why is it that I've not found a single tree that automated the kinds of boring things that waste maintainers' time? So I'm hoping somebody's done it. So sorry.

This is the direction that I'm taking btrfs. Right now I built a lot of that scaffolding myself; all the continuous integration stuff just exists in scripts. I've been working with Luis on kdevops to move my crappy scripts over to do this thing. The next step is to tie that into GitHub actions, so not only will we have the continuous integration testing of our main branch, but developers can just push branches and get the same testing that they'll get when it gets merged, and they can know before they even send us patches that, oh yeah, my stuff works fine, I didn't break anything.

This reminds me of Samba. I don't do a lot for Samba anymore, but in Samba, whenever a developer would do a check-in, it would automatically kick off a regression test and do all this kind of stuff, and it wouldn't even get to the point of asking for a merge into the official branch; the temporary merge would cause all this to happen. It saved so much time for Samba developers to have all of this offloaded into scripts.

Yeah, this is kind of... We actually have all these actions implemented for BPF, and we're running them.

Yeah, BPF, again, I steal a lot of you guys' ideas because they're fucking great. And BPF is a little bit easier to test than xfstests, right? But I am saying this out loud in front of a bunch of maintainers so we can all be thinking about this, because I never really thought about it. I know how to run xfstests locally, but it wasn't until I had to onboard new guys that it's like, hey, go run xfstests, and three days later they're like, yeah, I still can't get this thing to work. That's three days of wasted time, right? If I could say, hey, run make test in kdevops, and I don't have to think about it again, oh God, that would save me so much.
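For the sparse/checkpatch question above, a minimal sketch of the kind of script a GitHub or GitLab job could run on every pushed branch. checkpatch.pl's --git mode and the C=2 sparse hook are real kernel-tree facilities; the base branch and subsystem path are assumptions about the mirror setup.

```python
import subprocess
import sys

# Assumptions about the mirror layout: where upstream ends and the
# developer's branch begins, and which directory to narrow sparse to.
BASE = "origin/master"
SUBSYSTEM = "fs/btrfs/"

def run(cmd: list[str]) -> int:
    print("+", " ".join(cmd), flush=True)
    return subprocess.run(cmd).returncode

def main() -> int:
    rc = 0
    # Style-check every commit the branch adds on top of upstream;
    # checkpatch.pl accepts a git revision range via --git.
    rc |= run(["./scripts/checkpatch.pl", "--git", f"{BASE}..HEAD"])
    # Rebuild the touched subsystem with sparse: C=2 makes the kernel
    # build run sparse over all source files, not just recompiled ones.
    rc |= run(["make", "-j8", "C=2", SUBSYSTEM])
    return rc

if __name__ == "__main__":
    sys.exit(main())
```

A CI job that runs this on push, before any human looks at the series, is exactly the "boring things" automation being asked for: the submitter gets the nitpicks from a robot instead of from a tired maintainer.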
Or figure out how to set up a baseline, and then figure out which tests are expected to fail given the current state of the tree, and so on. This is really a big problem, especially if you have to run xfstests for a file system with, I don't know, different mount options, for example.

Right, so this is one of the things that I did, this is why I did it: I needed to know the baseline. With the continuous integration I have the baseline, and then I went and figured out all the tests that were failing that weren't supposed to be failing, and fixed xfstests. And we have this built-in mentality of, oh yeah, these things fail, and that is hard to describe to new people. It's like, what, you expect things to fail? Then why are you even testing it? Well, that's just the way we've always done it. The way we've always done it sucks. So I would really love to, yeah, thanks, Willy, I want to change this attitude and get to the place where our testing and all the things we do are a lot easier, because that makes it easier for us to bring on new developers, but it also makes it so much easier on our maintainers. The maintainers don't have to fret and worry and stress out about whether or not this thing broke, because we have all of these automated things in place, reviewing and testing and all this stuff, and all they have to worry about is: okay, this has been here for a while, it works, it was reviewed. And then they have to manage conflicts or manage cross areas.

I mean, there is a file system specific aspect to this, but there is also a VFS aspect. Even if you invest a lot of time in reviewing patches and trying to help people along, we often don't have tests for various things that are security-relevant, and so it's sometimes really unclear what the intended semantics are. And if you start to write tests, then suddenly you realize.

Yeah, so btrfs has had a long-standing requirement that if you're adding any sort of feature, you write an xfstest, because as the reviewer, I'm not a machine, I can't reason out everything that's going to happen. I need to have a test so that when I review it, or as a maintainer when I merge it, I know it's working, and when other people change things, I know it didn't break. These are the kinds of things you say out loud and you're like, oh yeah, duh. But we don't do this generally, and I think the VFS is a really good example: if we're adding new interfaces or whatever, they should be thoroughly tested in an xfstest before they're merged.

I think, to give credit where credit is due, that actually is one of the things that folks like Christoph and Dave Chinner are pretty good at: for anything which adds a core VFS feature, they're among the first to say, where's the xfstest? Yeah.

And the other thing I'll say is, a lot of the work that I did with kvm-xfstests and gce-xfstests was so I could scale, right? I've been onboarding some new junior engineers, and it's so much easier when you have a test suite which is easy to run with a pre-built test appliance. I think folks who haven't tried to start using xfstests from scratch recently have forgotten how hard it is to set up, right?
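A minimal sketch of the baseline idea just described, assuming hypothetical JSON result files (one per run, mapping test name to outcome): only failures that are not already failing on the baseline count as regressions.

```python
import json
import sys

# File format is an assumption: one result set per JSON file,
# mapping test name -> "pass" / "fail" / "skip".
def load(path: str) -> dict:
    with open(path) as f:
        return json.load(f)

def regressions(baseline_path: str, current_path: str) -> list[str]:
    """Tests failing now that were not already failing on the baseline."""
    baseline = {t for t, r in load(baseline_path).items() if r == "fail"}
    current = {t for t, r in load(current_path).items() if r == "fail"}
    return sorted(current - baseline)

if __name__ == "__main__":
    new = regressions(sys.argv[1], sys.argv[2])
    for t in new:
        print(f"REGRESSION: {t}")
    sys.exit(1 if new else 0)
```

The same diff run the other way also tells you when a baseline failure has quietly started passing, which is the signal to tighten the baseline back down toward zero.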
And I think, so Luis and I both have our own test runners, and the more we can encourage people to use an existing test runner if you don't want to write your own, it really, really helps.

Yeah, absolutely. I'm not saying we're all doing a terrible job. We are all, in pockets, doing great stuff. I used your kvm-xfstests really early on, like years and years ago, right? And these are all really good ideas, but what I see is that pockets of us are doing good things, and some of us are doing different things, and I would love it if we could all, as a community, talk about these things and do better. Because this helps everybody: it helps new people get on board and get going faster, it helps the developers merge stuff quicker because we can have higher confidence that we're not breaking anything, and it helps the maintainers focus on the important things, keeping track of the big picture, what new features are coming in, and helping to moderate when we have developer disagreements. And helping to pick a path forward. A lot of times I see things where we have a really big blow-up, and you have two strong developers arguing on either side, and there's no progress towards a real solution. This is what I want the maintainer role to do: to come in and say, okay, this is what we're going to do, you're going to do this and you're going to do this, and this is the way forward, and it'll work.

A third person stepping in and essentially saying, calm down. Yeah, exactly. This is not happening at all, currently.

So one thing I wanted to emphasize: you said that in the btrfs community, and obviously Christoph and others are good about this too, when you do a feature, or certain types of fixes, you're expected to add a test case. Look at my example as a maintainer: 300 patches, 300 change sets, and maybe 10 test cases were added. Maybe a little bit more, but about 10 per year. That's not a really good ratio, one out of 30. By contrast, take Samba, right? They're pretty strict. I mean, obviously there are some things for which there really isn't a good test case, but in many cases you can, and they were very, very good about it: if you added some feature, they would encourage, almost require, a test case. And I think it varies a lot by community, but if the expectation is not just that the btrfs team does it, but the ext4 team, the XFS team, the NFS team, the SMB team, this is going to change everything for a maintainer, in a good way. Because we do a terrible job of putting in regression tests compared to some other projects, and I think we could do a lot more.

Yeah, I don't want to bag on people, because I think in general the file system community does a really good job with test coverage. It's more about actually running the stupid things consistently, right? Running them and making sure that everything is working. I know that historically maintainers do it, like I know Ted does it all the time. And what I want to do is move towards a world where it's easier to run the tests, it's done in a consistent way, and it's done in a more automated way. So Ted doesn't have to sit there and do it, it just happens. And then maybe it's not just Ted who can do it; you can just push somewhere, and a developer gets results before they even go to Ted or me or Chinner or whatever.
Yeah, I think something else is probably worth talking about, and again, maybe this is just spilling over into tomorrow's xfstests talk: sometimes I need to run, for example, the NFS tests, because I'm actually trying to debug an NFS problem. And I don't necessarily know which tests are expected to pass or fail for NFS, because NFS is special and xfstests is generic. Somewhere there's a wiki page that I found which said these are the tests that are expected to pass and these are the ones that aren't, and I hope it's up to date. I used it, and it mostly seemed to work, right? But that's actually one of the areas where I suspect we could collaborate: how do we communicate amongst ourselves? If somebody other than me wants to run the ext4 tests, how do they know which tests to exclude because the tests are buggy? It's a lot faster for me to exclude a test than to fix the test to understand ext4 cluster allocation, right, which the xfstests don't do. So I just exclude some tests. But unless you use my test runner, you won't know that.

That's what I meant by the baseline. It's the thing. I spent one and a half weeks, about two weeks, talking to him because he was knowledgeable in the file system that I was touching, constantly asking: is this test expected to fail? Is this test expected to fail? And because it's a file system that sits on top of other file systems, I had to test four underlying file systems. It drove me crazy.

Yeah, and I literally spent two weeks doing this. Yeah, and I spent a month. It fucking drove me nuts. And I had all this data, I have a year of continuous integration testing, so I was like, okay, I know exactly what fails all the time. And I went down and tracked down every single one of them, and I fixed them or I disabled them.

So with kdevops, what Luis is doing is basically maintaining an expunge list as well. Anything which fails, and which he decides should be excluded, goes into the expunge list. And that is how we have a baseline against which we can check.

Yeah, so, I think that's a good solution. That's a great solution, I'm not saying it's a bad solution, but I spent the time to get us down to zero failures. btrfs should not fail at all, ever. If it does, then we have a problem. That's a lot easier to explain to a new person than: oh yeah, go check this exclude list, we expected these ones to fail. Why the fuck are they in there still?

But I mean, there are a lot of flaky tests, at least on XFS. I've been spending a lot of time on the xfstests failures, and there's, you know, if the phase of the moon is right, then this test is going to fail. Yeah. generic/270.

So the flaky tests, we just take them out of the auto group. What I should say is: I want the auto group to always pass. If you run ./check -g auto, which is the normal set of tests, those tests should always pass. I think there's a place for the flaky ones, they're like stress tests or whatever, to make sure the thing doesn't explode, right? I think that's a great thing. But in general, when I'm testing to make sure I didn't regress anything, nothing should fail.
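A minimal sketch of how the flaky-versus-real distinction could be automated: rerun each failing test several times, and only deterministic failures stay in the bug pile. The results handling is an assumption; xfstests' ./check accepting a single test name and exiting non-zero on failure is real behavior.

```python
import subprocess

def run_test(test: str) -> bool:
    """True if the test passes; ./check exits non-zero on failure."""
    return subprocess.run(["./check", test],
                          capture_output=True).returncode == 0

def classify(test: str, runs: int = 10) -> str:
    """Rerun a failing test: always-fail is a bug, sometimes-fail is flaky."""
    passes = sum(run_test(test) for _ in range(runs))
    if passes == runs:
        return "pass"   # couldn't reproduce at all
    if passes == 0:
        return "fail"   # real, deterministic failure: go fix something
    return "flaky"      # candidate for pulling out of the auto group

# e.g. decide what belongs in an expunge list versus what needs a fix:
for t in ["generic/270"]:
    print(t, classify(t))
```

The output maps directly onto the policy being argued for: "fail" means fix the kernel or the test, "flaky" means move it out of the auto group into a stress group, and neither ends up as folklore that new people have to be told about.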
What about, if you are worried about which tests should be run, adding that to the MAINTAINERS file? At least having, for this subsystem, here are the tests that you should run, or at least a link to something, like a wiki, that says here's where you should go and has all of that. And someone mentioned that setting up tests may be difficult. Perhaps you could have some sort of repository of VMs that says: here, download this VM, load your code onto it or boot your kernel with this VM, with some instructions on running QEMU, and go run your tests. At least that way they don't ruin their file system if they did something wrong, and the VM would have the file system to be tested.

Yeah, so this is what I'm moving towards with the kdevops stuff. I literally just have a wiki page that says: install kdevops, run this. Because I have configuration stuff in kdevops to point at the btrfs tree, with the btrfs-progs, with all the configs that you need. And then you can just run it, and it does everything like I do in my setup. That's the ultimate goal, right: to make it as easy as possible for random developers to wander in and test and make sure they didn't break anything.

The other thing someone needs to do with xfstests is document much better how to actually write tests. Because I had to add some tests to it, and when I went to the documentation directory, there was a readme file saying someone should add some documentation. So I added some documentation, and I've noticed some people have extended it, but other people have added features and haven't bothered to add them to the documentation. So adding new tests also needs to come with a documentation requirement, or at least adding new library things for testing needs to come with a documentation requirement, so people can work out how to write tests. Definitely.

One thing to note here is that half the problem sometimes is that fixing xfstests is hard, right? So, you know, we were testing XFS in some of the non-standard configs, and if you're using the realtime config with a 28K realtime configuration, it blows up on the standard xfstests. Now, if you take Darrick's XFS tree, he has several hundred patches, not all of which he's managed to get upstream, and if you cherry-pick the right fixes, then the realtime XFS config will actually mostly pass, right? And so that's actually half the problem. For ext4, the default 4K config was actually in pretty good shape, but for some of the more exotic configs, figuring out how to actually make generic xfstests pass, for configs like XFS realtime or ext4 bigalloc, which is a cluster allocation system, is actually not trivial. And a lot of it is, again, maintainer bandwidth, right? I spent a whole day looking at all of the flaky tests and all the test failures and figuring out how to clear some of the failures from the more exotic ext4 cases, right? But I'm the only one doing that, and help is needed. I think if there are people who want to help their file systems get xfstests to be clean, don't wait for the maintainers to get the tests clean, help them clean the tests.
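On the download-a-test-VM idea above, a minimal sketch of booting a freshly built kernel against a prebuilt appliance image. The image and kernel paths are placeholders; the QEMU flags themselves are standard.

```python
import subprocess

# Paths are placeholder assumptions: a downloadable test-appliance image
# plus the kernel the developer just built in their tree.
KERNEL = "arch/x86/boot/bzImage"
APPLIANCE = "test-appliance.img"

cmd = [
    "qemu-system-x86_64",
    "-enable-kvm", "-m", "4G", "-smp", "4",
    "-kernel", KERNEL,                           # boot your own kernel
    "-append", "root=/dev/vda console=ttyS0",    # appliance rootfs on virtio
    "-drive", f"file={APPLIANCE},if=virtio,format=raw",
    "-nographic",                                # serial console in terminal
]
# Blocks until the VM powers off; the appliance would run the tests
# on boot and report results, so a broken patch only ruins the VM.
subprocess.run(cmd, check=True)
```

This is essentially the model of the existing pre-built appliance runners mentioned earlier: the scary setup lives in the image, and the developer only supplies a kernel.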
One thing I wanted to do that's related to that: some time ago I tried to add an fsinfo() system call, where you could actually query quite a lot of information about the capabilities of the file system: what it supports, what it doesn't support, what its limitations are. Because I thought we could take that and feed that information into xfstests to say, this is what the file system should be able to do. So it would help key which tests you need to run, and which ones should fail because a feature is not supported or is not applicable in some way (there's a rough sketch of that idea at the end of this section). Right.

I think we're wandering off into the xfstests weeds a little bit, and half the room does not give a shit. So, there's a lot of good work here. I just wanted to raise awareness about the different things that different communities are doing. What I envision as a really good way to solve the problem of maintainer burnout is to offload a lot of the things that maintainers do: automate the testing, distribute the review, and free up maintainers to do more of the vision and moderating and arbitration work.

With that being said, it is 5:39. We have dinner at six o'clock over there, literally right down the pathway here. It's the Oasis, I want to say; you can find it on the maps around the resort. Go drop your stuff off, come back here around six, and we'll all have dinner. It's been a great first day.

Also, I meant to say this in the beginning and I didn't, because I didn't write anything down: the Linux Foundation has been fantastic. They set up and did all of the stuff for the virtual events. I've been working with them for three years, and they go through and they sign the contracts and they get the hotel set up, and then we cancel. So they did all of it; as much work as I did, it pales in comparison to what the Linux Foundation did, doing this every single year. They got contracts signed and AV people set up and all of that, every year that we had to cancel, and then got it working again this year. So they did a wonderful job. A round of applause for the Linux Foundation. All right, see everybody at dinner.
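Circling back to the fsinfo() idea above: a rough sketch of how per-file-system capability data, however it ends up being queried, could key which xfstests groups to run. The capability names and the lookup are hypothetical; fsinfo() itself was proposed but never merged, so the table is hardcoded here.

```python
import subprocess

# Hypothetical capability table -- in an fsinfo() world this would be
# queried from the kernel for the mounted file system, not hardcoded.
def get_capabilities(mountpoint: str) -> dict:
    return {
        "reflink": False,        # clone/dedupe supported?
        "punch_hole": True,      # fallocate(FALLOC_FL_PUNCH_HOLE)?
        "max_file_size": 1 << 44,
    }

# Hypothetical mapping from xfstests groups to the capability they need;
# "clone" and "punch" are real xfstests group names.
GROUP_REQUIRES = {
    "clone": "reflink",
    "punch": "punch_hole",
}

def groups_to_run(mountpoint: str, groups: list[str]) -> list[str]:
    caps = get_capabilities(mountpoint)
    # Keep a group if it has no requirement, or its requirement is met.
    return [g for g in groups
            if caps.get(GROUP_REQUIRES.get(g, ""), True)]

run = groups_to_run("/mnt/test", ["auto", "clone", "punch"])
cmd = ["./check"]
for g in run:
    cmd += ["-g", g]
subprocess.run(cmd)  # e.g. ./check -g auto -g punch, skipping clone
```

Today xfstests discovers much of this itself through _require_* probes at test runtime; the attraction of a capability query is that the harness could know up front which tests are not applicable, instead of every tester rediscovering it one "is this expected to fail?" question at a time.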