Then I'll roll into the next thing. So Ted, I know you do this, and I do it too: we have our own homegrown setups for running fstests. btrfs has fstests runs, and then we collate all the information; we have a website that shows you what failed, in what configuration, and all that. That has been huge, and it's been running for a year. Luis's approach is to run the thing a thousand times in a row to see if anything fails. My approach is to run it once, and then after about a year, or a few months, I've got a good list of what is flaky and what isn't, and I've gone back and fixed up the flakiness. These are two different models, and I think they're both pretty valid. And to your point earlier, Ted, about the best bang for your buck: for me this has been the best bang for my buck, because I'm not going to run a test a thousand times unless I notice there was a problem.

I mean, I think one of the other things I've certainly learned at this point is that I trust xfstests way more than I trust linux-next testing, because I find the problems in xfstests; I don't find them in linux-next soaking. It's also the case that there are a lot of tests where there's a flake percentage of 15%, and in reality I don't see users complaining about that bug when I finally get it fixed. xfstests is just way more nitpicky than the vast majority of the user base, or even our most stressful production workloads at work. That is probably one of the reasons I don't stress when I have tests with a 15% flake percentage: I just know I can't fix them all. I would love to fix them all; I don't have enough hours in the day, or engineers on my team, to get to all of them. That is an uncomfortable truth, and it's also something that's hard to get across to new people who are trying to get started with testing.

One of the things I've been strongly thinking about is that I have these exclude files that explain all the different tests: why I exclude them, why they fail. Some of them are xfstests bugs; some of them are just "we flake on this 15% of the time." I'm seriously trying to figure out where we check them in so other people can see them. Yes, they're in my GitHub repo for gce-xfstests and kvm-xfstests, but should we be checking some of that into the kernel documentation, with a freshness date, so other people can find it and it's all in one place? Or should we put it somewhere else? That might be an interesting thing to discuss.
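A sketch of the kind of annotated exclude file being described here; the test names and reasons are invented for illustration, and it assumes fstests' check script, which accepts such a file via its -E option and ignores text after a #:

    # exclude.ext4_1k -- hypothetical annotated exclude list
    generic/051   # flaky, roughly 15% failure rate, not yet root-caused
    generic/475   # known test bug, fix pending upstream
    ext4/044      # fails only with a 1k block size; see tracker entry

    # Applied when invoking fstests:
    #   ./check -E exclude.ext4_1k -g auto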
Before we do that, could we rewind back to linux-next? It will run xfstests if you ask it and give it a config, so some of us are relying on linux-next actually running that. Is your experience that those runs aren't reliable, or have you just not tried asking it to run on your tree? Are you talking about the zero-day test bot? Yeah.

Yeah, the zero-day test bot only tests one configuration, and that's the one that's always clean. It's also the one the vast majority of users use, which is the 4k block size. I have a whole bunch of other test configs, like the 1k block size config, which is how I test, and the PowerPC config with its 4k block size, 16k page size combination, and that's where I have more of the flaky tests. If I worked for a company that made PowerPC machines, maybe I'd prioritize them more, but I don't. The point is that that's where a lot of the flakes are, and the zero-day bot is only testing one config out of, like, twelve. I'm not sure Intel would be willing to do all that extra testing.

But every test that is useful, and, to Luis's point, is not intermittently failing: if we add it to linux-next and it gives us a warning, that's a test we don't have to run. So is it useful to make more use of linux-next for this? I don't know what the communication path is. You ask Fengguang. Okay, yeah. I mean, I think the other thing is that currently we would need to make sure it understands things like flaky tests, because very often the zero-day bot will pronounce something a regression when it sees a test failure, and it doesn't necessarily know that, oh, this is a flaky test, so it's not that this commit introduced the problem; the zero-day bot just got unlucky. So there's some work we could do, but we could certainly talk to Fengguang about that.

Yeah, I definitely want to move towards this new reality where it is easy to tell what we expect to fail, because I have no idea what we expect to fail next in ext4. If I make an ext4 change, I have to run the tests once, make the change, and run them again. If I knew what ext4 expects to fail, then I could just use their exclude script. Luis does this with kdevops: he has the exclude scripts committed to the tree, which is likely what I'll do with btrfs. But if we don't all want to use kdevops, I'm okay with that: we can make a new repo or whatever, give all of the maintainers and the core people commit access, and then we can each update our own exclude files as things change. And having those exclude files is really helpful for onboarding engineers: hey, go figure out why this test fails.
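The multiple configurations described above map naturally onto fstests' sectioned local.config; a minimal sketch, with device paths and section names chosen for illustration:

    [ext4_4k]
    FSTYP=ext4
    TEST_DEV=/dev/vdb
    TEST_DIR=/mnt/test
    SCRATCH_DEV=/dev/vdc
    SCRATCH_MNT=/mnt/scratch

    [ext4_1k]
    FSTYP=ext4
    TEST_DEV=/dev/vdb
    TEST_DIR=/mnt/test
    SCRATCH_DEV=/dev/vdc
    SCRATCH_MNT=/mnt/scratch
    MKFS_OPTIONS="-b 1024"

A single section can then be run with ./check -s ext4_1k, which is how a runner would test one named configuration at a time.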
Yeah, one of my wishlist items, which I will probably do when I have time for more xfstests hacking, is a mode where, when a test fails, it immediately runs it 25 or a hundred times so it can establish a failure percentage. Then what it can report is not just "generic/382 failed" but "it failed 15% of the time", because that's actually the useful bit of information. I have information I periodically collect about failure percentages, and the problem is that I don't want to auto-exclude a test just because it's failing 15% of the time, because I know that if it's in the exclude file, I will stop worrying about it; that's just human nature. I actually want it in my face that this test is failing, so I'll say, oh yeah, that's one of the tests that fails 10% of the time; I should really get to that one of these days. But if we had a better way of encoding that information, so that the harness automatically establishes the flake percentages and that could be documented somewhere, that would probably serve both needs. There are the developer's requirements: I want to know where the bugs are, and I want to get tickled to remember to fix them. And then there is: I'm a new developer and I want to know whether my patch has made things worse. For that, yes, it absolutely makes sense to auto-exclude tests that fail even 2% of the time, because otherwise it will stress out someone who is expecting all-green tests. So we need to answer both needs in some sort of solution everyone is happy with.

Okay, at the risk of getting into bikeshedding: is there a reason we can't just have this in the fstests repo itself? Have the auto group be what an onboarding person can run and expect to pass a hundred percent of the time, and have a "maintainer cares about this, but it's failing" group. Yeah, but part of the problem is that it's very config-specific, right?

So I think I know an easy way this could be done. One way would be to have fstests use a git subtree that allows us to specify all these failures per kernel, per configuration, and then any test infrastructure, whether it's Ted's, mine, whoever's, can just embrace that git tree as a git subtree, and whenever you want to update it, you do it in one command. If you're not familiar with git subtree, I just want to clarify: it's not a git submodule; it's very, very different. It's pretty much like doing a merge commit locally; that's pretty much it.

So then I guess maybe it makes sense, in the same place where you have the ext4 config you're running, to have the exclude list for that config. But then you're saying it could also be kernel-specific. It's kernel-specific, and you also have to think about the fact that it's specific to the type of test environment you're running. For instance, the test failures that I see will fail for sure if you're running on, say, a loopback device. Granted, they can also fail on a real block device if you're running the file system on that; it's just that with loopback devices the failures happen a bit more often, that's all.
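The rerun-on-failure mode wished for above doesn't exist yet, but it can be approximated with an external wrapper; a minimal sketch, assuming a configured fstests checkout, with the test name and iteration count as placeholders:

    #!/bin/bash
    # Rerun one test N times to estimate its flake percentage.
    TEST=${1:-generic/382}
    N=${2:-25}
    fails=0
    for i in $(seq 1 "$N"); do
        ./check "$TEST" >/dev/null 2>&1 || fails=$((fails + 1))
    done
    echo "$TEST: failed $fails/$N runs ($((100 * fails / N))%)"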
But yes, technically speaking, we are talking about configuration changes as well, and we have to think about the fact that there's a kernel configuration aspect to this too. So I guess at some point we can stratify it so much that none of us can share anything. I think at some point we'd have to pick a common ground where we all say: okay, this fails on kdevops, let's just add it to the global exclude list, for the sake of us all sharing this, and stick that in the git repo like Josef said.

Yeah, and the other alternative to the exclude-list thing is what I did originally, which was to keep the running list: the nightly tests with what failed, so that I always knew. I click on a test and see, okay, this one's flaky, and so when I ran my local tests I could just check against what has been happening. So this kind of gives us the best of both worlds, right?

Yeah, and I think the other thing is that if we want to promote interchange, we will need to come to consensus on things like kernel configs and the names of the configs. I have a whole bunch of configs: things like ext4/1k, or xfs/rtdev_logdev; that one's Darrick's, and I came up with the name. If we want to be able to intelligently exchange exclude files, we need to know that we're both talking about the same file system type and the same configuration of that file system.

I think this part is easier than you think, because what I do with my thing is that I have different hosts running, so the results are broken down by username, host, configuration name, and so on. We can easily structure this much like we structure git trees: okay, this is Ted's stuff, so all of Ted's stuff gets Ted's prefix, and then after the prefix you name the thing whatever arbitrary name you like. And one thing we should probably add to xfstests is an exclude-list environment variable that we can put in the config sections, so we can point at individual per-section exclude lists for the things we expect to fail in those particular sections.

Yeah, there are multiple ways we can do that. I actually have multiple exclude files at different levels: a global exclude list for all of xfstests, an exclude list for all the XFS-related configs, and then config-specific exclude files. But the thing is, we could have a separate one for "this is Ted's system" and "this is Luis's system", and it doesn't help someone who is coming at this new unless they're actually using our test runner. So, for example, for someone who wants to use my test runner, I have a test appliance: a KVM root filesystem image that's up on kernel.org.
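The multi-level exclude scheme just described could be laid out and applied roughly like this; the directory layout is hypothetical, and it relies on check accepting -E more than once:

    # exclude/all.txt          -- excluded for every config
    # exclude/ext4/all.txt     -- excluded for every ext4 config
    # exclude/ext4/1k.txt      -- excluded only for the ext4/1k config
    FSTYP=ext4
    CFG=1k
    opts=()
    for f in exclude/all.txt "exclude/$FSTYP/all.txt" "exclude/$FSTYP/$CFG.txt"; do
        [ -f "$f" ] && opts+=(-E "$f")
    done
    ./check "${opts[@]}" -g auto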
The appliance has the exclude files built in, so if you're using my test appliance, you use my test configs and my exclude files, and it's all one turnkey thing. The part that gets a little tricky is if you're working on an XFS patch and you want to use your own test runner: how do you map my config names onto your test runner's configs? There we can either decide that it's just a manual process, or we can see whether we can automate it somehow.

That's what I would like to do: have a repo with a set way that we encode these configs, and then it's just a matter of wiring it into kdevops, your test harness, whatever, and that way it is still turnkey. There is a centralized repo with a layout we agree on, and then whatever test appliance, whether it's a new one or one of the existing ones, knows that this repo looks a certain way and can integrate it.

Let's think about scaling this. Having a tree where we can commit these sorts of new failures seems like a welcome thing. In terms of kernel configuration, I think it may be possible to at least streamline on a generic kernel configuration; I really do think that's possible. I can say that I have one kernel config that works on all cloud providers and also works with virtualization, and if you have any other changes you wish to add, send a patch; I have the kernel configs present there as well. So in that regard, I think we can start off by seeing whether that works, with kernel-release-specific expunge lists for now. How do we scale it, though? That would be the next step: who's going to send patches, who can commit to this repository, how often do you update it, that sort of stuff.

Yeah, one thing I wanted to mention: I think it would be great for new people trying to run tests, without any knowledge of which ones are flaky, if the test output had a way to say "this test is flaky 80% of the time" or something, so they don't have to go to Josef's webpage and compare; it's just right there in the test output. We're talking about putting this stuff into a git repo somewhere; it should be right in the test output too. Yeah, well, that's what happens: all of these test runners have some sort of reporting system, and the reporting system may just be a straightforward translation of the xfstests output or the JUnit XML file, but we're essentially formatting that so that if you're using our test runner, the excludes automatically happen; and once we start adding flaky-test data, we can put that there too. But I think the assumption is that that's something the test runner does.
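For the centralized repo idea, the git subtree mechanism mentioned earlier would make the wiring a one-command affair for any runner; a sketch with a placeholder URL:

    # Vendor the shared exclude/expunge repo into a test runner's tree:
    git subtree add --prefix=expunges \
        https://example.org/fstests-expunges.git main --squash
    # Later, pick up everyone's updates with a single command:
    git subtree pull --prefix=expunges \
        https://example.org/fstests-expunges.git main --squash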
Now, there's this interesting question, for which we don't have all the right people in the room: what should be in core fstests, and what should be in the test runners? And what will we maybe have to factor out because the fstests maintainer doesn't think it should be in fstests, but Luis and I are both doing it? Maybe we can share code so that we're not reinventing the wheel, but there is a real question of where that functionality should live. Well, at some point. Right now we're experimenting, figuring out the best way to provide the best information to people to increase velocity, but at some point we'll have that dialed in pretty well, and then it'll be really obvious that it's time to pull it into mainline fstests.

Yeah. So one of the things about the kernel config is that I actually have a very standardized utility, I call it install-kconfig, and it sets up something that works for KVM and GCE. It'd be interesting to compare notes and see what you have for AWS and other cloud systems. The other thing my install-kconfig has is command line options like --lockdep, --kasan, --blktests, because blktests requires friggin' modules, so I have a special install-kconfig invocation if I'm actually going to be running blktests, and I think I have one or two others. It's a standardized utility, so we should compare notes, because maybe that's a utility we could start to share, whether as a submodule or by manually keeping it in sync. It's probably something we're both doing that is so similar we could cooperate on it.

Yeah, the actual kconfig is a lot less interesting than the delta that makes that kconfig important: turning on btrfs debugging, or lockdep, or CONFIG_DEBUG_PAGEALLOC, or whatever it is. That delta is really small, and it's the important part.

Well, and this is one of those interesting things: I'm looking for repeatable results, and having a standardized kconfig means that certain tests will either almost never fail or always fail, but I don't have to deal with kernel configs as yet another variable. You could argue that having everybody use a different set of kernel configs, maybe with some randomization thrown in, finds more bugs, but I'm looking for something different, which is stability, so that the bugs are reproducible. One of the things I ask people to do is this: I tell them, your patch is causing this to fail; if you use kvm-xfstests with these arguments, it will fail, and if you download this git repo and whatnot, you can reproduce the failure I'm seeing. That requires a stable kernel config, and even if it's not the best kernel config for finding bugs, the fact that it is stable is more important for my use case. And again, that's part of "what are the goals of testing": are you trying to find all possible bugs, or are you trying to reliably reproduce some set of bugs?
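The "small delta on a standardized base config" idea from this exchange can be expressed as a kconfig fragment merged with the kernel's stock merge_config.sh helper; the option list here is only an example:

    # Sketch: apply a small debug delta on top of a shared base .config,
    # from inside a kernel source tree.
    cat > fs-debug.fragment <<'EOF'
    CONFIG_BTRFS_DEBUG=y
    CONFIG_PROVE_LOCKING=y
    CONFIG_DEBUG_PAGEALLOC=y
    EOF
    ./scripts/kconfig/merge_config.sh -m .config fs-debug.fragment
    make olddefconfig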
Yeah, so I think we're moving in the right direction here. The next thing I want to talk about, well, just one thing first: the thing you mentioned, Chris, that you wanted to have. I had already posted the first version of a patch set that annotates tests with the commit that fixes them, and it also had known-issue annotations, so you'd get the failure, but you'd also get a hint that somebody has already seen that failure. And yes, you can make it a little better with config stuff; you can add the text for the hint. But that's the idea: exactly what you asked for. Yeah, and the real thing people are trying to answer when they look at a failure is: I have failed this test; does it matter? Knowing that someone else has hit it in the past is different from "the maintainers know this is flaky and they don't actually care". So right now it's not merged, because there was some objection, but if you're interested, we can bring it back.

This is really important. For one of the last larger patch series that I sent, I created a git repo where, for all of the xfstests layouts I had to run the change through, I posted the full output, and then went through and posted the reasons why I think each of the observed failures has nothing to do with the change I posted. And I was like, why do I have to do this? It cost me like two hours. Exactly, and that's the point: standardizing how we do this and how we run, and making it easy for developers to reproduce and have a good answer.

That leads me into my next thing, which is that I don't want us to be running this ourselves on our own hardware anymore. This should be a service provided by, I don't know, perhaps the Linux Foundation or somebody, so that we have this thing running consistently, perhaps just on certain trees or whatever. I have four machines sitting at home that I've built to do this in a variety of different ways; I want to throw them all in the recycling bin, and I don't want to maintain my own hardware. I'm sure, Ted, you don't want to either; I know you use GCE, so whatever, but that's the thing: I want everybody to just be on GCE or AWS or whatever.

Similarly, we have this for the kbuild bot, for example: at some point it started replying to trees that I push, saying the build succeeded. The same thing could happen for fstests. Yeah, this is what I want: a community thing where we're all using the same stuff, and then we can do fancier things tied into the git trees. I can push a git branch, it automatically gets tested, and I get an email back saying, yeah, everything works.
Good job. One of the things that might be helpful is if we actually spoke with one voice. Somewhere in the KernelCI GitHub there is an issue where I tried to explain: this is what I need for KernelCI to be useful to me. There's a reason I'm not using KernelCI: it doesn't meet my needs as a file system developer. I tried to outline the feature requests I would like them to implement, and I did this at last year's Plumbers, and they said "thank you for your input", and I have never heard from them since. But, you know, we have to pitch in the labor for whatever it is; it's not all on them, obviously. Yeah, but I think if we all agreed on what we would like KernelCI to do, that would help. It would be great if there were a centralized dashboard that somebody else was running, to which we could all upload our test results, and we can have the debate about whether that should be KTAP or JUnit. And please, let us upload all of our test artifacts: all of the xfstests .out, .full, .bad files and whatnot. That's again one of the things KernelCI doesn't do. If we could give them a feature request and they would actually implement it, that would be great, because then, even if we're all using our own test runners, and sure, let's see if we can standardize those, I would love to get the dashboard standardized first, because that way we can share each other's results.

So one of the things that's really missing here is an IRC channel, because what happens is that every LSF we have this topic, we talk annually, and then things go missing. Whatever discussion we have, it doesn't end up where it's supposed to; things just don't get completed. If we had an IRC channel, we could at least sync on the channel and see what needs to be done, logically, on a bi-monthly basis. Well, that doesn't work with time zones; everybody's not in the same time zone, so email is fine. A chat thing doesn't work for me even within my team, so email, please. But I think the point about follow-through is pretty valid. Yeah, well, I haven't seen good follow-through happening over email either; things go missing, that's my whole point.

I have to say, I knew this was a problem across different LSF/MMs; as you're saying, this does come up. I still haven't presented here or talked in detail about kdevops; I plan to talk about it at Plumbers. But I did some work that took, let me tell you, a lot of time, and I really do think I put quite a bit of effort into the architecture behind it. And I'll just say the only reason I didn't use Ted's stuff is that I wanted to support multiple clouds; that's all. But in terms of public expunge lists: they're there. Do I update them regularly? Yes, pretty much as soon as I see a failure. A kernel configuration that works on all cloud solutions: present. What else do you need?

Yeah, and this is the thing: I've been doing this by myself, and Ted's been doing it by himself.
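On the artifact-upload request above: fstests already leaves the relevant per-test files under results/, so a runner could bundle them for a shared dashboard along these lines, assuming the default results location:

    # Collect per-test artifacts (.full, .out.bad, .dmesg) for upload.
    find results/ \( -name '*.full' -o -name '*.out.bad' -o -name '*.dmesg' \) \
        -print0 | tar --null -czf artifacts.tar.gz -T -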
And to be fair to Ted, I tried to use his stuff years ago, and it worked pretty well; it just didn't work for btrfs at the time. The reason I jumped on kdevops was random; I actually don't know why I did. But this is the thing: I can't sustain the development of my own tooling and the development I have to do in btrfs. I need community support, and this is getting community support for the part I don't want to do, which is making all the virtualization stuff work, making it consistent, and making sure it continues to work with all the different pieces. I want to give him the tools he needs from me to do the same thing that I do at home, and I'm going to take his thing, and when I make it work in my system, his system works and my system works; that's two people using the same project, and we can grow it from there. I can get my other developers to use it and work out the kinks, and once we've got all the kinks worked out, it's working consistently, and we're collaborating on the exclude files and all those things, then it's a lot easier to go to the Linux Foundation or somebody else and say: okay, now take this off our hands, or pay for us to run it in the cloud or whatever; we already have this big project that we're all using, let's get it integrated and keep it working.

The only thing I'll add is that the Linux Foundation isn't really set up to fund that; it's set up to channel funds from other people into a central place. So we'll just have to gather up the funding between our companies to make it happen; that's not hard.

I also wanted to throw out there that I do think the zero-day folks could likely do some of this, but the problem was the complexity of setting up fstests; it's just a pain, and the same is true for blktests. To be fair, it was just really complex, and I know that certain file system developers, James for instance, do ask the zero-day folks to enable a few tests, but they only run them once, and only for a few series of tests. So I think zero-day could potentially do it, if they really wanted to, or if we asked them to consider using kdevops just as a way to enable running fstests. I think it's possible, and it doesn't require much funding either; it's just telling them: hey, set this up, run a few make commands, and just run the loop. The problem, though, as I had in my presentation, is that there's about 50% of the rest of the work, which is reporting the issues. If we don't care about reporting the issues and we just want the expunges, then that's fine and easy; there's the whole lazy-baseline thing. But actually collecting the artifacts, as Ted indicates, is very valuable, because that would be really beautiful to have. That will take a bit of resources. But yeah, maybe it doesn't require much funding; I'm not sure, really.

So my plan with kdevops is, first of all, to get it working on my hardware, and second, to tie it into my existing results thing, because it's going to have to; I have my existing results thing. It is terrible, but it's something.
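The "few make commands" mentioned above look roughly like this in kdevops; target names may differ between versions, so treat it as a sketch:

    make menuconfig          # choose file system, configs, bring-up method
    make                     # generate the runtime configuration
    make bringup             # provision local guests or cloud instances
    make fstests             # build and install fstests in the guests
    make fstests-baseline    # run the baseline against the committed expunges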
And if we can get it running on a bunch of stuff, then we have, well, I've already done the work on the results side, and then all it takes is for the KernelCI guys to look at it, be appalled and horrified, and say: you know what, let's put this into KernelCI.

Well, and the thing is, the funding is the easy part. The hard part is the influence to do all the other stuff we're talking about here: getting all the file system people to go down the same path and agree on what's important and all that. That's the hard part. Yeah. Yeah, I think there are two hard parts. One is, and I've actually said this: if somebody wants Google Cloud credits because they want to use gce-xfstests, I'll give them out, if they're willing to commit to analyzing the test failures so that we actually have human-annotated exclude files. Because otherwise, I can run the btrfs tests, I can run the f2fs tests, and I do that every so often; what I don't have time for is analyzing the failures to understand what's going on with all of them. That's what actually requires real file system engineering time, and if people want that, it is, I think, the critical scarce resource.

The other thing that might be helpful is to start thinking about collaborating on some requirements documents, because I've looked at kdevops, and one of the reasons I'm continuing with my own system is that there are certain critical things where I've optimized kvm-xfstests for a file system developer, not for a quality control person, not for quality assurance. For me it is absolutely critical that I can run a kvm-xfstests shell, have it pluck the kernel out of the build tree on my laptop that I built myself fifteen seconds ago, pop up a QEMU, run the test, and see whether or not I can get things working. That is a very, very different requirement from something that plucks kernels out of a git tree and builds from the git tree; I added that later. I have a whole set of requirements that are about my development workflow. And I think one of the critical things is that it'd be great if we could standardize on one system. I've looked at kdevops; it doesn't meet my needs as a file system developer, so I keep doing my own thing. If we could somehow unify systems, so that we're all using one system, or at least unify parts of the effort, like kernel config management, exclude file management, file system scenario testing, you know, the different file system configs, that'd be great. Let's see what we can unify first, and maybe we can eventually get to one system. But it may very well be that there will be one system that is really optimized for a local developer, as opposed to something where you're just running tests around the clock, 24 hours a day; sure, that should be in a data center somewhere, or a cloud VM somewhere. But the local file system developer experience is also important, I believe, for some file system developers.

Yeah, I totally agree, Ted, because I have the same requirement: I'd rather just build on my local machine, and I use virtme for this. I just build, run virtme, and it does my thing.
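For context, the two local workflows just described look roughly like this; the flags follow the xfstests-bld and virtme documentation, but treat the exact invocations as a sketch:

    # kvm-xfstests: boot a freshly built kernel from the local build tree
    # in QEMU and run a test against a named config:
    kvm-xfstests --kernel ./arch/x86/boot/bzImage -c ext4/4k generic/001

    # virtme: boot straight out of a kernel build directory, with the host
    # file system shared read-only over 9p:
    virtme-run --kdir . --pwd --mods=auto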
That's where I do my local testing, but I see that setup as separate from what I do with kdevops. kdevops is replacing my continuous integration. Sorry, my daughter threw me off there. Anyway, I see these as two separate things, and I think that if kdevops could grow the ability to just throw a random kernel on there, I would use it for that too.

It should be possible to easily add that, because, again, one of the things we're discussing a lot here is variability, and this is exactly why I ended up embracing Kconfig. It should just be a matter of adding a Kconfig entry that says: hey, instead of using this git tree, use my local directory over 9p, and there you go.

And this, again, is why I have put effort into this: because I know that I will get it working for me, and then I can wander off and you are going to keep it working, and the community that we build around this stuff is going to keep it working. Not that I won't stay involved; I would continue to be a user and a contributor, but I'll be able to put it in the back of my head: Luis has got this covered, it's going to continue to work, and I can go back to focusing on btrfs.

So, going back to Ted's idea of requirements documents: I'm totally on board with rallying behind anyone who's interested in making this their mission, and the next step is information from Luis on what he needs from us, and we can write down what we need from him. Yeah, I think Luis and I have a good idea of what I need and the things we need to add for my setup, which is mostly just PCI passthrough; there's other stuff I've got to do, but by and large it works really well.

So I will say this: I use kdevops for my own kernel development too. The only difference is that I actually run my git trees within the guests; essentially, instead of having my development environment in my home directory on my laptop, I just ssh into the guest, and that's where I do my development. I think this is a philosophical difference, because, for example, I got really valuable feedback because one of the ext4 developers runs xfstests on a Raspberry Pi. And you don't build on a Raspberry Pi? Yeah, I do; I build on the Raspberry Pi. And so, for me, one of the big things is that I don't build in the VMs, because when I'm running kvm-xfstests on my laptop, I don't have a 48-CPU laptop, so I'm trying to conserve CPU. I build on my host system and only run my tests in the VM; I don't build in the VM because it's just more efficient that way. And that's a philosophical difference, because I want to be able to run tests on my laptop when I'm on an airplane, and that may not be important for some use cases. Yeah, it should be possible to easily add support for that.

Yeah, no, I think we've got a lot of good things. I think the biggest thing is standardizing exclude lists and the like, and then documentation. That's the other big thing with kdevops:
That's the other big thing with KDevOps is like I've spent a lot of time just trying to get work for me because You use Slez and or use Debian and I use Fedora, right and it's just more comfortable for me But I want it, you know, I think For me, it's going to be getting it to work for my my setup Documenting it and then we can start working on the communal exclude list setup and config setup and start integrating that and then from there We can start worrying about how we're gonna run this for everybody automatically without us after I have our own hardware All right Perfect right on time I think in terms of next step Would it be perhaps useful to maybe start some threads on the FS tests mailing list? I'm just sort of thinking about how can we take this forward, right? I think there are a bunch of things about like, you know, kernel configs like exclude lists and That may be easier to do via email and maybe that's a way that we can sort of continue that conversation as opposed to Waiting until plumbers Yeah, no, I so in my head and clearly I'm not very good at estimating times, but Probably end of the month. I have kid of ops doing it replacing my setup beginning of June I have email of this is what I want to do. This is the My idea for the repo. This is all the documentation how to run kid of ops and from our setup And this is what we've done and then we can go from there Because I definitely by the time we all show up at plumbers I want at least most of this hoping that by plumbers I can run the stuff on AWS whether or not somebody else is Running it automatically as anybody's guest But I want to have at least replaced my stuff and have consensus on where we find exclude lists and has good documentation On how to run the stuff for different file systems I can easily also just put a repo that is abstract to the runner And then we can just try to share something like that. I don't know if that's that should be easy I could just send that to the mailing list. That's a next follow-up to then what I can do is instead of using an expunge list That's specific to K dev ops. It's a public one that's agnostic to any runner and that's a pretty much It would just be you know public and anyone can contribute to it But again, there's questions about who can contribute to it I'm you know, I think it may be just easy to enable all policies to maintainers to be able to push for that and for developers if they want to send a patch that I think they may be Reasonable justification might be to have also the artifacts somewhere for instance Yeah, I like I like to get up for this because it makes it easy like we just create a shared repo at everybody's user names And they can just push for their config stuff because Lord knows we don't need more people more maintainers for random get trees Let's just trust that we're all not idiots and push stuff Again, we still need to we still need to standardize configs. I think you're not under you may be underestimating how Complex that may be because it looks like you've got essentially one default config for each of the file systems at this point mostly I think XFS is the one Counter example where you've got like 12 or something But like you know For me, I have a lot of exclude lists which are if you're using EXT for big Alec with clustered allocation Here's a bunch of excludes right if you're using a 1k block file system. 
here are a bunch of excludes. Sometimes they're test bugs; sometimes they're generic kernel bugs. So in fact it's not just "here are all the excludes for ext4"; it's "here are the excludes for each of these different configs". Yes, and we have to agree on the names, right? Yeah, I agree. I actually think that's not difficult, because you already have your names and Josef has his names, so I basically just picked those; and for the XFS community, I came up with those names, and I did collaborate with the XFS community on them, so I think we're solid there. In terms of the expunges, I do have them per section too, so they are very specific per section when I see a failure in multiple sections, and then there's an all.txt file that basically covers all the sections. Yeah, I think this is easily solvable and probably the least complicated part of it. Yeah, well, we still need to document these are the mount options, and these are the mkfs options, that correspond to each of the configs. So yeah, we'll figure it out; I'm just saying there actually is a bunch of stuff there that does require some thought.
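A sketch of the kind of per-config documentation being asked for at the end here; the file name and format are invented for illustration:

    # configs/ext4/bigalloc_1k.conf -- hypothetical shared config description
    FSTYP=ext4
    MKFS_OPTIONS="-b 1024 -O bigalloc -C 16384"
    MOUNT_OPTIONS=""
    # Corresponding fstests section name: ext4_bigalloc_1k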