The next talk we have is Neil Brown from SUSE. He's a kernel engineer at SUSE, and this talk is "SysFS classes for virtual block devices", which is slightly shorter than what I had in the program; the original title I probably should have shortened. So, Neil, please.

Thank you. Yeah, so when I came to make the slides, I realized I had to shorten the title, because what I gave them was more of a mini-abstract, a long title. I picked out the important bits, and the important bits are sysfs and virtual block devices, because those are the two concepts I'm interested in and want to talk about, and I hope you might be interested in how they work together. By virtual block devices I particularly mean RAID sorts of things, MD and DM, which I might call "MDM" from time to time, because hopefully the distinction is blurring and will blur more.

So sysfs, and adding new functionality to it, is something I've noticed a few times to be a bit of a problem. SysFS isn't something that was designed perfectly in the beginning, with a design document we can always follow when we're adding things to it; sysfs kind of evolves. It was originally really focused on power management: knowing, I guess, which device you had to power on before powering on the next device, and so forth. But it's grown since then in different ways, and I pick up, reading the lists and so on, that different people have quite different ideas about what should be in sysfs, what it's for, what its purpose is.

Just recently there was a bit of a kerfuffle, because in sysfs there are these symbolic links between, say, a DM device and the devices that are components of it. The links going one way are called "slaves", and the links going the other way are called "holders". It was suggested that this was wrong, that sysfs should never have been used this way, and that it's horrible. Which may be true, but the people who originally implemented it, how were they to know?

So I figure that if I want to do something new, it's good to try to find a forum to present the ideas first, and get people to complain and tell me how wrong I am. Exactly what forum to use isn't clear. If I post a design document to LKML it'll probably get ignored; people like code, they don't like design documents. If I post code, it'll probably end up going upstream before anyone complains about it. So I thought I'd try a conference, and maybe there'll be someone here who can say "oh, I don't like that", for some reason, and maybe it'll be a good reason.

So I should ask: who thinks they have some understanding of sysfs, what goes on inside it and how it works? A few half-hands. Who's tried to understand it and kind of failed? Mostly the same hands going up, and a few more. Yeah. It's got some really good stuff in it, but trying to say "this is how it's meant to work, this is the rule for how it works" is really hard. The only rule you hear at all often is "one file, one value", and I'm not sure that actually makes a lot of sense when you have things like array values. So the only rule we have isn't much good.

So basically I want you to tell me if the idea is dumb, or react in any way that you can. Is this the sort of thing you'd like to see more of in sysfs? What I'm talking about is using sysfs to expose more of the structure of things like DM and MD. For those of you who don't know, DM and MD are the two kind of competing RAID-thingy implementations in Linux.
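To make those "slaves" and "holders" links concrete: they live under /sys/block/<dev>/slaves and /sys/block/<dev>/holders. Here is a minimal user-space sketch that lists the components a DM or MD device was built from; the device name dm-0 is just an assumed example.

```c
/* list_slaves.c: print the "slaves" links of a block device, i.e. the
 * component devices sitting underneath a DM/MD device in sysfs. */
#include <dirent.h>
#include <stdio.h>

int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "dm-0"; /* assumed example name */
	char path[256];
	struct dirent *ent;
	DIR *dir;

	snprintf(path, sizeof(path), "/sys/block/%s/slaves", dev);
	dir = opendir(path);
	if (!dir) {
		perror(path);
		return 1;
	}
	while ((ent = readdir(dir)) != NULL)
		if (ent->d_name[0] != '.')	/* skip "." and ".." */
			printf("%s is built from %s\n", dev, ent->d_name);
	closedir(dir);
	return 0;
}
```

Running it against an LVM or MD device typically prints the underlying sdX devices or partitions it was assembled from.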
So, MD and DM. They combine multiple devices into, well, one device, or more devices. MD focuses more on the traditional RAID levels: RAID 0 and RAID 1 and RAID 4 and RAID 5 and RAID 6, and RAID 10, which is a bit special. What MD stands for is not clear; it's either "multiple devices" or "multi-disk" or something. DM focuses more on logical volume management, so there's less of the redundancy management and more of chopping a device up into lots of little bits and reassembling them into whatever you want. But it's the same basic concept: take some devices and produce some virtual devices out of them. So they have a lot in common. But they're really two completely separate code streams, developed by different people with different agendas who don't talk to each other. Not because we don't like each other, particularly; I've met a number of the key developers and they're nice guys. But they're busy doing their thing and I'm busy doing my thing, and there's just not a lot of cause for overlap.

But, as that first point says, we have just recently begun to see a little bit of unification. I've wanted there to be more unification for a while, but I never had the time or the motivation. What really brought it along was that there's a DM implementation of RAID 5. It's not in mainline, but it's in SLES, and I'm pretty sure it's in Red Hat... I'm certain it's in Red Hat. And it's maybe being used; I think there have been bug reports against it, so it's probably being used. And having two separate RAID 5 implementations in the kernel is not really a good idea for maintainability, because RAID 5 is actually fairly complex. There are two different RAID 0 implementations too, but RAID 0 is really trivial, so that's no big deal; RAID 5 is a different kettle of fish. So that pushed me to do a bit of work towards unification, and something else pushed one of the DM guys to do a bit of work, and so we have been talking to each other, which is really quite awesome. What we've created, which I think was merged in the current merge window, or the recent merge window, is a driver that uses MD's RAID code to present a DM target. That maybe isn't terribly useful in itself, but the important thing is that it's a step towards talking to each other.

Anyway, even if we merge bits of MD and DM at that level, there's still the point that there are some core problems in both DM and MD that need to be solved for both of them. It'd be nice to solve them for both at once, rather than fixing something for MD, finding that it's really good, and then discovering it doesn't quite work for DM. And making horrible designs neat is a rewarding part of kernel development. Fixing bugs is kind of good, but it gets boring after a while. Actually doing something new, making it look neat and elegant, getting rid of the rubbish and creating new stuff, is something I like doing, and I hope a lot of kernel developers do too.

So the particular focus here is device creation. How do you create a new MD or DM device? What does that look like? I mean, with a USB drive, you plug it in and then magic percolates all the way up the stack; it's an external physical event that causes it all to happen. But you don't have any external physical event like that for MD/DM sorts of devices. And the way device creation works at the moment is actually quite ugly. It doesn't play very well with udev. We've made it work, but every so often the udev maintainer says to me: you know, this should be fixed.
And I think, well, yes, it probably should, but it kind of works at the moment. So, what's wrong with it? You need to think about a RAID array; let's stick with RAID. With an array, there are two kind of distinct objects. There's the mapping, the array: the description that says these few devices have to be connected together, with this chunk size and this sort of layout and whatever. That describes the array. And then, as a separate thing, there's a block device: a block device which appears in /dev, which you can open, which you can mount, which accepts a standard sort of interface. So the block device defines the interface to this thing, and the array defines the structure of this thing. That might seem a bit abstract; hopefully it'll get clearer as we move on.

And the problem is that both MD and DM create these two things at exactly the same time. They use completely different mechanisms, but essentially the two are created at the same time. What that means is that when you create one, you've got a block device that doesn't work. You try to read it to look at the partition table: I/O error. You try to do anything with it: I/O error. So udev finds this device and tries to initialize it, as it does with all new block devices. Maybe this is a device it should mount somewhere; maybe it's encrypted and it needs to do some crypto thing with it. It can't do any of that, because there's no working block device. So it can't respond to the "add" event when the device appears; it has to wait for a separate "change" event that comes along later and says: that block device I gave you a while ago? Well, there's now data there. And that's an ugliness. And that's a fairly simple view of the ugliness; if you look further down, in the code, there are ugly aspects of it there as well.

So the two should be separate, and there should be a clear ordering: the array should be created first and then set up, and then the block device after it. And this is where we connect to sysfs, because one aspect of sysfs is very much about device discovery order. As I mentioned, as I understand it, it was originally developed to help power management, and so there's a devices tree, and you can't turn off one device in the tree until you've turned off everything further down, because there's a dependency. Unfortunately, life is actually more complex than a hierarchical tree (isn't it always?), and sometimes things depend on multiple other things, for power and for addressing. But anyway, the basic idea of the devices part of sysfs is that a higher thing is needed to access a lower-down thing, so things are discovered from the top down. In a typical SCSI or SATA sort of device, you have a host bus adapter, which has a bus, and on that you find some targets; in the target you find a logical unit; on the logical unit you find a block device; on that block device you find some partitions; on the partition you find the filesystem; in the filesystem you find a file. So it's, superficially, a nice clean hierarchical thing, though as I said there are sometimes exceptions that don't fit. So how can I follow this pattern to make virtual block devices work? And it seems kind of obvious, once it's presented that way, that there should be a device in /sys/devices that represents the array.
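As an aside on that "change" event workaround: in kernel terms, the poke to udev is roughly the following. This is a hedged sketch, not MD's or DM's actual call site, and header locations vary across kernel versions; the mechanism is a KOBJ_CHANGE uevent on the disk's device.

```c
#include <linux/genhd.h>	/* folded into <linux/blkdev.h> on newer kernels */
#include <linux/kobject.h>

/* Sketch: once the array behind `disk` has really been assembled and I/O
 * will succeed, emit a "change" uevent so udev re-examines the block
 * device it already saw (uselessly) at "add" time. */
static void announce_array_ready(struct gendisk *disk)
{
	kobject_uevent(&disk_to_dev(disk)->kobj, KOBJ_CHANGE);
}
```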
And then, a child of that array device, a sub-directory or something of that sort, would be the device that represents the block device, which again might have partitions, of course. But that's not at all what we've got. Where a DM or an MD device appears is simply in... I got that wrong, sorry: it's /sys/devices/virtual/block. There's this sub-directory in /sys/devices called "virtual" which just collects all the rubbish. Everything that doesn't belong somewhere else gets thrown in there, because it's probably a virtual device. But it doesn't tell you anything about the structure, who controls it, who owns it. It doesn't differentiate between MD devices and DM devices and loop devices and other sorts of virtual devices. So yeah, it's a bit ugly.

So currently the main thing you see is just the block device; you don't see much of the array at all, which is a bit of a problem. I mean, if sysfs really is the way of the future, and some people might think it isn't, I think the reality is that if we want uniformity in the kernel, we want to make as many things use sysfs as uniformly as we can. DM currently doesn't expose anything else in sysfs at all; all you see in sysfs is the block device. If you want to configure it, there's this magic ioctl for doing configuration, and it kind of works, but it's legacy; it's not the way of the future. MD does have some stuff in sysfs, but it's in exactly the wrong place. As I said before, the array should be above the block device: you define the array, and then out of the array you make a block device. But the MD stuff in sysfs is inside the block device. You have a device directory called md0 or md1; inside that there's a directory called "md"; inside that there's your chunk size and your layout and your array level and all the other stuff. So it's in exactly the wrong place. Which is my fault, I put it there, but I blame "queue". If you look inside a SCSI drive, or any block device, there's a directory called "queue", which refers to the request queue, but that technically belongs up at the SCSI layer. When you instantiate a SCSI drive, it actually creates a queue first, so it can send requests to the device to see whether it really is a block device. So the queue exists before the block device, but it appears in sysfs underneath it. This is one of the reasons I have trouble understanding sysfs: it breaks its own rules, because they've never been written down.

But anyway, the current state of MD in sysfs is ugly. It's in the wrong place, and even though I put it there, I don't like it being there; there should be a way to put it above. The whole way MD devices are created at the moment is ugly too: you actually do a mknod in /dev to create the block device, and you open it, and that action of opening the block device causes the array to exist, which is really backwards as well. It probably seemed like a good idea 12 years ago, or whenever this stuff was first written, by somebody else, but it doesn't feel like a good idea anymore.

So how would I fix this? What would I do to make this better? And is it a good use of sysfs? Does this actually fit your model of how sysfs might work or, more accurately, does it deeply offend your model of how sysfs might work? People are more likely to talk about how they're offended than about how they're pleased, so: can you imagine this offending you, or someone else? And so the idea is to create two new device types.
One's a class device and one's a bus device, which is itself a bit awkward, because some people tell me that this distinction is going away and they're all going to be called subsystems. We'll get to why there's a distinction between them soon. But anyway: a class device and a bus device.

"volgroup" is obviously short for volume group; it represents the concept of a volume group. Now, LVM has a concept of a volume group. It's a good concept, and it's a concept that doesn't actually exist in the kernel at the moment, so adding it is maybe not necessary; it's the one thing I'm a bit uncertain about. But the basic concept is that it's a set of volumes described by one chunk of metadata. If you have a bunch of drives which all have some metadata stored on them, and that metadata describes several different arrays that make use of those devices, then that's one volume group. If you have this set of devices whose metadata describes stuff here, and that set of devices whose metadata describes stuff there, then they're two separate volume groups. A lot of vendor metadata works this way: when you get these firmware RAID cards, or fake RAID cards, whatever you choose to call them, those that allow multiple arrays will have multiple arrays on a particular set of drives (all the drives attached to their controller), with one set of metadata for all of them. So the idea of a volume group, rather than just individual volumes, does make sense to some degree, and I think it helps.

And the way you create a volume group is just to echo a name out to a magic attribute, /sys/class/volgroup/new or something, which does have a little bit of precedent. This is one thing when you're doing stuff in sysfs: you want to ask, well, how did somebody else do this? Is there a precedent? And there's a module called pktcdvd, which does packet access to CDs and DVDs. It makes them look like they've got 512-byte sectors when really they've got something like 4K sectors, and you can do reads and writes and it coalesces them and does the right stuff. I don't know much about it, except that you can create these devices by echoing out to a sysfs attribute. But when you follow examples in sysfs code, you've got to ask yourself: is this a good example to follow, one that other people would say "yes, that's a good idea" about? Or is it actually a horrible example that everyone who's ever used it finds hard to use? Of course, I've never actually used pktcdvd myself; I don't know what it's like to use.

Hello, question? Right, so that's a network bonding device. Just to repeat: the question was simply an observation that the bonding network driver has a very similar sort of concept, so there is more precedent, which I like to know; that's always good. But there's a very good point in there, which I've sort of thought of too, which is namespace control inside the various directories. It's best if a directory either has a bunch of ad hoc names, like the attributes of a device or something, or has a uniform list of names, like the names of all the devices. If you try to mix those two, you find you can't create a device named "add", because "add" is the special file for adding new devices. And this is the thing that came up, as was just mentioned, with bonding devices: you have both a list and some ad hoc names in the one directory, and that's certainly a bad thing.
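A hedged sketch of what such a creation attribute could look like, modeled loosely on pktcdvd's class files. Every name here (volgroup, the "new" attribute) is hypothetical, and the class and class-attribute signatures have changed several times across kernel versions, so treat this as a shape rather than a real interface.

```c
#include <linux/device.h>
#include <linux/err.h>
#include <linux/kdev_t.h>
#include <linux/module.h>

static struct class *volgroup_class;

/* `echo myvg > /sys/class/volgroup/new` would create a "myvg" class device. */
static ssize_t new_store(struct class *cls, struct class_attribute *attr,
			 const char *buf, size_t count)
{
	char name[32];
	struct device *dev;

	if (sscanf(buf, "%31s", name) != 1)
		return -EINVAL;

	dev = device_create(volgroup_class, NULL, MKDEV(0, 0), NULL,
			    "%s", name);
	return IS_ERR(dev) ? PTR_ERR(dev) : count;
}
static CLASS_ATTR_WO(new);

static int __init volgroup_init(void)
{
	volgroup_class = class_create(THIS_MODULE, "volgroup");
	if (IS_ERR(volgroup_class))
		return PTR_ERR(volgroup_class);
	return class_create_file(volgroup_class, &class_attr_new);
}
module_init(volgroup_init);
MODULE_LICENSE("GPL");
```

Note how this runs straight into the namespace point just made: the directory would contain both the control file "new" and the created device names, so a volume group literally named "new" becomes impossible.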
I think I've probably made that same namespace mistake in the MD sub-directory too, but it's worth actually writing all these things down somewhere, to try to avoid making the same mistakes again.

So, volgroups. A volume group is a fairly simple concept: basically, we can collect volumes into it. It gives an identity to a group of volumes, which is more than just aesthetic; it does have a little bit of meaning, which hopefully I'll get to in the next slide. The other type of device, the bus device, is what I call a volume. And "volume" includes both physical volumes, for people who understand LVM, and logical volumes and RAID arrays. Everything that can conceivably contain a contiguous set of data is a volume, and volumes group together into volgroups.

Now, the reason why volumes have to be bus devices is... brief pause: who knows the difference between class devices and bus devices? Who knows that there are such things? A few hands. It took me a while to figure out the difference. Once you see it, it becomes "oh, of course", but until you see it... The difference that I saw is that with bus devices, you can bind a driver to each individual device, so you can have a different driver for each device on the bus. With class devices there's no concept of distinct drivers; a class device just has a set of methods, and somewhere deep in the kernel it does stuff. But with a bus device, you can specifically bind a separate driver to each device on the bus. So a SCSI target turns out to be a disk drive: you bind sd to it. It turns out to be a tape drive: you bind st to it. It turns out to be a CD drive: you bind the other one to it... sr, is it just sr? Anyway.

Now, this is fairly similar to the DM concept of targets, or the MD concept of personalities. A RAID array can be RAID 1 or RAID 5 or RAID 0 or linear or RAID 10; a DM target can be snapshot or stripe or linear or all the rest; the list goes on. It's a very similar concept of different targets, different drivers: a generic sort of device for which there are different drivers. So a volume is a kind of generic container device, and you can bind different drivers to it to get different behaviors.

So, to get something like a PV, a physical volume, you bind a driver, which I've here called "blockdev", which basically takes a block device from outside the volume infrastructure and takes ownership of it. That stops anybody from mounting a filesystem on it or something like that; it takes ownership and makes it available as a volume for other volumes inside the volume group to make use of. And hopefully we could then present the current loop driver in this volume abstraction in sysfs too, and it would all fit together better. So there'd be some volume drivers, blockdev and loop, that just take something from outside and make it available internally. And then you have the more logical volumes, like RAID1 or stripe or concatenate or RAID456 or snapshot, that basically take other volumes in the same volume group, do some magic to the blocks in them, and present them as a new volume. And then these things can be arbitrarily stacked, because each logical volume takes other volumes inside the same volume group, and that's where the concept of the volume group starts to become important.
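Back on the class-versus-bus distinction for a moment: what makes a bus device attractive here is the match callback, which is how the driver core decides which driver binds to which device. Below is a hedged sketch of a "volume" bus that binds by requested personality name rather than by probing hardware; all names are hypothetical, and the callback signatures drift between kernel versions.

```c
#include <linux/device.h>
#include <linux/module.h>
#include <linux/string.h>

struct volume_device {
	struct device dev;
	const char *personality;	/* e.g. "raid1", chosen at creation */
};
#define to_volume(d) container_of(d, struct volume_device, dev)

/* Bind a driver to a volume iff its name matches the volume's requested
 * personality, much as SCSI binds sd/st/sr according to the probed type. */
static int volume_match(struct device *dev, struct device_driver *drv)
{
	return strcmp(to_volume(dev)->personality, drv->name) == 0;
}

static struct bus_type volume_bus = {
	.name  = "volume",
	.match = volume_match,
};

static struct device_driver raid1_driver = {
	.name = "raid1",
	.bus  = &volume_bus,
	/* .probe would set up the RAID1 personality on this volume */
};

static int __init volume_bus_init(void)
{
	int err = bus_register(&volume_bus);

	if (err)
		return err;
	return driver_register(&raid1_driver);
}
module_init(volume_bus_init);
MODULE_LICENSE("GPL");
```

The design wrinkle, raised again in the questions below, is that unlike a real bus nothing is probed here: the "vendor ID" is simply whatever the administrator asked for.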
The volume group is a container that keeps these ones separate from those ones, in a sense, to make sure you don't tread on your own toes, to some extent. And then, once you've created a volume, it might be just a simple RAID5 over a bunch of block devices, or you might want to do RAID50, so you make a RAID0 stripe over a set of RAID5 volumes over block devices, or something like that. You can combine them without having the infrastructure of lots of block devices in the way. You can currently do this by stacking block devices, but block devices internally carry a lot of extra weight that just gets in the way. They're not really designed for it; it was added as an afterthought, and it doesn't always work the way you want.

But anyway, once you've created a volume and you decide this is a volume you actually want to export outside of the volume group, not just use internally, you tell it, again probably by writing to some magic sysfs attribute, to export a block device. And then the block device, md0, or probably you could give it a name instead of all these meaningless numbers, md_myhome or md_backup or whatever, would appear in the sysfs device directories underneath. So you'd have the volume group, then the particular volume, and then the block device appears underneath that: in the correct order, the discovery order, as you'd want it. And that means that the moment the block device appears, it's already fully configured and all the data is completely available. When udev sees this block device and asks for a partition table, it gets a partition table, or maybe there isn't a partition table and it gets whatever's there, sees an ext3 filesystem superblock, and decides to mount it or not, depending on what the policy is. It just removes that ugliness of doing things in the wrong order, which currently has to be worked around.

So that's what a volume would be, and that's really the whole picture. It's stuff that's mostly already there, but it's exposed in sysfs, exposed with (hopefully) a fairly meaningful abstraction, and it does things in the right order, so that device discovery in particular happens properly. But is it a good idea? And so, some questions. I've actually got code that does some of this, not all of it; I haven't written all the different drivers for all the different targets, just proof-of-concept code, and it sort of works. But is the volume group level really necessary?
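As an aside on that export step: a hedged sketch of the kernel side, in roughly the 5.x-era spelling (older kernels expressed the parent differently, and newer ones have changed blk_alloc_disk() again). The parent argument is the whole point: the new block device lands underneath the volume in /sys/devices. The function and all names are illustrative only.

```c
#include <linux/blkdev.h>
#include <linux/module.h>
#include <linux/numa.h>

static const struct block_device_operations vol_fops = {
	.owner = THIS_MODULE,
};

/* Create the exported block device as a *child* of an already-configured
 * volume, so udev's "add" uevent only fires once I/O can actually work. */
static int volume_export(struct device *volume, const char *name,
			 sector_t sectors)
{
	struct gendisk *disk = blk_alloc_disk(NUMA_NO_NODE);

	if (!disk)
		return -ENOMEM;
	snprintf(disk->disk_name, sizeof(disk->disk_name), "%s", name);
	disk->fops = &vol_fops;
	set_capacity(disk, sectors);
	/* parent = the volume: correct discovery order in sysfs */
	return device_add_disk(volume, disk, NULL);
}
```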
Back to that question: DM, which is the kernel side of LVM, doesn't have a concept of volume groups at all, and it still works. Is that good or bad? I think the main point of volume groups is this. The kernel has a concept of ownership for every block device, so when you mount a filesystem from a block device, that filesystem owns the block device and nobody else can own it. You can't turn swap on on a block device that you've already mounted, and if you try to fsck a block device, fsck will try to take ownership of it first and be told: oh, somebody else owns this; you can't just check it at the moment, sorry, you have to be more careful. With this proposal, each volume group would own a particular set of block devices, and the ownership model would work better (see the sketch below). Currently DM owns every block device that you give to DM, no matter which volume group it's in, which waters down the ownership model. Maybe that's not a problem; I'm not sure.

Is it cool to create devices by writing to a class attribute? Well, we've had the observation that it's been done at least twice, in network bonding as well as in the pktcdvd thing, so maybe it is.

Then there's this question: is binding a personality to a blank volume like binding a driver to a regular bus device? They sort of feel similar, but they're really quite different realms of activity. There, you bind a driver to a particular device on the bus because you've probed the device: you've looked at it, it's got some magic vendor:device number, so you've done a mapping and that's the driver it has to be. That's quite different from saying "I'm configuring this, and I want it to be this way". It's sort of a different thing, to me.

I missed the question... should I use configfs? There's this thing called configfs; does anyone know about configfs? Yeah? Good, explain it to me. There's a thing called configfs which is kind of a bit like sysfs, but the documentation says no, it's different from sysfs, because you use it for configuring things; you create things in it with mkdir. So why don't we just add mkdir to sysfs and be done? I just don't get the distinction with configfs. I've read what people have written and it still doesn't make any sense to me, so either I'm wrong, or they're wrong, or we're both wrong. So if anyone could suggest why it would be better to use configfs for something like this, I'd love to have that conversation.

And the fourth point there, which is kind of a sticky one: as anyone who's used LVM knows, you can reconfigure a device on the fly. One really good example is pvmove. When you use pvmove to move the data in a physical volume from one place to another, whatever was on top of the device gets suspended, a RAID1 gets inserted in there, and it's resumed. Then, with this RAID1, you're writing to both devices, and the data is migrated across from one to the other as the pvmove progresses. Once the data has been copied, they both have a full copy of the data, which might have been changing the whole time; then the whole thing is suspended, the RAID1 is converted back into just a plain device, and it's resumed again. So data moves around under the covers, and your filesystem doesn't need to care about it. And that's just a simple example; there are lots of different ways in which this suspend, reconfigure, resume pattern is really good stuff in DM.
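Here is the sketch promised above for the ownership model: the classic exclusive-claim interface, in its pre-6.5 spelling (blkdev_get_by_path() with FMODE_EXCL; recent kernels reworked this). Using the volume group as the holder cookie is the proposal being floated here, not existing behavior.

```c
#include <linux/blkdev.h>
#include <linux/fs.h>

/* Claim a component device exclusively on behalf of a volume group. While
 * the claim is held, any other claimant (mount, swapon, another array)
 * gets -EBUSY; the same `volgroup` cookie identifies the owner. */
static struct block_device *volgroup_claim(const char *path, void *volgroup)
{
	return blkdev_get_by_path(path,
				  FMODE_READ | FMODE_WRITE | FMODE_EXCL,
				  volgroup);
}

/* The matching release when the volume group lets go of the device. */
static void volgroup_release(struct block_device *bdev)
{
	blkdev_put(bdev, FMODE_READ | FMODE_WRITE | FMODE_EXCL);
}
```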
How does that fit into this model? The most obvious way, to me, is that you create another volume with a new description of how things should be, and move the block device from one volume to the other. But this concept of moving a block device is probably impossible to implement. So I don't know; it's the stickiest bit of this. Maybe sysfs should be enhanced so you can move things, but if this is the only situation where moving things could ever possibly make sense, then maybe I need to find another way. I don't suppose anyone's ever heard of moving things in sysfs; you can't even link things there, and you almost certainly can't move things around.

So, those are my questions. ...Yeah: unplug, then re-plug. Which is exactly what I don't want to do, because I want the filesystem to stay mounted in this case. ...Oh, you use that feature of LVM, pvmove and stuff? Yeah. Oh, I certainly wouldn't want to take it away; I want to make it more accessible. I'd really like to be able to convert an sda into an md0 and things like that. There are issues about where to put the metadata, but they're probably solvable; the difficult thing is getting the infrastructure inside the kernel to do it seamlessly. And maybe there's something else to be done here: the solution to every problem is another layer of indirection, another virtual layer to put the block devices on. Instead of putting block devices underneath the array devices, which is where I think they should be, maybe they should be higher up, with some sort of linkage to them. ...Something similar to what happens when you unplug? Yeah, possibly. ...So, reconfigure the volume? Yeah; well, that works if what you're saying is what I think you're saying. If you create a DM device that just has sda under it, and you mount the filesystem from the DM device, then you can later replace sda with a mirror of sda and sdb easily. But people don't tend to put a DM device on top of every block device they use. They tend to look for sda1, mount sda1, and then: oh damn, I really want that to be a RAID1; what do I do now? It'd be nice to be able to make that work, and it's kind of hard to make it work.

So, the observation is that in Solaris, particularly using ZFS, all this intelligence has moved into the filesystem: the filesystem creates volumes, and you can tell the filesystem to do stuff with the block devices. It's maybe a bit like the direction Btrfs is going, merging the filesystem and logical-volume layers and doing it all under the covers. In a sense that's hiding it from sysfs again; it's creating a separate new abstraction, where a filesystem can have multiple block devices and do whatever it likes with them. Maybe that's a good thing to do; it's just that it'll be a long time before the whole world is Btrfs or ZFS.

Okay, so maybe the configuration tools should say: where you would mount sda1, you instead mount dm-sda1, which is a DM wrapper on sda1; insist that the extra level of indirection is put in there from the start. I've certainly heard that proposed; I haven't heard it accepted, though. ...So you're automatically at that level? Yeah, possibly. ...Turn it into a cp inside sysfs? Yeah; well, it'd be nice if we could implement move inside sysfs, but moving from one directory to another is kind of... possibly we could do something like that.
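For what it's worth, the driver core does have an internal re-parenting helper, device_move(), used where devices genuinely change position (network devices moving between namespaces, for instance). Whether it could stretch to a block device hopping between volumes is exactly the open question above; a hedged sketch, with hypothetical names:

```c
#include <linux/device.h>

/* Re-parent `blockdev` under `new_volume` in /sys/devices. kobject_move()
 * underneath emits a KOBJ_MOVE uevent; DPM_ORDER_NONE leaves the
 * power-management ordering untouched. This is illustrative, not a claim
 * that it works for gendisks today. */
static int volume_reparent(struct device *blockdev, struct device *new_volume)
{
	return device_move(blockdev, new_volume, DPM_ORDER_NONE);
}
```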
There's another question? Yeah? Okay, yep: procfs. Do I think... sorry, do I think what should be in procfs? So, do I think there are things that should be in one and things that should be in the other? They're both a mess. They were designed with good intentions, but there's nobody to oversee the progress and say "this is right, this is wrong". There's no maintainer of procfs, and there's no heavy-handed maintainer of sysfs who has the time to understand all the needs of all the different devices, because different devices have really different needs of sysfs. And should filesystems appear in sysfs, and how? Well, some of them are starting to, in different ways. I mean, sysfs and procfs are both APIs to user space, and with an API there really should be... what's the word... a high bar of requirements for getting things in. But it's just too easy to add stuff to sysfs, and people have, myself included; stuff that's not really well thought out, not really discussed. APIs are hard. I think sysfs is going the way of procfs, has gone the way of procfs, but we can't really create another one; that would be even more insane. Sorry? Oh, that's Al Viro's plan: have a new filesystem for everything. I've sort of tried that. I used to be the NFSD maintainer, and NFSD has its own filesystem for exporting its own personal stuff. But where do you mount it? I don't know... in sysfs? Well, actually it mounts in procfs. But that just makes it somebody else's problem; it's still a problem. Exactly: it's an API, which is a path name with a thing at the end of it, and it's simple. Nothing new goes in procfs, all new things go in sysfs... probably not worth the effort.

Linus... I finally figured this out. He was saying before that you're never allowed to break the API, and then he would break the API, and that'd be fine. At the Kernel Summit I finally figured it out: the rule is that no one is allowed to complain. That's the rule. If you do something and someone complains, Linus will revert it. If you do something and nobody complains, that's fine. You can break the API as much as you like, as long as you orchestrate things so that nobody complains. So if you control the main user-space tool as well, and you get it upgraded... Like, we're actually going to break the API for NFS; we can rip out some old code. And that's all right, because the only user-space tool that uses it has been using the new API for years, and everyone's really using that, so I'm sure that when we remove this deprecated stuff, no one's going to notice. And if nobody notices, nobody will tell Linus, and it'll be fine. It'll be seriously fine, because the problem isn't breaking the API; the problem is inconveniencing users. That's really what it's about. We don't want to inconvenience people; we don't want people to think Linux is bad. So as long as you can fix things without inconveniencing people, it's fine. Which is one of the problems with "do I want to do this": I'll still have to keep the old "md" directory inside the block devices around for a period of time. Is it worth the effort? Well, I think it is. And this is maybe a good point to finish on: I want Linux to be a great operating system, so I want to get rid of some of the horrible stuff now, and this is a step in the right direction. So, I'm finished. Any last-minute questions?

Should the Filesystem Hierarchy Standard contain any information about procfs? It is a standard... yeah. Well, I think the agenda of the FHS is to tell
distros how to lay out the filesystem, and procfs is already a standard, determined by the kernel. The FHS maybe should say that procfs gets mounted at /proc, but there's nothing else for it to say. Maybe there should be a separate standard that describes everything in /proc; that'd be fun to write. But I think it's a separate concern, because it's different people who can break it. Anything else? Oh, thank you.