It's not the reserve command; it's rather command aborts. Or rather, timeout and abort handling again. Yeah, sure. Go ahead.

So, as discussed several times recently, command aborts from userland are tricky. The issue is: what do you do if you have a VM running? You have a VM which is trying to do I/O. It does I/O. But then the VM sets a timeout on that I/O within the VM and sends down the I/O; QEMU will get the I/O and send it off to the underlying host. Sadly, the VM has no way of transmitting the timeout to QEMU, because it's using a standard interface, say AHCI, SATA, NVMe, you name it, none of which are equipped to transport a timeout. So when leaving the KVM guest, all information about the timeout is gone. Hence, it's anyone's guess whether the timeout you set in the VM is identical to the timeout which the host is using. And more often than not, they will not be, because an application which really cares, let's say a cluster application, will set a timeout which is probably different, i.e. shorter, than the default timeout. Otherwise there would be no point in setting it.

Which means that a timeout might trigger on the host or in the VM, and the VM tries to abort the command. So it sends an abort command, which will be duly transmitted to QEMU, and then QEMU says: no, that is bad, because I can't really do anything with it. It's either an ioctl, in which case I can't abort it at all. It might be libaio, in which case I can "abort" it, but the abort in libaio simply cancels off the queue element in the internal libaio ring, again not doing an actual abort whatsoever. Or it's io_uring, in which case I can't abort it either. So you're stuck, and you have to wait for the command completion to come in. Which means that in the VM there is a timeout handler, but that timeout handler simply says "retry" and waits until you get a completion. And that's your system fucked. And yes, we have a rather large customer which has been complaining about this one for years, actually. And we keep trying to come up with different ways we could handle it. None of the ways we have tested worked.

So essentially we have two ways out of that. One is to see if we can implement command aborts from user space. Or alternatively, and that is where CDL comes in, if we have a way of transmitting the timeouts, then everything would be aligned, and we would drastically reduce the possibility of ever having to send an abort. So that's the proposal here. Clearly, the CDL way is the easier way, because CDL is a standard, we have a nice patch set, and it's about due to be merged. So that should be relatively easy. Then we only need a patch set for QEMU, translating the CDL settings into the request timeout which QEMU will be using, and everything will be nice and dandy. Nearly dandy, because there's still a risk of getting a timeout despite setting a CDL, because, as Damien is wont to remind me, CDL is best effort only. So you might still be hitting timeouts, at which point we are back to square one: we don't have aborts, and we're screwed again.
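To make the libaio point concrete, here is a minimal sketch (the device path, sizes, and error handling are illustrative only). Even though libaio exposes io_cancel(), for block-device I/O that has already been submitted the kernel generally has no way to reach into the hardware, so the call typically fails and the command simply runs to completion:

```c
/* Sketch: why a userland "abort" via libaio achieves nothing for in-flight I/O.
 * /dev/sdX is a placeholder; link with -laio. */
#define _GNU_SOURCE
#include <libaio.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    io_context_t ctx = 0;
    if (io_setup(32, &ctx) < 0) { perror("io_setup"); return 1; }

    int fd = open("/dev/sdX", O_RDONLY | O_DIRECT);   /* placeholder device */
    if (fd < 0) { perror("open"); return 1; }

    void *buf;
    if (posix_memalign(&buf, 4096, 4096)) return 1;

    struct iocb cb;
    io_prep_pread(&cb, fd, buf, 4096, 0);
    struct iocb *cbs[1] = { &cb };
    if (io_submit(ctx, 1, cbs) != 1) { perror("io_submit"); return 1; }

    /* The "abort": this only removes a not-yet-started element from the ring.
     * For I/O already in flight it typically fails (e.g. -EINVAL or -EAGAIN);
     * it never aborts a command that the hardware is already working on. */
    struct io_event ev;
    int ret = io_cancel(ctx, &cb, &ev);
    printf("io_cancel returned %d; the read completes on its own schedule\n", ret);

    io_destroy(ctx);
    return 0;
}
```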
So I take it the application is probably the thing that is used to sending aborts to the... It's the driver. Right, right, right. That's internal to the driver. So either it's the NVMe driver, the SATA driver, you name it; all of these can abort commands, and they do. Because normally they're talking to the hardware, and the hardware is able to deal with the abort internally. But in the case of QEMU, you're talking to the emulation, and the emulation has to convert the incoming abort into something which the host understands. And there's nothing we can convert it to, because there's nothing you can do. So this is a problem with QEMU? Yes. The emulation. Okay. Yes. This is primarily a QEMU thing, or consequently not directly a QEMU thing: anything which needs to do aborts from userland. It's just that QEMU features prominently here, because that's the use case, driven by the rather large customer we have.

So the alternative idea... Yeah. So, Hannes, the timeout that's usually sent down from the application, an individual I/O timeout, that isn't necessarily the total amount of time, in fact it usually isn't, that say the SCSI midlayer will spend trying to perform your I/O before it exhausts the retries and gives up. We know. That's right. So when you map the CDL setting, does it take that into account? These are really details that we can figure out once we have defined what we're going to do. Clearly we can set anything, and we can shorten things. The key point here is that with CDL you can transport the timeout settings from one layer to the other. Okay. So really what you're talking about is CDL as a way to pass the information down to the host; it's actually a form of I/O hint about what you actually intended. Yes, precisely. So this is not about implementing CDL such that we can pass CDLs down to the underlying layer, but rather that we interpret CDLs in QEMU such that QEMU can do the right thing.

So, okay. Conceptually, how is this different from giving CDLs to a storage array and how the storage array will handle it? I mean, isn't that analogous? Yeah, roughly. So which of the ten billion storage backends in QEMU are we talking about here? There are so many ways you can do I/O. What is your target? The targets are typically those where you can set a timeout, where it really makes sense to have a timeout. That is the SG interface, where you can easily set the request timeout because it even has a bloody field allowing you to do so. And the other one is io_uring, where you can also attach a timeout to the request directly in the SQE, if I remember correctly. You know that with virtio-blk you can also set an I/O priority, which is totally undocumented in the specification, by the way. Yes, you can. But then again, you have to transport this information. The point is that currently we have no way of crossing the boundary from the VM down to QEMU. Well, if you are using virtio-scsi, you just get the CDL index from the SCSI command. If we have CDLs; again, as I said, if we have CDLs, everything's fine. Well, Martin is about to queue it all up. So if we have CDLs, yeah, exactly. So if we have CDLs, everything's fine. So is virtio-scsi then an acceptable solution, or do you need virtio-blk? No, no, no. virtio-scsi is perfectly fine. Yeah, sure, because that's actually what they're using currently. So yeah, that's perfectly fine.
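The SG interface point is easy to make concrete: the sg_io_hdr passed to the SG_IO ioctl carries an explicit per-command timeout in milliseconds, so userspace (QEMU included) can bound an individual command. A minimal sketch, assuming a SCSI generic node exists; the CDB is a TEST UNIT READY purely for illustration:

```c
/* Sketch: per-command timeout via the SG interface (SG_IO ioctl).
 * /dev/sg0 is a placeholder for a real SCSI generic node. */
#include <scsi/sg.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int fd = open("/dev/sg0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    unsigned char cdb[6] = { 0x00 };   /* TEST UNIT READY */
    unsigned char sense[32];

    struct sg_io_hdr hdr;
    memset(&hdr, 0, sizeof(hdr));
    hdr.interface_id    = 'S';
    hdr.dxfer_direction = SG_DXFER_NONE;
    hdr.cmd_len         = sizeof(cdb);
    hdr.cmdp            = cdb;
    hdr.mx_sb_len       = sizeof(sense);
    hdr.sbp             = sense;
    hdr.timeout         = 2000;        /* the field in question: timeout in ms */

    if (ioctl(fd, SG_IO, &hdr) < 0) { perror("SG_IO"); return 1; }
    printf("status=0x%x host_status=0x%x driver_status=0x%x\n",
           hdr.status, hdr.host_status, hdr.driver_status);
    return 0;
}
```

The io_uring equivalent of a per-command timeout would be a linked timeout attached to the SQE (io_uring_prep_link_timeout), which is presumably the SQE mechanism alluded to above.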
Another idea was whether it wouldn't be possible to implement command aborts on io_uring, essentially taking a cue from the ATA spec. Currently io_uring is using the polling interface, which essentially means it polls on completions, and the kernel will notify that this many completions have happened. Then it needs to go looking: which of the commands I'm looking for have completed? It's the io_uring submitter which needs to do that. We could abuse this mechanism to do aborts, in the sense that it should be possible to implement an additional command which will abort all outstanding commands on that queue. That should be relatively easy to transmit to the underlying layers, because you wouldn't even need to know which commands you abort. You can just say: whatever you have, abort it. Then we would be getting completions back, meaning the polling interface would continue to work. Once the completions come back, we could look: is the command which I wanted to abort in there? Return it, and resubmit the remaining bits.

But if you're sending down an io_uring abort operation, you know what your SQE/CQE identifier is. You know that. But this is an abstract number. It only has a meaning within an io_uring; it has no meaning outside of it. But you already converted from the io_uring request to an I/O operation once. Yes, on the way down. But do I get the information back about the identifier which ends up down at the bottom layer? But you still need to handle the io_uring completion, right? So the information exists; otherwise, how are you going to complete the I/O? Yeah, but it's disconnected. So. And how do you actually cancel things, depending on your backend storage? Sure, you can do some ugly things with ATA, that's easy, but with SAS and NVMe it's not that easy to cancel everything. Well, the details are ever so slightly hazy here. And if it's a qcow2 image backed by a file, you can't cancel anything at all. Yeah, sure. Clearly, this whole thing will only work if the underlying storage allows you to send an abort. If the underlying storage doesn't allow you to send an abort, then clearly this won't work. But that sounds to me like such a corner case in the end. Is it really useful? Remains to be seen, I guess; CDLs will bring us a long way.

But with the current NVMe abort definition, the host, QEMU, can emulate this. Sure, send me an abort; it doesn't have to do anything with it. I'm going to just return the abort. Because, like Damien said, what QEMU does with this is going to be entirely an implementation-specific thing: whether it even has the ability to do anything with an abort on the backend. Yes, fully correct. But currently we will always do nothing, because for no backend whatsoever do we have any means of aborting. So what's the improvement? It doesn't sound like a big deal. It's a lot of work for something that doesn't seem all that useful; I don't think it's worth the effort. That's not a worry. So yeah. And that reminds me, as I said: I guess if we have CDLs, it'll bring us a long way.
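For what it's worth, newer io_uring already has a primitive along these lines: the async cancel operation can, with IORING_ASYNC_CANCEL_ALL (kernel 5.19+, liburing 2.2+), target all outstanding requests matching a criterion such as a file descriptor. A minimal sketch with a placeholder device; note that this is still best effort, exactly as discussed above, since a request already issued to the hardware cannot be pulled back:

```c
/* Sketch: "abort everything outstanding on this fd" with io_uring async cancel.
 * Requires liburing >= 2.2 and kernel >= 5.19; link with -luring. */
#include <liburing.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    struct io_uring ring;
    io_uring_queue_init(32, &ring, 0);

    int fd = open("/dev/sdX", O_RDONLY);   /* placeholder device */
    char buf[4096];

    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
    io_uring_sqe_set_data64(sqe, 1);
    io_uring_submit(&ring);

    /* Cancel all requests targeting fd. Every cancelled request still posts
     * its own CQE (typically with -ECANCELED), so completion polling keeps
     * working. Best effort: I/O already at the device runs to completion. */
    sqe = io_uring_get_sqe(&ring);
    io_uring_prep_cancel_fd(sqe, fd, IORING_ASYNC_CANCEL_ALL);
    io_uring_sqe_set_data64(sqe, 2);
    io_uring_submit(&ring);

    struct io_uring_cqe *cqe;
    for (int i = 0; i < 2; i++) {
        io_uring_wait_cqe(&ring, &cqe);
        printf("user_data=%llu res=%d\n",
               (unsigned long long)io_uring_cqe_get_data64(cqe), cqe->res);
        io_uring_cqe_seen(&ring, cqe);
    }
    io_uring_queue_exit(&ring);
    return 0;
}
```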
Yeah, but the IBM cloud has a somewhat daft setup where the volume images are on NFS, which are then exported back up to the VMs; you see the problem. And we used virtio-blk, which has no settable timeout, and we switched to virtio-scsi so we could basically tell the customer VMs to set the timeout to pretty much machine infinity. So we never got aborts, because the problem would never have been solved by actually sending an abort down. The problem was basically solved just by waiting for NFS to get its act together.

Are you sure you need to actually transmit the abort, or can't you just adjust the timeout? So yes, we are sure, because the real story, the long story, is that this customer wants to do multipathing for the VMs, and persistent reservations for the VMs. So he has the choice of either running multipath within the VM, which requires you to do aborts so that you can switch over paths, and then you're able to use persistent reservations; or moving multipathing into the host, such that multipathing actually works. Why does multipathing need aborts? Maybe the application wants to use aborts with multipathing? No, no, no. You need to switch over. Yeah. And you switch over exactly when one path fails, which means there will be outstanding commands on that path. Right. And these you have to abort. They will not be coming back; you have to actively say: please abort these commands. You're saying on the other path? Yeah, on the failed path. There are outstanding commands on the failed path, because the path has bloody failed. So, Hannes, you can't send an abort on a failed path, right? So yes, but you're talking about... SCSI has a third-party abort type of thing, right? Abort the command on another I_T nexus. Right. But if your path has failed, then you're not going to get a SCSI abort through there. You have outstanding commands when the path fails. These commands are not only in your VM, but more particularly in the hardware of the host. Right. And they are being mapped directly into the VM. If you want, you can do a reset of your VM emulation, which you would normally be doing from multipathing; do a reset there, which you can do. That doesn't help at all for the hardware underneath.

So, I haven't looked at QEMU's implementation of, say for instance, the SCSI interface, but I have to imagine it has the ability to receive an abort; whether or not it does anything with it is going to be entirely an implementation-dependent thing. Exactly. It could do nothing. And the abort recovery algorithms: if the application has an I/O timeout pop and sends the abort, well, nothing's going to happen; it doesn't come back. That will eventually escalate, and you get a LUN reset or a controller reset, and that's where you go. You can't do a reset. How would you do a reset? What I'm saying is that this is entirely an emulation problem, right? Yes, of course, it is partially an emulation problem, because the emulation can't do a reset on the underlying hardware. Right. But what I'm saying is, I think you're talking to the wrong people. You should be talking to the QEMU people, right? No. So tell me, what are we supposed to do? Normally, what the underlying hardware does, say Fibre Channel: Fibre Channel sends the abort if there's an error.
It basically tries to figure out what's going on, all these kinds of things. But we have no idea what the Fibre Channel underneath does from within QEMU. Sure. We have no way of telling what it does. So when we did the abort and cancel enhancements, this is one of the things we had in mind, right? Yes. But we wanted the abort and the cancel to be able to return some type of status that says: hey, I did nothing. Yes. Right. You did nothing. And that way you're at least not stuck in the situation, like in the SCSI case, where you send the abort and then the abort times out as well, because that's not helping anything; it's a stupid protocol. You just get cascading timeouts. So we tried to fix the protocol, and whether or not the controller does anything with the abort or with the cancel is entirely up to it. So I don't understand what it is that we're going to do to solve that problem.

So yes, I know, but the problem is that we have two different sets of command identifiers. We have one command identifier in the VM, and we have another, completely different command identifier on the host. Okay, so we will be sending a cancel command from the VM; that will be passed through, and you will be seeing it in QEMU. Everything's nice and dandy. Okay. So maybe what we want is some type of interface to be able to understand what the command identifier was that was assigned to the particular I/O that I sent down. If I could basically provide that information back up to the caller, or somehow come up with some way it could be deterministically understood or determined, then you could create another interface that could actually say: send down the abort command; I need the command identifier; here it is; I know what it is. I couldn't agree more. Okay. So that's what the proposal is. I think I can understand that. But this is not the proposal, because our stack is not set up for doing so. Well, right. So the idea would be to add that functionality.

Currently the stack is unidirectional only. You're sending down I/O, and at each step of the way the I/O might, or rather will, be transformed into something else. Any ID you set in one stage will be lost in the next stage: split, merged, remapped, converted, you name it. So there is no direct relationship between the I/O you're issuing and the I/O which is sent down to the hardware. Any identifiers you were able to set are completely meaningless to the other layer. They simply have no relationship whatsoever.

I guess that's largely true, but underneath the block layer, the moment we enter the block layer, I think we have something. This is why the polling is able to work: we have something called a cookie that is returned after the submission, and we end up storing it. This cookie is also used during completion; that is why we are able to do completion polling. So the cookie is something the block layer generates, and it identifies the request, right? The cookie carries two things. As far as NVMe is concerned, the cookie has two things: it has the queue ID, and it also has the command ID. It does. I mean, this is what we would probably require for an abort or cancel command, right? We only use it for polling at this point in time.
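For context, a hedged illustration of what that cookie encodes: in older kernels the poll cookie (blk_qc_t) packed the hardware queue number and the per-queue command tag into one 32-bit value, roughly as sketched below. This is modeled on the old BLK_QC_T_SHIFT encoding; the exact layout is kernel-version dependent and has since changed, so treat it as illustrative only.

```c
/* Sketch of the old blk_qc_t-style poll cookie: hw queue number in the high
 * bits, per-queue command tag in the low bits. Illustrative only. */
#include <stdint.h>
#include <stdio.h>

#define QC_T_SHIFT 16u   /* mirrors the kernel's old BLK_QC_T_SHIFT */

static uint32_t qc_pack(uint32_t queue_num, uint32_t tag)
{
    return (queue_num << QC_T_SHIFT) | (tag & ((1u << QC_T_SHIFT) - 1));
}

static uint32_t qc_to_queue_num(uint32_t cookie) { return cookie >> QC_T_SHIFT; }
static uint32_t qc_to_tag(uint32_t cookie)       { return cookie & ((1u << QC_T_SHIFT) - 1); }

int main(void)
{
    uint32_t cookie = qc_pack(3, 42);   /* hw queue 3, command tag 42 */
    printf("cookie=0x%08x queue=%u tag=%u\n",
           cookie, qc_to_queue_num(cookie), qc_to_tag(cookie));
    return 0;
}
```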
And the way we use it, let's say during polling, we don't really care about a specific completion. We never try to say that we want to poll for a specific I/O. We say: whatever I have that gets completed, that's done. So as far as the user interface is concerned, there is no such thing. But it seems to me that it should be possible to hook that up, maybe match it with... But then, as you already don't care which command completion you get, why can't you just have a normal cancel command? Because then the commands will be aborted. They will have a certain status; for most of them the status will be "aborted". And then it's up to the issuing application to say: right, these have been aborted, I have to do something here. And then it can do something there. But the commands would be aborted.

So, Hannes, if I could ask a question. Sorry, I know you guys are talking to each other and I'm interrupting. But, Hannes, you said that you would be using virtio-scsi because you want to be able to support reservations in the guest, right? So if you're using virtio-scsi, if you get a timeout, you're going to issue an abort through task management. Actually, we are not issuing aborts, because they don't work, so they are disabled. The timeout on the virtio-scsi end is a preset timeout. Okay. But I mean, is there any reason why making that work would not solve your problem? If you can make it work, be my guest. We couldn't. Okay. I see.

So, to come back to it: do you see an issue if we connect the cookie that is there in the block layer with the user-space identifier? In the case of io_uring, that happens to be something called user_data; that is how you identify a particular command. If we connect that with the block-layer cookie, do you think this problem is solved, or do you see gaps? We would have to look at how this whole thing would work. Well, I don't know; we have to see whether it works. It really depends on whether you're able to use io_uring on stacked devices or not. The bio you're looking at, or polling for, might be the last one of a chain, so your cookie is of limited use, because you would still have to cancel a lot of requests before everything is cancelled. Yep. So really, I guess the best option here would be a cancel-all, because, as said, we really don't know which I/O will actually be sent down underneath and what it looks like. We might be getting a cookie back which has a relationship to parts of the I/O we sent, but not to all of it. Right. So with the cancel command, as you know, we can basically send down the cancel request and say: actually, cancel or abort everything on this I/O queue. Exactly. That's what we would want to do. Yes. Right. Yes.

So I agree, I think this is likely to go right when we have a one-to-one relationship. But when we don't have a one-to-one relationship, when we submitted one command which turned out to be N commands down the line, we don't really have a cookie that's representative. I think that's your point. But I think Damien's point is that there's no such thing as a one-to-one relationship between a request, or even a bio, and a command. It is possible at times; if you think of polling, that's what we do, right? Whenever the I/O is not split, we are able to poll for it. When it is split, when it becomes a one-to-N relationship, we are not able to poll for it. Yeah.
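To pin down what that user-space identifier is: io_uring lets the submitter stamp each SQE with an opaque user_data value, which is echoed back verbatim in the matching CQE. That is the whole extent of its meaning; it never travels below the ring into the block layer or the device, which is exactly the disconnect being discussed. A minimal sketch (the file path is a placeholder):

```c
/* Sketch: io_uring's user_data is an opaque round-trip tag. It identifies the
 * request within this ring only; nothing below io_uring ever sees it. */
#include <liburing.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    struct io_uring ring;
    io_uring_queue_init(8, &ring, 0);

    int fd = open("/etc/hostname", O_RDONLY);   /* placeholder file */
    char buf[256];

    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
    io_uring_sqe_set_data64(sqe, 0xabcd);       /* our identifier for this I/O */
    io_uring_submit(&ring);

    struct io_uring_cqe *cqe;
    io_uring_wait_cqe(&ring, &cqe);
    /* The same 0xabcd comes back. The block-layer cookie, the driver tag and
     * the device's command ID for this I/O are all different, unrelated values. */
    printf("completed user_data=%llu res=%d\n",
           (unsigned long long)io_uring_cqe_get_data64(cqe), cqe->res);
    io_uring_cqe_seen(&ring, cqe);
    io_uring_queue_exit(&ring);
    return 0;
}
```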
The same cookie concept, right? And what if the request is merged? Exactly, yeah. All right. Good. Okay. Cool.

Right. So just to let people know, you and I did have one other quick topic to discuss. We were going to do that at four o'clock in the lightning-talk session, hopefully right at the beginning. It's sort of a pity, because I don't really know that we need to talk to the whole room, but sure. Unless there's some time to squeeze it in now. You want to do it now? I'm going to get up and plug my laptop in; hold on. It's really only an I/O thing, and this is probably going to take like ten minutes, but we'll do it whenever. That'd be good, because we need our coffee. Of course. The lights are down; I can't see anything, and I can barely hear you guys. All right. Can you see it now? Yeah, there you go. Yeah.

So this is real quick, and I'll give some background on it. The basic thing has to do with the change we made, I don't know, a couple of years ago now, where we changed to use the driver core for asynchronous SCSI device probing. This has had a side effect that the minor device numbers for SD devices are now essentially randomized at boot on various people's systems, and we're getting a ton of complaints about it. Now, historically, we have always documented that we never guarantee that the minor numbers are going to be the same, especially for fabric-attached disks, and we have documentation to this effect. Regardless, they were popularly used, especially in the context of provisioning devices to VMs. And there's also a use case with mpt3sas. So this change we made basically has people saying: we can't upgrade to newer kernels, because this kernel just doesn't work, and there doesn't seem to be any way to get predictable behavior. And of course we tell them: well, you really weren't supposed to use the sdX names and the minor device numbers in this way; go look at the documentation. But this falls on deaf ears with people who were used to it working for so long. There is actually a legitimate issue here: if you make a synchronous call to scsi_add_device, and say you make multiple synchronous calls because you're adding a bunch of devices, you might rightly expect that you would get devices added in order. But this turns out not to be the case. So we effectively changed the kernel API when we did that. You can go to the next slide, John.

So the question is whether there is any support whatsoever for adding some compatibility module option or something that would let people get the old behavior. Of course, there's a bunch of disadvantages to that. One is that it would make all of the probing synchronous, not just the minor device assignment. So if you've got to go and read all the VPD pages and stuff, it would take forever if you had a lot of devices. It's a step backward from where the kernel is going with asynchronous probing. And the real issue is not so much the fact that the numbers are different; it's the fact that we expose them to user space, and they were popularly used. Another complaint I get is that this behavior doesn't happen with the assignment of SCSI generic minor device numbers, because of course those get assigned synchronously when you go and add the device; it isn't deferred to an asynchronous thread in the driver core.
Sorry. If we were to use such a workaround, aren't we just postponing the pain? Eventually somebody's going to have to rip that band-aid off and update their fstab or whatever. Well, it's even worse than that. The pain will still exist; it will just be hit less often. There's still no guarantee that synchronously probed devices come up in the same order, because of the way the lower subsystems like PCI do asynchronous discovery. I mean, fine, if we try to do it synchronously, you'll only get a failure once every few thousand boots instead of pretty much every boot, but they'll still get failures. Right. And the answer is to use labels, or UUIDs, or all of the other stuff that we've had in place for years to fix this. Right. Yeah, and this is correct, and of course I make all these arguments to the people who are asking for the old behavior. And I say: look, it has never actually been the way you think it was. It just happened to work for you because you had a fairly well-defined system. The problem is that there are a lot of people with fairly well-defined systems, and we essentially threw a roll of the dice into the mix. The real complaint is not just that it's different; it's that it's different on every boot. You never get the same numbers twice across boots on some of these people's machines, and this is a problem for them.

The other thing is that we have people who, I won't get into the exact details of the cases because it's somewhat complicated, essentially tell us: well, we can't make use of /dev/disk/by-id, because we're using a deployment model where we're trying to use a single image and deploy it to multiple systems. The IDs are different, but the minor devices would have been the same. And that means we've got to go and essentially customize everything for the thousand systems we're trying to spin up. There are probably ways around that too, but that involves work on their part. And again, the question comes back to: why on earth did you change it?

Something you may want to tell the users of the old kernels is that the old behavior was racy: the udev event saying the device has been added, that probing has finished, was sent to user space before the SCSI driver had finished probing. Right, but that's not so much the issue. The issue is that the way we used to do it, the minor device assignment was synchronous when we did the probe, and the rest of it was asynchronous. The minor device assignment was always the synchronous part.

Can I just go back to your original point? All they really care about is that sda, b, c, and d come up in the same order. What if we did something like the network side does, where you can get eth1, eth2, eth3 to come up in the same order, and under the covers they're basically doing unique IDs on the network devices? If you look at how udev does it, I mean, would that be a simple workaround for this? As in, effectively, you figure out what the unique IDs are under the covers, then install a file, and the users think that a, b, c and d all come up in the same order. We could even do a slightly crazier version of that, where we use REPORT IDENTIFYING INFORMATION and SET IDENTIFYING INFORMATION to write "sda", "sdb", "sdc" to the devices, so the devices are really going to be called sda, sdb, sdc. We can fix the boot order just by doing something like this, if that's what they really want. But is that what they really want? Yes.
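As a concrete illustration of the "do it like network interface naming" idea: a first-boot tool could generate a udev rules file that pins names to hardware identity. One caveat worth stating up front: udev cannot rename the kernel's sdX block nodes the way it renames network interfaces; for block devices it can only add stable symlinks. A hedged sketch of what such a generated file might look like (the file name, serial numbers, and link names are all made up):

```
# /etc/udev/rules.d/60-stable-disk-names.rules -- illustrative sketch only.
# Generated on first boot; the ID_SERIAL values are placeholders.
ACTION=="add", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", \
    ENV{ID_SERIAL}=="35000c500aaaa0001", SYMLINK+="stable/disk0"
ACTION=="add", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", \
    ENV{ID_SERIAL}=="35000c500aaaa0002", SYMLINK+="stable/disk1"
```

Applications would then open /dev/stable/disk0 and get the same spindle on every boot, regardless of which sdX name the asynchronous probe happened to hand out.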
I think the problem is that for ages we have said: we will not support these consistent names; you need to use /dev/disk/by-id. But then every kernel we published for the next few years would always reboot with consistently given names anyway. So they just said: we're not going to change our applications. And then all of a sudden we actually started doing this asynchronous probing, which we all understood how it worked. But because empirically that's not what the customer experienced when they rebooted, or even when they upgraded or installed, they just said: we don't have to deal with this problem, so we're going to ignore it. And are they actually interested in the block devices, and not file systems or database disk labels? Or is it actually raw block device access that they're interested in? Yeah.

So instead of proposing a solution, I think we should state the problem. The problem is: they want consistent names at the /dev entry. That's what they want. That's what the customer wants; that's what they expect. Even though we told them that's actually technically not possible, that you can't depend on that, that's what they want, and basically it's a requirement they're telling us. They will have to implement that requirement themselves, using the facilities that are already there. Well, no, I think Red Hat can do it using udev. We just basically record the IDs on first boot; that becomes the default order, and udev reorders everything on the next boot to match that order. That's what I mean: it's a scripting problem. You can get what you want; you just have to tell the system to do it that way. Yeah.

I mean, there are multiple people asking about this, and they have slightly different use cases. I do have one fairly large user who is basically saying: if I have host 1, target 0, LUN 0, I want that to be sda, just like it always was. If I have 1:1:1, I want that to be sdb. How come you changed it? So there, figuring it out at first boot and then keeping it the same isn't what they want either. Although presumably a udev rule could go and look at the physical numbering and say: all right, this is clearly what it ought to be. The udev rule can do whatever you want. So if you want it done by LUN and target, it can do that. Provided the target, and remember, the host number is not necessarily guaranteed to be the same. You're getting into all sorts of nasty problems here. But if we script around it and we don't have to worry about it in the kernel, are we done as far as kernel stuff goes? Did you have more?

So I had one more slide, which is a somewhat related problem, but it didn't derive from the customers complaining about the probing behavior change. And that is: we are seeing similar questions regarding NVMe identifiers. Because of course with NVMe there has never been any consistent behavior. All of the numbers that get exposed to user space are the kernel's internal instance IDs, and they're all over the place, particularly the controller numbers, because of the discovery controller lifetimes. And I guess the question is: are we really being kind to our users by exposing all this kernel-internal instance numbering as the things that they see and interact with?
You do nvme list and you get a whole bunch of things that purport to refer to your devices, but they're different every time you boot your system, or do nvme connect-all, or whatever. Right. So I may have a solution for that problem, something we've been working on for a little bit. It's sort of a side feature of something else we're working on. It comes from the problem of managing, you know, thousands of SCSI LUNs and multipath, and figuring out which devices relate to which other devices and hosts; you know, is host2 the Emulex or the QLogic HBA, and all that stuff. So we've been working on a tool that tries to be friendly. SCSI, of course, but we can entertain NVMe. And it allows users to name their devices. So they can say: this HBA is Bob, and this HBA is Fred, and this disk is Susie, or whatever. And when you use that tool to inquire about those devices, the user-chosen names will be the ones you see. Of course, that's not going to fix the problem if you're using an Emulex utility to, you know, update firmware or whatever. But at least for some of the tooling, we'll have /dev names that also match whatever the user used to name that thing on the command line.

Yeah. I have some similar tooling for looking at crash dumps and log files and things like that in a customer environment, where they give you a bunch of log files from crashes on different days, and you can't be a hundred percent sure that the devices you're looking at in one are the same as in the other. So it digs in and looks at them and tries to match them up, right? So what you're saying is that something like that could be used to provide users a way to refer to their devices, particularly if they were managing their own environment deployment, so it would be easier on them as well. I mean, one of the biggest problems we have, honestly, is customers that have got fifty systems that are more or less identical, running some application, but the devices aren't always the same from system to system. And when a problem is seen one week on one system, and three weeks later it's seen on a different system, you're trying to figure out whether you're looking at the same thing. So that could be very helpful.

Again, it comes down to this whole thing where we assign things in the kernel based on ida_alloc or something, numbering that's only valid for the lifetime of that particular boot, but we keep exposing this information. We expose it all over the place, and it's so painful for people to use. So I think the real answer here is: enough. Let's not go change the kernel, which is not actually fixing the problem; it's just creating the side effect that we want. Attack the problem straight on: ask what the expectations and requirements of the customer are, and develop the tools with the existing technology we have, or maybe make a new tool. Oh, we have the technology that already does this. We just steal our scripts: they record the order on first boot, and they redo the order every time it reboots. Yeah, but I think, to Martin's point, it's a little bit more than just what the order was on the first boot.
What the customer really wants is to say: I want that device to be named this, and that device to be named that, in perpetuity. I agree. We can do a much better job of appeasing our users. I mean, even the whole file-system UUID thing, that's also not user-friendly, right? They would rather call it Bob than however many hex characters of string make up an EUI-64. Incidentally, not to speak ill of our colleagues, but if we don't get to tea soon, they'll have stolen all the cakes. All right, thank you, you win. Well, this has been very good, and I appreciate you taking time out of your break, everybody. So that's all I had, and this is very helpful. Thanks. All right, cool. Thanks, guys. Thank you.