Yeah, so it's funny — how many people here actually like to manage storage in Linux? James, Trond. Yeah. It's typically one of the more painful things. But I do think there's good news to start with, so I'd like to begin with something positive, especially at the end of a long day.

I would claim that our storage stack is really world-class. We have a lot of the features you need to be leading, and we span the gamut: everything from your Android phone up to supercomputers uses the Linux I/O stack and Linux file systems to drive it, and that's not a bad accomplishment. We support SSDs and new storage classes; we often come out with support for new devices among the first, if not the first, for most types of devices. And in scalable performance — IOPS and everything else — we do pretty well.

So we have a lot of choices, and choice is usually good. How many people here use XFS as their favorite file system? ext4? ext3? Btrfs? ZFS? A couple, yeah. ReiserFS — anybody use Reiser? I did use Reiser on EMC Centera for five years, and we actually had a really good experience with it, so I don't want to knock things. But that choice creates complexity. The more choice we give our users — and not to mention that the block stack itself has a lot of diversity in how you set it up and manage it, and network file systems give you choice in protocols and choice in versions of a protocol — all of this makes it kind of mysterious to people. With choice comes the need to actually figure this out.

As a result of all this choice and all this power, we're really popular. A lot of startup companies are based around Linux; the Linux storage stack is under a lot of NAS appliances, all the way from little embedded things — I believe the Iomega box is a Linux box — through most of the low-end appliances, up to high-end appliances from a lot of enterprise storage companies.

But what happens when you get out of our sweet spot, when you don't have that power user? The typical Linux sysadmin has a lot of expertise: how to set up SAN networks, how to set up RAID groups, how to set up a Red Hat Enterprise Linux server — the storage can be very complicated. As we get more and more popular, as you get into cloud and self-provisioning, all that choice and all that power leads to confusion, and we can't rely on people having been certified by one vendor or another. We need a lot more simplicity in our tools. I mean, how many people could actually get somebody using an Android phone to set up device mapper with dm-crypt and, you know, multiple storage partitions, and maybe export it through NFS? I know — but that's the kind of usability you need. To James's point, you need a button for this kind of thing; it's not the power users, it's not the power use cases.

So I wanted to talk a little bit about the bad news: our complexity. Number one — anybody here from the storage industry? I see LSI here.
I spent ten years at EMC, so I take a lot of the blame for helping propagate lies and mistruths — Trond can take some share of that too. If you talk to storage people: we always lie to you. When you write a block, we don't put it where you asked us to put it, or where we tell you we put it; we put it wherever we want, and it may or may not persist when we acknowledge the write. If you're in the storage industry, you're often surprised that people expect us to actually tell them the truth. Look at fragmentation inside file systems: it's logical fragmentation — the data might actually be physically contiguous. SSD devices take that an order of magnitude further; every single block could be mapped to some other physical address. So there's a lot of complexity there as well. We also have logical software constructs like RAID, or device mapper itself — a dm-thin target, for example, can remap all the blocks into random locations on top of your storage.

So I would argue we've made this really confusing for users. Words like "target" and "initiator" are storage-specific, as opposed to "server" and "client"; we don't use terms most people are familiar with. In the file system space we're not innocent either. Yeah, to your point — everybody has their own vocabulary, but we don't use a common vocabulary. I'm mainly a file system developer, although I've spent a lot of time in the storage business, and our vocabulary is just as confusing. How many people here know what a barrier is in Linux? Good — most of us. If you don't use the barrier option on your file systems, you will lose data; I can almost promise you that. If you don't know what it is, we'll explain it to you at the end.

What this has led to with large enterprise customers is very domain-specific knowledge. You'll typically have a whole group of people at a large financial services firm that does just storage, a second group that does just networking, and maybe a third group that does provisioning of servers — standing up things like servers in a cloud. They don't really know how to do each other's jobs very well. But as we get out of our traditional markets, you have people in the open cloud world who actually need to provision storage, and they need to do just the basic things. So this is my take on what we need to do: take our very high-powered Linux hacker hats off and move these powerful abstractions up a layer to make them easier to manage.

And just a minor point — it's probably obvious to everybody in this room — but no matter what kind of fancy file system you use on top of the cloud, whether it's HDFS or Gluster or Ceph or whatever, we still have, hidden down there, all that really complicated storage hardware in some flavor, in all of our local file systems and our local storage stack.

Hang on, James, I'm going to give you a mic so you can complain in person. You've got to give it a second.
Yeah, it's on now. It's on very low, yeah. So the cloud is supposed to be all about hiding the nasty bits from the users, so all they see is their applications running in the cloud. If we hide all of the crap that's going on underneath, the cloud is going to be run by the same experts who have the domain-specific knowledge you've just said we need to expand on the other slide. So we could move to a world where all of the experts do the cloud stuff, manage all of this super-complex storage, and we never expose it except via S3 or something to the end users. Wouldn't that be just as good?

I would argue — and there's a comment in the back of the room too — that yes, you do hide a lot, but you don't hide the complexity from the people who have to run the clouds and manage the clouds themselves. So my answer is that the storage-specific expertise isn't necessarily the same expertise as that of the people who have to run your cloud instances. That's my argument, at least.

Maybe — and I think they actually don't need to do that. Typically they're not configuring SANs or really complicated things; somebody in the storage shop has done that. But the basic operations people who want to provision storage, whether through automation tools or whatever else you use in the cloud space, will call down the chain to things that have to be more empowered to do the right stuff.

That's a good point. I think we agree on the goal: the goal is to hide it from people, but the goal is also to give the people in the cloud the same kind of power and functionality we have all the way down the stack, whether in performance terms, scalability terms, or robustness.

Okay, so — to the point of the comment in the back — we try to hide this from people not just in the cloud but in consumer devices. Users think, "I need storage for my pictures," or "I need an instance for my enterprise database. I don't care how you do it; just give me something reliable and make it this big. Maybe give me snapshots and backups," if they're really sophisticated. They also want to know annoying things after a crash: did you lose my data? Did it all come back? And if you did lose something, can you please tell me exactly what went missing?
Right — those are kind of difficult questions, but that's how users think. Usually, we in the storage business think, "oh well, it crashed and you lost sector 72,401." And that's not very meaningful to most people. The NSA could restore it for you, if they knew what sector 72,401 was. Yeah — so we have to help the NSA do a better job.

I come out of a kernel-focused background, and the way we develop things in the Linux kernel community is that people have a lot of deep expertise in their specific component. If I want to ask James about, you know, Btrfs allocation policies — not your sweet spot, right? Same thing if I ask the Btrfs guys about SCSI, or Trond about — I don't know, pick your least favorite part of device mapper. You get a lot of expertise in one area, and people will polish a feature up, but we don't think about end-to-end use cases — and those use cases are what we end up deploying, especially when we get to these less sophisticated consumers of our storage stack.

This is one of my favorite pictures; it shows just why it's hard for people to actually visualize this. The picture was done by — there's a little attribution on the side that's too tiny to read — Werner Fischer and Georg Schönberger. These two people have put together this really complicated graph, and you can see that everybody has expertise in one of those boxes, but the end-to-end use case — how to configure all those components and how to get them to actually run well — is pretty difficult. The ZFS folks have a slightly different, more converged stack in some areas, but you still have a lot of complexity there.

That's a very long introduction to why open-source storage management is actually a fairly challenging area to work in. To make it even worse, we tend to write storage management software in a bunch of different communities. Anybody work on installers? Anaconda, yeah. The Anaconda team, the YaST team from SUSE, the others — I don't know all the installers we have — they write storage installation code. People who maintain storage while it's running have a different set of routines they use; people who want to repair it, yet another set. So we have different communities of developers with their own utilities, and they don't collaborate very much. We tend to rewrite stuff from scratch, and when we take the opportunity to write everything about five to ten times, we get the opportunity to write the same bugs and have the same errors — or maybe just a diversity of different errors that we could have collaborated on better.

I also think — anybody here use GUIs, network-management-style GUIs, for storage? But how many people prefer to use CLIs? Right — that's the traditional, power-user approach. We all like CLIs because you can do everything with the CLI, right?
If you can read through the device mapper CLI and tell me how to do something... Yeah — and James is saying, if you can even get to a shell on your cell phone. How many people can use a shell on their cell phone? A couple, yeah.

So we're not really the audience for this stuff, in some ways, because most of the people in this room are fairly happy with the stack and the complexity and the power. But I would argue that even most of us would rather do things an easier way. And one of my arguments — again, to credit the ZFS community — is that you shouldn't have to use a different file system just because we made our tools so difficult to use.

So — I've worked at Red Hat, so this is roughly the assemblage of things you have to deal with if you're trying to manage a RHEL server in terms of storage. oVirt is something our KVM team — our virtualization team — uses to manage setting up storage and virtual machines. Anaconda, with its Blivet libraries, is how you install RHEL boxes; the Blivet code is the code that does storage management specifically, done by somebody on the installer team. System Storage Manager — somebody on my team wrote this as a way to emulate the ease of use of file systems layered on top of LVM, or on Btrfs directly. OpenLMI is a new project which lets you do remote CIM-based management and will do some basic operations for storage and network infrastructure. But they all call down to the kernel directly, or they call low-level tools, invoking them through the CLI and trying to parse the CLI output — a really bad way to manage stuff; these things don't have programmatic APIs as originally designed. And then there are vendor-specific tools, which exist out in their own entire universe and typically aren't well integrated.

So what have we been doing to fix this? We have been thinking about it, and I've been complaining about it for a couple of years — I complained about this a couple of years ago in Prague. Lukáš Czerner on my team has been doing some work on System Storage Manager, which is kind of inspired by the Btrfs ease of use: if you want to add a device to a file system, you just add a device (there's a quick sketch of that contrast below). I'll have pointers to the project later. At Linux Plumbers last year — James, was it 2012 or the year before? — both years we've tried to get the installer people, the runtime people, and the kernel people to sit in the same room and talk about how to manage storage and how to actually collaborate on sharing code. And as enterprise distributions we have been investing more dollars and more human resources in actually making things easier to manage. So I think we're doing the right things.
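To make that "just add a device" contrast concrete, here is a rough sketch — not from the talk — of growing storage the Btrfs way versus one traditional LVM-plus-ext4 chain. The device names, volume group, and mount point are hypothetical.

```python
#!/usr/bin/env python
# Rough sketch: growing a file system the Btrfs way vs. the traditional
# LVM + ext4 way. /dev/sdc, vg00, and /mnt/data are hypothetical names.
import subprocess

def run(*cmd):
    print('+ ' + ' '.join(cmd))
    subprocess.check_call(cmd)

# Btrfs: one step to add capacity to an already-mounted file system.
run('btrfs', 'device', 'add', '/dev/sdc', '/mnt/data')
# Optional: rebalance so existing data also spreads onto the new device.
run('btrfs', 'balance', 'start', '/mnt/data')

# The traditional equivalent for an ext4 file system sitting on LVM:
run('pvcreate', '/dev/sdc')                           # label the disk for LVM
run('vgextend', 'vg00', '/dev/sdc')                   # grow the volume group
run('lvextend', '-l', '+100%FREE', '/dev/vg00/data')  # grow the logical volume
run('resize2fs', '/dev/vg00/data')                    # finally grow the file system
```

Four commands across three different tools versus one or two — which is roughly the gap projects like SSM and Blivet are trying to close.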
So let me share a little bit of what we're doing. One thing I think is really critical, going back to hiding the complexity from people, is to figure out the top six to ten things that everybody wants to do. Those are the things we need to make really polished and really easy. When you fall off those common paths, I think it's perfectly reasonable to go back to your storage guru in the back room and have that person use the power tools. But creating a file system, resizing a file system, migrating a file system to a new set of storage devices — that should be a fairly common and trivial task.

We need to make sure that we have low-level libraries, with C and Python bindings, that people who want to build more sophisticated GUIs on top of our kernel stuff can consume. It's really bad to have people invoke command-line interfaces and parse the output — the printf statements become an API. That's the world we've lived in for a long time, and it has to go away. People have to be able to use programmatic interfaces to program storage, probe storage configurations, and monitor return codes.

We also have a real lack of robust infrastructure for providing asynchronous alerts. We've talked about this at LSF — the Linux Storage and Filesystem workshop — for a couple of years. A good example: anybody know what thinly provisioned storage is? It's another lie. If your users all want a hundred-terabyte file system, we'll give it to them, but we'll only put 50 terabytes of actual physical storage behind it and let them lazily grab real disk as they use it. Well, what happens when they take you up on the offer and you get to 40 terabytes of physical storage consumed? What happens in Linux? We send a message which gets logged on the console — I think it gets logged in /var/log/messages — and we hope the sysadmin notices it with tail or something. It says: oh darn, I've run out of disk space; I should add more disk before the users all crash and die.
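For reference, here's a minimal sketch of what that kind of over-provisioned setup looks like with LVM's dm-thin target. The volume group, the names, and the sizes are all hypothetical, and the lvcreate spellings follow the man page examples — check them locally before copying.

```python
#!/usr/bin/env python
# Sketch of thin provisioning with dm-thin via LVM: a 1 TB "file system"
# backed by only 500 GB of real storage. vg00 and the names are hypothetical.
import subprocess

def run(*cmd):
    subprocess.check_call(cmd)

# Carve a 500 GB thin pool out of the volume group -- this is the real disk.
run('lvcreate', '-L', '500G', '-T', 'vg00/pool0')

# Hand out a 1 TB *virtual* volume backed by that pool; blocks are only
# allocated from the pool as they are actually written.
run('lvcreate', '-V', '1T', '-T', 'vg00/pool0', '-n', 'thinvol0')

# From here it looks like any other block device -- until the pool fills up.
run('mkfs.xfs', '/dev/vg00/thinvol0')
```

When writes push the pool toward full, the stock reaction today is exactly the log message described above; doing something useful with it is left to user space.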
And it's not pretty if you run out of physical storage under any thinly provisioned file system. So we've talked about how to make these things programmatically meaningful and how to do it. The kernel people pretty uniformly thought it was best left up to user space: have some kind of notification funnel up to user space. We don't really see the kernel as the right place to send email to a sysadmin or anything like that. That's work that I don't think has really spun up yet. Oh — uevents. Yeah, so uevents will be exported up to user space, which you can monitor for, but I don't think they've really been consumed in a product that way yet (there's a rough sketch of such a listener below). James says it's not his problem.

Which kind of goes back to this: I think we also have to restructure and move some of these projects around, and I'll talk a little bit about how we've done that, because writing it five or ten times is inefficient. I talked about this already; here are some example projects. There was good coverage of this at Linux Weekly News — anybody here not read LWN.net? It's a great resource; they have people on staff who come and cover all the kernel events, and they did really good coverage of LSF this past year.

The Blivet library I mentioned before: this came out of a Red Hat project where we've tried to yank all the Python code that did storage-related management out of Anaconda and put it into a common place that people can consume and program against. Over time we're also going to try to move more code out of things like System Storage Manager, which is also written in Python, into Blivet, and let System Storage Manager effectively become the CLI. It's a very active project; it's something we've committed to delivering in time for RHEL 7, which is coming out with betas later this year and GA roughly the middle of next year — that's all public information. It's an active project, but we definitely need more documentation there.

libStorageMgmt — anybody use libStorageMgmt? We've worked with a lot of our hardware partners on this; NetApp has actually been one of the more active partners, and I think LSI and other people have worked with us as well. The idea of libStorageMgmt is to have standard interfaces to do these common storage tasks: probe your SAN topology, tell you what devices are there, all kinds of other things. It can actually stand up a NetApp filer for you — I think it can call the NetApp tools; it can invoke the proprietary tools running on your box in a standard way. So it gives higher-level administrative tools a way to do very common things, and even some complicated things, depending on how much power is in the proprietary tools and how much is supported by the standard storage administration APIs.

libLVM is an attempt to bring that same kind of programmatic interface — Python and C bindings — to device mapper and LVM devices. If anybody's ever tried to parse LVM output: when you codify the output of CLI tools, you've basically codified the printfs in the C code, and people have to parse that. This is a way to get at that more programmatically. The project had been dormant for a while, but we've started it back up, and we hope to get it up and running again.
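Going back to the uevent point from a moment ago, here is a minimal sketch of what a user-space listener could look like, assuming the pyudev bindings (0.16 or newer) are installed. The property name and value checked below are assumptions about what the kernel attaches to the thin-provisioning threshold event — verify against your kernel before relying on them.

```python
#!/usr/bin/env python
# Sketch of a user-space consumer for block-device uevents. The SDEV_UA
# property and its value are assumptions; check your kernel's behavior.
import pyudev

context = pyudev.Context()
monitor = pyudev.Monitor.from_netlink(context)
monitor.filter_by('block')

for device in iter(monitor.poll, None):
    if device.action != 'change':
        continue
    # Hypothetical: the SCSI layer reports the unit attention as a property.
    if device.get('SDEV_UA') == 'THIN_PROVISIONING_SOFT_THRESHOLD_REACHED':
        # This is where a daemon would mail the admin, raise a trap, or
        # kick off an autoextend -- exactly the work the kernel won't do.
        print('thin-provisioned device %s is nearing its threshold'
              % device.device_node)
```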
System Storage Manager, again, is coded all in Python. It does basic operations, and it will run on top of Btrfs, ext4, and XFS, and on top of LVM. The motivation of this project is to make it easy to consume storage in these ways (there's a small example below). I think it's perfectly reasonable to try something with it and have it say, "oh, that's not supported" — if you go off the common path, you have to go back to the power tools underneath it — but it should make life easier for the common case.

OpenLMI is one of the newer projects. It's basically meant to be a lightweight way to monitor, set up, and provision networking, storage, and a few other basic things. There have been some talks about it at various Linux Foundation events, and there are — what do they call them? scriptlets, or something like that — little CLI-like ways to invoke these things as well.

oVirt is a project we use to manage things; the oVirt team actually collaborated with the Gluster engineering team to put provisioning of Red Hat Storage and Gluster into the community edition of oVirt, so you can stand up Gluster clusters with an oVirt interface. That's a nice graphical GUI example.

And again, we're trying to get that stack refactored so it looks kind of like this — and this is Red Hat-centric, I apologize. You still have your vendor-specific stuff, but everybody should be going through common code. Think of the top level as a presentation layer, however you want to present it for your use case, whether that's the cloud, small-scale virtualization, or a CLI. If we start consuming the same routines, debugging the same routines, and have them abstract away all the very specific hardware and storage topologies, I think it'll be a more consumable thing. And it's not just a technical challenge; it's also a challenge getting us to use the same terminology at each layer of the stack, so that when users jump from one experience to another — from installer time to runtime management — they don't have to relearn a different vocabulary. It's not a done deal; these are goals. But as you look at these projects, and hopefully try them or contribute code, it's good to keep those things in mind, because we don't want to multiply the confusion by making things better in ten different ways that are all unique and different and all need to be learned.

Any questions about this stuff?

Why wouldn't the LVM commands use libLVM?

They probably should, eventually. These are just boxes on a slide, so the truth is: today the LVM commands and the device mapper commands totally exist — they've been around forever. libLVM is a new piece of code, a library that will probably be compiled from the same code base, so it's a way of getting at the internal code without invoking the executable. I would hope it's going to look kind of like the e2fsprogs libraries that you can link into your code — think of it that way. It's the library that gets you into the same code the CLI uses. But that's a good question.
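And to make the System Storage Manager point from earlier concrete, here's a small example of what it's going for. The disks, size, and mount point are hypothetical, and the option spellings are from memory, so check `ssm create --help` locally before copying this.

```python
#!/usr/bin/env python
# Sketch of SSM's "make the common case one command" idea. Device names,
# the size, and the mount point are hypothetical; verify options locally.
import subprocess

def run(*cmd):
    subprocess.check_call(cmd)

# One command: pool the two disks, carve out a 100G XFS volume, mount it.
run('ssm', 'create', '-s', '100G', '--fstype', 'xfs',
    '/dev/vdb', '/dev/vdc', '/mnt/data')

# Show everything SSM knows about: pools, volumes, file systems, devices.
run('ssm', 'list')

# Later, throw another disk into the default pool when space runs low.
run('ssm', 'add', '/dev/vdd')
```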
This is CLI house But that's a good question Any other questions here so The other thing we have to do and this is a little bit of a departure from just straight storage management But kind of where we're going more broadly in storage community and some of these things have to do with management as well So one of the things you'll see in kind of the competitive landscape with open source is VMware Microsoft Do an excellent job in manageability of their storage infrastructures? At least our customers hold them up as something to beat us up with Right storage management is one of the key jewels in a virtualized cluster You don't like that James. That's very true customers beat you up with something all the time yeah Yeah, so I think the other thing that that Some of the people who not in the non-opening source world have been better at than Linux kernel people traditionally have been is Participating in standards bodies to drive some of these manageability APIs like copy off load We're driven by VMware and and Microsoft I think netapp was there and they involved but not really the Linux people as much as Yeah, we have a few people doing standards, but we don't Yeah, well this guzzy standards and the NFS standards So so there are standards that get driven by the competition A lot of us about manageability how to migrate data between Luns, right? Usually the way that I find out about standards is not by having people going to participate active in the centers But through vendors who are partners who come back to me and ask does this make any sense from a Linux point of view? I think we do need to step up and be more engaged because these things we need to actually manage as well We do have new work here. I'll talk about oh the copy off load stuff Anybody know copy off load It's a great idea right instead of pulling all the bytes when you want to copy a file or a range of blocks on the LUN Pull it back from the storage server to your to your client and then push it back over again That makes no sense if the storage server is empowered It can just say copy a to be or the range of a to the range and be you don't move any blocks It could be done instantaneously you can do this now with NFS in for to the spec You can do it with scuzzy at least two variations right token copies and and the extended copy. I Think we've implemented both partially I don't know what Martin ended up with I think both of their and you can do it with various file systems things like OCFS To and butterfests can do ref links, which can do effectively a file system implementation of a zero data movement copy So we have actually go ahead So some before can do it with butterfests. So the SMB 3 protocol Right a good point. Yeah, SMB 3 is actually a really important protocol for support. We have SMB 3 servers. That's the the standard Microsoft protocol And if you look at VMware, it's actually promoted a whole set of storage management API Is that it uses to rely on for management and migration enablement? 
So again, we need to be able to expose those APIs, manage them, and let fancy GUIs drive the common migration paths. We talked about thin provisioning alerts a little before; I think this is really important in a virtual machine world, and in any instance where you're doing heavy over-provisioning. Being able to get meaningful alerts back to people so they can act on them is a really good thing to keep in mind. Thinly provisioned storage has actually been around forever in the enterprise storage world — NetApp has done it for, how many years? Forever, twenty years. I know EMC has done it, and other vendors too. We can now do it with device mapper, with the dm-thin target; you can do it on pretty much anything. So it's not an unusual feature — it's fairly commonplace today. And as I mentioned, the protocols — the implementation specs — actually define watermarks. We need to alert appropriately and let the storage administrator know about them in a timely fashion, so the admin can react.

Copy offload has made progress even this week. Zach Brown — I think he'll be here for Plumbers — has been driving a new implementation, a variation of the splice system call; he gave up on driving a brand-new system call, at least temporarily, although I saw some churn this week. But splice will allow us to do that copy offload, and the kernel will figure out what the target is, whether it's a SCSI back end, an NFS back end, or a local file system. That should be progress after three or four years of debate; hopefully we'll actually have something shipping in upstream kernels.

Btrfs, I think, is something that in the native Linux base made a brave attempt to make it easier to manage storage, and it's getting a lot more robust. We had a talk from the SUSE folks about the state of Btrfs in their product. I think Btrfs has hit the point where they've been investing, most recently, in getting stability into the code more than in features, which is welcome. I hope we get it robust and full-featured, with RAID and everything working, soon, and that you'll see it more broadly supported in a full-featured way. Last time we asked Chris Mason and Josef Bacik — I think it was at LSF; Josef gave a stand-in state-of-Btrfs talk in April or May — he promised it would be a hundred percent solid by the end of 2013. That gives him a few months left. But some of the things I'm looking for in Btrfs, as somebody who has to look at how we treat it in RHEL, are: can you run it when you're running out of DRAM, under low memory pressure, and when you're running out of disk space? If you hit an I/O error, can you recover? Can you drop power and get your file system back? Are the user-space tools robust and fully supported? And Chris still owes us a 1.0 release of the Btrfs user-space utilities, which has been promised for a few months now. Hopefully that's all going to come together in the next few weeks to months, and we'll have a boring, reliable file system as opposed to a really exciting one.

On NFS features: NFS has actually come a long way in the past couple of years. Anybody know about labeled NFS? Good.
You know about it, Trond. Yeah. So — I remember Matthew Garrett was talking about SELinux and Secure Boot and so on, and how to harden your OpenStack instances. Again, this is more complexity, but what we can do now is pass security attribute labels over the NFS protocol. That's part of NFS 4.2 as well, the IETF standard, and we've implemented it both on the server side in the upstream kernel and on the client side. I don't know if any vendors have implemented it in their arrays yet, but hopefully that will come in the next few years. Oh, the standard — we were a bit aggressive, Trond says; the ink's not dry, no one has signed it yet. Yeah, some evil distribution wanted it. But it is there, so in an all-Linux environment you could actually stand up SELinux-secured guests and clients with labeled NFS today, which is kind of cool. And I do expect it will be ratified and get into production servers. I can't tell you when, but I don't think it will take forever — it's not that much of a big deal, I think, if Trond could do it. And as I mentioned before, one of the evil vendor tricks is to standardize something you've already shipped and then have other people copy you quickly — so maybe we're learning.

A lot of work has started to fill in the few extra bits of the NFS 4.2 protocol. I don't know, Trond — do you want to say anything about that?

It's kind of early days, right? As I just said, the standard isn't quite done yet, but we've already started on the implementations, because a lot of them are just POSIX features — fallocate, for example. Hole punching isn't quite POSIX, but there are standards for it, and the spec is very unlikely to change before the final publication. Copy offload, as you said, is being driven on many levels. We have running code, but there are still some details that need to be fixed in the protocol, and that's what we're waiting for. Again, I don't think that's going to be a big delay.

How long did 4.1 take to get drafted?

4.1 took us about ten years end to end. Yeah — so the jump from 4.0 to 4.1 was ten years; we hope to go from 4.1 to 4.2 in about a year and a half or so. We're trying to get an order of magnitude faster. No promises, but we're optimistic.

So that's roughly what I had today, but I'm happy to take questions about this or other file and storage topics — that's all that stands between you and dinner and drinks or whatever. The drinks are the important part, James says. Questions? Hang on — we'll leave this mic on.

So, did I hear you say that you want the kernel to send mail to your management program?
No, no — I said we don't want the kernel to send mail. When we were debating this, it was funny: we were debating with a mostly kernel audience about who should take care of all these asynchronous events and what to do with them, and the kernel people uniformly felt it was somebody else's problem. If we just had a reliable mechanism to communicate out to some user-space agent, it's much better to take the proper actions based on notifications there: let some daemon or something up in user space send an email, or poke somebody, or flash a red light. The kernel shouldn't really be in that business.

So you're saying the kernel should not notify user space about—

No: it has to have a notification mechanism, and that has to be reliable, and somebody who subscribes to it should interpret it and do the appropriate user-land thing. So the improved question is: did I hear you say the kernel should send something to user space? I think it does as of 3.12 — right, James? The patches are in there. What it does is use the current uevent mechanism that udev listens for: we will send a udev event for certain unit attention codes, some of which are the thin provisioning ones. So if you configure udev, or something else, to listen for them, it can perform almost any action you want based on that — you can add a script to udev and it will send the email you're looking for.

Question in the back there — I'll run over, it's good exercise.

We've had disks that can support thin provisioning for some time. Is there any timeline for when we might get file systems that will send down TRIM, WRITE SAME, or UNMAP commands by default? ext4 has marked this experimental for probably five years.

Well, not experimental exactly. So the question was about when file systems will actually send down what we call a discard at the file system level, which maps into TRIM for SSDs that use ATA commands — it's called a data set management command there — into WRITE SAME with the unmap bit or UNMAP commands for SCSI devices, and who knows what for weird PCI Express cards. So we map it into the right command. But it's a good question. The reason we left it off is that we bricked a ton of devices when we turned it on — and mostly, to be honest, these were early consumer versions of SSD devices; for enterprise storage, enterprise SSDs and so on, it's probably less of an issue. The other way to do this is not the in-band discard — the mount option, where every time you delete a file you send down potentially lots of little 4k discards — but fstrim, which will take a whole range of the file system and trim the unused space on the device in one pass. I will add that there's also a problem in the ATA protocol, the T13 spec, where the TRIM command is unqueued, which means we have to drain the queue, so you take a huge performance hit on ATA devices. I believe they've fixed it, or are in the process of fixing it.
So as of today, for ATA devices — which means the SATA devices in consumer laptops and desktops — that's not shipping in product or ratified in the spec yet. Yeah — James says the ATA people are catching up to SCSI. Okay, but that's a good question. You could turn it on, but the reason we turned it off — and I will add we should have turned it off more completely than we did, as opposed to leaving it on in as many places — is that we do routinely discard everything when you make a file system, and we've had nothing but pain from that; the number of even high-end devices that have hung, or never returned, or whatever, has been way too common. We actually have support for the discard operation all the way from virt guests, through the QEMU stack, all the way down, in many if not all cases — at least we're trying to make that complete. With virtio-SCSI we've tried to do it even on file-backed storage, where you map the discard into a hole punch — deallocating blocks in the backing file — which will magically turn back into SCSI or ATA commands underneath, if and when that all works.

Okay, so the question is: why not simply trust the capabilities a disk advertises? They're compliant with the spec, so we believe them, we issue the commands — and they die sometimes. Enterprise storage people do a really good job of testing this; some of the consumer-grade devices, which we see in Linux frequently, don't do as well. Yeah — but it's just like we don't have a bit in the kernel that says we have no kernel bugs, either. So, to be fair... yeah.

Another question? I'll just add one more note. We actually have a couple of things later in the week that we're going to be talking about that I didn't mention in the slides here; they're just generically interesting. There's a new class of drives called shingled drives, and we have some of the storage vendors coming to pitch what is hopefully a converged proposal, from the different warring vendors, on what they want Linux to implement. Shingled drives are kind of neat in that they look almost like a tape to us: they effectively have very large, append-only bands for writes, and you can't do random I/O to them — or you get much less density and less performance. So that's one end of the spectrum: really giant capacities, really slow IOPS. At the other end, we're going to be talking about persistent memory technologies, which are DRAM-class parts that might drive five million IOPS. They're anything but slow, but they have much different capacity points, and the problem there is that we're a little bit slow: we're short of five million IOPS per target in the kernel today, by maybe an order of magnitude. James, can we do a million? I think we... yeah — but we have a lot of work to do either way. When these devices become commodity — and they will become commodity — we'll need new ways of getting our existing sector-based stack to run really fast, and we'll probably have whole new generations of file systems specifically tailored to them. So more bugs, another five years of waiting... just squeeze them together? Yes — that's cheating. All right, go ahead. Turn off the... just let it coalesce everything. Yeah.

So, any last questions, or shall I let you all escape off to dinner and beer? Drinks. Well, thank you very much.