OK, everybody, settle down. We'll start now with our next speaker. And by the way, there are charging sockets in the desks, so if your laptop has died, you can charge it. So now let's move on and meet our next presenter, Bryn Reeves, from the LVM and device mapper team, who'll tell us more about DM stuff and share some good hints. The floor is yours. Thanks. Let's welcome him.

Hi. Thank you for the welcome. It feels like a real honor to be talking to you all in such a great venue today. I spoke last year, remotely, on the work we've been doing on device mapper statistics; hopefully in person is even better than via remote stream. We shipped the first version of dmstats a couple of years ago, in 2015, and there's been quite a bit of work since then.

Before I jump into things, can I get some quick shows of hands? Who uses device mapper, either multipath, LVM, whatever? Yes, good numbers. Good. Anybody played with dmstats before? Anyone taken a look? Yeah, got some users.

So what I wanted to do today is look at the most recent work. Apologies to those of you who haven't seen the earlier material: I'll do a real quick recap, but there's a lot of interesting new stuff, and that's really what I want to focus on. If anything seems confusing because you didn't see last year's talk or haven't had a look, track me down outside, catch me on a mailing list, IRC, whatever. I'm always happy to talk about this; it's kind of my baby at the moment.

So, the recap. dmstats is a flexible I/O statistics framework for device mapper. It builds on kernel work done by Mikuláš Patočka; it's a really great mechanism that we have in the kernel, and I'm heavily indebted here. It gives us a great base on which to build really powerful, interesting user space work. We get a bunch of performance counters from the kernel, just raw 64-bit values, basically. We then turn those into metrics: we do rate conversion, unit conversion, and the other things that you don't want to do in the kernel, and add metadata fields to help you understand what the information means.

Some of the advantages that device mapper statistics have over the conventional /proc/diskstats, which is what you get from sar or the iostat program: we can optionally use nanosecond-precision counters rather than the jiffy (millisecond) precision that you get traditionally. We can also do these funky little latency histograms. Now, I did promise in my abstract to demo them, then I remembered I demoed them last year, so due to shortness of time I'm not going to cover those again; but again, do feel free to catch me, I'm happy to show them off.

The really cool thing, I think, is that we don't just have a single set of statistics for your whole device. We can slice and dice your disk in a whole load of different ways to produce much more fine-grained results. This gives us an insight into what's going on inside the device that just wasn't possible before. There are a couple of things in the talk I'm going to claim as world firsts, and I should qualify that: they were possible with things like DTrace and SystemTap, and also by parsing blktrace data. But these are easy-to-use, prepackaged solutions that you can deploy in production with no fear of excess resource consumption; the kernel guards against excessive memory consumption in the stats subsystem. So these are pretty solid compared to some of those earlier solutions.
We also have really flexible reporting, thanks to a colleague of mine, Peter, in the team and the device mapper reporting engine. You can choose whatever fields you like, in any order; mix, match, recombine, anything. It's really cool.

The downside, well, it's not such a downside: you do have to configure this. By default, we don't collect anything. So let me just qualify that with a real brief demo. There, I just configured dmstats; it really is as easy as that. Now, this is taking a whole lot of defaults. If you add more options and set other flags on the command line, you get more control over the process, but this will set you up for a basic set of reports on all of your device mapper devices. So I'll just kill those all off now. We'll come back to the demos as we hit specific topics later on. (The command sequence from this demo is sketched at the end of this section.)

So, if you've ever unpacked an LVM2 or device mapper tarball or checked out the Git tree, you'll have seen the WHATS_NEW and WHATS_NEW_DM files. This is a kind of snapshot of the stats changes from the last year and a half, I guess, though really most of it landed last summer through to now. There are a couple of bits that are still to hit master; I'll give you all the branches later on if you want to take a look at those. The plan is for them to hit master and find their way into a release, which should be in the next few weeks, maybe a month or so tops. But they are all there if you want to check them out. Some of the other components I'll talk about later are currently being developed out of tree. The idea is to merge them once they're stable, but if there's demand and interest, I'm quite happy to make Coprs available so that you can get easily installable versions of them too.

So, the additions compared to what I discussed last year. We added a grouping facility: you've got these regions on your device, and we can now glue those together and get aggregate stats for them. I'll explain the regions and areas concept again in a moment, just to recap; it is a distinguishing feature of dmstats, or one of them, and it's an important concept to understand to really make the best use of it.

My baby, the thing I'm quite pleased with, is file mapping. We'll take a deep look at that in a moment, but the basic idea is that you can now have I/O stats for an arbitrary file anywhere in the file system, as long as it's ext4 or XFS. So if you have virtual machine images or large database files, you needn't confine your workload in order to measure the behavior of just those files; you can zero in directly on the data you're interested in. I think this is a really powerful facility. It's very new, so obviously I want to hear your feedback; I want users. So do please, if it sounds interesting, take a look and get in touch with any comments or feedback you have. I'll explain more about the monitoring daemon that comes with it, and why it's necessary, later.

There's also a little tool that started off really as a toy to solve one particular customer problem, but I think it's going to be generally useful. The original name of it, well, my boss thought it looked like a rude word, so I came up with the name dmioscope. It's a way of visualizing your I/O: you have a disk, and we want to know where the I/O is going, where the hotspots are. This allows you to do things like cache dimensioning, and figuring out where bottlenecks are occurring in your workload. Perhaps you have a directory that's severely overloaded and there's contention to perform updates on that part of the disk.
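For reference, the zero-to-reporting sequence from that brief demo looks roughly like this. This is a sketch only: the exact flags are documented in the dmstats section of the dmsetup manual page and may differ slightly between versions.

    # create a default (whole-device, single-area) region on every DM device
    $ dmstats create --alldevices

    # print counters and metrics, once a second, five times
    $ dmstats report --interval 1 --count 5

    # "kill those all off": remove every region on every device
    $ dmstats delete --allregions --alldevices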
There are also some general API improvements. The metric interface had a bit of a revamp, so you can now call a single function with an enum value and get your data, rather than the old plethora of differently named functions. The old interface is still there, of course; we don't just rip things out because we have something better, unless there's a real need to.

With all of these new things we have a lot more output now, especially with file mapping on files that have lots of fragments, lots of extents. That puts some pressure on your terminal, especially if you're on a big screen like this. So we've added a filtering mechanism which works with the aggregation subsystem: you can pick the level at which you look at your data. Do you want to see the very smallest bits, the areas? Zoom out one level and look at the regions, or take a more global view and look at whatever groups you may have configured.

So, quick recap; I should have switched those slides around, never mind. A quick recap of the region and area concept. A region is just an arbitrary range of a device: one or more sectors, anywhere in the device, and we'll collect statistics for it according to the area configuration. In many uses, a region will have one area, so there's no further subdivision; but for some uses you may want to chop the region up into yet more fine-grained units. Each one has its own set of counters within the kernel, and you can report that data individually, at that level of granularity, or glue it all back together and look at the overview. This gives us a very wide range of granularities: we can go all the way down to a 512-byte block, or all the way up to a multi-terabyte disk. If you don't configure device mapper statistics: no regions, no cost. Actually, Mikuláš can give you the true story on that; I believe it's just a conditional branch. So if you don't have any regions configured, we just look at a pointer, and if it's not set, there's very, very minimal overhead. I guess you could compile it out of your kernel if you wanted to, but I would tell you you're missing out if you do that.

So, let me just pop back to that one. What comes in the box? Actually, I haven't finished putting all of it in the box yet, as I said, but this is what the box will look like in a couple of months' time. The first thing you'll find is the dmstats command-line tool that I was using a moment ago. That's the primary interface to the subsystem, at least for administrative use; you can build applications on top of it. Currently, the C API is fully supported: that's ready to go for production. There is also a Python API, which is... I told somebody the other day it was 80% complete, and then I thought again; I think it's more like 70%. But the plan is to get that finished off this year. I've been working on dmstats for two, three years now, and this year should really be the last year of heavy development on the core; we're getting very close to feature completeness. The Python API, I think, adds some really interesting possibilities, and I'll talk more about those later on. One of them is the ioscope; right now it doesn't use the Python API, but it was actually the reason I wrote the Python API. Again, more about that later. Those last two are currently on GitHub. The URLs will be up later; grab them from the PDF when it's published rather than trying to write them down now. So do feel free to come and join in on GitHub. I'm ever eager for users, and we do respond to issues and pull requests.
So if these are interesting projects, then come and get involved. Okay, we've done that.

So, this grouping concept. It sprang out of the initial idea of mapping objects in the file system and being able to look at each of those as a single object, even though the data is scattered across the disk. You can use it manually, though it's quite a lot to configure. I was going to do a demo of this, but it is manual, lots and lots of steps, and it would be kind of boring. What I will do is take that demo and put it in the GitHub wiki, so if anybody wants to take a look at those steps, they'll be there; there are also examples of using the group facility manually in the manual page.

It's real simple, though. You want to make a group: dmstats group. Your regions list is in the same format as a kernel bitmap list; if you've ever set a CPU mask in the kernel, something like "1,2-4", it's exactly the same, I borrowed the code. That identifies the set of regions we're going to group, and we can give the group a friendly name, an alias; call them what you wish. When we do file mapping, by default it's the file name. It's just a helpful label that appears in reports, to let you identify what you're looking at. To ungroup, you just give the group ID, which was returned by the group command, and it dissolves the group back into its component regions. (Both commands are sketched just below.)

Why do we want grouping? Well, I've kind of hinted at some of it already, but let's put it out clearly. Device mapper and LVM device segments were one of the first cases for this. Given the number of hands that went up for using device mapper, you'll have seen a logical volume that has perhaps been extended over time, or for some other reason contains segments that correspond to different areas of a physical volume. We can map those and group them together, so that you can either look at the logical volume as a whole, or drill down, hey, why am I getting bad performance on this area of the disk, and see those segments separately from the rest.

That then kicked off the idea that, hey, these aren't the only structured objects you find inside a block device. We can do other things: files, extended attributes, even directories. You may want to look at the journal; perhaps your journal is a bottleneck. Right now, the only one of these implemented in mainline is, as I said, the file mapping. The idea for the others is to defer them to the Python interface so that we have a generic way: if you have some object of interest and you know how to find it, you can write a little bit of Python script, and we'll be able to find your objects and report information on them for you. Again, more on the Python bits later.

Database formats, device data and metadata: another colleague of mine, Milan, works on dm-crypt, and I was looking at some of his work the other day and thought, hey, we can do a stats layer for this. It would let you see your metadata performance and your data performance separately. Another use case is virtualization: RHEV, and I think oVirt still does it this way, I haven't checked recently, pack machine images interleaved on a logical volume. Sometimes you'd like to just say, okay, that one VM image, which is spread out here, just give me the information for that. With dmstats grouping, all of this becomes pretty easy. So we'll take a look at the first of those use cases next: the file mapping.
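To recap the grouping syntax just described, the two commands look roughly like this. A sketch only: the device path, region numbers and group ID are hypothetical, and the authoritative syntax is in the dmstats manual page.

    # group regions 0, 2, 3 and 4 under a friendly alias
    # (--regions uses the kernel bitmap-list format, e.g. "0,2-4")
    $ dmstats group --regions 0,2-4 --alias my_lv /dev/mapper/vg00-lv00

    # dissolve the group again, giving the group ID returned by 'group'
    $ dmstats ungroup --groupid 0 /dev/mapper/vg00-lv00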
As I said, it's one of the features I'm most pleased with from this round of work, and again, hopefully, easy to use. The create command allows you to create all sorts of regions and other configurations, and we have a new switch here, --filemap, which will go ahead and create those configurations for you. It accepts globs as well as single files, so if you've got a whole directory full of VM images, just *.img or *.qcow2, whatever it is, and we'll create mappings for all of them. By default, we stick each file's regions together in a group, and we set the alias to the file name. If you don't want that for any reason, perhaps because you want to do some customization first, just pass --nogroup, and we'll report all of the individual region identifiers instead. Bear in mind that with big files you're going to have a lot of identifiers to work with; it's probably more suitable for scripting than manual use. (The commands from this demo are sketched after this section.)

So, a real quick look at that. I've got a little Fedora 25 image. Am I going to be able to get into virt-manager here? I should be able to. Yeah, there we go. So I'm just going to create a file-mapped configuration for the image. As you can see, my file system has quite a level of fragmentation: it's not a huge file, and there are 927 extents in it. That may be because I have intensely tortured this file system, because the more fragmented it is, the better a test case it makes for my work. But this is a pretty extreme case for such a small image. So now I've got that there. This is going to be a little bit minced and mangled, I apologize for that: the reports are quite column-hungry, they just love horizontal space, so if it comes out a bit wrapped, don't worry too much about it. I'll post some nice clean output, or you can take a look at the man page, which has sample output. It's not too bad.

Let's just boot that machine up, if we can. So we should see that. That was the boot loader being read just then. Hold on, let me filter this some more. Okay, so now we're just seeing the group, as I said, using those filtering options. The machine is still booting; you can see there's quite a lot of reads going on there, not much in the way of writes. Oh, there was a little write there, 1.8 megs. Hopefully that demonstrates the ease of use of this facility. I'll stop that now, because it is kind of hard to read, and just delete that mapping. I use that delete command a lot; it just resets everything to the ground state and lets you go ahead and create a new configuration.

Okay. So as much as I say it's my great new thing, for a user it's kind of as simple as that. Again, quick show of hands: does this look interesting, useful to people? Do you think you may have a use for it? Brilliant, give me some feedback. I'd love to know what else you would like to be able to do with it. We have thought... could you repeat the question? Oh, me repeat the question? I am not very bright, sorry. The question was, could we integrate this with dm-cache, so that you could say "cache this particular file", or give some other hint to request that the data be kept in the cache? That specific thing, not so much, but one of the things we are looking at is using dmstats for dimensioning your cache. You've now given me an idea, though, and I'm going to go back and think about that some more. I'll need to talk to Joe, who is the architect and author of dm-cache. I can't even give you a guess at the feasibility of that right now, I'm not familiar enough with the internals of the cache, but it's certainly an interesting question, thanks. Okay, yeah.
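Putting the file-mapping demo together, the commands look roughly like this. Again a sketch: the image paths are hypothetical, and --filemap, --nogroup and the delete flags are described in detail in the dmstats manual page.

    # map every extent of each matching image; by default each file's
    # regions are grouped, with the file name used as the alias
    $ dmstats create --filemap /var/lib/libvirt/images/*.qcow2

    # the same, but without the grouping, e.g. for scripting
    $ dmstats create --filemap vm.img --nogroup

    # report once a second while the guest boots
    $ dmstats report --interval 1

    # reset everything back to the ground state
    $ dmstats delete --allregions --alldevices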
I'm just about to come onto that, no problem. So when we run that create command, we take a snapshot of the extents. Now, that's not a snapshot in the thin-provisioning sense, just a momentary, consistent view of the file. But of course, files change, and a qcow2 very often starts off very sparse. There are some limitations here, and I'm going to draw those out clearly in a moment; I don't want to sell this as a universal solution. It works very well for many use cases, but there are some things you need to be aware of.

If we are writing into the file and it's not fully preallocated, we're going to need to update the mapping. There's a command in the tree now, a new dmstats command, update_filemap, which does a one-shot update. The next bit of work to land, again hopefully in the next few weeks, adds a daemon, dmfilemapd, which just sits in the background. It's a tiny, tiny daemon: it holds one file descriptor, and all it does is read the extents and then talk to device mapper to update the mapping in memory. This allows us to follow files as they change and grow, adding and removing extents as needed.

The daemon has two modes of operation: we can either follow the inode, or we can follow the path, and these are useful for different styles of updating files. This is why you have to choose here. If your file is likely to be moved or renamed and you want to continue to get updates for it, you want follow-inode: we will keep that file descriptor open on the very same inode number and follow it as it moves around the file system. On the other hand, if your update style is to write a new file and then unlink and replace, you want the path-following mode, which will just repeatedly reopen the same path and update the statistics mappings accordingly. We'll have a quick look at how to use this, and some of the command-line options, in a second. (The one-shot update and the two follow modes are sketched just below.)

I just want to run through the limitations first, and discuss a thing that I call the write gap. It's important to be aware of these but, as I say, there are a lot of situations where these problems are not going to affect your statistics or your use of the facility.

We do require physical FIEMAP data. FIEMAP is an ioctl that gives us the location of a file's data on disk, and most file systems that implement it tell you where the data is on a physical block device. I list XFS and ext4 here, but actually, as long as you're not Btrfs and you have that data available, dmstats file maps will work. Btrfs gives us these virtual internal devices because of its internal volume management features, and we can't tell where the data is, so it's unlikely this is going to be supported for Btrfs, certainly in the short term.

We also can't map space until it gets allocated: until something causes those disk blocks to be allocated, we can't see them. So, as we said: sparse files, anything you're extending by writing past EOF, or anything using the fallocate hole-punching and extension APIs, and you will want to use the daemon or the update_filemap command to make sure that your mapping stays consistent with reality. It's also important to be aware, and this is where the write gap comes from, that the updates are asynchronous with respect to that allocation. So a write happens, and some time later we react to it.
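The one-shot update and the two follow modes look roughly like this. A sketch with hypothetical paths and a hypothetical group ID; update_filemap and the follow modes are covered in the dmstats manual page.

    # one-shot: re-read the extents of the file backing group 0
    $ dmstats update_filemap --groupid 0 /var/lib/libvirt/images/vm.img

    # at create time, choose which way the daemon follows the file:
    $ dmstats create --filemap vm.img --follow inode   # survives rename/move
    $ dmstats create --filemap vm.img --follow path    # survives unlink-and-replace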
And it will always be this way: there is no way I am hooking into the block layer and holding the world up while we update our mapping. These are statistics; they're not mission critical, and they're not something you should harm the performance of the system to collect, unless you're doing some very specialist research or something. We'll take a look at this in graphical form, which makes it easier to understand.

So the write gap is my term for this interval between a write triggering an allocation and us responding to it. We use the inotify API. I did actually write it to use fanotify first, because it's new and cool, but it turns out fanotify doesn't deal well with file deletion, whereas inotify does; that's the reason, in case you were wondering. So when a write occurs, it triggers the allocation, which will complete at some point, and at that point we get the notification, excuse me, that something changed. inotify and fanotify are VFS interfaces, so they don't really tell us that an allocation happened; we have to apply some heuristics, stat the file and so on, to figure out what's going on. And if something changed, we call this API routine, dm_stats_update_regions_from_fd, which carries out the FIEMAP and then the update steps to resynchronize things.

Now, I've used slightly facetious units on the diagram, femtofortnights. It's kind of hard at the moment: there is such a gamut of storage performance, from NVM Express at one end, down in sub-microsecond completion times, all the way out to crappy consumer flash with sucky flash translation layers, where it just goes off to lunch. Sucky FTLs we cope with really well; but the shorter your device's response time, either the more lag there will be between your updates, or the more resources, in terms of CPU, you have to throw at the daemon to keep everything synchronized for you. Why does that say 20.23 centimeters? I've deleted that thing a dozen times now. LibreOffice.

Okay, so that, fundamentally, is the write gap. I have some kernel patches to address one aspect of it. I'm not going to talk too much about that, because it is kind of bleeding edge, and again we're short of time to fit everything into the talk, but if you're interested, just get in touch.

Wrong way. Let's just have a look at the daemon before we head on to the ioscope. So, I mentioned there's a daemon; actually, it was running, I hope. Yeah, there we go. So here's dmfilemapd. It is a daemon, so the command line is kind of cryptic; it's explained in the manual page, and there are examples of how to run it. I think rather than trying to type all of that in now, I'm just going to do a Blue Peter here: here's one I prepared earlier. Let's have a look. Okay, so this is a little bit of debug logging. Is that reasonably readable? I'm going to pick out the highlights anyway. There we go, that should be a bit better. What this is saying is that dmfilemapd is monitoring file descriptor three; you can see I'm using a shell redirect, and this is how you would run it from the command line. It starts up, does a couple of quick checks, and finds that it's monitoring vm.img. If I just scroll down through here, we should see the file getting extended. So we've, that's where I want to be, we've returned from the monitoring function and we've got something to check. So we scan the extents, and we get the disk locations printed out there. We kept region zero, which was one of the file's two extents, and we found three new ones.
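The invocation behind that prepared demo looks roughly like this. A sketch only: the exact positional argument order is not reproduced here from the talk, so treat it as illustrative and check the dmfilemapd manual page for the authoritative form.

    # monitor vm.img through file descriptor 3 (hence the shell redirect),
    # for the group created earlier by --filemap, in inode-follow mode,
    # staying in the foreground with verbose logging
    $ dmfilemapd 3 0 /var/lib/libvirt/images/vm.img inode 1 3 \
          3< /var/lib/libvirt/images/vm.img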
As that output shows, the file was growing pretty quickly. Actually, I paused the daemon to make sure there was some good churn for you to see there. Most users are never going to be interested in that level of detail, but it's worth explaining in case you ever need to debug it, or collect data to send back as feedback. So that's file mapping. As I say, it's really new, and I think it should be useful. If you run into any problems or so on, do get in touch; we are actively looking to improve it.

I think we have about ten minutes left, so I'll do a quick demo of the ioscope and then a really brief mention of dmpy, the device mapper Python bindings. I'm really excited about that project, but I started it after I put in my proposal for this talk, so it's kind of not fair to hijack the rest of the talk just because I've got some cool new thing I've moved on to.

The ioscope, as I said, is primarily a visualization tool. It does little ASCII graphs, and it also generates data in CSV and JSON time-series formats, which enables funky 3D plots. Unfortunately, that's still a bit bleeding edge: rendering them live with Matplotlib is kind of slow. I hope that next year, if I'm invited back to speak again, we'll have a better solution for that to show off. The idea is that we can easily identify hotspots: you can literally look at it and say, hey, there's a big peak there, and lots of stuff is slamming that region. As far as I know, this is a first as a general-purpose facility in an everyday OS. I remember some guys at EMC in Ireland showing me a DMX frame doing this about ten years ago, in Cork, and I was like, whoa. So I'm really, really happy to now see this coming into general use for device mapper.

It is a hacky Python script at the moment, and I make no apology for that. Half of the script is a rough Python abstraction over the dmstats command and dmsetup, and that is what prompted me to write proper bindings for the whole device mapper library, including dmstats. Once I finish those, I'll port, or rewrite, dmioscope to use the new interface, and once it's stable and we're happy with it, we'll merge it back into the rest of the device mapper project.

So, before I run out of time completely, let's see, which one do I want to be on for that? That's this one, isn't it? Okay. First of all, I'll demo the static mode. This is good for analysis purposes, if you're going to crunch the numbers later, because the layout is unchanging: we have a fixed number of bins. I just mentioned bins, and I didn't mention histograms yet: fundamentally, the ioscope is a histogram. We use the region facility to chop your device up, we accumulate counts in the bins, and then we plot them out graphically. That's all it is. Since the bins can be variable in size, technically the values are frequencies, not counts, but for the most part you needn't worry about that unless you are processing them numerically.

So I'll start that running, and let's get some load going. You can see here that the display is updating in real time behind me. The vertical rows are the disk space, from zero up to 28, 30 gigs, whatever my disk is, and along the horizontal axis are the frequencies, or counts, of data. Since these bins are fixed-size, there's no difference between the two: the frequency and the count are equal. On the other hand, if I stick the adaptive mode on, and this is a bit, yeah, there are a whole lot of options involved in this, and I want to reduce those down and make it easier to use.
But without getting too detailed: we start off with just one bin, and we use the counts to adapt. We merge bins if they have a low count, and split them apart if they have a high count. And you can see that it's kind of scrolled off the screen, because I've got it zoomed in so far. This is handy when you really have no idea and just want a rough picture of what your distribution is; you can just leave it running. There is also a summary mode which, rather than giving you an instantaneous point-in-time snapshot, accumulates everything, and that's handy for getting a rough sense of what your workload does when you're really starting from nothing.

Okay, so I've got, I think, five minutes left, which gives me just a moment to talk about my new thing: feel free to call it dmpy if you like. I had to call it that because there's already a pydm, which is another shell wrapper for device mapper, but this is native: it uses libdevmapper, it's written in C, and it supports Python 2 and Python 3 from a single build. We're aiming for 100% coverage of at least everything in the API that makes sense, and I want it to be a comfortable, discoverable, Pythonic kind of interface. It maintains as little state internally as possible: everything it gets live from the underlying library. There is a cache in there, but it's a reference cache, so that we avoid recycling Python objects when we don't need to; that's all the caching we do. There's also a little state machine in there. There has to be a state machine, because the C API has rules, and in Python, if you break the rules, you want an exception; in C, you break the rules and you're going to get a segfault. So the state machine is essential to prevent invalid uses of the C API.

It's useful, I think, for prototyping, for one-offs like the ioscope, which has kind of evolved past a one-off now, and also for interactive use. You can fire it up in the Python shell and just shoot commands into it: python, import dmpy as dm, help(dm). (A snippet of such a session follows below.) And this is one of my things: if I'm going to do Python stuff, I'm going to do it Python style. Docstrings, discoverable, introspectable. You will find that these are very, very similar to the text in the libdevmapper header file, the comments that document that interface, but obviously with names and so on changed to suit the Python usage. I won't spend too much time on that; I'm sure you can read docs as well as I can.

Okay, so: the classes and modules it currently provides are the basic, familiar ones: task, info, cookie, timestamp and so on. Cookies are really broken; I didn't understand DM cookies until I wrote this code. Amazingly, they still work, but there's some weird clutter in the interface that I'm going to clean up as soon as I get home, so hopefully you won't see that. The stuff I was really trying to focus on, the stats support, came in later, obviously, once the basic infrastructure was there, but it maps directly to its C equivalents.

And I'm about to run out of time, so my last thing to mention is the dmpy test suite. This lets you do unit testing for device mapper in Python. Again, we're not going to have time to run through this in detail, but here is a bit of Python script that does device creation with udev synchronization. It's a lot easier than doing it from C, but you have the same level of control and power you'd have with the native API. Okay, I had better draw things to a close there. That's just a bit more information about the test suite behind me there.
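For a flavor of that interactive use, a session looks roughly like this. The import and help() calls are the ones shown in the talk; the dir() line is just standard Python introspection, not anything dmpy-specific.

    $ python
    >>> import dmpy as dm
    >>> help(dm)    # docstrings mirror the libdevmapper header comments
    >>> [n for n in dir(dm) if not n.startswith('_')]   # list the public classes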
So, just about fitting in. Oh, future work. Yeah: shrink the write gap; dmstats NumPy integration, probably. I talked about that, right? I did. So if I don't call it to a close there, we're going to have no time for questions at all. So thank you all for listening, thank you for coming along today, and I'll turn it over to the floor for questions.

No, but you could feed... sorry, the question was: you can use globs for file maps, but can you also do it recursively? I guess you'd use something like a bash process substitution with find for now; the shell does the glob for us, and I don't think I really want to add file system recursion to that code. That is coming, though: I'm going to build a directory mapper on top of the Python work, but files came first. I think your hand was next.

The question there was: what's the overhead? It's not that trivial to measure. There are two sources of overhead. There's overhead in the I/O path, which Mikuláš would be able to describe in more detail, and more correctly, than I can without going and looking. It's relatively minimal, but it scales with the number of regions, and if you're using histograms it scales with the bins too; but I have tortured the histograms, and I can barely measure them with perf. So it's pretty low. On the other hand, if you create a million regions, that's going to have some expense associated with it. There's some optimization on my to-do list for how we communicate with the kernel when there are very large numbers of regions. But the more you throw at it, the more the cost; the unit cost for a single set of regions is really quite hard to even measure. I believe it is linear, yeah. And it won't allow you to use more than, I think, 25% of physical memory, or 50% of vmalloc space.

Yeah, you should call in your file system maintainer and blame them. Sorry, the question was: if the hotspot identifier finds a hotspot in file system metadata, what should you do? And I've just totally passed the buck on that. Honestly, it depends on the file system, and it depends on the device it's on; it's a big topic to get into here, I think. But it's a valid question, and it's something we may cover in the wiki or the documentation later. Sure. Yeah.

Okay, so the question was: if I find a hotspot, can I find out what file it is? Yes, you can. I don't have anything canned for that, but yes; file an issue on GitHub, just to remind me, it's pretty easy to do now. It's just a case of asking the file system, basically, and the file systems can all tell us. Yeah, that's a good RFE. I didn't say that it wasn't expensive, but the computer does all that for me. Yeah, the forward lookup, we'd need to reverse that, and it will be costly, but... oh, you can do it already? Cool. I probably need to talk to you in that case. Yeah, I think we can use that.

Any more questions? Yeah. Yes, it works with all of them; sorry, the question was, does it work with thin volumes? We support all of the device mapper targets. Originally there was a limitation for request-based device mapper, that is, multipath, but the kernel side of that was solved a couple of years ago. So any device mapper target: linear, multipath, snapshot, old snapshot, new snapshot, thin provisioning, cache, the lot. Yeah, thanks. Any more? Yeah. Yes, absolutely.
Yeah, I hope, again, if I'm invited back next year, once we finish the nuts and bolts and mechanics, we can start to talk more seriously about, I'd really like to do a workshop next time: take a two, three hour chunk to look at the different ways you can actually use it. You know, I've been talking about the code here, rather than the performance-investigation possibilities it really opens up. Yeah, thanks. Right, yeah.

So, each region has a unique identifier within its device. They're not unique across different devices, but if I've got my root volume, region ID zero there is a unique identifier for that region. We also, in user space, create these fake identifiers for the areas; it's just an index. But it's the region IDs and the group IDs that you use to tell the command-line tool or the library, hey, this is the group, or this is the region, that I want to operate on, and they are qualified by the device. Sorry, I'm tired: the question there was, what are these identifier things that we use to reference the regions or the groups that we're operating on?

That question was about integration points, suggesting the Cockpit dashboard system, and also integrating with virtual machine start and stop. Those are both great ideas, and the thing I'm most interested to see now is getting this plugged into different places. The simple text displays I've been showing you could obviously be turned into really fancy graphics, and I'd love to see that.

Yeah, the question was about containers using OverlayFS. I don't know, is the answer to that. If there's a device mapper device underneath, we can work with that, but I have no idea how the file mapping stuff would work with OverlayFS. Red Hat's Docker stuff has tended to use the device mapper driver for that, so I have limited personal experience of OverlayFS. I don't want to make a promise I don't understand yet, yeah. Any more?

Right, well, thank you for coming and putting up with me and my continual forgetfulness about repeating the questions. Thanks again.

Yeah, sorry, he was a teacher before, he's strict. You know, like, ah, you don't remember one. No, no, it's fine, I really hate people who overrun, and I'm going to nip out for five minutes because I really need nicotine, but I want to see yours. Ah, come on. Hey. It's only been 15 years since... I'll be in one stance. Yes, yes, I finally get to close it, I may even have closed it already, but yeah, it's one of the oldest bugs.