Hello, I'm Steven Rostedt, and I'm talking about ureadahead, which is a dead project — but I'm resurrecting it from the dead. So first of all, what is ureadahead? It's a system startup tool created by Canonical in 2009. That was right after ftrace was created — ftrace went into the kernel in 2008 — so a year later they were using tracing to help the boot process. It was written by Scott James Remnant who, by the way, now works for Google. FYI, I work for Google too; we don't work on the same team.

So what does it do? When the system boots up, it traces the open system call — basically, when a file gets opened, it records it. After boot-up is done, it reads the trace and calls mincore(). mincore() tells you which parts of a file are present in memory, so you can look at a file and say: this part of the file is present in memory, this part is present, and everything else is not. It records that information and creates a "pack" file. On the second boot, right at the beginning of boot-up, it starts reading this pack file and calling the readahead() system call to tell the kernel to start pulling those parts of those files into the page cache.

Why is this useful? When an application execs, it doesn't get its memory right away. What the kernel does is set up the virtual memory addresses: it reads the ELF file and says, okay, this part of the file will go into this memory region, that part will go into that region — but it doesn't actually pull the file from disk into memory. It just has metadata that says where the file contents are and where they're going to go; it doesn't do the loading immediately. You can call mlockall(), and it's useful, but — as John Ogness mentioned at the RT mini-summit — it doesn't always help everything, though it helps quite a lot. When the process executes, it's basically a just-in-time situation: when it touches something, the kernel looks it up in the VMA table and reads the memory in from disk. If it has to read from disk, that's counted as a major page fault, because it's slow. But sometimes the page tables aren't filled in while the data is actually already in the page cache somewhere — that's a minor page fault, because all the kernel has to do is fill in the page table and come right back, and that's usually very quick. Databases do the same thing; if you have a database, you worry about the same problems.

So here's a visualization of what I just said. You have your application memory over there on the left, and at startup it's recorded where those functions will be. The first time you go to execute — guess what, there's no memory there. The CPU goes to execute, finds nothing, and faults. It goes into the kernel; the kernel looks up the VMA table for the process and says, here's the location on disk for this address; it pulls it into the page cache, fills in the page tables, and then goes back to user space, and the application happily continues executing until it gets to the next missing page and faults again. Wash, rinse, repeat.

To give you an idea of what this looks like, I ran trace-cmd, which is a front-end tool for ftrace — go to trace-cmd.org if you want more information about it — and did a kernel tracing recording. I used the function_graph tracer, which traces both the start and the end of a function, and I passed -l to limit the tracing to a single function, because the function_graph tracer has overhead and I only care about one function right now.
I want to look at handle_mm_fault. That's the kernel function that handles a user-space fault and pulls in pages via the VMA tables — it's the first function called, so let's just trace that and see how long it runs. I also found out from Matthew Wilcox — thank you very much, Matthew — that there's a trace event in the virtual file system layer called mm_filemap_add_to_page_cache. That's a very long name; I can never remember it and always have to look it up. What this trace event does is trigger every time a page from disk gets pulled into memory, into the page cache. And that's exactly what I want to know about: every time the kernel has to go to a hard drive or other device to fill the page cache, this triggers. Very useful information.

So I recorded this and executed Chrome — I hope everyone knows what Chrome is; I work for Google, and I'm presenting this in Chrome on a Chromebook. trace-cmd by default writes into a trace.dat file, so I moved it to trace-chrome-start.dat. I could have just used -o trace-chrome-start.dat, but that made the line wrap on the slide, and I didn't want the first line to wrap, so I broke it into two commands. I pulled up Chrome, shut it down, and recorded it again — two recordings. If you do this yourself, make sure you start by clearing your page cache; it's /proc/sys/vm/drop_caches — you echo a 1 into it and it flushes the page cache, so nothing is in the page cache. I did that first, then let Chrome boot up, shut it down, recorded it, let it boot up again, shut it down — I did it twice.

To give you an idea of what it looks like, I ran trace-cmd report on the trace.dat file, and this is the output. You'll notice there was first a minor fault, which means it took a fault but the executable page was already in the page cache, so all the kernel had to do was update the page tables and continue — that fault only took 53 microseconds. Down here you'll notice something faulted and had to read a lot of disk — those are a lot of pages being pulled in. That's a major fault, and it took 489 microseconds, which rounds up to 490.

So how do I analyze this? I wrote this code during Thomas Gleixner's closing talk at the RT mini-summit, because I wanted to actually analyze the data. Here's me doing a bit of marketing for libtracecmd, which you can also find via trace-cmd.org — and this is how simple it is: in three slides I got genuinely useful information. In main(), I just open the file — tracecmd_open() opens the trace.dat file and gives me a handle. Then I follow the two events I care about: the funcgraph_exit event and that really long mm_filemap_add_to_page_cache event. Every time the funcgraph_exit event is hit, I want it to call the funcgraph_exit callback you see there, and then below that, every time that filemap event gets hit.
I'm going to call the mm_filemap callback, passing in my data, and then I just call tracecmd_iterate_events(). That's it — that's how easy it is. What tracecmd_iterate_events() does is iterate from the beginning of the trace.dat file and hit all the events as it goes, and every time it hits one of the events you followed, it calls the function you registered.

So the funcgraph_exit callback — this is what it looks like. You get the handle, the event, the record, the CPU (I'm not going to talk about the CPU right now, that's another story), and the data. I grab the tep handle — tep is the trace event parser. It's a way of parsing the raw binary data into a human-readable format, or rather it helps you extract fields from it, because a blob of binary data from an event is kind of useless to me; I want to actually pull values out of it. To do that I have a func field, because from the function_graph exit event I want the name of the function, so I need the instruction pointer that's saved in the event. I also want the call time and the return time: the exit event of the function_graph tracer records a timestamp of when the function entered and a timestamp of when it exited, which gives me the duration of the function. So the exit event already has the start time in it, which is nice. I've always debated whether funcgraph_exit should do that, because there's a funcgraph_entry event too, but for analysis it's always been easier, so a long time ago I made the decision to have the funcgraph_exit event record the entry time without needing the funcgraph_entry event at all. These are static fields because I only have to initialize them once — the first time this callback is called, it initializes them so I know how to parse the fields out of the raw data.

And here's how I read it: tep_read_number_field(). By the way, anything that starts with tep_ is part of the libtraceevent library, which was pulled into perf early — before I was able to vet the API, so the API is kind of horrible and I hate it. But it has man pages — go to trace-cmd.org; all the functions are documented. The first thing I do is get the instruction pointer, but that's not useful by itself; I need an actual name. So you can call tep_find_function() and give it the instruction pointer, and it returns the name from kallsyms — the same thing you'd get from kallsyms. Then I read the call-time field, because I want the timestamp when the function entered, and the return-time field for when it exited. And then I check whether the function name is handle_mm_fault, just in case I'm tracing something else — which I wasn't, so I didn't really need this, but I put it in because I only care about handle_mm_fault. Then I increment a counter — hey, a page fault happened — and record the duration of that fault. Very simple: one slide.

The filemap handler: every time a page got pulled in, I don't care about the event contents, so I don't do anything with them. I just want a counter — I want to see how many times a page got pulled from disk into the page cache. That's how simple that function is.

Finally, I print it. Here's the data I print out: the number of page faults that happened, the total length of time spent in them, and the number of filemap events — and everything is in nanoseconds.
I don't really read nanoseconds well — it's too many digits, it confuses my brain — so I created this print_time function that simply converts the nanoseconds into seconds-dot-microseconds. I truncate there because I don't care about anything below a microsecond. By the way, I'll upload the code and these slides to the event website — I haven't done it yet because I finished the slides two minutes before the presentation — but if you're interested, the code is simple; you can cut and paste it yourself.

This is the output, and that little program gives information that's actually very, very useful. The first time Chrome booted up — too many zeros, too many numbers — it took a hundred thousand page faults: 1.98 seconds of page-fault time. Every entry into handle_mm_fault, summed up, took a total of 1.98 seconds. The second time I ran it, it took 90,000 page faults and 0.7 seconds. Less than half the time — but still, over a second of boot-up time was spent just servicing page faults. That shows you the impact of this.

So what does ureadahead do? Like I said, it records what is read on the first boot — it traces the opens of all the files; after the trace it checks what's in memory and creates a pack file; on the next boot it reads the pack file and calls readahead(). One problem with this — at least on Chrome OS, and I think Canonical did the same — is that when we start, we kick off ureadahead, but of course it's competing with the rest of startup: it's running on one CPU while startup runs on another. So if it takes a while to pull everything in, some of that work can be kind of useless, because the applications running alongside it are about to pull those pages in anyway. So there's a bit of a race there, but it still gets good results even with that race — it's still much better than not running it.

Now, about the minor page faults. ureadahead reads the pack file, looks at all the pages it lists on disk, and starts pulling them into the page cache while the applications are running. The application may still take a fault, because its page tables still aren't populated — readahead just fills the page cache, it doesn't map anything into the process. So you'll still see the fault, but the kernel doesn't have to go to disk: the data is already in memory, it just maps it in, and that's much quicker.

If you do ureadahead --dump, it shows you the contents of the pack file, so if you're interested in seeing what gets pulled in, it gives you a list. This is just one of thousands of entries, one after another — I grabbed one that fit nicely on a slide; some of them are very large, some are one-liners. It gives you an idea: here's the file, here's where it loaded things in. I wasn't sure what those @ signs were — actually, I just realized it now: they must mark the start of each pack entry, because every entry starts with one. Below that you'll see it gives the offsets into the file, the lengths of the data, and the physical block addresses.

So why do I care? I work at Google on Chromebooks — I'm on the Chrome OS performance team. I care about performance.
I very much care about boot-up times, and since this is the Embedded Open Source Summit, embedded folks are here, and embedded folks really, really care about boot-up times — I think that's why this room is full.

As I said, ureadahead was created by Scott James Remnant, but ironically he's not involved in this work at all. He came over to Google and never actually worked on it — here's a really talented developer, let's hire him, but not have him work on the thing we use that he created. I reached out to him and said, hey, I'm going to be working on this, and he said: I haven't touched that code in over ten years, do whatever you want with it. That's what he actually told me. Okay!

Chrome OS testing shows significant improvements with it. We have this tool called bootperf — it's upstream, you can look for it. When I work on Chrome OS, I work on several different Chromebooks; my house has ten Chromebooks lying all over the place. DUT stands for device under test, so I always use these variables, BOARD and DUT — DUT is basically the IP address of how to reach that Chromebook. This is common; a lot of Chromebook developers do this. So I set the variables, and whatever references BOARD and DUT accesses the right machine. bootperf is a command I use all the time, and since I just change the variables, I can pull it straight out of my bash history — search for bootperf, hit enter, and it runs. It basically reboots the machine ten times and records a bunch of information about how long the parts of the boot took.

So I went to the DUT I wanted and deleted the pack file, knowing that would trigger ureadahead to recreate it on the first run. Then I ran it, and the one number I really cared about — the one I knew ureadahead helps with — is the seconds between when the kernel hands off to user space and when the login screen shows up. Ten boots, all in seconds: the first boot took 7.445 seconds — that's the boot where the pack file did not exist. Every boot after that took six point something. That's a 14.5% savings. This is why we care about ureadahead.

Now, some history of ureadahead. Again: created by Scott James Remnant at Canonical, and again, he now works for Google. It adds two trace events to the kernel to capture the file name at open time — you can't just look at the open system call and get the path, because it may not be mapped yet; you have to find the place in the kernel where the path actually gets resolved and take the name from there. Of course, today you might be able to use BPF for this, but I don't. These are the two trace events — yes, I know there are three: the last one is unused. If it was ever used, it must have been in 2009, but they never got rid of it, and at some point they added a check that says if this event isn't available, just ignore it. I have no idea what it was for; it's still in the code. Anyway, this information is used to find out which file is being opened. One of the problems is it can't handle relative paths.
So if you actually open a relative path, ureadahead just gives up and ignores it. When this was pushed upstream — hey, we'd like you to accept these trace events — Al Viro said, "I will not accept any trace events in my code," and this was his code, so he NAK'd it. That means to use ureadahead, you must modify your kernel to add these trace events. And since it checks memory after the fact with mincore(), there's another problem with this method: it gives you no timing information. You get the opens, but you don't know when during boot each read happened — whether something was needed right away or much later.

In 2011 Scott James Remnant left Canonical for Google, and ureadahead, from Canonical's point of view, went into maintenance mode. Canonical is not a huge company compared to others, and when the main developer of something leaves, usually nobody else knows what that code did. So it was: okay, it still works, we'll maintain it until it doesn't. That meant forward-porting those trace events to every single Canonical kernel. If you booted a Canonical kernel, this would run and your boot would probably be faster than if you built your own custom kernel without those tracepoints — on a custom kernel, ureadahead was dead. But no one actually took over maintainership, and today it is unsupported by Canonical. Why? Most likely — and I'm speculating; I don't work for Canonical and I never asked the people who do — they stopped forward-porting those patches. Somebody probably said, what are these patches we keep carrying, nobody knows what they're for, and dropped them. So of course ureadahead stopped working, and nobody knew why. I bet it was: why are we running this tool? It's not working. There are complaints, bug reports — "my boot-up slowed down, I tried ureadahead, it doesn't work." I guess they concluded it was broken and got rid of it. The last update to ureadahead from Canonical was 2017. ureadahead is dead — long live ureadahead!

So Chrome OS is now the last user, and we actually understand it. Even before I joined Google — I've only been at Google about a year and a half — there were, quote-unquote, sort-of maintainers: people who actually looked at the code, figured out how it works, and kept it going. And no, Scott James Remnant does not help us with it at all. The thing with Google is that we have lots of people, but those people are not maintaining ureadahead — they're maintaining other things, and when ureadahead breaks, they have to go fix it. So you see fly-by patches: slap it until it works and move on, because there are other priorities, other deadlines. ureadahead has been filling up with band-aids. It breaks every so often — a kernel update happens, something changes, someone has to go look at it. It's very, very fragile. It needs a rewrite. This is where I come in — and this is where you come in.

So I started doing it. First, I looked at the code and said, wow, what a hack. But it was written in 2009, and for that it's pretty impressive: a year after ftrace went into the kernel, they did this — and not much in the code had changed since 2009. But we have libraries now. I could rip out half the code, because it did everything manually.
All of that is what libtracefs does, and if things change, libtracefs will be fixed for you — you don't have to maintain how you access the tracing system, libtracefs does that for you. So: rip it all out. But using libtracefs in a distro like Chrome OS means I first had to get libtracefs into the distro — oh my god, the bureaucracy of doing that was fun. The old code had paths hard-coded all over the place; libtracefs will find where the tracefs file system is for you. It looks at the proc file system, and if tracefs is already mounted it uses it; if not, it mounts it for you. It does everything — you don't have to care.

So what about those two trace events ureadahead uses? They've got to go, because our goal is to make ureadahead work upstream, and we can't have that if it depends on trace events that aren't in the upstream mainline kernel. And like I said, it hooks the open system call but doesn't handle relative paths. There has to be a better trace event — and we were already using it: that trace event with the name so long I can't say it. It records the order in which things are pulled in, so now I can actually know when during boot a page went from disk into the page cache — and it doesn't care about relative paths at all.

So how do I do this? By using mm_filemap_add_to_page_cache. This is what the event looks like. Inside the event it gives me the device major and minor numbers, the inode number, and the offset into the file where the page is going. It also — I didn't highlight this — has an order field. Right now order=0, which means it only pulls in one page at a time, but talking with Matthew Wilcox, who's the one who told me about this event: in the future, with his folio work, it's going to pull in a whole order of pages, and we'll get that order from the event too. So we get all this information — we'll know when the disks, the devices, are being accessed to fill the page cache. Very useful information.

Then I look at /proc/self/mountinfo, because it gives me a mapping from those devices to where they're mounted. That's how I found the root file system — which is not the first slash in that file, it's the second one — at, say, 254:3. And I care if it's reading from other mounted file systems too; I want to know about those.

Then — I love open source — I downloaded the source code of the find utility and looked at what it does. It uses getdents64, which hands you a whole block of directory entries from a single system call — you allocate a buffer, say "give me this many," and it fills it — so it's really fast. So I can scan an entire file system: I collect all the inode numbers from the trace, keep track of which inodes I've found, and go through the file system; once all the inodes are resolved, I stop. It takes a split second — it's pretty fast, actually, and because it can stop early, it can be faster than find. I did benchmarks.
My little utility that scans the entire file system is actually faster than find doing the same thing. By the way, if you want to look at my development code — this is not upstream, but I have a GitHub account where I'm putting all this stuff. The main branch has what we're using in Chrome OS as of today, which does not have these updates, because I'm still working on getting libtracefs in, and we have to verify there are no regressions — all the fun stuff everyone here knows about. But I have a develop branch that has the working code. I've been running tests, and yes, it's still just as good as what we have today — actually a little bit better. There's no regression so far, and sometimes it's better: it's more consistent. I noticed the old way wasn't always consistent in how it created its pack file; this one is a little more consistent.

But there's much more to do, because this is just a start. The only thing I did was make it use an upstream trace event. ureadahead from my develop branch can be used by anyone — you can download it and it will work if you have a recent kernel with that tracepoint. Upstream kernel, ureadahead develop branch: it works. But that's not why I'm here. I'm here to tell you what I've done — and I'm looking for ideas on what more we can do. It opens up Pandora's box: we can now record how things are actually done.

Here's what I'd like to do. I want to split the utility into one tool that does the tracing and another that reads the pack file — there's no reason one utility needs to do both. You could then trace multiple kinds of scenarios and create multiple pack files: a series of pack files for a series of different situations. For Chrome OS that would be perfect — we'd like to create the pack files in the lab, not do the tracing on the device at all, and just install a pack file with every update. Also, the first time you boot a Chromebook, it goes through a whole different path: you have to log in, register your account, do all that first-time setup. That's not the path I want to be recording — or actually, I could have a pack file just for that, and then once you've logged in, switch to a different pack file and start pulling things in for that scenario. We could really fine-tune this.

So let's make it smarter. Let's also look at the timestamps and figure out where in the boot we are and where we should start pulling things in. Maybe we can throw things out, knowing we're never going to get them in before they're needed — we know how long this takes, so drop those and start pulling in the things that will actually help. We can make ureadahead much smarter, much faster, and make our boot better. Since we have the timestamps, we can skip things. And I had to post this slide — it's actually a serious consideration.
If we're going to rewrite it, maybe do it in Rust. There are people doing Rust wrappers around libtracefs and libtracecmd — there's actually one person rewriting my libtracecmd in Rust. Everyone's going Rust, so why not? And if you have any other ideas, ping me. As I said in the abstract when I proposed this talk, I'm not really here to talk about what I've done; I came here to talk about what we can do. So this is a call to arms, a call to action. A lot of people here care about boot-up times; we have a utility that might be able to help with that — contribute! Thank you. I also want to say there were 120 slides — told you, Chris, I'd finish. I guess we have virtual attendees, so: any questions? We have a microphone — there's a question up here.

[Audience] You have access to timestamps and could do some basic sorting. Did you apply some basic sorting, and did it give better results?

I haven't done it yet — this is part of the reason I'm here asking for help: I don't have time to do it all myself. I'm one of those Google engineers who saw this and got maybe one day to work on it before going back to what I'm actually being paid to do — I mean, I am being paid to do this too. That's why I'm trying to make this a community effort: if Google does it, it's really going to focus on Google's needs, and I figure I'll get better ideas from other people with different use cases. I don't want this focused only on Chrome OS; I want it for everything. So no, I haven't done that yet. I only got this working a few months ago — probably January, actually. Yeah, maybe January I had all this working, but then I had to go work on scheduling and SFrames and a bunch of other things. So the answer is no, not yet — it's a to-do item, and ideally one of you will do it and send me the patches.

[Audience] Sure. A couple of use cases from embedded that are interesting. One: often your root file system is a compressed file system, so there's a bit of a disconnect between the block device — which may be a slow flash device — and the CPU-intensive decompression. At the file layer you get both pieces at once, but there are interesting trade-offs: optimally you want to keep that block device busy, but if you're burning CPU doing the decompression at the same time, that may actually make your boot slower. That's kind of a challenge for ureadahead — it's disconnected from the block device, which may be slow, and from the decompression side. I don't know if there's a way to connect or trace both so you understand how that interacts. The other challenge is the first time you boot, running ureadahead: if your application can tolerate that extra slowness the first time around, you're okay, but in the embedded case you may have a read-only root file system created at build time, like with Buildroot. You'd want to do some kind of performance measurement and then build another root file system containing the performance information you've recorded — and that creates another problem for implementing ureadahead. You can imagine a scenario where boot-up time determines how fast the backup camera in your car shows up — a very real example: the first time after a software update, your backup camera takes a
while to boot up, for one there may be a homologation requirement you're violating, but it's also just not a good user experience.

Okay, so first I want to come back and say something here. This is kind of like how people come to real-time: they say, hey, I'm going to install the PREEMPT_RT patch, and guess what, my system is real-time. No, it's not. It takes a lot more work; you have to know your system. So ureadahead is not a magic bullet. I don't expect it to be, and I'm not marketing it that way. I'm saying it's a tool we could use. You brought up a lot of corner cases where maybe it's not appropriate, or where maybe you could find a way that, if ureadahead did it this way, it would work. One thing I also want to show is that with the tracing there, you could use trace-cmd, run your traces, and get the information to see if those scenarios are happening and how to circumvent them. I'm just giving you the tools, but you have to do your homework; you have to understand it.

Right. It was more a call to action, to see if there's a way to solve some of these problems in general. Yeah, and maybe work the tooling into something like Buildroot, where you have the ability to do this.

By the way, I'll ask a question of the audience: is there some place, like a group, IRC, mailing list, whatever, that's focused on boot-up times? Maybe we should have something like that, a boot group. Whoops, there's a hand over there.

Thanks. I'm assuming the pack file contains the actual data, a copy of it from the disk. With an SSD or some sort of device that's fast at seeking, is there any point in doing that?

I believe this was done on SSDs. Sure, that poses problems. It was SSDs.

What I mean is, could the pack file just contain a list of the blocks to read?

No, that's all it does. It doesn't contain content. Going back to the
Thank you, I just misunderstood that. Okay, thank you.

Yeah, let me see. This is all it holds: offset, length, and physical block address. That's it. File name, offset, length, boom. It also says the volume and all the offsets in it. That's the only information you get.

Other questions? Is there anything online? Do you know of any questions online? I like it when I have no questions; that means I explained everything perfectly and everyone understands everything. Great. Okay, wait, good, we have one more question.

For speeding up the whole boot process, you would start ureadahead early in the boot chain?

Yeah. All right, thank you.

That's it? Oh, hope, a question over here. Dave, sitting down? Can you just grid them all together? The space guy, he's launching right now; he's like, no, I don't want to talk to you now. Nah.

Just a quick question: the pack file, is there a way to validate that it still matches your filesystem or filesystems?

You'd have to write a tool to do that right now. I think the way ureadahead works today, it recreates the pack file like once every month, or every few days; it'll say, let's try to recreate a new pack file just to see if key things have changed. So it doesn't really validate. That's actually something else we're looking at: is there a way to detect that, you know, maybe the boot-up changed? But usually boot-up doesn't change. That's why we want to generate a pack file every time we have a new upgrade, because every update will be different; we want to make sure there's a pack file and kind of attach it to the applications.

You could probably just compute a hash; it doesn't take too long. While pulling in the pack file: did you read what you expected to read?
That's true. Another thing, by the way, that I didn't bring up, because of the embedded folks and because I think I was short on time, is that VMs are another use case we're looking at. With VMs you want a really quick bring-up: you have this running, and maybe every so often when the VM comes up it has all this stuff, because bringing up a VM does a lot of disk reads.

Okay, are we out of time? We're out of time, I'm afraid. So thank you very much. And thank you, Steve, that was really great. Thank you very much.