Actually, this talk is supposed to be more of a discussion to get ideas, and one thing I forgot to arrange: unless someone else wants to, Kate, maybe you can, if you have a pen and paper, just take notes. I picked you because I knew you were the most reliable for that, and everyone else here is even less reliable. Okay, so first off, I think a lot of people probably already know what ftrace is. Is there anyone who hasn't heard of ftrace? So everyone here knows what ftrace is. The first part, then, is just to go quickly through what ftrace currently does, because this talk, or discussion, is supposed to be about what it should do, or rather what you'd like it to do. But to know what you'd like it to do, you have to know all the little things it currently does, because a lot of times I'll talk about ftrace and people who have been using it for almost as long as it's been around will say, oh, I didn't know it could do that. So I'm going to go through a general overview of everything ftrace can do, and I'll step through the topics one at a time.

Starting with the most powerful aspect of ftrace: the function tracer. It's very powerful because you can actually see everything that's happening within the kernel. A lot of people don't realize, or maybe they do, that it uses dynamic modification of the code to keep the overhead down, so the call sites are mostly no-ops. We have little hooks inside each function placed by GCC's -pg option, which puts in an mcount or __fentry__ call; I'm not going to go into the details there. At boot up we turn them all into no-ops, and when you want to turn on function tracing, it dynamically changes those no-ops into calls into the tracer. This is also how live kernel patching works: it uses the ftrace infrastructure to hijack functions and replace them with the patched function for live updates.

Starting off, the easiest thing to know is that because of this dynamic tracing, maybe you don't want to trace everything, since enabling full function tracing adds quite a lot of overhead. A lot of times it's nice to just pick a few functions you want to watch. set_ftrace_filter is the file where you echo in function names, and only the functions you specify will be traced. There's also set_ftrace_notrace, which is nice if you enable all functions but just don't want to trace some of them. For instance, a lot of times I don't want to trace the spin locks, because spin locks have more overhead when they're being traced; or I'll start a trace, find a lot of functions I don't care about, and throw them into notrace. notrace takes precedence over the filter side, so if you have a function in both, it won't be traced. So the filter keeps the overhead down, and notrace gets rid of the stuff that shows up a lot that you don't care about. Also sort of new is set_ftrace_pid: if you only want to trace a certain process, you write that PID into set_ftrace_pid and that process will be traced and nothing else. Everything else still pays the overhead, because all the functions are still hitting the tracer, but your ring buffer will only contain the data for your process.
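As a rough sketch of the filtering interface described above (assuming tracefs is mounted at /sys/kernel/debug/tracing, as it usually is; schedule and the spin lock functions are just example names):

    cd /sys/kernel/debug/tracing
    echo function > current_tracer            # turn on the function tracer
    echo schedule > set_ftrace_filter         # trace only schedule()
    echo '*spin_lock*' > set_ftrace_notrace   # notrace takes precedence over the filter
    echo $$ > set_ftrace_pid                  # only trace this shell's PID
    head trace                                # look at the recorded data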
Also sort of new is the function-fork option: if you echo 1 into options/function-fork, then when a process whose PID is in set_ftrace_pid forks, the child's PID gets added to set_ftrace_pid, and when a process exits, its PID disappears from the list as well. So if you want to trace not just one process but all the threads and children it creates, setting function-fork will trace everything you expected it to.

One of the small, not very well known features of ftrace is triggers: you can write into set_ftrace_filter in a way that puts a trigger on a function, where you can say, I want this function to do a stack trace. Say you want to see everywhere scheduling can happen: you can put a trigger on the schedule function, and the tracer will record a stack trace every time a schedule happens, so you can see why your process was scheduled out. Snapshot is a feature (I'll talk a little more about this later) where you have two buffers, kind of like how the latency tracers work: there's a separate spare buffer, and when a function hits a snapshot trigger, it swaps the main buffer with that spare buffer. The spare buffer doesn't change unless it gets swapped with the live buffer that's constantly being updated. So you're tracing along, and you say, okay, this function seldom gets hit; any time it's hit, I want to save the trace. When it's hit, the snapshot swaps the buffers, and you can look at the saved buffer any time later, so you don't lose the data while tracing continues. You can also turn tracing off and on. Actually, I've never needed the traceon trigger with the function tracer, well, maybe I used it once, but the traceoff trigger is really nice. If you're trying to debug something on a kernel you can't modify or touch, and you know you're hitting some sort of bug and you want to see everything up to that point, and you know a function in the bug path that's not normally called unless you hit the bug, you can put a trigger on that function that says: turn off tracing here. So as soon as you hit your bug, tracing stops, and you get to see everything without worrying about the data being lost to overflow of the buffer.

Yes. Actually, I should have put that on the slide; there's a feature I added that I forgot about. There's a flag, traceoff_on_warning, so if you actually hit a warning, tracing turns off. I think it's in the proc file system; I forever forget. Yeah, two or three locations. Yeah, I've got to put everything into the tracing directory; that's on my to-do list. It's one option I always use. So while you're tracing, you can set that flag, and if you hit a warning, tracing stops, so you don't lose the data you're looking for. You can also enable and disable events when a function is hit; people have asked about that, though I never actually used it, maybe once. And there's a profiler, if you just want to see how many times a function gets hit, though perf can do that as well.
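Here's a short sketch of those triggers; the syntax is the real tracefs trigger syntax, but my_buggy_func is a hypothetical stand-in for a function that's only called in your bug path:

    cd /sys/kernel/debug/tracing
    echo 1 > options/function-fork                      # follow children of PIDs in set_ftrace_pid
    echo 'schedule:stacktrace' >> set_ftrace_filter     # record a stack trace on every schedule()
    echo 'my_buggy_func:traceoff' >> set_ftrace_filter  # stop tracing the moment the bug path is hit
    echo 1 > /proc/sys/kernel/traceoff_on_warning       # the stop-on-warning flag mentioned above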
There's a stack tracer: on every function it looks at the stack to see how big the stack is, and every time it hits a new maximum it records the stack. It does a stack trace each time it hits a new max, so it tells you how much stack each function actually uses. After I submitted this, I was CC'd on a private email from Linus saying, hey, I see this new stack feature in here, I ran it, and you guys here suck. He didn't say it on the mailing list; he just sent that and CC'd me. I thought that was kind of cool. I won't say who he was talking to, either.

Another nice tracer is the function graph tracer. If you haven't used it, it's the one I think is the most impressive of the tracing utilities, because when you look at the trace you can see what's happening inside the kernel and it looks like C code. It's formatted with the curly brackets and indentation, so you can see the paths through all the functions. The neat part is you can see the call graph of what function calls what function, and there are timestamps as well, so it not only records the call graph, it tells you how long each function ran, because it traces not only the entry of the function but also the exit. Because of that, the function graph tracer is a very expensive tracer. If you have a really deep call graph, don't be too worried about high latency numbers, like "this function took so long to run", because the function graph tracer's overhead is definitely going to skew that. So if you see something that looks really big and you're worried about it, use set_ftrace_filter, which works with the function graph tracer, to trace just the one function you're worried about, and that gives you a much more accurate reading of how long the function actually took. So I always say: if you run the function graph tracer and you see something you don't like, filter on that one function, run it again, and you get a nice accurate number for how long it actually took.

max_graph_depth is something I added; I don't usually use more than a max depth of one. What it does is make the function graph tracer only trace down to the depth you give it. With a max depth of one, the first function that enters the kernel gets traced and nothing else. The reason I did this was for the NO_HZ code. Ideally with NO_HZ, if I have a user space task spinning and I want to make sure it can run without being interrupted by the kernel at all, I run this and I can see any time that process goes into the kernel, and I only care about the first entry point, not anything below it. Some people say it also works as a kind of more powerful strace, because it shows you things like page faults: when your process takes a page fault you see it, where strace doesn't. Yeah, as I said, that works with single-function filters too. And if you run the profiler with function graph tracing, it gives you not only the hit counts but also timings of how long the functions ran, but again, those can be skewed.
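A sketch of the knobs just described; do_page_fault is only an example of a single function to time in isolation:

    cd /sys/kernel/debug/tracing
    echo function_graph > current_tracer
    echo 1 > max_graph_depth                  # only the first kernel entry point (the NO_HZ use case)
    echo 0 > max_graph_depth                  # back to unlimited depth
    echo do_page_fault > set_ftrace_filter    # time one function with far less skew
    echo 1 > /proc/sys/kernel/stack_tracer_enabled   # the stack tracer; results show up in stack_trace and stack_max_size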
I mentioned the snapshot; there's actually a snapshot file in the tracing directory, so at any time your user process can take a snapshot, because of something that happens, by writing into the snapshot file. And if you ever forget how to use the snapshot file, just cat it: if the snapshot isn't enabled, it prints the directions on how to use it. Personally, I forget how to use it, and I put that in just to remind myself, so there's documentation inside the kernel, and it's a cut-and-paste of the actual instructions. Again, it swaps the main buffer with the snapshot buffer.

Trace events: there are thousands of them today, and I'm sure if you've used ftrace you've used trace events. There's stuff for scheduling, interrupts, timers, things for hypervisors, and everything else up there. There's also set_event_pid, so you can say, I only want to trace this certain task, and there's an event-fork option if you're interested in tracing not just the tasks you put into set_event_pid but also any children they fork. Note that this is a separate facility from function tracing: if you're running the function tracer or function graph tracer alongside events and you want to trace PIDs, you've got to put your PID in both files, and set both fork options if you want the children followed. They're separate entities; don't think that setting this makes function tracing follow along.

Events have triggers as well. There's the snapshot trigger, same as I mentioned with function tracing; there's traceoff for events, so if you hit this event you can stop tracing; you can enable and disable other events; and there are histograms, which I'll talk a little about later. What's nice about event triggers is that you can also put a filter on them. Say I only want a snapshot for one task: if I set the snapshot trigger on the scheduler event, I can put a filter in there that says snapshot if comm == cyclictest. That means every time a schedule happens, it checks whether it was cyclictest that was running, and it only does the snapshot if it was. Or a stack trace; that's probably more efficient than stack tracing on every schedule.

There are latency tracers: an interrupts-off latency tracer (irqsoff), a preempt-off latency tracer (preemptoff), and the most common, preemptirqsoff, which covers both preemption and interrupts being disabled. If you have the case where you do local_irq_save, preempt_disable, local_irq_restore, preempt_enable, it records the outer pair, from the IRQs being disabled to preemption being re-enabled. It mixes both, so you can see the worst place where scheduling latency is possible, and it records everything, and every time it hits a new max it takes a snapshot. These latency tracers are actually magical snapshot tools. For the wakeup tracer, there are three variants. If you just run the plain wakeup tracer, it looks for the highest priority task, even the highest priority nice task, and when a wakeup happens it records the time until that task gets scheduled in. If a higher priority process comes along, it says, okay, ignore the one I'm tracing now, let's look at that guy. So when the traced task is preempted by a higher priority task, it says, oh, this is normal, forget what we were doing, and it keeps tracing with the new task.
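A sketch of an event trigger with a filter, as in the cyclictest example above, plus the combined latency tracer (this is the documented tracefs syntax, though exact quoting can vary by kernel version):

    cd /sys/kernel/debug/tracing
    echo 'snapshot if comm == "cyclictest"' > events/sched/sched_switch/trigger
    cat snapshot                              # the swapped-out buffer, preserved for later
    echo preemptirqsoff > current_tracer      # combined preempt/irqs-off latency tracer
    cat tracing_max_latency                   # worst window seen so far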
If you use wakeup_rt, it only cares about real-time tasks, and if you use wakeup_dl, it only cares about deadline tasks. Both of these are static tracers, which means you can't do much to modify them; they are what they are, and it's hard to do things like caring about only a single task and running it on just that task.

If you want to do debugging of your kernel, trace_printk is the main thing I'd suggest using. It's like printk, but it writes to the ring buffer, and it doesn't have the problems of printk, which I'll be talking about at the Embedded Linux Conference and Open Source Summit, both places, and at Kernel Summit. The problems with printk are that it can't always be called in the scheduler, and if you use it in interrupts it can cause problems. trace_printk is extremely fast, and it can be used in any context: NMI context, the scheduler. The only place you can't really use trace_printk is inside the ftrace ring buffer code itself, because then it recurses. Although I think that might work now; I added recursion protection there, so maybe you could actually do a trace_printk within the ring buffer code. I haven't tried it since I added the recursion protection.

ftrace_dump_on_oops is a great utility. I use it a lot, and I know Thomas uses it, and Peter, you probably use it as well. ftrace_dump_on_oops is for when you have a serial port attached to your machine, you're recording the serial output, and you hit a bug that crashes the machine: it will dump out the entire trace buffer once the crash hits. Usually the first thing it does is turn off tracing, so that hopefully the dumping itself won't cause problems, and then it does the dump. And you should recall this: if you have access to the code, you can also call ftrace_dump() in any place to dump the trace buffer when you hit a particular path. But if you set it on either the kernel command line or via sysctl in the proc file system, then when the kernel crashes, it will just dump the buffer. And note, if you do use this, you might want to shrink the ring buffers, because by default the ring buffer is about 1.4 megabytes per CPU.

The counterpoint from the audience: if you crash the kernel and use ftrace_dump_on_oops, you usually have to increase the buffer size, because otherwise the interesting information is already gone out of the ring buffer; for deeper, complex problems you need a lot of historic information to understand what happened. That's where I end up often enough. Really? Yeah, the last one took me four hours to get out of a serial port. Okay, that's interesting. Or the big box, yeah: I was once debugging a box with 240 CPUs, I didn't shrink the buffers, I triggered it, and it took over a day to finish the printout. So use it with caution.

Another nice little utility: I know some people hate this, but if you get it working it's awesome, and I've done a lot of debugging at customer sites this way, especially since the company I used to work for, Red Hat, has kexec/kdump working well. What kexec/kdump does is, when the kernel crashes, it jumps into a pristine kernel that basically records all the memory and creates a core dump that GDB can read. And there's a utility called crash; if you don't know about it, look it up.
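Both knobs for the dump-on-oops behavior are real; the buffer size here is just an example value:

    echo 1 > /proc/sys/kernel/ftrace_dump_on_oops        # or ftrace_dump_on_oops on the kernel command line
    echo 288 > /sys/kernel/debug/tracing/buffer_size_kb  # shrink the per-CPU buffers before dumping over slow serial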
Search for "GDB crash Linux" and find that utility; it's basically a Linux-kernel-aware GDB, and when you load up the core dump you can look at task structs, you can look at the state of the whole system when it crashed. Lai Jiangshan, and I'm probably butchering his name, I think he was working for Hitachi at the time, I haven't seen him in a while so I don't know where he is now, one day asked me: Steve, for your trace-cmd output file, trace.dat, do you have documentation of its format? And I said yes: you install trace-cmd, you do make install doc, and man trace-cmd.dat, and there's a man page that tells you the format. He said thank you very much. A month later, I get CC'd on patches being sent to the crash maintainer for a trace plugin for crash, and now I have to keep it working. I don't know if it's up to date right now, but if it's not, please let me know and I'll try to fix it up, because it's dependent on kernel versions. What it does is this: you go into crash, which looks like GDB, you load the trace plugin, and you can do a trace dump. It reads the ftrace ring buffers out of the core dump and creates a trace.dat file; it reads the event file formats, it reads everything, and now you can use trace-cmd, KernelShark, and everything else just as if you had saved the trace at the time of the kernel crash. I've debugged several things remotely this way, where we couldn't have access to the machine because of proprietary software they were running on it. They would run their software, the thing would crash, and I'd just tell them, here, enable these trace points or trace functions, enable function tracing or whatever, and they'd give me the core dump. I'd run this, get the trace.dat file, analyze it, realize, oh, I should have turned something else on, so I'd give them more instructions, we went back and forth a few times, and sure enough, I found the problem.

As for what's coming: there are histograms today, which I'm not really going to talk about, but what's coming is really cool histogram work where you can say, inside the kernel, I want this event and this event, and get a histogram connecting them, adding timestamps as well, so you can get a histogram of the difference in time between two events. Take, say, sched_wakeup and sched_switch: you can get a histogram of processes waking up and the time it took until they were scheduled in, with a nice spread in the histogram. Actually, I shouldn't say all the processes; usually you do it on one process. Remember the wakeup tracer, where I said you can't really run it on a specific task? With the histogram features that are coming, you can. It's not going to make 4.15; it'll probably be in 4.16.

What? eBPF? No, this is Tom Zanussi's work; he asked if it was eBPF. And it's deliberately simple, because they were talking about using eBPF, but that's a little more complex to get working. You can do more with eBPF; I'm repeating what he's saying anyway, so yes, you can do more with eBPF, and that's coming up soon. But this covers one of the most common use cases for what we need, it keeps things simple, and it's not that difficult.
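The histograms that exist today look like this, a standard example over the kmalloc event keyed by call site:

    cd /sys/kernel/debug/tracing
    echo 'hist:keys=call_site:vals=bytes_req:sort=bytes_req' > events/kmem/kmalloc/trigger
    cat events/kmem/kmalloc/hist              # the accumulated histogram table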
There are a few modifications he made that I didn't like; that's why it's not going to be in 4.14 or 4.15, because he's got to rewrite one section of his code. So I'm looking at maybe 4.16 for it, but it's going to be a lot more flexible. It makes ftrace much more flexible for these use cases. You can also create synthetic events. Here's how the histograms usually work: on a certain event, you can count how many times it was hit, and you can specify a field within the event and get a histogram of the values of that field. With synthetic events, you can take two events, create histograms on both of them, and create a synthetic event that connects the two, and that synthetic event actually works as a trace point. So when a histogram trigger fires, saying, hey, this value I wanted to trigger on matched, it calls the synthetic event, and you can pass in the information you want as parameters, whatever you want sent to that event, and it records it just like a normal event. And on that synthetic event you can do histograms, you can add triggers; everything you can do with a normal event, you can do with a synthetic event. To make this work, we needed variables, so we now have variables added to the events. When an event is triggered in the histogram, you can say, hey, I want to record the timestamp at this event and then reference that timestamp at the next event, which gives you the latencies, but you can record anything you want. That's coming up; the code has been written, and like I said, it'll probably be in 4.16.

Another thing being worked on is IRQ-disabled and preempt-disabled events. Whenever interrupts are disabled or preemption is disabled, we'll actually create an event for that. It's dependent on lockdep, and yes, it adds overhead if you run it, so it's not going to be on any production machines, but for debugging purposes it'll be there.

Another thing that's coming is module init tracing, and it's already partially there today. Right now you can say, I want to trace the functions of a module when it gets loaded. In set_ftrace_filter, you can say, I want all the functions in this module to be traced as soon as it's loaded, even though it hasn't been loaded yet. When you put it in there, it saves it, and when the module is loaded, it says, hey, here's the module someone asked for, enables all the functions in that module, and traces it. On top of that, I now trace init calls, both the boot-up init calls, so if you enable function tracing you can now trace init calls, and also module init functions. If anyone remembers the bricking of the e1000e, that had to do with init functions being traced; if you want to know more details about that, see me afterwards.

And real quick, I also have code coming for tracing more on the user space side, between host and guest. If you have a bunch of guests running, I'm trying to get trace-cmd to be able to say, hey, enable tracing among all these guests and the host, and then interleave all the events together. That takes a few kernel changes, but not much.

Now for the wish list: things I'd like to happen, and that for me would be awesome.
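The :mod: command in set_ftrace_filter is there today; the synthetic event lines below are a sketch of the interface as it eventually landed, following the wakeup_latency example from the kernel documentation:

    cd /sys/kernel/debug/tracing
    echo '*:mod:e1000e' > set_ftrace_filter   # trace every function in a module, even before it's loaded
    # a synthetic event connecting a wakeup to the following sched_switch:
    echo 'wakeup_latency u64 lat; pid_t pid' >> synthetic_events
    echo 'hist:keys=pid:ts0=common_timestamp.usecs' >> events/sched/sched_waking/trigger
    echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts0:onmatch(sched.sched_waking).wakeup_latency($wakeup_lat,next_pid)' >> events/sched/sched_switch/trigger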
First and foremost is zero overhead for tracking when preemption and interrupts are enabled and disabled. That could allow something like lockdep, or something like it, to be enabled on a production machine; I don't know if lockdep itself could, since lockdep has to be initialized early.

How, by using jump labels or something? You can't? You don't think it's there? Okay, he needs a mic. He's got the mic; just don't drop it. So, jump labels are the wrong thing; alternatives could maybe work. You can do the trace points. You cannot do lockdep: lockdep very heavily relies on prior state, so you can only enable it when you know there are no locks held, which is basically never. Okay, so lockdep probably not, but there are features that depend on lockdep that I would like to enable, like the statistics. I think Josh recently rewrote, or is in the process of rewriting, a bunch of the paravirt ops stuff into alternatives, because paravirt also has this issue. So if you can hook on top of that, that would be awesome for the interrupts side. Thank you. For preempt, I'm not sure. Well, preempt_disable on x86 is a single dec instruction, I think. Right. So maybe an alternative for that? Yeah, but you need to be careful not to increase the text size too much. The call is what, seven bytes, and the dec has... Five bytes. Five bytes. So yeah, that might work. Okay, so that's one of my wish list items: I would love to be able to enable the latency tracers, the preempt/IRQ-off latency tracer, on a production machine. Yeah, so alternatives shouldn't be too hard. Okay, so use alternatives for that.

No. No, no: alternatives patching happens any time the qualifying conditions change, which is typically the CPU capability bits, but it can happen any time. Yeah, and they don't do this anymore, but there was a time when, if you booted an SMP box and you took CPUs offline all the way down to one CPU, it would say, oh, we only have one CPU, we're a uniprocessor now, and it would actually NOP out all the spin locks. That used to be the case, but then they realized it caused too many problems and they got rid of it. Yeah, I think we still patch out the lock prefix, but I'm not entirely sure. Do we? So did we drop that as well? We replace the lock prefix with a DS prefix, I think. Yeah. Oh, so there are changes done at boot time. We did it for some time on hotplug operations as well, to exercise that code, but we no longer do that, so it's a boot-time-only thing. Okay, yeah. If you run an SMP kernel on a UP system, we patch out the lock instruction, the lock prefixes. Yes, okay. Why do I remember this? I was actually exercising it at runtime just to make sure it worked, to stress it, because I remember seeing that. It actually shocked me when I went down to uniprocessor, where I shut every other CPU down and saw everything change. I went, what happened here? Ftrace actually showed that to me: I was just pulling up ftrace and I saw all this stuff happening, and it freaked me out. But I was like, wow, this is kind of cool. And that's what I was saying about the lock events.

Another thing I want is more interaction between eBPF and ftrace, maybe having the histograms hook into eBPF to make them even more powerful: have the simple case, and then when you want to add more complexity, be able to hook in eBPF. Anyone here not know what eBPF is?
A few people? Okay, that's okay. BPF is the Berkeley Packet Filter. You've probably used it if you've ever done network packet filtering; it was written for filtering network packets. eBPF is the extended Berkeley Packet Filter, which made it more generic. It's basically a virtual machine, a just-in-time-compiled thing that goes into the kernel, so you can write programs and insert them almost like a module, but it's made for small, specific things. It's limited in what it can do, to prevent problems: you can't do loops, there are various constraints, and it has a bunch of restrictions so that it can be validated not to cause any issues in the kernel. Right now it works with perf; you can actually write an eBPF program and modify your traces in perf. I want the same functionality with ftrace as well. It shouldn't be too hard.

Another thing I would love to have, and maybe eBPF can help with this, is a way of reading the function parameters. When you're tracing, I would love to see the arguments. It's not hard for a fixed set, but the fact is there's no generic way to map them all. So maybe what I could do is have something that figures out which functions take which arguments, and have it loaded kind of like an eBPF program, or maybe a module, something that just loads and says, map these functions, and when this function is triggered, record the parameters. The same goes for the function graph tracer: since it traces the return of a function, it would really be nice to know, if the function has a return code, what that return code is. Function graph needs a rewrite. It was written, we got it working, and great, but there's a lot of code in there with a lot of limitations, and it's one of the reasons it's so slow. It could be rewritten to be much faster, and it really does need a rewrite. That's on my wish list.

Oh, another thing: I would love to be able to group functions by sections and files, since so many times I just want to trace what's in one file and not care about the rest. Usually what I do is grep the functions out of the source and somehow pipe them in; see the sketch below. It's really bothersome, and it would be nice to say: everything in this file, just enable this file to be traced. It wouldn't really be too hard to implement. The only issue is I don't want to bloat the kernel by putting in a bunch of metadata, so maybe there's a way to make it kind of a module: you say, hey, I want these sections, and they get put into a module, so you can just load a module that maps everything together. That way you only bloat the kernel if you want to use the feature, or you can compile it in, I don't care. So that's a possibility; it's one of my wish list items.

One thing I'm hopefully going to get done, finally, something that's been on my to-do list for a long time and has been asked for, is to convert the trace-cmd output file to the Common Trace Format. There's a Common Trace Format utility for perf. Mathieu Desnoyers, of LTTng, worked on it as a single format for reading any trace data you could have: if you convert everything to CTF, then all the utilities, KernelShark, trace-cmd, perf, whatever, should be able to read it, and LTTng can read it.
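A hedged sketch of that grep-and-pipe workaround; mm/vmscan.c is just an example file, it assumes you have the matching build tree, and any names that were inlined or marked notrace will make the write fail, so in practice you may want to intersect the list with available_filter_functions first:

    cd /sys/kernel/debug/tracing
    # pull the functions defined in one object file and feed them to the filter
    nm --defined-only /path/to/build/mm/vmscan.o \
        | awk '$2 ~ /^[tT]$/ { print $3 }' > set_ftrace_filter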
The idea is you'll be able to see data from anywhere, and everything should be able to correlate: interleave a bunch of different types of data and get a better picture of everything. One thing I want to work on is taking the perf ring buffer and making it not so coupled with perf. Ftrace was optimized for splice, the splice call, so everything is done by pages. That's great for sending over the network, because everything gets zero copy, whether you're writing to files or sending packets off over UDP. Ftrace is really good at that. What it really sucks at is mmap. You can't really mmap the ring buffer; you actually have to read the data out of it. So if you just want to stream data and not care about saving it, ftrace is weak at that, and perf is very, very strong. I don't want to write another ring buffer; I'd rather use what's there, so I kind of want to decouple the perf ring buffer, and that way maybe perf and ftrace can share a little better.

I still need to make proper trace event and trace-cmd libraries. We have a library that perf uses; I think right now perf, trace-cmd, PowerTop, and the machine check exception utility, I don't know what that one's called, all use the same library, but they each carry their own copy of it. It's not packaged anywhere. That's got to change; we've got to have one copy.

Am I talking too slow? Should I go faster? I'm actually working very hard to talk slowly. Am I talking slow? Probably the slowest you've ever heard me talk.

KernelShark: I have a full-time developer given to me. He'll be here next week; I'm trying to introduce him to the community. He's working on KernelShark. The first thing we're doing is getting rid of GTK2, since it's not supported anymore, and I don't really feel like porting to GTK3, because I know when GTK4 comes out I'll have to rewrite the code again anyway. So instead of doing that, we're going to Qt, however you pronounce it. He has it almost fully functional, so KernelShark will be Qt, getting away from GTK. I want plugins to be able to modify KernelShark. One thing we want to do, which we already sort of have working because we were asked for it, is Xenomai: when Xenomai does schedule switches, to show that as a schedule switch in KernelShark. That code has already been written; it's almost done, almost out there, but not released yet. The point is to be able to add a plugin so you can actually change how KernelShark works. I'd like KernelShark to do other things than just display traces the way it does now: flame graphs, or anything else. If you have ideas about what you want KernelShark to do, come see me; I would love to add more. And I would love it to read the histogram data.

Oh wait, I think I missed one thing. Oh yeah, one thing I missed: uuencode the output of ftrace_dump_on_oops. I've thought about doing something like that so you'd have an easy way to parse it. If you do a dump and save it to a file, you could say, hey, convert the ftrace_dump_on_oops output into a trace.dat file. Then you'd have all the functionality of KernelShark and everything else on an ftrace_dump_on_oops, for when you don't have kexec/kdump working.
I've been asked about this a lot. I'd just use uuencode; I think it's the easiest way to emit and read the stuff, because then I can dump out the straight raw data and not parse it. I've been asked by a lot of people: can you take the trace dump on oops and make it into a trace.dat file? And I don't want to be parsing events in the text form they're in now. So, doing that. No, well, no: once you have a trace.dat file, it's a lot easier to do a lot more debugging. If you can wait for the data to come out, it's a little bit faster for doing the debugging; at least I know it is for me. You can do filtering: you can say, ignore these CPUs, give me one CPU, three CPUs, and so on. I can do this in the text file just fine. Yeah, well, not everyone is as expert with sed and awk as you are. Or grep or whatever.

Okay, anything else anyone would like? Anything you can think of that I haven't mentioned? You want that? You like that? Or trace.dat what? Yeah. So what Julia just said, and I'll repeat it since the mic wasn't on her: if we do get the ftrace dump on oops into a trace.dat file, then instead of what Peter's saying, that the text file is good enough, that's all he needs, some people would like to take that, convert it to a trace.dat file, and then run it through KernelShark and see a visual of everything that's going on as well, which you really can't do with text. And I don't want to go through and try to parse the text, because that's going to be a pain; but if I could dump uuencoded binary data, I could easily convert it to a trace.dat file. Yeah, I've never started KernelShark in my life; I wouldn't know what to do with it. So you're not my customer. Anyone else who'd be interested in a dump from ftrace_dump_on_oops into a trace.dat file? I've got one hand, so I have two. Do I hear three? Three, four, let's make it an auction. So, I know this is always hard after lunch, and I was told to speak slower, which I probably shouldn't have done, because after lunch my speaking fast usually keeps people awake. Okay, but that's it. Thank you.