Okay, so thank you for coming. My name is Elena Zannoni — can you hear me okay? Is the audio okay? All right, thanks. So I work at Oracle, and this talk is about tracing in Linux in general, with a little bit of history of the various tools and infrastructure, and an emphasis on what has happened for the various tools in the last few months.

First, I usually do a little introduction, a high-level view of the building blocks in the kernel that are used by the various tracing tools. The first and oldest is kprobes, which allows dynamic kernel tracing: it lets you trace a running kernel, and it is the lowest-level infrastructure we have. Of course it requires the kernel configuration option CONFIG_KPROBES to be set to yes. What it does is very similar to what a breakpoint in a debugger does. You select a location where the probe is going to be, quote-unquote, inserted, and a breakpoint instruction is placed there. When the location is hit, execution is taken over by the handler of the probe, which executes some actions — which usually means collecting data that is available at that particular location: the return value of a function, the arguments of a function, some other variables, timers, and so on. Instead of always using the exception mechanism, various optimizations have been introduced that make kprobes less expensive, for example using jumps. All the tools use this infrastructure: first SystemTap, and then ftrace and perf came along and used kprobes as well.

Another way of tracing is static events. With kprobes you can specify a location and insert a probe at that point while the kernel is running. With event markers, also known as tracepoints, you have to compile the probe points into your source code.
So you have to decide ahead of time where you want to probe, knowing that at that point you can collect some information. As I say here, this is static tracing: static probe points that have been inserted ahead of time. There are a lot of them in the kernel code. The syntax uses macros, which I'll show in the next slide, and almost all of the tools are able to read and handle those tracepoints. The philosophy is the same: at that particular point, do something — collect data, put it in a buffer, display it to user space.

The main building blocks, definitions, and explanations are in the tracepoint.h file. There are two ways of defining an event: using a macro called TRACE_EVENT(), or using a set of macros that work together — DEFINE_EVENT() and DECLARE_EVENT_CLASS(), plus other auxiliary macros, but those are the main ones. TRACE_EVENT() is for when you have one location, one event, doing one specific thing. In other circumstances you can define a class and then declare several events for that class that differ basically only in name; the structure of the events and what they do is the same. That way you can coalesce some of the code into one location without having to redeclare each event in full, because the full declaration is pretty beefy. So first you define your events — this is done in a .h file — and then in the .c files where you actually want to probe, measure, and collect information, you insert a function call that starts with trace_ followed by the event name. So you have trace_sched_switch, which is a single call, or, in another case, in random.c, you have trace_mix_pool_bytes.
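As an aside, these statically defined tracepoints are visible from user space through the tracing filesystem, so you can inspect an event like sched_switch without writing any code. A quick sketch (assuming you are root and debugfs is mounted at /sys/kernel/debug):

```shell
# List the static tracepoints in the sched subsystem
ls /sys/kernel/debug/tracing/events/sched/

# The format file describes the fields that the event records
cat /sys/kernel/debug/tracing/events/sched/sched_switch/format
```

The format file is what the tools parse to decode the binary event data in the trace buffer.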
So in the first example, if you go and look at the code, there is one single event, so it actually uses the TRACE_EVENT() macro. In the other case, in random.c, it has been consolidated using a class, called random__mix_pool_bytes, and several events are defined within that class. Each of them is defined with a separate DEFINE_EVENT() macro, which is much lighter weight than a whole TRACE_EVENT() macro. So those are tracepoints, and as I said, the tools use them: ftrace can use them, perf, SystemTap, and so on.

The other piece of infrastructure that you will hear about is uprobes. Uprobes allows user-space dynamic tracing: the same way kprobes lets you do dynamic tracing in the kernel, uprobes lets you put probes on the fly into your applications. This got into the kernel much later than kprobes, because it was a little more complicated and there wasn't a lot of agreement on how to approach the implementation. Basically you have user-space breakpoints at the locations where your uprobes are inserted, and those are handled inside the kernel. It also allows multiple different tracers to attach to the same probe point. Uprobes must be enabled in the kernel configuration, just like kprobes. The specific implementation is based on inodes — the inode identity. Think of saying "breakpoint at function foo, line 20": it's a similar philosophy here. You specify the file, the offset, and what you want to do when you reach that particular point — which variables to collect, how to handle them, where to store the values. The specific instruction handling — analyzing the instruction, stepping over it, and so on — is architecture-specific, so each uprobe has a component that is architecture-specific.
Uprobes are stored in a tree. When you register a probe — that is, you make known that you want to probe a certain location — the probe is added to the tree, and then the breakpoint instruction is inserted at the specific location in the code, just like a debugger does; and that's where the analogy ends. When the probe is hit, you call the handler, which performs the actions you specified, and after that is done execution resumes in user space. As I said before, there can be multiple consumers per probe, so a probe is not taken out until all the consumers are done. You can also associate filtering with each uprobe — making it fire only four or five times, say, or only if some other condition is verified — so they're quite flexible.

Another variant of user-space probing is probes specifically for the return points of functions. This is done in two steps: your program reaches the entry of the function where you want the return probe, and at that point the real return probe is inserted, at the return address of the function — so it's actually in the caller. Beyond that point, everything is the same as regular uprobes.

So what is the status of this? This is the latest infrastructure to get into the kernel. As I said, perf, ftrace, and SystemTap all use it, support it, and give the user various commands to define uprobes. Uretprobes were the last piece to be added, in 3.10, and once everything was in the kernel we started seeing more people contributing support for different architectures. The one that's not there yet is ARM; people are submitting patches — a new version of the uprobes patch set was submitted just last week. For uretprobes on ARM, I haven't seen anything; as far as I can tell the patches did not include that part. So those are the building blocks.
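To make the two kinds of dynamic probes concrete, here is how they can be driven by hand through the control files under /sys/kernel/debug/tracing (the ftrace interface discussed later). This is a sketch requiring root; the bash offset is a made-up example, not a real address:

```shell
cd /sys/kernel/debug/tracing

# kprobe: fire at the return of do_sys_open and record its return value
echo 'r:myretprobe do_sys_open $retval' >> kprobe_events

# uprobe: fire at a given offset inside the /bin/bash binary
# (0x4245c0 is a placeholder -- use a real offset from objdump/nm)
echo 'p:mybash /bin/bash:0x4245c0' >> uprobe_events

# enable the kprobe like any other event, then read the trace buffer
echo 1 > events/kprobes/myretprobe/enable
cat trace

# disable and clear all dynamic probes by emptying the files
echo 0 > events/kprobes/myretprobe/enable
echo > kprobe_events
echo > uprobe_events
```

The `r:` prefix makes a return probe and `p:` an entry probe; `$retval` is one of the fetch arguments the kprobe-events syntax provides.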
Then I will talk about the tools that are within the kernel tree: basically ftrace, perf and, as of last week, ktap as well — I don't have a lot about ktap quite yet.

So, ftrace. It's a kernel tracer: you can monitor many different areas and many different activities in the kernel. And it's growing — from a simple tracer it has been growing pretty fast; people are contributing different types of tracers and different features. Steve Rostedt started it in 2008, and there is a separate tree where you can see the latest development; it gets pulled into the main kernel tree once or twice per cycle.

How does the user interact with ftrace? There are different ways to see what's going on. You can use the filesystem under /sys/kernel/debug/tracing, which has a lot of little files, and you echo values into the files to turn on and off certain events, certain features, and tracing itself. That's a very low-level, very fine-grained interface. Then there is trace-cmd, a user-space tool which, as you will see, was modeled after perf to provide something simpler, with a syntax that is a little more abstract, to make it easier to specify certain activities. This also has its own tree outside the kernel and is still maintained by Steve, obviously. Then there is also KernelShark, which is a GUI to visualize the data after you have collected it. It's very flexible, I would say: it allows you to zoom into a specific time interval and zoom back out, and the events are color-coded, so you can really get a good feel for what's going on in your kernel. There is some documentation; it's somewhat kept up to date — I wouldn't say 100% up to date, but it's actually pretty good in terms of features and design. There are some articles that have been around for a very long time, but they're a start if somebody wants to read up on this tool.
Ftrace also has a million configuration options in the build system, and basically each of them controls which type of tracer you are building into your kernel. Some distributions, like Fedora, enable a bunch of the tracers by default.

How do we control it, and how do we look at the output? As I said, the debug tracing directory has all these little files — the ones I have here are by no means the whole set. There is a file called current_tracer where you specify which tracer you want to turn on; if it says nop, then no particular tracer is enabled at that point. tracing_on specifies whether you really want to collect the data into the buffer. trace is your output buffer. With trace_pipe, instead of saving the trace data into the buffer and looking at it after the trace is over, you get a continuous stream of the trace data for the tracers you have enabled. There is a list of events — the points where the markers are in the kernel. available_tracers tells you which tracers have been enabled and built into your kernel. Then there are also ways, as I said before, to specify dynamic tracing points using kprobes and uprobes from the command line, and those are stored in the kprobe_events and uprobe_events files. And then there are a bunch of subdirectories — if you look at this tracing directory, it's quite a complex tree of files and directories.

So what can you do with ftrace, really — what can you trace? Some of the tracers are also called plugins; if you see the term plugin, Steve uses the two terms interchangeably. You can trace the entry of all the kernel functions, so you can actually see the tree of the execution. You can trace both the entry and the exit points, so you can really see even more.
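A minimal session with the control files just described might look like this (a sketch — it assumes root and debugfs mounted at /sys/kernel/debug):

```shell
cd /sys/kernel/debug/tracing

cat available_tracers             # e.g. function_graph function nop
echo function_graph > current_tracer
echo 1 > tracing_on               # start collecting into the buffer
sleep 1                           # let some kernel activity accumulate
echo 0 > tracing_on               # stop collecting
head -20 trace                    # look at the collected call tree
echo nop > current_tracer         # back to no tracer at all
```

Swapping `cat trace` for `cat trace_pipe` gives the continuous-stream behavior instead of the after-the-fact buffer view.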
You can do latency tracing, interrupts. And there is this tricky nop tracer, which just says: do not write to the buffer — the tracing could still be happening, it's just that you're not saving the data. For all of these, you echo the name of the tracer into the current_tracer file.

How do you specify dynamic tracing using ftrace? As I said, kprobe_events and uprobe_events are the files involved in controlling, establishing, and inserting probes. For instance, the first example is a probe at the return of a function: it's called myretprobe, it's on the function do_sys_open, and it will save the return value of the function. You can also set a uprobe where you give an address — in bash, at that particular address, you will have a probe. And you can clear them by emptying those files. So it's very flexible, and it's a relatively simple way of specifying the behavior of ftrace. You can do the same things using trace-cmd: you can just say trace-cmd record, trace-cmd report, start, stop, and other commands that use this infrastructure under the hood. As I said, the trace-cmd tree is out there if you want to look. There hasn't been a lot of activity there: 2.2.1 was released in March, and I haven't seen many commits to the trace-cmd tree after that.

In 3.10 there has been quite a bit more activity in ftrace itself. They've added a few things: instances, function tracing triggers, more options and controls — those are being added constantly, that's not really news — different clocks, and better documentation. Instances, I would say, is the one that should be pointed out: it allows you multiple output buffers, and you can direct events to different subdirectories.
Previously the tracing tree had a single output directory, but now with instances you can create multiple ones and control each tracer independently. They live under a directory called instances, and in each of them you see the structure is duplicated, because it's the same functionality. You don't have to populate them and write the files yourself; that is done for you. And the rules that apply in the main directory — echoing values, ones and zeros, into the files for control — are the same strategy you use with instances. Right now this infrastructure is available for events, that is, for the tracepoints that are in the kernel; it will be extended, I'm sure — it's just that people have been very busy with other work in the last few months. So that's the newest thing.

The other one is function triggers. This adds finer control of events: it allows you to start tracing something only when a certain function is entered. Instead of tracing everything from the start of your program to the end, you restrict the interval, the area where you want to trace, to a specific function. If you look at the syntax examples given: with enable_event, the first one sets a plain function trigger that works for that particular function (the example uses the timer_start event and the add_timer function). The second example has a five — if you see a colon followed by five, it means you're going to trace only the first five times that you enter that function. And the last one tells you how to stop this conditional event: you're not interested in tracing conditionally on the function anymore, so you disable the event. There are other conditional commands, also called filters, as well.
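Spelled out, such function triggers go into the set_ftrace_filter file; a sketch along the lines of the slide's example (the function and event names here are plausible placeholders, not taken from the slide verbatim):

```shell
cd /sys/kernel/debug/tracing

# enable the timer:timer_start event whenever add_timer() is entered
echo 'add_timer:enable_event:timer:timer_start' > set_ftrace_filter

# the same trigger, but only for the first five hits of add_timer()
echo 'add_timer:enable_event:timer:timer_start:5' >> set_ftrace_filter

# remove the trigger again (note the leading '!')
echo '!add_timer:enable_event:timer:timer_start' >> set_ftrace_filter
```

The general shape is `function:command[:count]`, with disable_event as the symmetric command to turn an event off when the function is hit.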
You can also collect data when you enter a certain function: when you enter foo, you can dump the stack trace — again, the first form does it every time you enter, the second only the first five times, and the third disables it. Or you can take a snapshot of your system once you enter the foo function. This allows you to be very, very flexible in how you trace and how you narrow your trace: instead of collecting a ton of data, if you know that something is going wrong within the function foo and not anywhere else, you can focus the collection of your data on that particular function.

There are a few new options that have gone in in 3.11. You can dump the trace buffer when you enter a certain function, with the dump and cpudump commands. This is again another way of getting intermediate state: the buffer is circular, so it could be overwritten, and this lets you save something before you're finished tracing. And then there is traceoff_on_warning, which stops collecting trace data when a warning happens — because otherwise the trace buffer might get overwritten by other stuff you don't want; you just want the data up to that particular point. Other than that, there have been a lot of cleanups, bug fixes, and rationalizing of the code in the last two kernel releases.

And then there is what's happening right now — work in progress that hasn't been finalized yet. I think the important one in terms of ftrace behavior is event triggers. Just like the function triggers before, Tom Zanussi is working on a symmetric type of control for events: you enable tracing of whatever you want to collect only when some event has happened.
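The filter commands just mentioned look like this in practice (foo stands for whatever kernel function you care about — it's a placeholder, and as before this needs root and a tracing-enabled kernel):

```shell
cd /sys/kernel/debug/tracing

echo 'foo:stacktrace'    > set_ftrace_filter   # dump the stack every time foo() runs
echo 'foo:stacktrace:5'  > set_ftrace_filter   # ...or only the first five times
echo 'foo:snapshot'     >> set_ftrace_filter   # freeze a snapshot of the buffer on entry
echo 'foo:dump'         >> set_ftrace_filter   # dump the whole trace buffer (3.11)
echo '!foo:stacktrace'  >> set_ftrace_filter   # disable the stacktrace trigger again
```

As with the enable_event triggers, the optional trailing count limits how many times the command fires.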
The idea is that there is this particular event, the trigger event, and when you reach the point in the kernel where that tracepoint is, you turn on this other collection of tracing data — not before, only when that particular place is hit. And then there are the actions: you can enable and disable this conditional collection, so you can start collecting when a certain event is reached and stop collecting when another event is reached as well; you can dump the stack trace, take a snapshot, turn on tracing overall; and again, counts and conditionals are allowed. From these primitives, quote-unquote, you can build a lot of complicated tracing and fine-grained controls, so you really are allowed to zoom in: instead of having a huge amount of data, you narrow the data so it's easier to analyze. So that's what's going on in ftrace. Let me see what time I end — I think I'm good.

The next tool that has seen a lot of activity, and that is actually inside the kernel tree itself, is perf. What is perf? Perf is in the kernel, in the tools/perf directory, and it's a user-space tool similar to trace-cmd — actually it's trace-cmd that is similar to perf, because it came after. This is Ingo Molnar and mostly Arnaldo Carvalho de Melo at the moment; Namhyung Kim is also doing some work, and Jiri Olsa, and other contributors here and there. This started from perfmon and the performance counters interface — initially it was called perf counters — and from the hardware counters it started growing and growing and growing, and it's basically doing everything that the other systems are doing. It's still very active, obviously, and there is a little bit of documentation in the tools/perf directory.

So what can you do with perf? You can get statistics about a particular command's execution.
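For instance, counting events over a single run of a command (a sketch — the counts will vary per run, and some counters need PMU access or relaxed perf_event_paranoid settings):

```shell
# Run 'ls' once and print counter statistics for that run
perf stat ls

# Restrict the output to a few named events
perf stat -e cycles,instructions,context-switches ls
```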
You can record: by running a command, you can start tracing what happens during that command, and then you can report — this can be done afterwards — to see what data was collected. You can then compare the data from different runs, maybe of the same command: perf can diff the output files from different runs. You can do perf top. These are all useful things — and again, this is by no means an exhaustive list of what you can do with perf. You can also, as I said before with trace-cmd, insert a probe on the fly: with perf probe you insert a probe at a specific point in your kernel while it runs, and the mechanism is the same — it uses kprobes. You can also profile and record performance data of scripts: you can trace a Perl script or a Python script and analyze what they are doing, which is pretty interesting. perf list shows you what is available in terms of event types you can monitor. And there is a big part of perf that is being done for KVM, so you can monitor the activities of guests in your virtual environment.

What can you see? Hardware events — those were the first ones added, CPU cycles and so on, using the perfmon library (libpfm); software events in the kernel; and at the end, static and dynamic tracepoints, which are the kprobes and the trace events and defined events that I showed you before. There are also a lot of fine-grained parameters that you can use to control what you are monitoring: system-wide, CPU-specific, process-specific, and so on. So you can do quite a lot with perf. And again, you can do dynamic tracing with perf probe, just as I described before with ftrace.
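A few of these invocations side by side (a sketch — do_sys_open is a kernel function; probing it by name works from the symbol table, while resolving argument names would need kernel debuginfo):

```shell
perf record -g ls    # sample 'ls' with call graphs, into perf.data
perf report          # browse the collected samples afterwards
perf top             # live, system-wide view of the hottest symbols
perf list            # enumerate hardware/software/tracepoint events

# dynamic tracing: put a kprobe on do_sys_open, sample it briefly, remove it
perf probe --add do_sys_open
perf record -e probe:do_sys_open -aR sleep 1
perf probe --del do_sys_open
```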
The reason I say the perf probe syntax can be simple or complex is that there are many different parameters and sub-parameters you can specify to define a probe; you can make it as simple or as complicated as you want. It can also show the source code, because it understands the debug info — so it's basically learning how to be a debugger as well. Again, as I said, it uses kprobes, and it is simpler to use; it's probably what prompted Steve to write the trace-cmd tool, because the interface here was much easier than fiddling with low-level controls like ftrace's. As you see, there are various options: add a probe, delete a probe, show the source lines, list the available probes, do a dry run without really running. There is some documentation out there. You can also set a user-space probe with the same syntax, using uprobes underneath.

What has happened recently, for 3.11 and 3.12, is really a lot of bug fixes — I've seen quite a lot of them going in — plus tweaks, cleanups, and a lot of fine-tuning, which is the sign of a mature tool; otherwise there would still be quite a lot of features going in. It seems to me — and I think it's not just an impression — that there are a lot more users now, and they are finding little bugs here and there, and different usage modes that maybe weren't anticipated at the beginning. That explains all the little tweaks and bug fixes going in at the moment.

As of 3.10 — the top of the slide relates to 3.10 — you can diff data: as I said before, you can see what was different between two runs, maybe of the same command, in terms of performance, and there are different ways of doing these diffs; they keep adding different ways of seeing what actually changed between runs. Also, events can be grouped in the annotate output in different columns.
So if you are tracing multiple events you can sort them for easy reading. And then there is perf mem, which shows you memory accesses. Those were the latest things that were added.

So again, what is not there yet but coming: Namhyung Kim is working on integrating perf and ftrace. There will be a new command under perf called ftrace, with some subcommands — you will say perf ftrace so-and-so. A few of the subcommands have been implemented so far, and I'm sure more will come: perf ftrace live, perf ftrace record, perf ftrace show, and perf ftrace report. As of today there have been five versions of the patch set, sent mostly as RFCs and such, and there is a tree maintained by Namhyung if you're interested in seeing how this thing works. It's still at the beginning, but I suspect this will be a very good final step: after many years of talking about integrating things, this is one part that will actually happen, which is good. Other things with perf ftrace: you can specify the usual parameters — restricting to a CPU or a PID, using a specific tracer, or going system-wide — and there are different ways of reporting the data, in histograms; here too, various switches keep being added.

Another thing that's happening in perf — I've seen some patches, but I haven't seen a conclusion — is supporting statically defined tracing the way DTrace does. SystemTap has a mode that can read and interact with DTrace probe points in user-space applications, and they want to add the same capability to perf. That would be good, but I think it's further from happening. I haven't seen... Masami, do you know? It's still a ways away, right? Yeah, exactly — just the beginning, and more will be built on top of it as it comes. The other thing that I've seen is persistent events.
Persistent events would let you collect hardware events as part of the RAS — reliability, availability, and serviceability — work that has been done by Robert Richter and Borislav Petkov. Again, I've seen some patches fly by; I haven't seen much beyond "yes, this would be cool", but it's something else that might come: using the perf infrastructure even when perf itself isn't running, so that this other system can plug into the infrastructure and monitor some of the hardware errors that happen.

Another piece of work — there is a talk on this at the tracing mini-summit on Wednesday — is toggle events: turning monitoring of events on and off depending on another event happening. So it's a sort of the trigger mechanism we were mentioning for ftrace. There is a wiki page, but it doesn't have a lot of information, and there will be the talk on Wednesday, so I'm curious to see for myself where this is.

Some stats — this is not very enlightening, but it gives you a little bit of history of how the two tools, ftrace and perf, evolved. This is just the number of different contributors in the changelog, nothing more, but you can see that ftrace was active and then kind of lost a little bit of momentum; a lot of people started working on perf, and the balance is still towards perf — there are more people contributing to perf. Of course, if perf and ftrace become one, then this distinction will become kind of meaningless, but until that happens I'll keep putting it into the slides.

So there is another tool that just appeared this year, called ktap. It's new, and it has recently been pulled into 3.13 by Greg K.H.
Ktap is a more lightweight tracing tool that aims to behave like DTrace, in that it has an interpreter — as opposed to SystemTap, where your scripts are actually compiled. It is just getting in and still being worked on by Jovi Zhangwei; there is a web page, there is some documentation, and it already has ports to a few important architectures. It has been received very well — it's simple and kind of robust — so it's going in. This, I think, is the big news of this development cycle.

Then a little bit about other tools: SystemTap, LTTng, and I will talk a little about DTrace for Linux, because that's my project.

First, SystemTap. This is an old project, started in 2005 as a cooperative project; these tools are not integrated into the kernel, obviously, which is why I keep them in a separate section of the talk. It has had multiple companies working on it — Red Hat, IBM, Hitachi, and Intel was there at the beginning. It was basically the Linux answer to DTrace; it's pretty much contemporary with DTrace — when DTrace started on Solaris, SystemTap was started on Linux. It does kernel tracing and user-space tracing based on kprobes and uprobes — SystemTap was actually one of the main drivers in getting that infrastructure into the kernel — and it does dynamic tracing and static tracing as well. You write scripts in its scripting language; those scripts are compiled by GCC into kernel modules, the kernel modules are loaded, and that is what lets you do your tracing. That approach has been somewhat frowned upon by the community, and it is not integrated into the kernel, even though it is actually widely used — from when I used to be at Red Hat I've seen that customers and people really do use it. It has become more flexible since it started: you no longer need GCC installed on the target machine — you can move the compiled scripts over and it will recognize them anyway and load the kernel modules; you can run it as non-root, which is what a lot of customers were asking for, while root mode and guru mode remain for certain specific operations that are more delicate; and it can be used remotely. So it has its own following, and they keep doing releases and improving it — the latest was from July — so it is a pretty lively tool. It has a good wiki, examples, and a pretty well-read and well-used mailing list as well.

Another pretty contemporary one is LTTng — LTT Next Generation. There is a good website, and you hear a lot of talks about LTTng here: there was one this morning, I think there is one right now while I'm talking, and then there will be three or four more at the tracing summit. The good thing about LTTng is that it could do user-space tracing, using a special library, back when other tools weren't able to do it. Again, it's also not included in the kernel, so it has to kind of keep up, and people need to download it, but it is included in some of the embedded distributions. This one also released a new version recently, at the beginning of September, I believe. It uses CTF, the Common Trace Format — careful with this acronym, because it means different things in different contexts: here it's the Common Trace Format, while in DTrace it's the Compact Type Format, totally different but still CTF. It also plugs into Eclipse, and it has a GUI and a tool that reads the data, called Babeltrace, which can actually understand different kinds of tracing data. So this is also a very solid ecosystem, as you will see tomorrow if you go to the tracing mini-summit.

And then, let's see — perfect — a few slides on DTrace on Linux. This is something that we are doing at Oracle. What is DTrace?
D-trace was a Solaris tool so based on the Solaris kernel available since 2005 as I said before the idea and why we wanted to do this is that we want to offer a compatibility on Linux for the D-trace scripts that exist out there for Solaris also requests from customers and also inside Oracle it's used pretty widely and also as I said if people know D-trace then they can reuse it on Linux without learning much more in terms of scripting and stuff so we started this in 2011 was the first release and we're still progressing and moving forward with the port it's not going very fast so we don't have a big team but it's going to be the first code out there we just realized why I was writing this slide yesterday that we haven't posted the new code but that will be done in the next week or so we have a new version 040 which is coming out now we have a beta that is already out it's based on our kernel the UEC we call it Unbreakable Enterprise Kernel that is 3.8.13 there is also an older version available for D-trace integrated with the UEC 2 kernel right now it's x8664 only and you can see the beta builds on the open channel that's called Playground it's publiclyum.oracle.com you can actually download the RPMs from there so what do we have we have ported so if you are familiar with D-trace we have the D-trace provider the system call provider we do statically defined tracing and also right now this is the new version the new stuff that we have added is the user space statically defined tracing so if your application has D-trace markers in it then we should be able to read it also on Linux so we have profile provider, proc provider and a bigger and wider test suite than the original D-trace one and made a little bit more robust as I said it's x8664 unfortunately the kernel module is still under the CDDL license I don't have much input in how that is handled unfortunately but all the kernel changes are GPL and the source is available for all the kernel changes whether CDDL or 
GPL. Because we have finally done this user-space tracing, we are now also able to build PHP with it: PHP 5.5.4 has DTrace support integrated. If you look at the Opal blog, that is Chris Jones on my team, he has submitted a couple of changes upstream for building with DTrace, so that is coming. We are looking at other integrations as well, with MySQL; and Postgres actually kind of works already out of the box, so, if Josh is here...

Now, a little example of how to do a user-space probe in DTrace. This example shows entry and exit of functions; it is really simple. You instrument your program by adding the DTrace entry and return probes and markers into a series of functions, and then, with the script I'll show in a second, that is the kind of output you get: the entry and exit of each of the functions. The script is really simple. It uses this provider called ufbt, basically user-space function boundary tracing; it is a way of faking function boundary tracing using user-space probes in DTrace. So that is where we are with DTrace. We are being careful to preserve the DTrace semantics, so that for people who are used to Solaris DTrace it stays compatible, with, of course, the differences that come from the Linux kernel, but at least most of the semantics and syntax are maintained intact.

That is pretty much it, so let me close with the issues I keep bringing up that are still open. At one time there was talk of a KBI, a kernel binary interface, for trace points. I don't know if there has been any progress on that; I haven't seen anything specific, and it has kind of died out as an issue. A de facto one has crystallized, whether or not we wanted it. Then there is scalability: that is still an area for improvement. Most of the work I showed you here is close to the kernel, but this morning there was a keynote where they were talking
about this Zipkin system, and in general about distributed tracing. That is a little wider in scope than what most of the people I know are actually focusing on, so that is another area where users probably feel the lack of a tool.

In terms of code integration, there was talk a year or so ago about integrating the infrastructure. There are multiple different trace buffers, maintained by perf and by ftrace, and there was talk of integrating those buffers and having one API to insert data and so on. But then people realized they did not agree on what this buffer was supposed to do and how it was supposed to perform, and so that fell apart. At the tools level, though, integration is actually happening: at least on the command and usage side, perf and ftrace are a good example of future integration, which is very good for everybody. KTAP, for instance, is another item one could think would be a problem; it is a different community. If you look at the embedded community and the enterprise community, they probably have different needs and want to look at different things, so that is another issue that might pull things apart, or eventually bring them closer together; I don't know.

And then, what are the users asking for? Again, low footprint and low overhead, possibly integrated in the kernel, so that when I get a distro and install it in my data center, everything is there and I don't have to rebuild anything. Then, how much data are you collecting? If you are collecting a lot of data from different places, you might want a way of visualizing and consolidating it, eliminating some of the output; and data filtering, if you have too much data, or confidential data that you don't want to show. Those are still areas that might need some improvement, but I think we have come a long way since 2006-2007, when we started raising this as an issue and nobody really cared about tracing. So a few years have gone
by, but a lot of work has been done, and I think we are now seeing the tail end of that: there is a lot of development and now a lot of users, which is actually pretty good. That is all I have, and I think I'm on time, so if you have any questions or comments... I had a microphone somewhere, but they took it away, so speak loudly.

[Audience question]

It is in the kernel since 3.x, but I don't think it is very backwards compatible; I'm not sure. Maybe some basic functionality you might be able to use, but if you are running a newer trace-cmd on an older kernel, I don't think there is a lot of crossover. And that ties into this "no kernel rebuilds" issue too: a lot of people are not using the latest and greatest kernel like 3.12, or even C6. They are using RHEL, and not still RHEL 4, but RHEL 6 or whatever. That is one of the problems a lot of users have brought up: it takes a long time for this advanced development to trickle into the distros.

[Audience comment]

Right, exactly, you can't fix anything; that is the problem. Then you come with bug reports and nobody can fix them anymore, because things have changed radically. I don't know; Masami, do you know? Right, you have to use a physical address... [inaudible exchange]. Alright, we're being kicked out. Thank you.
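[Editor's note: the user-space probe script discussed in the talk is only described verbally, so here is a rough sketch of what such a D script might look like. This is an illustration under stated assumptions, not the speaker's actual demo: the provider name `myapp`, the probe names `func-entry` and `func-return`, and the target program name are all made up, and the exact syntax of the `ufbt` provider in the Oracle Linux port may differ from this generic USDT-style form.]

```d
/*
 * trace.d -- print entry and exit of instrumented functions in the
 * target process. Assumes the application was built with USDT markers
 * for a hypothetical "myapp" provider whose probes pass the function
 * name as arg0 (declared as func__entry / func__return, which DTrace
 * exposes with the double underscore turned into a hyphen).
 */
myapp$target:::func-entry
{
    /* arg0 is a user-space string, so it must be copied in */
    printf("-> %s\n", copyinstr(arg0));
}

myapp$target:::func-return
{
    printf("<- %s\n", copyinstr(arg0));
}
```

You would run this against the instrumented binary with something like `dtrace -q -s trace.d -c ./myprog`: the `-c` option starts the program, `$target` binds the probes to its pid, and one line is printed at each instrumented entry and return, matching the output described in the talk.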