OK, so it's time to start. The title of my presentation is "Valgrind, an anti-Alzheimer pill for your memory problems". With this title, I hope I will remember what I have to describe, because otherwise it will be a disaster.

So, the content of the talk: I will discuss and demo new functionality that has been developed. This new functionality provides an easier way to visualize memory usage, and it can also be used to visualize other types of data. That will be the bulk of the presentation. If time permits, which it usually does not, since I often run late in my presentations, I might discuss and demo memory pool and leak statistics.

The idea is that you use Valgrind to answer some questions about your application. A first question you ask is: is my application buggy? Julian has described in detail Memcheck, which answers "is my application buggy?" with respect to memory usage, invalid addresses and so on. Another question we can ask Valgrind is: how much memory is my application using, and where has it been allocated? For this, Valgrind has a tool, Massif, which records the evolution of the memory of your application. It does this by taking snapshots regularly: your program runs, and at some interval Massif decides it is a good time to take a snapshot; a little bit later it takes another snapshot, and so on. These snapshots show the evolution of your program's memory consumption. Some snapshots are detailed: they also show the allocation stack traces, while the non-detailed snapshots just record the amount of memory. Massif also tries to detect the peak snapshot: when your application encounters a peak in memory usage, it records a peak snapshot, and if a little bit later it detects a higher peak, it replaces the previously taken peak snapshot with the new one.

In the new release of Valgrind (it is still in SVN, not yet in a released version) a new xtree memory feature has been added. There is a new command line option, --xtree-memory=, where you can choose between none, allocs and full. When you select something other than none, a memory report is produced at the end of the execution of your program. But you can also produce an xtree snapshot of your memory on demand using vgdb. vgdb is the relay application between GDB and Valgrind: when you want to debug your application while it is running under Valgrind, you can do that using GDB, and vgdb is the relay. It can also be used in standalone mode to send commands to your application running under Valgrind.

So here we see the classical use of the Massif tool: we start an application under --tool=massif, and we can now add the new argument --xtree-memory=full, which produces the xtree snapshot. Here I'm switching to the demo: I start Valgrind with the Massif tool, asking it to produce an xtree memory snapshot at the end of the application. I'm using LibreOffice for this, and we will see how it works. While it runs, because we will have a little bit of time: is there any LibreOffice developer in the room? No? OK. So I can lie and pretend that it is LibreOffice that is slow, and not Valgrind. I will even...
I will even lie more, because I will run another command later, which will be a scientific proof that it was LibreOffice that is slow and not Valgrind. So, while I was speaking, we see that LibreOffice (I asked it to convert a presentation, an Impress presentation, to PDF) has produced two reports, because there were two processes: LibreOffice has in fact forked another process, and each of these processes has produced an xtree memory dump file. So first, what do we see? While it ran, we produced the classical Massif output file, and we also produced two new output files, the xtree memory reports.

Let's first see how we can visualize the Massif output file. The classical way is to use ms_print and give it the name of the Massif output file. Here we see the summary snapshots that were taken, and above, from time to time, detailed snapshots. These detailed snapshots show the stack traces of who has allocated what amount of memory. We do not see very easily what's going on, and that is one of the weaknesses of the textual presentation of the Massif output: it is relatively difficult to see what's going on. You quickly get a huge list of functions, and scanning through this is not particularly easy.

The Massif output file can also be visualized by something other than ms_print. ms_print is the default command and is part of the Valgrind tool suite, so if you install Valgrind you have ms_print, but you can also install other visualizers for Massif. One such visualizer is Massif-Visualizer, and you give it the same output file. It is a graphical application which loads the Massif output file and shows, on this side, the evolution of the memory consumption: your application starts, the total memory is shown here, and you can see various details about which function has allocated what. If you want to see the allocation stack traces in more detail, all the snapshots are in this window here, and the detailed snapshots can be expanded. For example, if we expand snapshot number 11, we get more details about the stack traces. So this is another way to look at it, and it is slightly easier to scan than the textual output we saw initially. But if you want to understand your stack traces, it is still not particularly easy: you have to expand, and then further expand, to see where an allocation is coming from.

So a Massif output file can be visualized with ms_print, or with Massif-Visualizer, which you have to install separately. Now, let's see how we can visualize the new output file that Massif produced at the end of execution. This file can be visualized using KCachegrind. KCachegrind is the visualizer which was developed to show the data produced by the Callgrind tool, a call-graph profiler: Callgrind records how many instructions were executed where, and KCachegrind can then visualize the instruction counts, the memory accesses, cache behaviour and so on. The new thing here is that --xtree-memory=full produces a file which can be visualized by KCachegrind. So now let's look at the file that --xtree-memory=full produced. LibreOffice is a big application, so KCachegrind is checking where the source files are. And here we see a new way to visualize the memory of your application.
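(For reference, a minimal sketch of the commands used in this part of the demo; the LibreOffice invocation and file names are illustrative, and the xtree output name assumes the default xtmemory.kcg.<pid> pattern:)

valgrind --tool=massif --xtree-memory=full \
    soffice --headless --convert-to pdf slides.odp

ms_print massif.out.<pid>            # textual snapshot listing
massif-visualizer massif.out.<pid>   # graphical view, installed separately
kcachegrind xtmemory.kcg.<pid>       # the new xtree memory report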
In fact, if you know KCachegrind, what you usually have here is the number of instructions executed, or, as events, the memory accesses. Now, with this new feature, you can choose between the various data that has been stored in this file: the currently allocated bytes (the number of bytes allocated at the time this xtree memory report was made), the currently allocated blocks, the total allocated bytes, the total allocated blocks, and the total freed bytes and blocks.

So the first thing is that this shows more information than the Massif output: Massif only shows you the current state of the memory, but it does not show you, for example, who has freed a lot of memory, and the number of blocks that have been allocated is also shown here. The second thing is that if you want to see which stack traces have allocated which amount of memory, you now have the classical KCachegrind way to scan your stack traces: you can start from main here and just travel through your stack traces to see what has been allocated. So here I have double-clicked on main, and now I can follow (it would be easier on a bigger screen, of course, but this is all we have here) where the stack traces are that have allocated memory.

About the various events that are shown: you can look at the total sum of what has been done. You have the currently allocated bytes and blocks, and the total allocated bytes, which remembers, for this stack trace, how much it has allocated in total. For example, if it allocates a first megabyte, then a second megabyte, then a third megabyte, and then frees one megabyte, the currently allocated bytes go back to two megabytes, while the total stays at three. If you then allocate another megabyte, the currently allocated bytes go to three megabytes, and the total goes to four megabytes. So the total allocated bytes gives you how much your application has allocated in total, including memory that was later deallocated and maybe reallocated, while the current values are the values at the time you take the snapshot.

OK, so this was a demo for which I used LibreOffice, to show that, compared to the previous ways we visualized the memory consumption, this way of looking at it is a lot easier. For the rest of the presentation I will in any case switch to a simpler application, because it is faster to run; as we have discussed, LibreOffice is not a very fast beast. So, that was one demo. Now I will relaunch, and this is by the way the scientific proof that it was LibreOffice: I will relaunch Valgrind with exactly the same command but with another application, and we will look at how long it takes. You see it has taken 0.3 seconds, rather than the 50 or so seconds from before, and the only thing I have done is replace LibreOffice by this small application here. So this is the scientific proof that it was LibreOffice that is slow. What about Valgrind? Ah, that is left as homework.

OK, so now I relaunch the same command. It is again the Massif tool which has produced some output, and again I will launch KCachegrind on the newly produced file. So, what do we see here? I'll show just one event first. When we look at this visualization, you have of course the stack traces, which are here, and here you have the list of functions and what each of them has consumed.
Another difference from the Massif visualization is that here you can look at the corresponding source files. So, if we select main, we see main here, which calls f1 and which calls f2. And we see that the stack trace main -> f1 has, in this snapshot, allocated 388 bytes, while the stack trace main -> f2 has allocated 140 bytes. Then, if you know KCachegrind, you can double-click on f1 and navigate to the source of f1, which shows what has been done inside f1 and which part of f1 has allocated which memory. (A toy sketch of a program of this shape follows at the end of this section.)

As when you use KCachegrind to visualize Callgrind data, you can show up to two different events; "event" is the KCachegrind terminology for something which is observed. The main event we are observing here is the currently allocated bytes, but you can activate a secondary event to visualize. For example, here I will also show the currently allocated blocks: we see that we currently have 220 bytes allocated by the calls to f1 that we have seen, and that is in 20 blocks.

If I look here at the types: what do we see? We see the values of the different counters. By the way, you might notice something relatively strange: we have total allocated bytes 570, currently allocated bytes 388, but total freed bytes zero. Is that normal? Yes, of course, because otherwise I would not have asked the question. This is showing what this stack trace has done: this stack trace has in total allocated 570 bytes; then some free operations happened in other stack traces, and the remaining memory allocated by this stack trace is 388 bytes. So 570 minus 388, that is 182 bytes, were indeed freed, but by other stack traces, which is why the total freed bytes of this stack trace is zero. So no, KCachegrind is not buggy, and it can do subtractions without errors; in fact it does not even have to do a subtraction here. You have to pay attention that what you are looking at is the call stack currently being visualized, so the total freed bytes are the bytes which are freed by the call stack you are looking at.

One small detail is that I have somewhat hijacked the KCachegrind/Callgrind format to produce this. Callgrind produces files which record the number of calls and so on. Here I am not really producing something that records how many calls we have made; I am just recording, in a Callgrind-format file, the fact that there is a stack trace and that this stack trace has, for example, a certain number of currently allocated bytes. So these numbers of calls here are in fact not correct. In the next release of KCachegrind, Josef Weidendorfer, the developer of KCachegrind, has made a change so that these artificial numbers of calls will not be visible anymore.

[Audience] So this is somewhat misleading; the zero is also not right, is it? - Yes, this zero: you see that the display is a little bit strange, because here we see one call and here we see zero calls. Effectively, with the next release of KCachegrind, when you load such a file, this column will be hidden, and you will not see this kind of misleading call count anymore. OK, so that was one thing.
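(As promised above, a minimal sketch of a toy program of this kind; this is not the speaker's actual demo source and the byte counts differ, but it reproduces the main -> f1 / main -> f2 picture:)

cat > demo.c <<'EOF'
#include <stdlib.h>

static void *blocks[64];   /* keep the pointers so the blocks stay reachable */
static int n;

static void f1(void)       /* 20 blocks of 11 bytes: 220 bytes in total */
{
    for (int i = 0; i < 20; i++)
        blocks[n++] = malloc(11);
}

static void f2(void)       /* one block of 140 bytes */
{
    blocks[n++] = malloc(140);
}

int main(void)
{
    f1();
    f2();
    return 0;              /* blocks deliberately left allocated */
}
EOF
gcc -g -O0 demo.c -o demo
valgrind --tool=massif --xtree-memory=full ./demo
kcachegrind xtmemory.kcg.*   # inspect main -> f1 and main -> f2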
So, as I have explained, this new option, --xtree-memory=full, produces the Callgrind/KCachegrind format by default. But the Massif output format has some advantages, for example it limits the amount of data that is saved, and using Massif-Visualizer with a Massif output file is still OK. When you activate this xtree memory feature, you can control the file to which the report is written, and if you give that file a .ms extension, the report is produced in Massif output format. So let's look at what we get when we request the Massif layout. Here I have again given the --xtree-memory=full command line option, which instructs Valgrind to produce an xtree report at the end of the execution, but I have specified a non-default value for the xtree memory file, giving it a .ms extension, and the result is that we have now produced a Massif output file that we can again visualize, if we want, with Massif-Visualizer.

Of course, similarly to the way I reused the Callgrind/KCachegrind format to store memory information, I have also somewhat reused the Massif output format in a way it was not really intended for. Remember, the normal way Massif produces snapshots is at regular intervals; what I have done, for the events observed by xtree memory, is to produce six snapshots. The first snapshot, if we look at it, in fact shows the current number of bytes allocated; here we have the number of blocks; and so on for the six events which were stored. So, as you see, the tool can still visualize this new Massif output file produced by xtree memory, but you should not interpret it as, for example, the evolution of the snapshots. Massif-Visualizer believes that we have six snapshots and that this is the evolution of the memory between these six snapshots, but that is not the case: each snapshot represents the same moment in the life of the program, and simply shows the current number of bytes allocated, the current number of blocks allocated, the total number of bytes allocated, and so on.

So again, if you want to use the new xtree feature, you have the freedom to use the Massif-Visualizer layout or the Callgrind/KCachegrind visualization.

[Question] Does that mean you always get exactly six snapshots in such a file? - Yes. Each time you instruct your application to produce an xtree memory report (for example, with vgdb you can do, in a shell: while true, produce a snapshot, sleep 60 seconds, and you will produce a snapshot every 60 seconds), each of these output files, in Massif format, will contain six snapshots. [Question] And those six snapshots all record different data? - Effectively. If you use Massif and Massif-Visualizer as they were designed, each snapshot represents the state of your memory at a different time, while here each snapshot shows the state of the memory at one single time: the second snapshot is just another point of view on the state of your application at the moment you took the report.
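(A sketch of the command used here, assuming the .ms-extension convention just described; the file name is illustrative:)

valgrind --tool=massif --xtree-memory=full \
    --xtree-memory-file=xtmemory.ms ./demo
massif-visualizer xtmemory.ms   # six "snapshots", one per xtree event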
So it is usable, as you can see: if you really like Massif-Visualizer, you can use the xtree memory reporting with this kind of layout, but the KCachegrind visualization tool is in fact better aimed at showing various events at the same time. OK, so that's another thing finished; I am advancing more or less reasonably well.

We have now done some demos of this new feature using the Massif tool. But the new feature can be used not only with Massif; it can be used with other tools too. For example, if you run your application under Memcheck or under Helgrind, you can also give this new argument, which instructs Valgrind to produce an xtree memory report. So, for example, here I will produce another report, but this time using the Helgrind tool. Helgrind is a tool which detects race conditions in your program. This is a single-threaded program, so it will, I hope, not detect any race condition. But the side effect is that if you give this option, whether you use Helgrind, Memcheck or Massif, you obtain your memory report. So again, if we look at what has been produced, we see that we have again produced an xtree memory output file. If I rerun, removing these arguments, voilà: the same kind of visualization that we have seen, this time produced by running the application under Helgrind.

Another thing worth showing is producing such a report during execution. Here I will rerun, but this time under the default tool, which is Memcheck, and I am giving an argument to this wonderful program which says: be LibreOffice-compatible, so spend more time running, or in this case, sleeping. What we can do now is ask it to produce an xtree memory report. So we do... sorry, I was not fast enough, I'll restart. Voilà: now I can ask this tool to produce a memory report file, like this. I have used vgdb from another shell; it sends a command to the process to produce a report. And I can ask for another report a bit later, for example after some more sleeping. So we now have two files, and we can load, for example, the first one here, and we recognize the same kind of visualization, except that we see something new: now that I have given this argument, there were some printf calls, and we see that printf has allocated some memory here.

So this is good enough for the initial demo I wanted to do. We have seen that there is a new way to look at which memory your application has allocated, using this new feature. We can use, as we have seen, the ms_print output, the Massif-Visualizer layout, or KCachegrind, and we can change the layout of the output file if we give it a .ms extension.

Just a few notes. The Massif format has not been designed for a huge set of data, and it is not designed for more than one data kind, as we have seen: if you output the current number of bytes allocated, the current number of blocks allocated and so on, the Massif format does not make the link between the different snapshots. And if we run the command above, which again uses LibreOffice to convert to PDF, with a slightly bigger presentation file, it produces a 6.3 gigabyte Massif output file, because the Massif output file format has no compression of the data, while the Callgrind output format compresses the data.
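(A sketch of driving this from another shell with vgdb, as in the demo; it assumes a single program running under Valgrind, otherwise add --pid=<pid>, and the exact default naming of the produced files may differ:)

# shell 1: the program runs under Memcheck (the embedded gdbserver is on by default)
valgrind --tool=memcheck ./your_app

# shell 2: request an xtree memory report on demand, here every 60 seconds
while true; do
    vgdb xtmemory       # writes an xtree file, e.g. xtmemory.kcg.<pid>.<n>
    sleep 60
done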
So this type of file is quite heavy, including for Massif-Visualizer, which has to load the 6.3 gigabytes, and then you really have a lot of data. For such volumes, even if you like Massif-Visualizer, which is a really nice application, KCachegrind will work better. So for xtree visualization, at least big xtree visualizations, the KCachegrind format is better.

As we have explained, we can use --xtree-memory=allocs and =full for other tools, with Memcheck and Helgrind. I have not yet spoken about allocs. By default the option is none, which means: do not produce an xtree memory report. With full, you get what we have seen, the six events; with allocs, only the events for the currently allocated bytes and blocks are produced. So you will ask me: why? Why do I need a more complex option where I can choose between nothing, part of the data, or the full data? Because allocs has exactly zero impact on the runtime of your application when you run it under Helgrind or Memcheck. So if you only want a report of what your memory consumes at a certain moment, you can use this option, and there is no overhead, because the xtree is just produced from the data structures that Helgrind and Memcheck maintain in any case. There is no impact on your runtime, except, of course, that taking a snapshot itself takes time. And full has a reasonable overhead: I will discuss it in my talk this afternoon about how this xtree data structure was built, and you will see why the impact is relatively small; we are speaking about maybe a few percent.

As we have said, if you want to produce a report during execution, you can use vgdb with the xtmemory command and a file name. This is supported by Massif, Memcheck and Helgrind. And a small note: if you ask your application running under Memcheck or Helgrind to produce an xtree memory report this way, it will work even if you use --xtree-memory=none, because, as I have explained, the data structures that Memcheck and Helgrind maintain anyway are good enough to produce this report. So even if you say "I don't want a report at the end of the execution" (which is what the command line option asks for), you can still, during execution, use the vgdb command at regular intervals and get snapshots showing the current state of the memory at the time of each snapshot.

So, in summary, how to best visualize memory. Massif plus ms_print: not that easy for big applications. Massif plus Massif-Visualizer: it provides a really nice evolution graph, but if you want to dig into the stack traces to see why a given stack trace has allocated memory, to zoom in a little bit, it is not very easy and not very fast to get to specific points. Massif-Visualizer plus an --xtree-memory-file with the .ms extension: not really appropriate, because Massif-Visualizer does not really understand the various events. KCachegrind plus --xtree-memory=full: this is easy to use, but the file can become huge, so you'd better have a fast disk if you produce big reports. So maybe a nice future evolution, if there are people who would like to work on KCachegrind and the Callgrind format, would be to extend the format and the KCachegrind visualization so that they can present the evolution of events, and not just the current state of the data. OK, so another thing.
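(A sketch of the cheap variants just described; your_app is a placeholder:)

# "allocs" records only the currently allocated bytes/blocks, at no runtime cost:
valgrind --tool=memcheck --xtree-memory=allocs ./your_app

# even with the default --xtree-memory=none, an on-demand report still works:
valgrind --tool=memcheck ./your_app &
vgdb xtmemory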
So, the xtree memory report, as we have seen, allows you to show the state of your memory. Now, the same xtree visualization can also be used to visualize leak reports. If you use Memcheck and you give the option --xtree-leak=yes, Memcheck will produce its leak report in the layout of a file which can be visualized by KCachegrind: the final leak report is produced in the xtree layout. The report is then automatically a full leak report. If you know by heart how Memcheck works: when it produces a report, if you do not specify --leak-check=full, it just gives you a summary, and if you want to see the list of stack traces and so on in the normal Memcheck output, you have to specify that additional argument. But if you specify --xtree-leak=yes, it would be strange to say "I would like to have an xtree report, but with just a number in it", so of course, with this option, the xtree automatically contains a full report. The report file name, like what we have seen with xtree memory, is controlled by an argument, --xtree-leak-file=<filename>. But there, do not try to use a .ms extension: I have not implemented the xtree leak report in Massif output format, so it is always in Callgrind format.

[Question] This is in the released version already, right? - No, it is not yet released. It is in SVN, so you can extract the sources from the SVN repository, and it is available. The documentation is done; there are no regression tests yet, so you are welcome to contribute.

Similarly to using vgdb to send the command that dumps the memory state of your program, you can use vgdb to send a command to your application running under Memcheck to produce a leak check at various moments during the run. And remember, if you use this, you can in fact do incremental leak reports: when you do that during the run, Memcheck is able to show, between two reports, the delta between the previous leak search and the current leak search. So by using this kind of command from vgdb, you can produce reports showing the delta between two things that happened in your application.

OK, so let's do a small demo of this leak search. I am starting my classical program again, and now I will ask for --leak-check=full. This is the classical way to use Memcheck to do a full leak check of your application, showing how many bytes you have allocated that are still reachable, how many are definitely lost, and so on. You run it (maybe I will decrease the sleep time here) and you see here the format of the classical Memcheck output. Again, you can analyze this if you want, but with a big application you might have huge stack traces, and a lot of them. Now, another way to see these reports is to add the option --xtree-leak=yes: what we have produced now is a leak report in a KCachegrind-loadable format. Here we see that a lot of events have been recorded: now that we produce a leak report, we can see how many bytes are still reachable, the possibly lost bytes, the indirectly lost bytes, and the definitely lost bytes (direct plus indirect), with a subset of this event, the definitely indirectly lost bytes, shown as DB here.
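(A sketch of the options just described; the file name is illustrative:)

valgrind --tool=memcheck --xtree-leak=yes \
    --xtree-leak-file=xtleak.kcg ./your_app   # implies a full leak search
kcachegrind xtleak.kcg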
So now, with this, you can zoom in on a leak report using the classical KCachegrind visualization and see who has allocated what and who has lost what. For example, in this case: OK, here, definitely lost bytes. If you select this, you can see the stack traces which have led to some bytes being definitely lost. You also see here events which are "increase" and "decrease". These show what I explained: if you do two successive reports, the last report Memcheck produces shows the delta compared to the previous leak report.

For example, this can be extremely useful if you are searching for leaks in a big application. Imagine (and I am speaking about a real case at my work; my Valgrind development activities are for the evenings and the weekends, but during the day I work on a huge application) an application that starts and loads a lot of data. Part of it is a graphical application based on the GTK toolkit, and we had memory growing when our operators, the controllers, were using the GTK screens. Memcheck was not reporting any leaks: in fact, the memory was still reachable. But what we wanted to see was: why does it increase? The way we did it was to do a leak report before a few screens and a leak report after a few screens, and then the "increase" events pointed at exactly what was increasing. So despite the fact that at startup we maybe load 500 megabytes of data into caches, we were able to detect the memory growth by running a regression test and doing a delta of the leaks before and after opening some screens, because with the increase or decrease of the values, you can quickly point at what is happening.

Then the other thing we had to use, to really understand why we still had memory allocated by GTK that was growing, was to understand why this memory was still reachable. For that we used another Memcheck feature usable from GDB: from GDB, you can ask Memcheck who is pointing at a given piece of memory. Memcheck will then scan the full memory and say: this block is reachable from this place, this place and that place. And then you can go back to the origin of what you have forgotten to release. In this case, if you know a little bit how GTK works, we had forgotten to call unref in one place, so the reference counter did not drop to zero, the object was not released, it was still kept in one of the hash tables of GTK or whatever, and so it was still reachable.

Leak check, we have discussed. Another thing which has been done on top of this new xtree visualization is xtree for syscalls. What I explained before, the leak reports and the memory reports, is in SVN, so it will be in the next release of Valgrind. This one is only in a corner of my own tree; I have not committed it yet, because it is not in a good state. But it shows that the same visualization we have used for memory and for leaks can also be used to show what you are doing with the system calls. The idea is that you give the command line option --xtree-syscall=yes, and it produces an xtree report showing the number of syscalls, the number of failed syscalls, the syscall time in microseconds spent where, and, for now, the bytes read and the bytes written by the system calls. So the idea of this feature, still to be expanded, is that we record various events caused by the syscalls.
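(A sketch of the delta workflow described above, using Memcheck monitor commands through vgdb; the block address is illustrative, and the exact monitor-command keywords should be checked against your Valgrind version:)

vgdb leak_check full reachable any        # baseline: report everything
# ... exercise a few screens of the application ...
vgdb leak_check full reachable increased  # only what grew since the baseline
vgdb who_points_at 0x4028028              # who keeps this block reachable?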
And we record the data that has been consumed or produced by the syscalls, and the idea is to use the xtree and KCachegrind visualization to look at what is happening in your application regarding syscalls. So I still have three minutes for a small demo... five minutes. Then I will demo this, and then, I think, stop the presentation.

So here again I will do a demo. But note, this is with my own private-corner xtree syscall version: even if the path suggests otherwise, this is not yet committed to the SVN version. I am relaunching LibreOffice, and then we will see what kind of system calls it is doing. So maybe I can already take 10 or 20 questions during the LibreOffice run? No? It was just a joke. But that's it. So now you see that what we have produced is an xtree syscall file. Let's look at how KCachegrind visualizes this. The classical thing here is: we see the functions, we see the number of syscalls. We can go up here, and we can zoom and see what this is doing. Here we see a specific stack trace which has led to a clock_gettime syscall. I don't know if I have something better to show; not much. OK, but you see the idea: to show how many syscalls have been executed, whether some have failed, the time spent, the bytes read and written. OK, so this was a nicer picture that I prepared at home, but I could not get back to it.

Now, one last point. The demo was nice, the talk is more or less respecting the time, we have seen nice graphics and so on. So it looks like it's all perfect, it's wonderful, and there is nothing to do anymore. Not really. Who knows what a cycle is in Callgrind or gprof? Nobody? Oh, that's good... no, in fact, that's bad. So, cycles: when you have a call graph. In fact, KCachegrind can visualize data that was far too much to visualize in Massif, because it folds the stack traces together. If you have, for example, main that calls a, and main that calls b, and a calls malloc and b calls malloc, Massif will show this as two stack traces, main -> a -> malloc and main -> b -> malloc, while KCachegrind will fold this and join together the two calls to malloc. OK, that's all nice, and it gives the visualization we have seen. But what happens when we have function a that calls function b, which recursively calls function a? That creates a cycle. And we can also have other cycles, even without any recursive call: for example, main calls a which calls b, and then later in the run main calls b which calls a. There was no real recursive call, but this also creates a cycle. Cycles are not very easy to work with and to understand, and it's difficult; KCachegrind and gprof, by the way, have the same solution to show stack traces with cycles: they create kind of artificial functions which regroup the whole cluster of functions in a cycle. Not very easy to analyze and grasp.

And for the xtree, one difficulty which is still there is that the xtree stack traces just record program counters; they do not record functions. The code addresses are then translated to function names, and if a code address cannot be translated to a function, it is considered by the xtree as an unknown function. If we have a lot of such unknown functions at different places in memory, this has a tendency to increase the number of cycles, and there, there is probably still something to do. So, well, we have various things:
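(A small plain-text illustration of the folding and cycle cases just described:)

Massif, two separate stack traces:     KCachegrind, folded call graph:

    main -> a -> malloc                    main -> a --+
    main -> b -> malloc                                +--> malloc (costs joined)
                                           main -> b --+

A cycle without real recursion: main -> a -> b in one part of the run and
main -> b -> a in another gives a <-> b, which KCachegrind and gprof regroup
into one artificial "cycle" function.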
The xtree is work in progress, and we have a bunch of other ideas. Well, I think it's time to stop the presentation, as I predicted: I could not do the additional slides which are at the end, but if you are interested to see how memory pools are handled and so on, there are a few slides that will soon be on the FOSDEM website, I suppose. So voilà, that finishes the presentation. We have a little bit of time for questions; I'm checking with my manager... yeah. Yes? OK. So, questions? Yes?

[Question] I am also a C++ programmer, and recently the designers of C++ came up with certain coding rules to prevent many of the memory errors. I was wondering, do you already see that, if people stick to those rules, what will then be the new dominant errors? - OK. So first, as I said, evenings and weekends I work on Valgrind, which is in C. During the day, I work in an organization called Eurocontrol; we are active in air traffic flow management for the whole of Europe, and the applications I work on are written in Ada. So I can't really speak about how C++ coding rules are being used. But even if you don't have memory problems like leaks, you can still need to analyze why your memory is being used. But I don't know; maybe some C++ users can better answer whether C++ quality is increasing. [Audience] I used to work on a 4 million line C++ code base, so I would say: if it's not enforced by automation, something automatic, then it's not going to be good. - Yeah. In the end, we're getting more.

[Question] The xtree format and the Callgrind format are the same, so are there any other visualization tools that can visualize the Callgrind/KCachegrind format? - You can use the callgrind_annotate script, which produces a text-annotated output. Apart from KCachegrind, I don't know of any other visualization tool for Callgrind files. So you have the textual one, callgrind_annotate, and KCachegrind.

[Question] So we need to build our own Valgrind to have this feature; do we also need to do that with KCachegrind as well, or can we just use it as is? - You can use the KCachegrind in your distro; it should work. This was Ubuntu 16.04, I think, and the KCachegrind here is the standard KCachegrind on Ubuntu. So your distro version of KCachegrind should be OK.