Okay, so we'll begin with the next talk, the last talk about GStreamer. This time we'll focus on debugging GStreamer applications, given by Guillaume Desmottes. Thank you. So my name is Guillaume. Is that okay for the microphone? Okay. I'm a multimedia engineer working at Collabora. Just to let you know, we are currently looking to hire a bunch of people, so if you're interested in working on free software, please let us know. You can find us with this kind of hoodie, and we have a stand around the corner. Like that? Okay, come on. Okay. So yeah, if you're interested in working on free software with us, just come talk to us. Today I'm going to talk about debugging GStreamer applications, what we developed in the last releases to make that easier for you, and what I learned while doing that. I'll start by talking about tracers, which are a new mechanism used for debugging. I'll give some more detail on the leak tracer, which I developed a few months ago. I'll talk about GstShark, and I'll finish with some tools which can help you deal with GStreamer logs, which I found quite useful. So, GstTracer: it's a new mechanism which was introduced in 1.8. It's meant to be used by debugging tools; that's the only purpose of this new system. It allows tracers to hook inside the internals of GStreamer core. By doing that, plugins can gather a lot of information about what's going on in the pipeline, for example when a buffer is being pushed from one element to another, all this kind of information. This allows tracers to gather all this info and produce a formatted output, so it's meant to be usable by external applications, for easy parsing and this kind of thing. It's been very useful so far, and now I'm going to show you different kinds of things you can do using tracers. Okay, this is absolutely not readable. Yeah, I wasn't sure about the best way to display console output; I tried an image, but it wasn't a great idea. To enable a tracer, it's pretty easy.
You have the GST_TRACERS environment variable that you can just define with the tracers you want to enable. For this example, I use the stats one and the rusage one. So you just define that and then you launch your GStreamer application. Here I'm using gst-launch, which is the kind of tool we use to test pipelines, but you can do that with pretty much any GStreamer application. You could use gst-play just to play a video, you could use Totem or whatever; it will work with any GStreamer application, which is pretty convenient. Then you need to say that you want the tracer output to be generated. For that we use the GST_DEBUG environment variable with the GST_TRACER category and log level, and then I dump everything to a file using the GST_DEBUG_FILE environment variable. So as you can see, there is nothing to be done programmatically. You don't have to rebuild anything; it's all integrated in GStreamer itself, and it will be automatically enabled by using those few environment variables. You run that, it will generate a lot of logs to this file, and then there is the gst-stats tool, which is part of GStreamer core as well, which will parse those files and generate a bunch of statistics, which are completely unreadable here. But it will give you the number of elements which have been in the pipeline, the number of pads, the number of buffers which have been exchanged across the pipeline, and the timing as well. So you can see at what time each element in your pipeline received its first buffer, which can be useful if you have a very high startup time in your pipeline: you can see which element took more time than the others to start, this kind of thing. So that's the stats tracer. Another one, which I find quite useful, and which Olivier mentioned in his talk, is the latency tracer.
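As a concrete command-line sketch, the whole flow looks roughly like this; this assumes a GStreamer 1.8+ install with gst-launch-1.0 and gst-stats-1.0 on the PATH, and the pipeline itself is just an illustrative example:

```shell
# Enable the stats and rusage tracers; route the tracer output
# (the GST_TRACER debug category at TRACE level) into a log file
GST_TRACERS="stats;rusage" \
GST_DEBUG="GST_TRACER:7" \
GST_DEBUG_FILE=/tmp/trace.log \
gst-launch-1.0 videotestsrc num-buffers=100 ! videoconvert ! fakesink

# Then post-process the log into per-element statistics
gst-stats-1.0 /tmp/trace.log
```

Nothing in the application needs to change; the environment variables do all the work.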
So the idea is, each time a buffer is generated at the source, it will travel to the sink, and this tracer will try to measure how long it took to go from the source to the sink. This is useful in live pipelines: typically you are capturing from a webcam and streaming to the internet, or displaying on the screen, and you want the latency to be as low as possible, and this can help you see how long it took for each buffer to be processed by the pipeline. Once again, to use it you don't have to recompile anything. You just enable the tracer using the GST_TRACERS environment variable, then you launch the pipeline. Here I just create a simple pipeline which is capturing from the webcam, encoding, then decoding, and then displaying on the screen using the glimagesink element, and the tracer will tell you, for each buffer, how long it took. You have that here in the time column, which is in nanoseconds, which is not really readable, but it is displayed on the screen, so you have the information in real time to help you debug this kind of problem. Another kind of thing you can do with tracers is to implement very specific tools. So here I'm going to present the leak tracer, which is a tracer I wrote one release ago. The idea is, most of the time when you are dealing with memory leaks, you use Valgrind, which I guess most C and C++ developers are used to. It's a great tool, but the problem is it can be very, very slow. So if you are working on a very CPU- or memory-consuming process, sometimes Valgrind will just make things too slow; it's mostly unusable. Another problem with Valgrind is that it may not be available on your platform. As Olivier said, we do a lot of embedded development, which usually involves terrible build systems and distributions and this kind of thing, and it may be a challenge just to get Valgrind running.
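A hedged sketch of the live-latency setup described above; the exact elements (webcam source, encoder, decoder) are illustrative and depend on your hardware:

```shell
# Measure source-to-sink latency per buffer; measurements appear
# live on the GST_TRACER debug category
GST_TRACERS="latency" \
GST_DEBUG="GST_TRACER:7" \
gst-launch-1.0 v4l2src ! videoconvert \
    ! x264enc tune=zerolatency ! h264parse ! avdec_h264 \
    ! videoconvert ! glimagesink
```

Since the output is printed as it is measured, you can watch latency evolve while the pipeline runs.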
And finally, you may find a lot of memory leaks, or false positives which appear as memory leaks to Valgrind, which are totally unrelated to your code. So if there is a leak in a library you are using, like an encoding or decoding library, or invalid memory or things like that, you will get all this output from Valgrind, but that's not really something you can easily fix. Most of the time, what you really want is to know about the leaks in the code you actually wrote. So for that I developed the leaks tracer, which uses the new system hooks in GStreamer core and which will manually track the refcounting of each GObject and GstMiniObject. That means that only the GStreamer code will be tracked, which is actually what we want here. It will keep track of that, and at the end of the execution of the application, if it detects that some objects are still alive, meaning they have been leaked because they should have been destroyed at this point, it will raise a g_warning. So that's something you can really hook into your QA or CI system to detect if any leaks have been introduced. This tracer was integrated in GStreamer 1.10, so if you have this GStreamer version, it's already there; you don't have to build any extra tool to use it. Here is an example. Once again, in the same way, I just use the GST_TRACERS environment variable. The tracer is called leaks, the name I put there, and I run a pipeline. The pipeline needs to terminate, because the leaks are detected at the end of the process. If the process keeps running, obviously, we won't be able to know if there is a leak or not. That's why I say I just want 10 buffers from the source, the webcam here, and then terminate. And if at the end of the pipeline it detects a leak, you see? Well, it's yellow, so it's a warning. You can trust me on that. And it will say, okay, this object is still alive.
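A minimal sketch of that run, assuming GStreamer 1.10+ where the leaks tracer ships with core; videotestsrc stands in for the webcam source:

```shell
# Run with the leaks tracer; num-buffers makes the pipeline terminate,
# which is required because leaks are reported at process exit
GST_TRACERS="leaks" \
GST_DEBUG="GST_TRACER:7" \
gst-launch-1.0 videotestsrc num-buffers=10 ! fakesink
```

If anything is still alive at exit, a g_warning is raised, which is convenient to turn into a CI failure.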
It will give you the type of the object, and it will give you the refcount, so you can have an idea of how many references are getting lost in your code. So that's the basic feature. I tried to make it more useful and a bit smarter. We use libunwind to get a stack trace. You have to enable that manually, because it can consume quite a bit of memory, so we try to only enable it when we actually need it, and we filter on the objects we are interested in. A typical workflow would be to start with the simple version and try to see which object is being leaked. It's really a shame that you can't see the name. But you see, okay, the test sink is getting leaked, I'm going to track it down, and then you enable the stack trace for this very specific object, so only this one will be tracked. When doing that, the tracer output will give you a full call trace, so you can see the succession of calls which led to the creation of the object. Of course, we don't know where it's been leaked, but at least you know where the object was created. So if a lot of objects are created in different contexts, you can have a better idea of where this object is coming from and start to track down manually the steps leading to the leak. The tracer also provides a bunch of extra features which can be quite useful while debugging. You can track each individual ref and unref operation; that may help when debugging a specific leak. Each time an object gains or loses a reference, you will see the full stack trace, so that can make things easier for you. You can also enable things like signal support. If you enable that, you need an extra environment variable, but if you do, you can send the SIGUSR1 signal to the process, and that will list all the objects which are currently alive. So this can be useful if you are debugging a problem without terminating the process.
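A hedged sketch of those extra knobs; the filtered object type (GstFakeSink) is a hypothetical example, and the exact environment variable spellings shown here are from the 1.10-era leaks tracer, so check your version's documentation:

```shell
# Record a stack trace at each tracked object's creation,
# and only track objects whose type matches the filter
GST_LEAKS_TRACER_STACK_TRACE=1 \
GST_TRACERS="leaks(GstFakeSink)" \
GST_DEBUG="GST_TRACER:7" \
gst-launch-1.0 videotestsrc num-buffers=10 ! fakesink

# With signal support enabled, list currently-alive objects
# without terminating the process
GST_LEAKS_TRACER_SIG=1 GST_TRACERS="leaks" \
gst-launch-1.0 videotestsrc ! fakesink &
sleep 2
kill -USR1 $!   # log all currently-alive objects
kill -INT $!    # then shut the pipeline down
```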
And we also have a checkpoint system. The idea here is, you send the SIGUSR2 signal once to the process, you do something with the application, like, I don't know, start a video and stop it or something like that, and then you send it again, and it will list all the objects which have been created since the last time you sent the signal, and all the objects which have been deleted. So this may be useful for debugging applications that you don't want to start and stop all the time, where that would be too complicated to do. So that's the kind of feature which you wouldn't find with tools like Valgrind, for example. A few extra tracers, so now I'm going back to the tracers. All the tracers I presented so far are merged in GStreamer core, but third parties can also develop their own tracers. GstShark is one of them. It's developed by RidgeRun, and it contains a whole lot of tracers, which are more specialized versions of the upstream ones or provide extra features. You can measure things like interlatency: it's a bit like the latency tracer I presented before, except that it will measure the latency between each element in the pipeline, while the upstream one measures from the source to the sink. So it's a bit more precise in this regard. We have some plans with Nicolas to merge that into the upstream tracer at some point. It can also be used to measure the performance of the pipeline, which is quite useful. For example, it can give you the rate at which buffers are arriving on each source pad in the pipeline. So if you have a pipeline which is not going as fast as it should, say you are expecting 60 FPS on your screen but you are just getting 20, this may help you find which element is operating slower, and then try to improve the performance of this specific element. It does a lot of different things, like scheduling time, which is the time since the last buffer was received on an element, this kind of thing.
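Since GstShark tracers plug into the same mechanism, enabling them looks the same as for the upstream ones. This sketch assumes GstShark has been built separately and its plugin directory (the path here is hypothetical) is on GST_PLUGIN_PATH; the tracer names come from GstShark's documentation:

```shell
# Enable a few GstShark tracers: per-hop latency, per-pad frame
# rate, and per-element processing time
GST_PLUGIN_PATH=/path/to/gstshark/plugins \
GST_TRACERS="interlatency;framerate;proctime" \
GST_DEBUG="GST_TRACER:7" \
gst-launch-1.0 videotestsrc num-buffers=300 ! videoconvert ! fakesink
```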
You can use it to track queues. So it's a whole lot of tracers which can be used to track performance, this kind of thing. And another nice thing with GstShark is that it comes with some scripts to generate graphics, using gnuplot, from the data generated by the tracers. So you can see things like: here we have the time, in seconds; here we have all the elements, so the source, the encoder, the decoder, the sink; here on the left it tells you at which rate each element is operating; and here you have the CPU load. And so you can try to see how this is affecting the whole pipeline. If you see a big spike somewhere, you may ask yourself, okay, this is not something that should happen in my pipeline, and start debugging from there. So that's the kind of thing you can do to get a bigger-picture view of what's happening in your application, which can be quite useful. The last thing I wanted to talk about today is some tools to help you work with GStreamer logs. If you have already done some GStreamer debugging in your life, you have probably ended up with tons of this kind of log, which contains a lot of very, very useful information, but which can be quite scary to look at. I know that when I started working with GStreamer, that's something I was really afraid of, because you know that this contains helpful information to help you debug your problem, but you don't really know where to start, you don't really know where to look, if you are not used to doing that. So I'm going to show you a couple of things which can make that easier for you. A nice one is gst-debug-viewer, which is a nice Python graphical application, part of the GStreamer developer tools. That's something you can find in the GStreamer git repository, and it will parse the full log and give you a graphical interface to look at it.
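A quick command-line sketch of that workflow, assuming gst-debug-viewer from gst-devtools is installed under that name; the pipeline is just an example:

```shell
# Capture a verbose log from any GStreamer application,
# then browse it graphically
GST_DEBUG=5 GST_DEBUG_FILE=/tmp/app.log \
gst-launch-1.0 videotestsrc num-buffers=100 ! videoconvert ! fakesink
gst-debug-viewer /tmp/app.log
```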
So you see here, on the top bar, it shows some kind of graph over time of when the logs are being generated, because each log line is associated with a timestamp. So we can see that at the start of the application a lot of logs are generated, then some gap, then some logs, and here we have a very big gap before more activity occurs in the pipeline. Just by looking at that, you can see that there is a gap here, so you may ask yourself: why is my pipeline blocked at this point? Because usually when a pipeline is doing something, it produces some logs. So you can just try to look at what's going on here: for example, am I waiting for the kernel, or is a thread blocked, or something like that? So that's a nice tool you can use to do that. And another one I just started a few months ago is gst-log-parser. My goal here was, I often found myself writing small Python applications to parse logs, for example if I wanted to compute some metrics from specific logs, this kind of thing. I was always writing very hacky Python scripts to do it, and I said, okay, let's see if I can do something more reusable and a bit cleaner. It was also a time when I wanted to learn Rust, so I said, okay, let's try to write it in Rust. This is literally my first Rust project, so the code may not be so well written, but at least it's proved useful so far. It's a high-level parsing library which will read your log and create high-level objects for it. It will give you an iterator, in Rust, and from that you can very easily filter on a specific category, a specific object, a specific line or whatever, and do basically whatever you want with it. The code is there on GitHub, and I'll just give you one small example of something you can easily do with it. Once again, it's not readable on the slide, but it's a small tool I wrote using the library.
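To give the flavor of this kind of log post-processing, here is a rough standalone sketch in plain shell and awk rather than Rust, computing per-thread timestamp deltas; the log lines below are fabricated examples in the standard GST_DEBUG line format:

```shell
# Fabricated GST_DEBUG-style log lines:
# timestamp, pid, thread id, level, category, location, message
cat > /tmp/sample.log <<'EOF'
0:00:00.000100000 1234 0x7f0001 DEBUG task gsttask.c:100:start:<task0> starting
0:00:00.000200000 1234 0x7f0001 DEBUG task gsttask.c:120:loop:<task0> looping
0:00:00.100200000 1234 0x7f0001 DEBUG task gsttask.c:120:loop:<task0> looping
EOF

# For each thread (field 3), print the delta in nanoseconds
# from that thread's previous log line
awk '{
  split($1, t, ":")                      # H:MM:SS.NNNNNNNNN
  ns = (t[1] * 3600 + t[2] * 60 + t[3]) * 1e9
  if ($3 in prev)
    printf "%s +%.0f ns\n", $3, ns - prev[$3]
  prev[$3] = ns
}' /tmp/sample.log
```

A large delta on one thread is exactly the "pipeline stuck here" signal the tool highlights in red.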
For each thread, it computes the difference between one specific event and the previous event in that thread. So once again, if at some point there is a gap, if the pipeline is stuck, it will detect it very easily. It's not really readable here, but the biggest differences are highlighted in red. So just by parsing that you can see: okay, my pipeline here waited for 100 milliseconds, that shouldn't happen, what's going on here? You can see the timestamp differences very easily using that. And that's it. I think I have like 25 seconds for questions. Okay, it's not enough. Okay, great. Okay. Hello. So, you mentioned the leak tracer in the beginning. You're right that Valgrind is very slow for most things, but there is also libasan, which is much faster. If someone wants to use libasan, can they use it with your tracer? So, is that the LLVM tool? No, libasan is actually part of GCC, it's included in GCC. Okay, I don't think I've ever worked with that. It's an address sanitizer, basically, so it's much faster. Okay, okay. The difference is that it does not detect the origins of the leaks, but it finds the leaks. Okay. It uses the stack trace. Okay, I never tried using it, it's good to know. Thank you. Yeah, I guess it would be more generic, a more generic and lower-level kind of tool, like Valgrind. And the refcounting? It won't do refcounting. Yeah, so it's a bit like Valgrind: it will just find the memory leaks, while here we are interested in refcounting problems. So the leaks tracer would be a more specialized tool for a very specific problem, and I guess libasan could be more of a replacement for Valgrind. Okay. Apparently you have to recompile to be able to use it. Okay, okay.
Oh, yeah, yeah, okay. I remember I looked at that a while ago, and that was a problem for me, because I was working on an embedded device, and recompiling with a specific tool can be quite challenging, which is why I went with the tracer approach. Yeah, exactly. Yes? If you build a recent version of GStreamer, it should be built by default and it will work. The only thing you may want to change in your build system is to make sure you have libunwind, which will allow you to get the stack traces. But even if you don't, it has some fallback code which will try to get you useful information. So yeah, that's the big advantage of the tracer, I think: you don't need to rebuild anything. Anything else? Yeah. You mentioned GstShark: how do you use it? Is it an element in the pipeline, or is it standalone? No, no, it's a set of tools. It's a bunch of tracers, so it's a GStreamer plugin, basically, but it's not part of GStreamer core. You will have to build it yourself manually, and it will give you all the tracers I mentioned, to get more information about the performance, about the interlatency, plus the scripts to generate the graphs I showed as well. So it's not an element; it's a bunch of tracers and scripts to produce useful information from them. Anyone else? Okay, thank you. We have a question in the back. Wait for the microphone. Hi, I used to have some pixelation issues when trying to play encrypted DASH streams. What is the most efficient method for debugging those kinds of pixelation issues? What I would do is start by looking at the logs from the specific element which is causing the problem. But mostly gst-launch completes successfully and I don't see any error logs. Yeah, yeah, that's why you have to use the GST_DEBUG environment variable I mentioned here. So if you use... where is it? GST_DEBUG, here.
So here I asked it to just give me the information about the tracer, but if you put just 5, for example, that will give you all the debug information of all the elements, which will be quite a lot, but from there you can start looking at what's going on, and that will give you more information. After that, you'll end up having to debug it yourself. If you are not used to doing that, the best way would be to just file a bug, I'd say. Great, thanks. There is no magic solution, unfortunately. Debugging is hard, and I'm afraid it always will be. Last question, maybe.