I am Jérôme Martinez, from MediaArea. We are a development company focused on media file analysis, and today I present tools that we sometimes develop, sometimes use, sometimes manage as project managers, and sometimes contribute to only as developers. By "we" I mean not only MediaArea but also our community, which is mostly broadcasters, producers, web developers, and archives. So, different kinds of people, with different levels of knowledge about how to deal with programs: some are very technical and others are not. All these people have different needs. Sometimes it is only metadata extraction and review of files. Sometimes there is a need for validation of a file format. Sometimes it is more complicated and people need to investigate the file format directly. And sometimes they need conformance with local policies: different archives may have different constraints on what they accept into their repository. It is not only archives; some broadcasters also want to reject, for example, MXF files that are not compatible with their workflow, and so on. Some people want to edit or fix metadata in files, because there are tons of buggy files: sometimes metadata is missing, sometimes it is wrong, and we need to fix it. And sometimes our community needs to do quality control on the audiovisual content itself, not the container, not the codec, but the decoded frames directly.

For these different needs we have different tools at the moment. Our main product, developed and managed by MediaArea, is MediaInfo, a metadata extraction tool. It gives a convenient, unified display: whatever the file format is, MXF, Matroska, MP4, the display is the same, and it shows the most relevant technical and tag data about your files.
So the format, the profile, the level, width, height, the bit depth of the video or audio, whether there are text subtitles, captions, and so on. Not only broadcast standard formats, but also more professional content like ancillary data: sometimes MXF files carry dedicated ancillary data with time codes or subtitles. FFmpeg, for example, is currently not able to handle such content, and MediaInfo is used in that case to detect it. Other tools can be used, FFmpeg among them and sometimes non-open-source ones, but when FFmpeg cannot handle the content, people use MediaInfo.

MediaInfo is open source, under a BSD 2-Clause license. We have 6,000 downloads per day, so it is used quite a lot. We support most formats used by individuals or by professionals. Professionals sometimes have very weird transport layers. Dolby E, for example, does not like to be transported in only one track; sometimes it is split across two mono tracks, and it is slightly difficult to detect. MediaInfo is able to detect all these weird professional formats: how many channels are inside, how many programs are inside the fake PCM data. Dolby E is hidden as PCM content, and we need to detect the Dolby E inside it. The same goes for subtitles and ancillary data, or for sidecar files. For example, MediaInfo understands the Netflix directory structure and can detect where the sidecar TTML text files are; we talked a bit earlier about TTML. Netflix puts them in specific directories, with specific file names containing the language and so on. MediaInfo conforms to the Netflix directory structure and can detect every subtitle. We also support new formats like IMF, the new format based on DCP, which is itself based on MXF; all of these are correctly detected by MediaInfo. And of course we also detect the classic formats: MP4, Matroska, FLV, AVI, WAV, and so on.
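To make the Dolby E case concrete: non-PCM payloads carried in PCM tracks are wrapped according to SMPTE ST 337 with a two-word sync preamble, 0xF872 then 0x4E1F in 16-bit mode. The scanner below is purely an illustration of that idea, not MediaInfo's actual detection code, which handles more bit depths and alignments:

```python
# Illustrative sketch (not MediaInfo code): spotting a SMPTE ST 337
# data burst hidden in what looks like 16-bit PCM samples.
PA_16, PB_16 = 0xF872, 0x4E1F  # 16-bit preamble words per ST 337

def find_data_bursts(words):
    """Return the sample offsets where an ST 337 preamble starts."""
    hits = []
    for i in range(len(words) - 1):
        if words[i] == PA_16 and words[i + 1] == PB_16:
            hits.append(i)
    return hits

# A fake "PCM" track: near-silence with one embedded burst at offset 4.
track = [0x0000, 0x0001, 0xFFFF, 0x0002, 0xF872, 0x4E1F, 0x001D, 0x0300]
print(find_data_bursts(track))  # → [4]
```

If the preamble is found, the words that follow identify the payload type (Dolby E among them), which is how a tool can tell "real" PCM from wrapped data.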
We can also export this data as classic text files or XML, or in different professional output formats like PBCore, EBUCore, or FIMS; these are XML-based but with a specific formatting.

But after a first review with a GUI like MediaInfo, it may not be enough, because it is not possible to automate such a review. So we also created the MediaConch project, which is a conformance checker. A conformance checker is both an implementation checker and a policy checker. The difference is that the implementation checker checks against a specification, from ISO, from SMPTE, from the IETF and so on, while the policy checker checks against a local decision from an entity. For example, an archive wants to accept only certain MP4 files, and only AVC in MP4, so they need to create a policy in order to check whether all the files they have conform to it. MediaConch is also a reporting tool. There is not just one reporting method; you can define exactly how you want to export the data. If you want XML for an automated process, you can have it. If you want to display something directly to the end user, you can have a transformation to HTML or text, and you define exactly how your HTML is designed: you can put, for example, the logo of your company, and so on.

MediaConch is under the GPLv3 and MPLv2 licenses, so both GPL and MPL. For the implementation checker, MediaConch is a young project, and we have focused on Matroska, the FFV1 video codec, and PCM; these are native to MediaConch. We can also extend MediaConch with plugins, and for the moment there are two, one for PDF and one for TIFF. So for the implementation checker, compared against a specification, MediaConch currently covers Matroska, FFV1, PCM, PDF, and TIFF. But for policy checking, we use the MediaInfo library, so you can check any format supported by MediaInfo.
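The idea of a policy check can be sketched in a few lines. Real MediaConch policies are XML rule sets evaluated against MediaInfo reports; the dictionary-based version and the field names below are simplifications invented for illustration:

```python
# Hypothetical sketch of a policy check in the spirit of MediaConch.
# The field names ("container", "video_codec") are made up here; the
# real tool matches XPath-like rules against MediaInfo's XML output.
def check_policy(metadata, policy):
    """Return a list of (rule, passed, actual_value) results."""
    results = []
    for field, expected in policy.items():
        actual = metadata.get(field)
        results.append((field, actual == expected, actual))
    return results

policy = {"container": "MP4", "video_codec": "AVC"}      # the archive's local rules
metadata = {"container": "MP4", "video_codec": "HEVC"}   # as extracted from a file
for rule, ok, actual in check_policy(metadata, policy):
    print(f"{rule}: {'pass' if ok else 'fail'} (got {actual})")
```

The point of the design is that the policy is data, not code: each archive writes its own rules once and applies them to every incoming file.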
With MediaConch we have a GUI and a command line, but we also have a server mode for a completely automated process: every time there is a new file in a folder, we can directly test it and say whether the file conforms or not. We can also add some preprocessing. For example, some people ask that when a new file arrives, FFmpeg is used to convert it to different file formats; they want some broadcast formats and some streaming formats, so we need to convert to ProRes, to WebM, and so on before doing the checks. And we also need to convert to more open formats: if it is MXF, they want to convert it to Matroska, and after that they want to verify that FFmpeg did the job well, so we need to check the resulting files.

Knowing that a file does not conform is one thing, but when it does not conform, we need to know why. We have a feature in our tools for that. It was initially something internal for our own debugging, but there is some interest from our community. MediaTrace is a tool for seeing exactly the meaning of each bit in a file, whether in a container or in a codec: H.264, FFV1, and so on. For the moment it is a bit of a work in progress: I developed it, so I know the bugs in it and I know when I should not use it, but other people don't. Sometimes they try it on big files, and it is not completely ready for that; there are some stalls and so on. We are working on it, and if there is more interest we will continue the development. MediaTrace is not a standalone tool; it is a feature available in the MediaInfo GUI, in the MediaConch GUI, and in the command line. As MediaTrace is part of MediaInfoLib, it has the same license as MediaInfoLib.

Here we have some examples of MediaTrace output, for example for a Matroska file.
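As a taste of what such a trace decodes, here is a toy reader for the very top of a Matroska file: the EBML magic number (0x1A45DFA3) followed by a variable-length integer giving the header size. This only sketches the idea; MediaTrace itself walks every element and labels every field:

```python
# Toy EBML reader in the spirit of a MediaTrace dump (illustration only).
EBML_MAGIC = bytes([0x1A, 0x45, 0xDF, 0xA3])  # ID of the EBML header element

def read_vint(data, pos):
    """Decode one EBML variable-length integer; return (value, new_pos).
    The count of leading zero bits in the first byte gives the length,
    and the marker bit itself is stripped from the value."""
    first = data[pos]
    length = 1
    mask = 0x80
    while mask and not (first & mask):
        length += 1
        mask >>= 1
    value = first & (mask - 1)
    for b in data[pos + 1 : pos + length]:
        value = (value << 8) | b
    return value, pos + length

header = EBML_MAGIC + bytes([0x9F])  # size VINT 0x9F → 31-byte header body
assert header[:4] == EBML_MAGIC       # "this is an EBML file"
size, pos = read_vint(header, 4)
print(size)  # → 31
```

From there a real trace keeps recursing: each child element is again an ID, a VINT size, and a payload, which is why a tool like this can point at the exact byte where a Matroska file goes wrong.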
If there is a problem in the header of a Matroska file, for example, we can check it: the header of Matroska is EBML, and we can check the EBML version, the EBML DocType, and so on. Or maybe there is a problem in the tracks header of the Matroska file. We are able to dig into every level of a Matroska file. Even more: if there is a problem in the video layer, we can check the segment and the cluster of the Matroska file and see exactly what the content of the H.264, AC-3, or DTS stream is, and whether there is an issue directly in the codec. So it is not only about the container; we can also look at what the issue is in the video stream itself: H.264, H.265, and so on. And we don't support only video and audio; we also support image content, for example. MediaTrace is actually based on MediaInfo and is used for MediaInfo debugging, so everything supported by MediaInfo usually has some content in the MediaTrace feature. In this example, if we need to check a TIFF file, we have the information and can see, for example, the ImageLength, how it is stored, in the IFD and in every section.

Another need we sometimes have is to be able to edit metadata. Editing is sometimes complicated, so we focus only on some specific formats. We developed a tool, BWF MetaEdit, dedicated to checking and editing Wave and BWF metadata. We check metadata against requirements from different standards. Sometimes there are requirements and sometimes there are recommendations; some people want to follow only the requirements, and others want to reject even files that do not follow the recommendations. So depending on the user, we have options to check only the requirements, or the requirements and the recommendations. With this tool we can delete, modify, or add metadata, and we can also export to a CSV file, for example. This tool is under a public domain license.
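For a flavor of what such a check looks like: EBU R98 formats each BWF CodingHistory line as comma-separated key=value pairs such as "A=PCM,F=48000,W=16,M=stereo,T=...". The mini-validator below only looks for a set of required keys and is much simpler than the real rules, so treat it as a sketch under that assumption:

```python
# Hypothetical mini-check inspired by BWF CodingHistory validation.
# Real EBU R98 / FADGI rules also constrain the *values*; this sketch
# only verifies that the required keys are present on a line.
REQUIRED = ("A", "F", "W", "M")  # algorithm, frequency, word length, mode

def check_coding_history_line(line):
    """Return the list of required keys missing from one line."""
    keys = {part.split("=", 1)[0] for part in line.split(",") if "=" in part}
    return [k for k in REQUIRED if k not in keys]

good = "A=PCM,F=48000,W=16,M=stereo,T=original tape"
bad = "A=PCM,F=48000"
print(check_coding_history_line(good))  # → []
print(check_coding_history_line(bad))   # → ['W', 'M']
```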
With this editor we support different guidelines, mostly the ones from the US libraries, so FADGI, and from the EBU: we implemented the FADGI guidelines and the EBU BWF rules for CodingHistory and OriginatorReference. This is only for Wave files. In some specifications, especially the US one, there are recommendations and there are requirements, and we support both.

Checking the bitstream, the file content, is one thing, but it is not enough, because sometimes the content itself is not good and we need to check it too. So we created another tool, focused on digitized content. A lot of archives have a lot of analog content, VHS and so on, and they are currently in the digitization process. The digitization may not be good due to the quality of the analog material, and sometimes during digitization there are artifacts in the video or the audio. So we also analyze this baseband content, the decoded frames. QCTools is based on FFmpeg; we actually added a new graphical user interface on top of FFmpeg, because FFmpeg is very powerful but sometimes too complex. We have some non-technical people who need to be able to check files without technical knowledge; the command line is not possible for them, they need a graphical interface. The code we created is under a BSD license, but as we rely on FFmpeg, the complete code mixes BSD and GPL content, so the binary is GPL.

So, as I said, we depend a lot on FFmpeg, for the demuxing, the decoding, and also the checks. Actually, we rely a lot on the libavfilter library: QCTools is mostly a graphical interface for libavfilter. But sometimes we were lacking some filters, so we also added filters to libavfilter, directly in FFmpeg upstream. We have a list of filters we use, and we do checks like broadcast range, crop detection, peak signal-to-noise ratio, very classic things. As this is analog content, it is usually interlaced content.
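Two of the measurements mentioned, broadcast range and field comparison, reduce to simple arithmetic on the decoded samples. QCTools gets them from FFmpeg filters such as signalstats; this pure-Python version is only to show the idea, not the shipped implementation:

```python
# Rough sketch of two QCTools-style measurements (illustration only).
def broadcast_range_violations(luma):
    """Fraction of 8-bit luma samples outside broadcast range 16-235."""
    bad = sum(1 for y in luma if y < 16 or y > 235)
    return bad / len(luma)

def field_difference(frame):
    """Mean absolute difference between the two fields of one frame
    (given as a list of rows). Sudden spikes across frames often point
    to analog playback problems such as head clogs or dropouts."""
    top = frame[0::2]     # even lines: top field
    bottom = frame[1::2]  # odd lines: bottom field
    diffs = [abs(a - b) for ra, rb in zip(top, bottom) for a, b in zip(ra, rb)]
    return sum(diffs) / len(diffs)

frame = [[100, 100], [100, 100], [100, 100], [200, 200]]  # damaged bottom field
print(broadcast_range_violations([0, 16, 128, 235, 255]))  # → 0.4
print(field_difference(frame))                             # → 50.0
```

Plotting a value like the field difference per frame over time is exactly the kind of graph the QCTools interface shows.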
So we are focused mostly on field comparison for the moment, but it could be extended to other tests depending on the needs of the community. QCTools was initially based on video analysis, but in the upcoming version we also have audio, with some tests on the audio: the classic EBU R 128 loudness, an audio phase meter, DC offset, audio differences, and so on. Here you can see the typical QCTools interface, with some graphs. It is easy to see if there is a peak in the difference between the fields of a video frame, for example; and if there are peaks in the audio, it is easy to zoom in and see whether it affects only a few frames or lasts longer, and so on. And there is a preview.

Now I want to say something about standardization. We have a standardization issue: a lot of people have done standardization for lossy content, but we need lossless formats for archives. So we are creating an IETF working group for Matroska, FFV1, and FLAC. For that we need sponsorship. We have several companies working on it, and they don't like to be named, so we don't name them. But we also have public sponsors like the European Union, thanks to the PREFORMA project, and some US entities. We have some contributions, but we need more. So please: patches for new features, bug fixes, and participation in CELLAR.

So, if you have some questions.

Question: About the FFmpeg and QCTools integration: do you link against the FFmpeg libraries, or do you just execute the FFmpeg command line?

Answer: Do we use the FFmpeg command line, or do we link to the libraries? Using the command line is not enough, because we also decode the content, and we sometimes apply filters to the content dynamically, with a graphical interface for the filters. So we need to decode and to show the filters. The screenshot was only about the graph part.
But there is also a display part, and there are filters applied to it, so we need to link to the libraries.

Question: So you link against the individual libraries, the standard DLLs or shared objects on Linux and so on, libavfilter, libavcodec and the rest, and you bundle the whole thing?

Answer: Yes, we bundle the whole thing under the GPL. Our code is BSD, but the binary is GPL due to FFmpeg.

Question: Are you aware of the Are We Compressed Yet project? They do a lot of metrics for video quality comparison: PSNR, PSNR-HVS, MS-SSIM, and also CIEDE2000. And there is a new metric from Netflix called VMAF. Do you use the new quality formula from Netflix instead of SSIM or PSNR? That's the question.

Answer: We are aware of it. But our users are mostly archives, and we work mostly on SD content, so we don't use it for the moment; we don't need it, and it is not implemented on our side. I don't know if FFmpeg has it natively. For the needs of our community, it is not used.

Question: And CIEDE2000, the metric for chroma? Because the others are all luma-only metrics.

Answer: We don't use such formulas, but they could be added on request. Our community does not need them for the moment.

Thank you.
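For reference, of the metrics raised in that question, PSNR is the only one that fits in a few lines; VMAF, MS-SSIM, and CIEDE2000 are considerably more involved. A minimal version of the standard formula:

```python
import math

# PSNR between two equal-length lists of 8-bit samples:
# PSNR = 10 * log10(peak^2 / MSE), infinite for identical inputs.
def psnr(ref, test, peak=255):
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    return float("inf") if mse == 0 else 10 * math.log10(peak**2 / mse)

print(round(psnr([0, 128, 255], [0, 128, 250]), 2))  # → 38.92
```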