Last presentation, just a quick wrap-up. Before I do that, let me introduce myself. My name is Adi Kouadio, from the EBU, where I lead video research and development. I'm a co-organizer of this dev room, which is mainly led by Christophe, and we try to run it every year. I will try to sum up all the interesting presentations we had today, so bear with me. If you were here, you can interject or help out if the summary is not to your liking.

So what did we have this year? We tried to cover several topics. We covered frameworks for multimedia applications, with Upipe, GStreamer and GPAC presenting. Then we had a nice presentation about codecs, on the new codec coming from the Alliance for Open Media (or the Open Media Alliance, you can put it the way you want, it means the same anyway). We had a short presentation about stress-testing applications and APIs from Max Moroz. We had a quick review of open standards, on the audio side and also on the subtitling side, from our colleagues from IRT. We covered quality control in a broad sense, with MediaInfo and Kaitai Struct, which is another project we'll look into in a few seconds. And we had two use cases: one from Trace TV (not France TV) about open-source transcoding, and one from Kaltura, which was more about live streaming based on NGINX and other tools.

So now I'll go through them. First of all, we had a quick history overview from Kaltura. For those of you who do not know it, Kaltura is an end-to-end video-on-demand platform, and they told us they have been in the open-source domain for ten years, and that they took advantage of the uptake of video since the introduction of YouTube to establish their framework.
Now they are trying to get into the live streaming domain, because it is becoming more and more important, for broadcasters and not only for them. Their next step is to move from the educational, MOOC-type applications they have been providing towards more content-delivery-type services.

Then we went to the live demo from Jess. She showed us how you can quickly set up a live stream based on RTMP using different open-source tools. It was a very speedy demo, and she managed it quite properly. There was a very interesting point about whether you should use RTMP or WebRTC, and her advice was to use NGINX as a platform, because it is well adapted to RTMP; you could use other solutions if you are more interested in WebRTC. They went for NGINX because they already use it in the Kaltura enterprise environment, so they tend to go for it by default. If you have a different opinion you can use another tool, but her recommendation for RTMP streaming would be NGINX.

Then we had a quick presentation on TTML, the Timed Text Markup Language. It is used for subtitling, and it was presented by Andreas Tai from IRT. The point of the presentation was to look over the whole production and end-to-end media chain and ask: where do the tools actually support the different subtitling standards and formats? From his point of view as an expert and as chairman of the EBU group on subtitling, most open-source tools are embracing the subtitling standards, especially the IMSC1 profile, which is the one gaining traction at the moment. Another subtitling profile, SMPTE-TT, is basically being dropped by the industry.
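To make the subtitling discussion a bit more concrete, here is a rough sketch of what a TTML document looks like, built with Python's standard library. This is an illustrative skeleton I am adding, not a conformant IMSC1 file: a real IMSC1 document carries profile signaling and styling constraints defined by the spec, and the helper name `make_ttml` is made up for this example.

```python
# Minimal TTML skeleton: a <tt> root in the TTML namespace, with timed
# <p> cues inside <body>/<div>. Illustration only, not full IMSC1.
import xml.etree.ElementTree as ET

TTML_NS = "http://www.w3.org/ns/ttml"

def make_ttml(cues):
    """Build a bare-bones TTML document from (begin, end, text) cues."""
    ET.register_namespace("", TTML_NS)  # serialize without a namespace prefix
    tt = ET.Element(f"{{{TTML_NS}}}tt", {"xml:lang": "en"})
    body = ET.SubElement(tt, f"{{{TTML_NS}}}body")
    div = ET.SubElement(body, f"{{{TTML_NS}}}div")
    for begin, end, text in cues:
        p = ET.SubElement(div, f"{{{TTML_NS}}}p", {"begin": begin, "end": end})
        p.text = text
    return ET.tostring(tt, encoding="unicode")

print(make_ttml([("00:00:01.000", "00:00:03.000", "Hello, open media!")]))
```

The times are TTML clock values; a real workflow would add styling, regions and the IMSC1 profile designator on top of this shape.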
So we may see more and more of the IMSC1 profile in the coming years. If you would like more details on the tools that support the latest subtitling profiles, just go through the presentation; it has a list of all of them. I tried to capture them here, but I don't think I got all of them.

Third presentation: Trace TV, a very interesting use case about taking different building blocks to build a transcoder. The key point was that the transcoder was built to serve a growing need among the company's employees to transcode content: not only content producers, but also community managers who have to transcode content for their web platforms and so forth. And you can correct me if I'm mistaken, but what was interesting is that they have now reached quite a good maturity level and want to share the application, putting it on GitHub and improving the ability to reproduce this use case elsewhere. From what I captured, it is based on FFmpeg plus a set of other open-source tools: Node.js, jQuery and so forth. So we look forward to receiving an email saying the project is on GitHub, ready to go and to be deployed in other organizations.

GStreamer. As we said, GStreamer is one of these pipeline-based multimedia frameworks, and we had another one presented today, Upipe. GStreamer is a bit older than Upipe, though Upipe is gaining traction today. GStreamer covers a large spectrum of codecs. There has been a bit of confusion around it, with people saying GStreamer is a codec, or a transcoder, and so forth.
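Since that confusion came up, a toy sketch of the pipeline idea may help. This is plain Python I am adding for illustration, not the actual GStreamer API: it only mimics how a pipeline framework chains elements, the way `gst-launch-1.0` chains them with `!`.

```python
# Conceptual pipeline: each element consumes the previous element's
# output, analogous to something like:
#   gst-launch-1.0 videotestsrc ! x264enc ! mp4mux ! filesink location=out.mp4
# (NOT GStreamer code; just the chaining idea in plain Python.)
def pipeline(*elements):
    def run(data):
        for element in elements:
            data = element(data)
        return data
    return run

# Toy "elements": a decoder stage and a transform stage.
decode = lambda buf: buf.decode("ascii")
upper = lambda s: s.upper()

run = pipeline(decode, upper)
print(run(b"gstreamer is a framework, not a codec"))
```

The point is that codecs, muxers and sinks are interchangeable elements inside the framework, which is why calling the framework itself a codec misses the picture.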
GStreamer is more of a framework: you can use it to deliver many different use cases, and part of their outreach is about removing this confusion. In terms of support, it handles several codecs, up to at least H.265, plus WebRTC, and the next step they want to support is SDI over IP. GStreamer has mainly been used for non-professional broadcast applications. In my opinion, the difference between Upipe and GStreamer is that Upipe is more driven by, and used by, professional broadcasters. There are a few cases where you find GStreamer in professional broadcast applications, but it is not that well established in the professional domain.

Moving quickly to GPAC: an interesting use case where they showed how they encode 360 video. The goal was to present their approach to increasing bandwidth efficiency. They tile the 360-degree picture, and they stream to your mobile phone, or whatever device you have, the tile you are currently viewing in higher quality. So as you move across the 360 video, you see the quality refresh as you go from one view to another, because the view you are entering is first streamed at a lower quality and then increased to a better one. This tiling mechanism is supported by MP4Box, which was the main message of the presentation, and you can already try it out. They were also lobbying for different quality strategies, because you can use different tiling models to improve the compression or bandwidth efficiency, and at the moment there is no specific recommendation for any one tiling model.
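The viewport-dependent idea can be sketched in a few lines. This is a toy model I am adding for illustration, not GPAC code: the grid, the "high"/"low" labels and the function name are made up; the real work of splitting the HEVC bitstream into tile tracks is what MP4Box handles.

```python
# Toy viewport-dependent tiling: tiles under the current viewport get
# the high-quality stream, everything else gets the low-quality one.
def tile_qualities(grid_w, grid_h, viewport_tiles):
    """Return a quality label for every (x, y) tile of a grid_w x grid_h grid."""
    return {
        (x, y): "high" if (x, y) in viewport_tiles else "low"
        for y in range(grid_h)
        for x in range(grid_w)
    }

# A 4x2 tile grid where the viewer currently looks at tiles (1,0) and (2,0):
quality_map = tile_qualities(4, 2, {(1, 0), (2, 0)})
high = sum(1 for q in quality_map.values() if q == "high")
print(f"{high} of {len(quality_map)} tiles streamed in high quality")
```

As the viewport moves, the set of high-quality tiles changes, which is exactly the quality refresh you see when turning your head in the 360 video.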
Facebook previously had a splicing model, the so-called pyramid model, but they abandoned it and do not use it anymore, so we are slowly converging. A few tiling models are now available, but we don't yet have a clear view of which one is optimal for 360 video streaming. At the moment, MP4Box implements four tiling models.

Upipe, as I said, is roughly new compared to GStreamer, has some traction, is mainly driven by the professional broadcast domain, and is really a framework to watch. I'm very glad to see that it supports 10-bit and some very professional formats, like v210; it's difficult to find a framework that supports that. It also supports EBU R128 loudness. These are frameworks that could easily be integrated in a professional environment, and it's nice to see projects delivering these capabilities.

MediaInfo: a very nice presentation, with lots of tools. I only knew MediaInfo itself, the metadata extraction tool, but the media conformance checker was also introduced, as well as the baseband quality-control tool, the QC tool. I wasn't entirely sure whether the baseband tool was developed by the same team; that's what I captured, at least. It was interesting to see how, starting from a metadata extraction tool, they extended their set of products from conformance checking down to baseband quality assessment with different metrics: PSNR, SSIM and so forth. One question that came up was whether MediaInfo would be interested in broadening the set of metrics supported by the baseband tool, and that was noted as a future prospect. MediaInfo is widely used in the professional domain, and it's a very, very nice tool.
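Since PSNR came up as one of the baseband metrics, here is the textbook formula in a few lines of Python, just to pin down what such tools compute. This is my own illustration: real QC tools run this per frame over decoded video planes, not over toy sample lists.

```python
# PSNR = 10 * log10(peak^2 / MSE), in dB; higher means closer to the reference.
import math

def psnr(ref, dist, peak=255):
    """Peak signal-to-noise ratio between two equal-length sample
    sequences (e.g. 8-bit pixel values, hence the default peak of 255)."""
    mse = sum((r - d) ** 2 for r, d in zip(ref, dist)) / len(ref)
    if mse == 0:
        return float("inf")  # identical signals
    return 10 * math.log10(peak * peak / mse)

# One sample off by one code value out of four:
print(round(psnr([0, 0, 255, 255], [0, 0, 254, 255]), 2))  # prints 54.15
```

SSIM is a different, perceptually motivated computation, but it slots into the same per-frame comparison loop.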
Kaitai Struct is a parser generator for binary file formats. The point is to create parsers that can easily read binary files and reduce the complexity of digging through the documentation of different file formats. An interesting project; a workshop might be organized in the coming weeks, so if you want more information, send a quick tweet to @kaitai_io.

Then the codec: an interesting codec from the Alliance for Open Media (again, you can put the name the way you want). The Alliance is a consortium of big internet companies, Netflix, Amazon and so forth are in, and it came up partly as a reaction to the licensing issues around HEVC/H.265. So it's interesting to see a new codec coming. What came out of the presentation is the challenge of creating a royalty-free codec: they have almost 50 coding tools being assessed to improve the compression quality, but one of the issues is finding out whether there are undisclosed IPR claims on those tools. It was said that the bitstream might be frozen in Q4 2017, but there were a couple of earlier target dates that were postponed, so we may have a new royalty-free codec in Q4 2017, but we'll see. One slightly off-track quote from the presenter was that this codec, AV1, is already more performant than H.265, which I would like to see for myself. If that holds, it's really great, and it will be a real challenger if it is royalty-free. Something to look forward to.

OSS-Fuzz, presented by Max Moroz: basically a platform to stress-test your APIs and your projects. In a nutshell, they have OSS-Fuzz, fuzzing as a service.
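To show what fuzzing means in practice, here is a deliberately naive sketch I am adding: random inputs thrown at a toy parser with a planted bounds bug. This only illustrates the idea; OSS-Fuzz itself runs coverage-guided fuzzers (libFuzzer, AFL) against real build targets, which this plain random loop is not.

```python
# Naive random fuzzing: generate short random byte strings, feed them to
# a parser, and record every input that makes it crash.
import random

def fragile_parser(buf: bytes) -> int:
    """Toy length-prefixed parser with a planted bug: it indexes the
    buffer with the declared length without checking the bounds."""
    if not buf:
        return 0
    length = buf[0]
    return buf[length]  # BUG: IndexError when length >= len(buf)

def fuzz(target, runs=2000, seed=1234):
    """Throw short random byte strings at target and collect the crashes."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, type(exc).__name__))
    return crashes

found = fuzz(fragile_parser)
print(f"{len(found)} crashing inputs found out of 2000 runs")
```

A coverage-guided fuzzer does the same loop but mutates inputs toward unexplored code paths, which is why it finds deep bugs this blind version would miss.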
So if you want to embrace this technique of stress-testing your application, you are welcome to join the project and access the 6,000 cores they have available to fuzz your project.

AES67 and AES70: a presentation on these standards from our colleague, who is still in the room. It showed that AES67 is a concatenation of already existing open standards, used to improve interoperability for audio-over-IP transmission. One key mention, since we are in an open-source environment: AES67 is already supported by the two frameworks we saw today, except for the connection management part. So we look forward to seeing it fully supported in your frameworks, if possible; that would be nice.

And that's roughly it for the summary. What I can tell you is that it's always a pleasure to organize these dev rooms, but we need contributors for presentations. We will be here next year, the same team, if OpenHeadend and Open Broadcast Systems accept to be with us again. Exactly, give us good ratings! We try to maintain this track because five years ago we saw it was really missing, and as you can see, we tend to have a nice audience. So please contribute: bring new presentations and even ideas for new topics that should be addressed. See you in 2018, maybe, or in September at IBC for those who go. Thank you very much.