I don't have the microphone, if you don't mind. This man needs no introduction, but because he has done the vast majority of the work for the FOSDEM Open Media devroom, in conjunction with Adi and Fran and myself, I'd like to welcome him here on the floor. Thank you. We'll go in and out, and you guys enjoy it. So this is Christophe Massiot, and he'll be talking about Upipe, a very interesting project that we work on as well.

So thank you, Kieran. Maybe I should have changed my t-shirt, because I'm not the devroom person now, I'm the company person now, but they told me not to do that in front of the camera, so it was too dangerous.

So, a new talk about Upipe. We already did one last year and the year before, but it will be on a different topic this time. The first question, for people who weren't there in the previous years, is: what is Upipe? Upipe is a young multimedia framework written in C. We started in 2012, so that means we're quite young. Not as young now, but still quite young compared to similar projects that were born in the 1990s. That means we have, of course, a lot fewer modules and less support, but we have also been able to make some educated choices about the technologies of today. Upipe was initiated by my company, OpenHeadend, and some of my employees, but there are now three supporting companies, including Kieran's as well, who use it, and last time I counted we had seven contributors.

The focus of Upipe is a bit different from what you can see with VLC or maybe GStreamer. We focus on reliability, efficiency and, above all, compliance. That means we don't aim at playing any kind of file that's on the internet: if it doesn't comply with the standard it's supposed to comply with, we will say no. We also focus on broadcast and professional applications. Actually, most of the use cases I have today are from the real broadcast world, with real customers in the broadcast domain.

The license of Upipe is MIT and LGPL, so it's a mix of both. Some modules are MIT: the core is MIT, the headers are MIT, the main pipes are MIT. That is what you need to build a pipeline, basically: duplication, sources, file source, file sink, UDP, and so on. All of that is MIT. We have some modules that are LGPL; that's mainly the code that deals with the codec formats, what we call the framers in our lingo. And some support libraries are also LGPL, like libavcodec, so the binding code for those is LGPL as well.

For those of you who were there the previous years, what's new? Too long, didn't read: we've had a lot of work on the event loop API to make it more complete, so you no longer need to call the event loop directly from your program, and it's also easy to launch pthread workers. We have a nice utility to dump a pipeline in Graphviz format; some of the slides we will see later have been automatically generated from Upipe using Graphviz. You'll see that it's quite nice, and quite verbose as well. The LuaJIT bindings, which I talked about last year, have finally been merged in, so you can now write a Upipe client using just Lua. Lua is interesting for that because it abstracts all the reference counting of the structures. Most of our structures are reference counted; in Lua, the runtime takes care of the reference counts and automatically frees the structures you no longer use, whereas in C you have to do it by hand. So that's interesting.

On the feature side, we've had people contributing; the third company contributed an HLS client.
Some other frameworks also have HLS clients, but we now have one where you can choose the bandwidth or the variant you want, and you can remultiplex it, repurpose it into a different format. I will show later an example where we repurpose HLS into a transport stream, a standard transport stream over UDP. We've had some work on H.265 as well, so we are able to decode and support it. VANC support in SDI: if you don't know what that means, don't care; if you do, too bad for you. And, well, I wanted to release prerelease number five, but the work is not finished yet; we'll probably do that in the next few weeks, and after that, 1.0, we hope.

Unlike previous years, I'm not here to give you an extensive talk on the insides of Upipe and why it's a good pipeline framework. I will give you examples of what you can do with Upipe. I will start with an inventory of all the modules we have for inputs, outputs, filters, and so on, and then I'll talk about typical use cases that are in production today.

The inputs we have: some of them are quite standard. Of course, in the broadcast world we still have specific connections like SDI and ASI, and we support two vendors for that. Natively, inside Upipe, we support of course the file, UDP, HTTP and so on protocols. We have a compliant TS demux; compliance is important for us, as I said. We also have an RTP demux, to be able to read one or several RTP streams for the same program. The HLS client I already talked about. There is also the multicat directory format; it's a bit of a specific format that allows you to record 24/7 streams in chunks and expire old chunks, and we support that with Upipe too. It comes from a program called multicat that is part of the VideoLAN project. Externally, we support libavformat for sources. It's actually a work in progress; some of the formats may not work at the moment, because there are still adaptations to do.

As for the outputs, again we have hardware outputs for SDI and ASI that have not been merged at the moment, but they exist somewhere on GitHub; if you look for them, you can ask us and we'll tell you where they are. Natively, we support, as usual, file, UDP and RTP. We also support a compliant TS mux. That's one of a kind; it's probably the thing that differentiates us most from other projects, in that we have a compliant TS mux that outputs streams that professional analyzers look at and say are okay. So that's quite a good thing. The multicat directory again. And we can use external libraries as well: libavformat, and this time it works; in production I have streams going to RTMP, Icecast and other formats using libavformat, plus GLX and ALSA outputs. And we were talking yesterday about Wayland; the speaker there is also working on a Wayland output.

So inputs and outputs are okay, but what can you do in between? First, the filters that we have. Internally, natively, we support the standard deinterlacer. Blending: blending means taking a picture and blending it on top of your video, like a logo, with or without transparency; blending also allows us to do mosaics, as I will show you later. Crop. We also have the V210 pack and unpack assembly-optimized functions. If you don't know what that is, good for you: it's a kind of format output by some SDI cards for 10-bit video, and it's more compact than the traditional representation on 16 bits, but it's also much harder to read. External libraries, of course, are very useful for filters.
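As a quick aside on the V210 format just mentioned: each little-endian 32-bit word carries three 10-bit components, so a group of six 4:2:2 pixels (six luma plus three Cb and three Cr samples) fits in sixteen bytes. The sketch below is a naive, unoptimized illustration of unpacking one such group in plain C; the real Upipe filters use hand-written assembly, and the function name here is invented for the example, not part of the Upipe API.

```c
#include <stdint.h>

/* Naive (non-SIMD) unpacking of one v210 group: 4 little-endian 32-bit
 * words (16 bytes) -> 6 luma and 3+3 chroma samples, each 10 bits wide,
 * stored in 16-bit integers. The caller is assumed to have loaded the
 * words in native little-endian order. */
static void v210_unpack_group(const uint32_t w[4],
                              uint16_t y[6], uint16_t cb[3], uint16_t cr[3])
{
    cb[0] =  w[0]        & 0x3ff; y[0]  = (w[0] >> 10) & 0x3ff; cr[0] = (w[0] >> 20) & 0x3ff;
    y[1]  =  w[1]        & 0x3ff; cb[1] = (w[1] >> 10) & 0x3ff; y[2]  = (w[1] >> 20) & 0x3ff;
    cr[1] =  w[2]        & 0x3ff; y[3]  = (w[2] >> 10) & 0x3ff; cb[2] = (w[2] >> 20) & 0x3ff;
    y[4]  =  w[3]        & 0x3ff; cr[2] = (w[3] >> 10) & 0x3ff; y[5]  = (w[3] >> 20) & 0x3ff;
}
```

Packing is the mirror operation, shifting the three components back into each 32-bit word, which is why the optimized assembly versions matter when you are doing this for every pixel of an HD stream.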
Of course, libavcodec provides most of our decoders and encoders. x264 is also an encoder we use a lot, and the traditional software resampler as well. SpeexDSP is a recent addition that allows us to do resampling without changing the pitch; that's much nicer in the broadcast world, where you have to compensate for drift without hearing it, and thanks to SpeexDSP you don't. And we also have bindings to libzvbi, but just for the American closed-captioning system.

Another important piece of our project is what we call the framers. The framers are analogous to what libavcodec calls parsers: they are basically the place where the knowledge about the codecs lives. They are pieces of code that parse the bitstream and tell you: this is a frame, this is the start of a frame, this stream is 25 fps, and they interpolate the PTS from frame to frame, and so on. But in addition to that, so it's not just a parser, a framer also acts as what libavcodec calls a bitstream filter, which allows you to transform one stream format into another. For instance with H.264, if you want to put H.264 into TS, you use Annex B, that is, you put a start code in front of your structures. If you want to put H.264 into MP4, it's a different format: there are no start codes anymore, it's based on sizes. This transformation is dealt with by the framers in our project. We also have an interesting mechanism in which all of this is performed automatically, without you even knowing it: basically a pipe talks to the previous pipes and says, I need Annex B, and the framer will say, I don't receive Annex B, so I will convert automatically. We have support for a few video formats, MPEG-2, H.264 and now H.265, which is a recent addition, a lot of audio formats, and subtitling systems, so teletext and DVB subtitles.

Why would you use Upipe for the broadcast world? Well, we have several assets, in addition to those I already mentioned. Our clock system is actually one of the major points. As we said yesterday at the meet-up, in Upipe we actually keep three clocks for each packet. There is the original timestamp that was in the stream. There is a reconstructed timestamp that is always monotonically increasing; the demux makes sure of that; that's what we call the program timestamp, and it's basically the clock of the encoder, of the guy who gives you the stream. And we have the system timestamp, which is based on the clock of your hardware, your machine; typically for display you would use the system timestamp. Most projects only keep the system timestamp, but that means that if you have a drift, sometimes you have a shorter or a longer delay between frames, when you should normally have exactly 40 milliseconds between them, and that's a problem for some codecs. That's why we keep all of those clocks. Also, I said the system timestamp was the system time; that's not completely true. Usually it's gettimeofday or clock_gettime, but there are also use cases where it's interesting to use a hardware clock, like the one on an ASI or SDI card, and that's a use case we have in reality. So we can replace all of these with any other hardware clock if you want.

In Upipe, everything is dynamic. The model of Upipe is the transport stream, and in a transport stream a new elementary stream can appear and another stream can be removed at any time, so you can have subtitles arriving all of a sudden.
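As an aside on the framer and bitstream-filter role described above, here is a simplified sketch in plain C of the Annex B to length-prefixed conversion used when going from TS to MP4. This is only an illustration of the general technique, not Upipe's framer code: the helper name is invented, and a real implementation also has to deal with trailing zero padding, parameter-set extraction and similar details.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Illustrative sketch: rewrite an H.264 Annex B buffer (start-code
 * delimited NAL units) into the length-prefixed layout used in MP4.
 * Returns the number of bytes written to dst, which must be large
 * enough (worst case: src_size plus one extra byte per NAL unit). */
static size_t annexb_to_length_prefixed(const uint8_t *src, size_t src_size,
                                        uint8_t *dst)
{
    size_t i = 0, out = 0;

    /* find the first start code (00 00 01, possibly preceded by 00) */
    while (i + 3 <= src_size &&
           !(src[i] == 0 && src[i + 1] == 0 && src[i + 2] == 1))
        i++;

    while (i + 3 <= src_size) {
        size_t nal_start = i + 3;          /* byte after the 00 00 01 */
        size_t j = nal_start;

        /* find the next start code or the end of the buffer */
        while (j + 3 <= src_size &&
               !(src[j] == 0 && src[j + 1] == 0 && src[j + 2] == 1))
            j++;
        size_t nal_end = (j + 3 <= src_size) ? j : src_size;

        /* a 4-byte start code shows up as 00 00 01 preceded by one 00,
         * so strip that extra zero from the current NAL unit */
        if (nal_end > nal_start && src[nal_end - 1] == 0)
            nal_end--;

        uint32_t len = (uint32_t)(nal_end - nal_start);
        dst[out++] = len >> 24; dst[out++] = len >> 16;
        dst[out++] = len >> 8;  dst[out++] = len;
        memcpy(dst + out, src + nal_start, len);
        out += len;

        i = j;
    }
    return out;
}
```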
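And as a hypothetical illustration of the three-clock idea: the structure and function names below are invented for the example, not Upipe's actual data structures, and the sketch assumes a 27 MHz tick, which is the system clock frequency MPEG transport streams use.

```c
#include <stdint.h>
#include <time.h>

/* Hypothetical illustration: every buffer carries three timestamps, so
 * each downstream pipe can pick the one that matches its needs. */
struct frame_dates {
    uint64_t date_orig; /* timestamp as written in the input stream        */
    uint64_t date_prog; /* demux-reconstructed, monotonically increasing   */
    uint64_t date_sys;  /* mapped onto the local system (or card) clock    */
};

/* The system date typically comes from clock_gettime (or an SDI/ASI
 * card clock), converted here to 27 MHz ticks. */
static uint64_t now_27mhz(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 27000000 + (uint64_t)ts.tv_nsec * 27 / 1000;
}
```

In this sketch a demux would fill date_orig from the stream, derive a monotonic date_prog from it, and map that onto date_sys using whichever clock source is configured: the system clock by default, or a hardware card clock when available.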
So everything has been built in Upipe to allow for the automatic forking of a new decoder, a new parser and so on, if needed, and if the user wants it, of course. The framework allows that.

We also have efficient threading. You decide where you put the threads, not our framework. If you have a pipe, let's say avcodec, that you want to deport to another thread, you just create what we call a worker, and it will move to another thread. If you don't want that for some reason, for instance because you need very low latency, you can work with no threads at all; the framework allows that, as long as your core is fast enough to do all of it, of course. We also have shared buffers with copy-on-write and zero-copy semantics. You see that more and more now, but we've had it since 2012. And the Lua bindings, which I already talked about earlier.

So that was it for the assets and the inventory; let's see a few real-world examples. The first example, and I'm not sure you will see it properly, but let's try, is some kind of player. It can be a player on your PC, but it can also be an IRD, an Integrated Receiver Decoder, like in the professional world. These graphs have been made by the upipe_dump API I talked about earlier, so this is actually everything that is spawned when you start a Upipe pipeline. On top of it you have the source; actually it's a worker source, so it's in a different thread for performance reasons, and we read, in that case, from a file. It can also be a FIFO. Then comes the TS demux. The TS demux, in the main thread here, is the main pipe, the super pipe as we call it, which has a son, the program, and the program has two other sons, the audio and the video. Inside the TS demux there are a lot of subpipes that do a lot of things, like the TS decaps, the PES decaps, the framers and so on. On the output you have your frames, and in two different workers, for audio and video, you decode them with avcodec. Here is the part of the pipeline that deals with subtitles, blending subtitles if you have some; in our case we don't. ffmt is something that uses swscale and the deinterlacer to deinterlace and, well, put the picture in the correct format; it's RGB in that case, because we are using GLX here. The trick-play and play pipes are used for synchronization against the clocks, and the audio is also on its own thread; that's why it's a worker. I'm not sure you can see it properly, I can't see it from this distance, but I hope so.

Let's take a slightly more complicated example, a program that does a TS remux. We have a TS at the source, so again a worker source here, a UDP source this time, with RTP decaps, and we want to output to another UDP address and re-multiplex in between. We could also transcode, but that would add more complexity to the graph, and the graph is already barely readable, so let's try to keep it simple. Again, after the source, we enter the TS demux. In that case we only have one elementary stream; I chose to keep only one video, because otherwise it complicates things a lot. Inside the TS demux there is the split pipe, which allows you to select which PID you want, and we also have decoders for the PAT and the PMT, which are internal structures of the TS. Then you have the TS mux, which is quite symmetric compared to the TS demux, with its program and, well, its input elementary streams.
The output of the TS mux is then sent to another worker thread that adds an RTP header and outputs it over UDP.

The next use case, because I'm running out of time, is a quite nice application we use in my company for recording. I actually simplified it a lot, because otherwise it would be too big to display on the screen, but basically we have an application where we receive a stream over UDP, UDP/RTP again. We record it immediately as TS, but at the same time we demux and decode it to get some frames, the key frames, for thumbnails, and we create JPEGs from them. So here you see a dup pipe: one branch is written directly to disk, and the other is sent to the TS demux again, which you're used to by now. At the output of the TS demux we have a bit more pipeline. Between the arrows you can see the flow types: avcodec, then deinterlace, because for JPEG you want deinterlaced frames, and this pipeline here is ad hoc: it creates thumbnails out of the frames we've selected, encodes them with an avcodec encoder, for JPEG in this case, and writes them with a file sink to a file.

Okay, one last use case, but this time I won't show you a graph, because it would be enormous: a mosaic. This is, for instance, a mosaic we have at work, so it's a real-life example. Basically, all of this is done with Upipe, using a pipeline per input, outputting to blend functions, to blend pipes, that blend each of the frames onto a single picture, and normally it's also live. The same is done for the audio; that's a nice contribution by OBS: the meter moves with your audio level. All of this is done with a Upipe pipeline.

Other use cases that we have in production that I did not have time to talk about here: an IRD, of course; we have a company using that from TS, and we could also imagine it from HLS. A live encoder or live transcoder, to TS, or to RTMP, Icecast, anything that libavformat supports, basically. A file transcoder; that's something we have in our company as well, from TS usually to MP4, which is a typical use case for us. We also have an MPTS mux; that's a product from my company as well, based entirely on Upipe and the TS mux you've seen earlier. It would be quite a large workflow to show, but it works with Upipe. And also something we actually demonstrated at IBC last year: a cloud playout system with overlays. We just play out files, decode them, and add a logo that moves, a banner, a picture inside the picture, picture-in-picture and so on. I think I'm right on time for some questions, Kieran.

I just wanted to ask: you mentioned HTTP, is it compatible with HTTPS? So you're saying I talked about HTTP, but is it compatible with HTTPS? At the moment, I don't think so, because the company that provided it doesn't use it.

All right, and that third party works with you? Yes, it's a nameless network operator, I'm not sure we can say who it is, who contributed that part to us. They did HTTP, and they also did AES descrambling, for when the stream is scrambled.

The AES support, is it available? Yes, yes, it's already merged in. It also supports having the audio either muxed with the video or as a separate file, a variant, I don't know how you call it, in your M3U8.

So, which MPEG-TS standard do we comply with? First, the ISO one, and what I was actually referring to is T-STD compliance: the timing of each packet has very strong constraints in the TS specification.
You can't send a packet too early, you can't send it too late either, you can't burst, and so on; all of that is specified in the specification, and it's actually very difficult to understand; it took me years. So that's what I was referring to. We also comply with DVB, and we have decoders for the SDT and the NIT and so on, the usual things; that's not specific to our project, but yes.

The use cases, yes: have we published the code for them? So, have we published the code for the use cases? Some of them are online. The player is basically an example that we call uplay, in the examples directory of the Upipe repository, so if you want to see our code, it's on GitHub. The transcoder, the TS remux, is also there; it's actually a test unit called upipe_ts_test that we use to check that our demux and mux don't change behavior from release to release. The third one, the recording application, is not published yet; probably at some point we will publish it, but not so far.

One also very important thing: if you wish to contact us, we have a website, upipe.org of course, we have our GitHub, and if you wish to talk to us, the best is to come to our IRC channel, #upipe on Freenode. We also have a mailing list, but usually when I say that, Kieran laughs.

What kind of subtitle standards do we support in Upipe, is that it? For VANC? Ah, for VANC, sorry. So, for those of you who don't know, VANC, the vertical ancillary data, is a part of the picture on the SDI interface, in the raw video, that embeds some structures: basically OP-47, so that means teletext, basically. We take that and we turn it into a teletext packet. And all the other things we do: all the American SD and all the European SD, and the pass-through of the US VANC as well, so captions and subtitles. I'm not sure it's in VANC, in SD it's VBI, right? Well, in SD it's VBI, yes. I was specifically referring to VANC because I worked on it last summer. The other thing we support is SCTE-104; that's not for subtitling, but it's used to signal the timing of the beginning of a show, or the end of a show, or splicing, and so on.

Very quickly, how about DVB sub? So, how about DVB sub? The thing is, you won't find DVB sub in SDI, because as far as I know there is no standard to embed it in SDI. From a TS to a TS, as a PID? From a TS to a TS, we of course demultiplex it and re-multiplex it. I think you have code that allows you to put DVB sub on screen. That's actually for teletext. Not just for teletext: it's a bitmap, it should work. Actually it's never been tested, but it should work. If your question was, do we support transcoding teletext to DVB sub at the moment: no, but this is something we're seriously thinking about. Okay.