So we're going to start the next presentation. It's yet another multimedia framework, called Upipe, a project I know a bit about. But this year, Rafaël has agreed to do the presentation. So please give a warm welcome to Rafaël Carré.

Hello. Thank you. So, a little bit about myself: I started contributing to VLC in 2006, and I joined Upipe three years ago.

So what is Upipe? It's a GStreamer for broadcast. So the big question is: what is broadcast, exactly? I've worked for six years in broadcast, and I still don't know exactly what it is. If I had to guess, it's old-school TV, going back to the first black-and-white sets: the distribution of video or radio to millions or billions of people. So what do we need in broadcast? We need high quality of service, 24 hours a day. And good quality, because people watch TV on big screens. It's not like on a phone, where you can get away with a 144p video; the quality needs to be very good.

As Christophe said, he created the project around 2012, about 15 years after creating VLC. So all the mistakes that were made in VLC, which we still suffer from today, are gone. Upipe is perfect.

Upipe is a pipeline, and of course it comes with all the modules, the plugins, which you can connect together to do anything you want. And it's fast. It uses an event model: when you're not receiving anything, the pipeline is waiting on a file descriptor, so it's not using any CPU, and as soon as something comes in, you're woken up by a poll, so you can handle it immediately. On Linux, we use libev. On Windows you could use the Enlightenment one, but Windows support is still not there; there was some work done, but the person doing it left. So if you know about Windows and need a broadcast framework, come and we'll help you add those parts.

It's a bit weird for something modern to use no threads, but actually that's a lie. Each module runs on the main thread, so you don't have synchronization problems in your pipeline. But what you can do is deport one module, or part of the pipeline, to another thread, and we already have synchronization using lockless structures and atomics where needed. So you can deport part of your pipeline and get a thread for free. But from the point of view of the pipeline, you don't see the thread: everything runs in sync, lock-free.

Of course it's dynamic, so each module can add its own requirements, and so on. For example, if your hardware video decoder needs a specific stride, buffers need to be aligned to that many bits to be able to perform SIMD, or of course zero copy: the pipeline can write directly into the final buffer at every step. The pipes themselves are really dumb. They just take something in and push something out, and give you an interface for configuration, but all the logic is done in the application. You just use the stupid pipes and you tell them what to do.

As I said, it's broadcast, so it needs to be very reliable, because you don't want to be sitting in front of the TV watching the news when it suddenly goes to a black screen for two seconds and then comes back, because something crashed in the data center. So we have a test suite, which is not complete, but we are working on making it better. We use Valgrind and ASan, which was mentioned earlier, and we are fuzzing individual pipes with AFL, especially the framers, since they receive untrusted input.

So, what's new? Since last FOSDEM — I think we had a release around last FOSDEM — and another one is coming today.
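To make that event model concrete, here is a minimal sketch, in plain C, of a source sleeping on a file descriptor with libev (the library Upipe uses on Linux) and only waking up when data arrives. This is not Upipe's actual API; the watcher and callback names are made up for illustration.

```c
#include <ev.h>
#include <stdio.h>
#include <unistd.h>

/* Called by the event loop only when the fd becomes readable. */
static void on_readable(EV_P_ ev_io *w, int revents)
{
    (void)revents;
    char buf[1500];
    ssize_t n = read(w->fd, buf, sizeof(buf));
    if (n > 0)
        printf("got %zd bytes, pass them down the pipeline\n", n);
}

int main(void)
{
    struct ev_loop *loop = EV_DEFAULT;
    ev_io watcher;

    /* Watch stdin (fd 0) here; a real source would watch a socket or
     * device fd.  No CPU is used while nothing arrives. */
    ev_io_init(&watcher, on_readable, 0 /* fd */, EV_READ);
    ev_io_start(loop, &watcher);

    ev_run(loop, 0);   /* blocks in poll/epoll until an event fires */
    return 0;
}
```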
So, 10 contributors. Not many new people, but hopefully you will be interested in joining us. About one commit per day, blah, blah, blah, some stats. The pipeline itself didn't see many changes. There were some improvements, but no bug fixes. This is cool, because it means that the pipeline itself is, as I said, perfect. Oh, don't laugh, it's true. But what we did is create many new modules, and I'll talk about them.

The first one, maybe the biggest, is the DeckLink sink, for the Blackmagic cards which do SDI output. It was a lot of work because, if you don't know these cards, the manufacturer provides an SDK which is cross-platform, since it runs on Linux, Windows, and Mac. But the problem is that it's proprietary, both the kernel driver and the user-space library, and it's crap. You get many, many synchronization problems: if you add a reference clock, the signal will drop four seconds later, or something like that. So you have to test and guess what's wrong. You ask them, why doesn't your stuff work? Ah yes, we will fix it in the next release — but then they give you a new release which adds more bugs. So we had to work around many, many bugs. And now it's stable; it's actually working on your TV. So if you're watching TV, that module might be behind it.

We support teletext in the vertical ancillary space (VANC). Teletext is a technology from the old days; it's been there for 50 years. After the switch to digital TV they kept it, because the broadcast people had learned to work with the old technologies and didn't want to change, so we had to follow. Teletext is mostly used in Europe; in the US, it's closed captions, which we support too.

We also do SMPTE 337. What is it exactly? It's a method for transporting compressed audio over a PCM channel. It's all digital: you have a start code, a sequence of audio samples which would not make sense in real-world audio (in 16-bit mode, the two sync words are 0xF872 and 0x4E1F), and when you notice this start code in the audio, it means the PCM samples that follow are actually compressed data. So in a stereo track, two channels of PCM, you could transport compressed AC-3 with 5.1 channels, or Dolby E, a codec that is only used in broadcast. A decoder for it was added to FFmpeg last year; it's not well known. And in SMPTE 337 you can also transport PCM: so you have PCM, then the start code, then PCM behind it. It doesn't make much sense, but you can do it.

We also added an ASI sink. ASI is a bit like SDI: it's a format for transmitting video. We already had a source module to receive the ASI signal and forward it as a transport stream, for example. It's designed for MPEG-TS, because it uses the same 27 MHz clock.

We added DVB-CSA, which is the encryption system for satellite, for pay TV. You can do both sides, encryption and decryption. It's actually a fork: the original version came from VideoLAN, and it was forked and we added SIMD, actually a bit-sliced implementation. So it's very, very fast — I think a couple of gigabits per second on modern hardware.

We added FEC, for error correction, which follows the SMPTE standard. It's part of the SMPTE 2022 series, which also defines SDI over IP, but it's not specific to SDI over IP: you can add FEC to any RTP stream, and we can receive it. And you can choose the size of your matrix.
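As a sketch of how this kind of row/column FEC works: packets are arranged into an L-column by D-row matrix and one XOR parity packet is emitted per column, so any single loss in a column can be rebuilt. This is only the parity arithmetic under assumed packet and matrix layouts, not the real SMPTE 2022-1 wire format, which adds FEC headers to each parity packet.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PKT_SIZE 1316  /* 7 TS packets, a common RTP payload size */

/* XOR the D packets of one column (packets spaced L apart in send
 * order) into a parity packet. */
static void fec_column_parity(const uint8_t pkts[][PKT_SIZE],
                              size_t l, size_t d, size_t column,
                              uint8_t parity[PKT_SIZE])
{
    memset(parity, 0, PKT_SIZE);
    for (size_t row = 0; row < d; row++)
        for (size_t i = 0; i < PKT_SIZE; i++)
            parity[i] ^= pkts[row * l + column][i];
}

/* Recovery is the same operation: XOR the parity with the D-1 packets
 * of the column that did arrive, and the result is the missing packet. */
```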
And sometimes FEC is not enough: you have too many losses, so you can't reconstruct the matrix perfectly, and you're losing video. So we also added ARQ.

At last NAB, there was a new solution by Haivision called SRT, Secure Reliable Transport, and SRT is a mix of FEC and retransmission. But we chose to use the existing standard, although we don't know of any other implementation of that standard. So we can't test it in the real world except against ourselves, and if you test your software against your own software, it's always going to be compatible. So we are looking for another ARQ implementation. The mode is NACK-based: each time you lose a packet, you ask for retransmission, and that's why you have a longer latency than with FEC.

So really, both ways of protecting against packet loss are good, but they have their drawbacks; it depends on the kind of jitter or loss your network has. And since it's a pipeline, we can actually put one behind the other. But really we should merge them, so that each one can share a common internal state with the other. For example, if you can correct a packet with FEC, you shouldn't ask for retransmission, because it's going to make your bitrate higher and maybe cause more packet loss. We still haven't merged them, and actually we don't know exactly how to do it efficiently. So maybe we'll use either one or the other, but never together.

I was talking about SMPTE 337. We actually had an FFmpeg-like moment, where two people were working on competing implementations without telling each other. We actually committed both versions, but they are a little bit different: one is a parser, so it extracts only the payload and removes the SMPTE 337 header, which you might not need anymore. But in our case, since we are retransmitting on the SDI output, we need to keep the header. So they work a bit differently. Also, SMPTE 337 has 16-, 20-, and 24-bit modes; the parser only handles 16 bits and the other one 20 bits, but you could modify them if you needed to support another bit depth.

We added a FreeType pipe for text rendering. It's very basic, because we don't deal with positioning: if you render a p, for example, the descender of the p goes below the screen, so you don't see it. But since we only needed something simple, we write the text in all caps, and that works for us. So that's cool.

Blank source: what this does is generate an empty stream, nothing, so you can create your outputs and start transmitting nothing. That's useful if you want to switch programs: you start the output on an empty source, and then you replug your actual program into the output, which already exists, and the transition is smooth.

AV sync: it's a raw mixer. When you deal with SDI, each audio frame is mixed together with the video, and you have to send them as one packet, one block, so we need to synchronize them at the sample level. We can use a blank source for the video, but for the audio, we base ourselves on the output clock. And since the input and output clocks differ, you need to resample the audio so that every sample falls exactly where it should. We use the Speex resampler, because it can do fractional resampling, going up and down as you need, without hearing cracks — if you drop a single sample, you would hear a crack. So with this module, we have perfect-sounding output.

We added a V210 decoder, after the encoder. That's the SDI video format. And we made it fast — actually, James made it fast, with AVX2 assembly.
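For reference, this is what the v210 packing looks like: each little-endian 32-bit word carries three 10-bit components, and four words carry six 4:2:2 pixels. Below is a scalar sketch of the unpacking; the actual Upipe pipe does this with AVX2 assembly, and the function name here is illustrative.

```c
#include <stdint.h>

/* Unpack one v210 group: 4 words -> 12 ten-bit components,
 * in the order Cb0 Y0 Cr0 Y1 Cb1 Y2 Cr1 Y3 Cb2 Y4 Cr2 Y5. */
static void v210_unpack_group(const uint32_t w[4], uint16_t out[12])
{
    for (int i = 0; i < 4; i++) {
        out[i * 3 + 0] = w[i] & 0x3ff;          /* bits 0..9   */
        out[i * 3 + 1] = (w[i] >> 10) & 0x3ff;  /* bits 10..19 */
        out[i * 3 + 2] = (w[i] >> 20) & 0x3ff;  /* bits 20..29 */
    }
}
```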
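And going back to the fractional resampling used by the AV sync module described above: a minimal sketch of driving the Speex resampler that way, using its real API. The 10 ppm drift figure and the buffer sizes are invented for the example; a real pipe would derive the ratio from the measured input/output clock drift.

```c
#include <speex/speex_resampler.h>

int main(void)
{
    float in[480 * 2] = {0};          /* 10 ms of stereo at 48 kHz */
    float out[960 * 2];               /* room for the output */
    spx_uint32_t in_len = 480, out_len = 960;  /* per-channel counts */
    int err;

    SpeexResamplerState *st =
        speex_resampler_init(2 /* channels */, 48000, 48000,
                             SPEEX_RESAMPLER_QUALITY_DEFAULT, &err);
    if (err != RESAMPLER_ERR_SUCCESS)
        return 1;

    /* Say the measured drift shows the input clock runs 10 ppm fast:
     * express the ratio as a fraction (48000.48 -> 48000) instead of
     * dropping a sample, which would be audible as a crack. */
    speex_resampler_set_rate_frac(st, 4800048, 4800000, 48000, 48000);

    speex_resampler_process_interleaved_float(st, in, &in_len,
                                              out, &out_len);
    /* in_len/out_len now hold the samples actually consumed/produced. */

    speex_resampler_destroy(st);
    return 0;
}
```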
And yeah, we did more, smaller pipes. Zone plate is a test pattern which is used in TV, so instead of a blank source, which is black, we could use this one. x265 encoding, which is still slow. A grid module, which is used with blank sources: this is a seamless switchover. You create your output, which would be your TV channel, and you can switch the programs: you connect one source and then the other, and they flip in an instant, without artifacts. DTSDI as well: it's a file format used by the DekTec tools, with which you can do raw SDI captures. It's a simple format. There's also a new sound module, which was the first contribution of a new contributor to Upipe: you set the parameters that would normally come from the WAV header, which you don't have since it's raw PCM, and you get a PCM stream from a file.

What's coming? A DVB satellite source. SDI over IP, which is working on air now, but not yet committed: SMPTE 2022, the old standard, is working, and we are working on the new one, which was just released last year; it should be working by NAB, I hope. And we'll also work on subframe-latency encoders and decoders, so we can actually transmit the encoded video before the rest of the frame has arrived. So let's say you have a progressive scan on your camera: you can start encoding and transmitting the first lines before the scan is done. So that would be cool.

And of course, I hope that some of you will be interested in joining the project. We are open. We are a small project. We are cool — only nice people. So if you are not nice, don't come. Any questions?

So, are there any questions? I can pass the mic around. Yes?

It's more of a comment on the SRT thing. It doesn't do FEC. And one of its big features is that it tries to guess the available bandwidth of the link, so that you can inform your encoder.

Okay, I thought it was doing FEC as well.

No, they found that FEC is not useful on the internet — that it's more efficient to do retransmissions. So that's what they do.

Maybe it depends on the link. On a dedicated link, FEC would be useful.

Yeah. Also, the added latency is not that low, basically.

And I was talking about how we used the standard — when doing the implementation, I realized that you don't know, on the sending side, the bitrate at the receiving end of the retransmission. So we had to add a custom RTCP packet to send the bitrate back to the sender and to do some stats. So maybe they tested this RFC too and found that it wasn't usable for them.

Yeah, I know there's been a lot of research there. It's something that was developed as proprietary software for like five or six years, and they shipped before the open source did.

Okay. Good stuff. Someone else? Is there any other question?

I have a question.

I'm coming, I'm coming — sorry, you have to be on camera, otherwise people won't hear you.

They will be happy to hear me. I just wanted to ask if you're using the Dolby E decoder in your workflow.

No, because what we do is transmission, so we transmit the Dolby E as Dolby E. That's the point of using SMPTE 337: all the equipment in the chain understands this format, so we don't need to decode it to PCM. I mean, Dolby E would be, for example, six channels, but if we send it in compressed form in SMPTE 337, we can fit six channels in two. So decoding would not make sense, because we have no use for it.

So did you ever test it? I'm just curious, because I wasn't really able to test the decoder.

No, I didn't. Okay, thank you.
But I could share some samples, maybe.

Ah, that would be cool. Okay, just send them to me — my email was at the beginning of the presentation. You can ask me.

Hi. You were mentioning wanting to find other implementations — I mean, testing interoperability for retransmission. So there's GStreamer, which is the Upipe for everything else. No, but you can try it; I mean, essentially there's RTX support in it.

Okay, is it using the RFC?

Pretty much. The RFC that you mentioned is not an actual implementation of the retransmission; it's a whole bunch of recommendations on how you should behave.

Okay, I will have a look at the GStreamer retransmission. Thank you.

Any other question? No? Last chance. Well, thank you, Rafaël.

Thank you for your attention.