Hello everyone. I will be talking about a GStreamer update, more on the embedded side, so it's really part two of Tim's talk: Tim covers the general overview, and since one of GStreamer's key use cases is people making physical products, TVs and such, I'm going to talk a lot about those.

So who am I? My name is Olivier Crête. I've been working at Collabora for 11 years on the GStreamer stack and on Farstream, a GStreamer-based framework, a series of plugins to do video calls, which we shipped on a bunch of devices. That's less exciting now because everyone is doing the web thing. I've worked on a variety of multimedia and GStreamer-based products, from security cameras to TVs to cars to all kinds of things, so I've had a lot of hands-on practice over the last decade.

So what kind of embedded devices use GStreamer? A lot of people have built them, and many, many more people use them without ever hearing that they were using GStreamer. Some of the most common ones are TVs and set-top boxes. Probably the biggest GStreamer users on the planet are Samsung and LG TVs: all their smart TVs use GStreamer for everything that's not linear television, so any internet television on them probably goes through GStreamer. There are also a lot of set-top boxes from the big vendors. On the top left you have the Xfinity box from Comcast in the US; they're about the biggest cable company in the world, and they built their own open source stack for their set-top boxes instead of buying a proprietary one, and all the media tasks go through GStreamer. Basically, any time you watch TV on a Comcast set-top box in the US, you're using GStreamer. There's another one that's kind of cool on the bottom right: the folks from YouView, a UK company, also make a pretty cool set-top box, they're involved in the community, so thumbs up to them, and they use GStreamer too. Another one people might not know: when you watch a movie on a plane, it's quite likely to be using GStreamer, as both major in-flight entertainment vendors use GStreamer on Linux. And one that I think is really cool is on the space station: there's a little camera drone that flies around the space station, made by the Japanese space agency, JAXA, and it uses GStreamer for video. So we have GStreamer in space, in the air, underground; it's everywhere. In your car too, maybe, if you have the right car.

Now that you know where GStreamer is used, I'll give an overview of some of the new features that are relevant to embedded devices, while Tim covers the more generic ones.

One of the big ones is improved dmabuf support. dmabuf is a Linux kernel mechanism to share hardware-friendly buffers, which have some specific layout, between different subsystems of the kernel. Without it, you need either some kind of vendor-specific extension, or a memory copy for no good reason, just because the different Linux subsystems don't really talk to each other otherwise: they form little silos, and dmabuf lets you move the data between those silos in a zero-copy way. Especially now that we're doing 4K, and people are talking about 8K, you can't afford to copy all that memory on these devices, not even on the big ones. So that's really, really important.
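To make the zero-copy idea concrete, here is a minimal sketch of the kind of pipeline this enables, assuming a board whose H.264 decoder has an upstream V4L2 driver and whose display controller is supported by kmssink (the file name is a placeholder):

```sh
# Decode an H.264 file and display it full screen; with dmabuf support the
# decoded frames are shared with the display engine instead of being copied.
gst-launch-1.0 filesrc location=video.mp4 ! qtdemux ! h264parse ! \
    v4l2h264dec ! kmssink
```

Whether this actually runs without a copy depends on the drivers, but the point of the work described here is that the negotiation happens automatically when they cooperate.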
One of the big improvements: GStreamer has supported dmabuf for a while, but this is really about sharing data between all the components, so to make it work across the pipeline we've had to fix a bunch of little things everywhere.

One of the big improvements this year is the functionality we've added to the tee element. The tee element is what you use when something comes into the pipeline and you want to send the same thing to two different branches. Previously, this would almost always cause a copy, because it prevented both sides from negotiating the same memory layout. Now, with a fairly small amount of code, the allocation is negotiated the way it was intended, even though you send the same thing to two places. The typical example is a video call: you have a camera that captures, and you want to send the frames both to the encoder and to the display for preview. A very, very simple use case, but it didn't work with zero copy without hacks; now it just works out of the box. That's a really huge improvement.

Next is my mega-slide about Video4Linux. Video4Linux (V4L2) is the Linux kernel API that everyone knows as the one for webcams, but it's also the API that is recommended for implementing hardware encoders and decoders on Linux. GStreamer probably has the most complete support for this API: FFmpeg just had something merged recently, but as far as I know we're the only ones that really implement all of the features. Other implementations, like the one in Chrome OS, actually have one copy of the code for each different piece of hardware, with different quirks. We try to avoid that, and we tell the kernel driver maintainers to just fix their drivers and follow the API.

The big thing here is the support for different hardware. If you have a hardware codec on one of the boards with upstream support, things like the DragonBoard 410c, but also the upstream CODA driver for i.MX6, and a bunch of others, it actually just works out of the box with GStreamer and the upstream kernel, without needing any board-specific hacks.

Another thing we've changed since last year, a little thing, but one that has really improved everyone's lives, is that we now have stable element names. The way the plugin works is that once it starts, it looks at the hardware that's present and generates an element for each hardware block. Previously those elements would get numbered names that felt random to the user. Now, the first device we find of each kind gets a fixed name, so you can say: I know I have an H.264 decoder, I just use v4l2h264dec, bang, it works. You don't end up with something called dec23 or dec25 with no way to guess it. It means it just works with gst-launch.

Another big improvement is that we now default to dmabuf in the decoder. That means that if you go decoder-to-display, it's now zero-copy by default on most platforms. That's been a huge improvement: otherwise you had to change properties and twiddle things to make it work. It could work, and people shipped it, but it was a lot of work for nothing. Now we have all of the clean negotiation, so it should just work.
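Putting a couple of these pieces together, here is a rough sketch of the capture-once, encode-plus-preview tee use case from earlier, assuming a board where the camera, the H.264 encoder and the display are all exposed through upstream drivers (the device path, caps and file name are placeholders):

```sh
# Capture once, then encode to a file and show a live preview, sharing the
# same buffers on both branches; dmabuf is negotiated automatically where
# the drivers support it.
gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,width=1280,height=720 ! \
    tee name=t \
    t. ! queue ! v4l2h264enc ! h264parse ! matroskamux ! filesink location=capture.mkv \
    t. ! queue ! kmssink
```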
Something that I think was merged very recently, in the last few weeks, is the ability for the decoder to change resolution at runtime. Previously, if your encoded stream changed resolution, you had to stop the entire pipeline and the decoder. This is really important for adaptive streaming, things like DASH and HLS, because as they adapt the bitrate they often change resolution. Those are the main ones on the Video4Linux side.

Next, KMS. KMS is the Linux API to display things on the screen. On the desktop we mostly use it through Wayland, through a compositor, or through a GL stack, but on many embedded boards you just want to show video full screen, or in a rectangle, and you want it to be as efficient as possible: you want to keep the GPU off, or maybe not touch the GPU at all; maybe you don't even have a GPU. In those cases you use the KMS API directly, and GStreamer has a plugin for that. It was merged, I think, a year or two ago, and it has been improved quite a bit over the last year, with a lot of people actually using it. One of the big improvements is that the sink now proposes hardware buffers upstream, so a software decoder can write directly into a buffer that the sink can display. This uses dumb buffers, the more basic kind, because we can't negotiate the others yet. We also added video overlay support, which means you can ask for something other than full screen; that's kind of a big thing, because you can do picture-in-picture, which is used a lot on embedded boards. It now supports a lot more formats and devices out of the box. Sadly, the KMS API is not fully generic across devices, so we have a whitelist of devices that are known to work, and of how they work. That's a big one. And there are a lot fewer bugs: there have been many bug fixes because people actually use this stuff now that it's upstream, and that has been a huge improvement. Now it just works most of the time.

The next one is something called OpenMAX. OpenMAX, well, tried to be GStreamer as designed by a committee of large corporations. Sadly, none of these corporations actually use it anymore, except for Android. As far as we know, OpenMAX is dead upstream: the committee has not met in five or six years, there was a 1.2 draft that was rejected by the Khronos board, and it's not going anywhere. So just don't use it if you can. Sadly, we often have to, because embedded vendors ship proprietary code as OpenMAX plugins, and often that's the only way to access the hardware. So we've been working on improving support for some of these. The Xilinx one is now quite good: we have pretty complex 4K use cases working, with a lot of extensions, with dmabuf support and everything. That's something Guillaume has been working on. We also have support for Tizonia. Tizonia is an open source OpenMAX implementation that is not as terrible as the Bellagio implementation. Why do they do it? I don't know, but someone is actually adding support for it to GStreamer, and Tizonia has implemented some of the features of the rejected 1.2 draft that fix some of the worst problems with OpenMAX. We've also exposed more of the properties the OpenMAX standard defines for encoders: we had very few of them, basically only the bitrate, and now a few more are being implemented.
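As a purely illustrative sketch, this is roughly what driving a vendor OpenMAX encoder through gst-omx can look like. The omxh264enc element and the control-rate/target-bitrate properties come from the generic gst-omx video encoder code, but vendor builds often rename things or need extra caps, so treat every name here as an assumption to verify against your BSP:

```sh
# Encode a test pattern with an OpenMAX H.264 encoder exposed via gst-omx.
# Element and property names assume a stock gst-omx build; vendor forks may
# differ or require vendor-specific caps on the encoder pads.
gst-launch-1.0 videotestsrc num-buffers=300 ! video/x-raw,width=1920,height=1080 ! \
    videoconvert ! omxh264enc control-rate=variable target-bitrate=5000000 ! \
    h264parse ! mp4mux ! filesink location=test.mp4
```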
I should mention that this is all about OpenMAX IL, the Integration Layer. There's also OpenMAX AL and DL, and no one uses them: DL, the Development Layer, has never been implemented, and AL, the Application Layer, is actually implemented in Android, but I don't know if anyone actually uses it. So basically OpenMAX is now the Android API, and people just write OpenMAX components that do the bare minimum to work on Android, and that's it. Don't do that.

There have been a couple more things on the embedded OpenGL side. The big one this year is support for the Vivante proprietary driver on i.MX6 boards, which are used a lot in automotive. This basically allows them to draw to the screen with higher performance. Also, as Tim mentioned, the OpenGL API in GStreamer, after 10 or 15 years of development, is finally a frozen API, so you can use it without the fear that it's all going to be rewritten next year. And we can now export dmabufs from the OpenGL stack to push into a different application, for example: you can generate a dmabuf from a texture inside the GStreamer pipeline and then push it to the next process over dmabuf.

Tim mentioned this one quickly, but I'll mention it too because I think it's really cool: it's a mechanism to split a GStreamer pipeline into two or more processes. You can take a sink, or a bit of pipeline that acts as a sink, the end of the pipeline, put it in a different process, and have it act as if it were a slave of the first one. This is really useful because some proprietary APIs for video decoding and display need to run as root. The vendors basically don't care: if they could make you put all your code in the kernel, I'm sure they would. They're like, this is just our hardware, it's all ours, so we don't care about security. But some people care a bit more, especially now that you have internet video, so you can separate the part that does all the networking from the part that does the video decoding and touches the hardware. It's kind of neat, and it's very easy to use.

Some little things. Something I discovered this week while reading the git logs to prepare this talk is that the RTP H.264 and H.265 depayloaders can now take memory from the downstream element, the next element in the pipeline, and write directly into it. This means that if you have a hardware decoder that needs linear memory, or some kind of CMA, or something special, you can have it allocate the memory and the depayloader will write directly into it. And rtspsrc finally uses the regular GStreamer debugging system.
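As a small illustration of those last two points, here is a hedged sketch of an RTSP playback pipeline on such a board: the depayloader feeds a V4L2 hardware decoder (where writing straight into decoder-friendly memory helps), and rtspsrc logging now goes through the normal GST_DEBUG mechanism. The URL and the decoder element are assumptions about the setup:

```sh
# Play an RTSP camera stream with hardware decode; verbose rtspsrc logs go
# through the standard GStreamer debug system. The URL is a placeholder.
GST_DEBUG=rtspsrc:6 gst-launch-1.0 rtspsrc location=rtsp://192.168.1.10/stream ! \
    rtph264depay ! h264parse ! v4l2h264dec ! kmssink
```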
In the near future, DRM modifiers are coming. This is another thing for zero copy: it lets us use memory that is tiled or compressed in some special way that GStreamer doesn't know anything about, but that we can at least pass between decoders and sinks. I also have an intern working on testing GStreamer on embedded boards automatically; right now our CI only runs on PCs, and we'd like it to run on embedded boards too. That's about it for this update. Any questions? Yes?

Are there plans to improve waylandsink, for example to support multi-planar formats? Multi-planar? Sorry? Multi-planar pixel formats. Yeah, I think Nicolas, right behind you, can answer; he's working on that. So, to actually support multi-planar formats in waylandsink, we needed a Wayland compositor that supports them, which wasn't the case. We started recently with a bit of a hack in Weston, and it kind of works; we've started testing it. And it does support multiple fds, if you have dmabufs, or multiple allocations. So it's already in place, and it should already be working in 1.12; if not, it's working in git master. We're actively working on that.

Any other question? In the back? Thanks for your talk. You mentioned OpenMAX is dead, and I'm just curious what would be a replacement to focus on? So, it depends what you use OpenMAX for. There are kind of two things. There's OpenMAX as a pipeline API, and for that the answer is GStreamer, but no one actually uses OpenMAX like that. And there's OpenMAX as an API for stateless codecs that have a large userspace component, and right now I'll tell you there's no good answer. Wim, the original architect of GStreamer, is working on something called PipeWire, and underneath it he has a plugin API, and I hope that can become a replacement. But right now there's no good answer if your codec needs some userspace code. Basically, I would recommend writing the simplest API you can, and then it will get wrapped in the different frameworks like GStreamer or FFmpeg. I don't have a good answer, sorry. Any other question? But OpenMAX makes everyone's life painful, so please don't do that. Yes, the Raspberry Pi too: there, the hardware API is basically an IPC to a different device, and it's almost one-to-one with OpenMAX, so gst-omx was actually written for the Raspberry Pi.

Any other question? Yeah. Yep. WebRTC on embedded devices, you mentioned that there were some... Yes. I haven't actually tried webrtcbin on embedded, but I suppose it works. One last question? No? No.