OK, my name is Michael. I'm working in the graphics team at Pengutronix. And today I will tell you something about embedded video playback systems and how to build them using open source.

So what's an embedded video playback system? You have a screen in your airplane seat, and you want to watch movies there, so you need some video playback for that. You can do other stuff on it, but mainly you want to watch movies. Or while driving a car, you might want to look at the route, but of course you want to watch movies. Or here we have the example of a smart TV: you can put it into a museum and show videos that explain more details about the exhibits you are currently looking at. So that's what I mean by an embedded video playback system.

I will start by reducing the features, because the systems I just showed all do more than playing videos, but we will only focus on the video parts. Then we'll have a look at the status quo: if you go to a random vendor's website, what do you get there? Then I'll show how we can do all of this using open source. And finally I'll have a short glimpse into the future, at what might be the next steps where we can work on and improve everything.

So, the features. I drew a small mock-up of the application we are going to build. On the left-hand side, you see the user interface. We have videos A, B, C, and D, and each of them is playing a short preview of the video that you can watch. As a user, you just select one of the videos, and it's played back in full screen on the whole display. And of course, you want OpenGL acceleration for all of that to make it responsive, and because we are on an embedded system, we just need it.

For the system, we are using an i.MX6 SoC, which is built by Freescale. The SoC features a Chips&Media video decoder, and we want a Vivante GC3000 GPU, which you get on the QuadPlus variant. If you're using something before the Plus variant, you have a Vivante GC2000 there.

On top of what the SoC provides, we need a driver for the decoder and an OpenGL driver for our GPU. And on top of that, to implement the actual features, we have some video input, so our files; some software to drive the driver and control the decoding; a graphical user interface, as I said before, which in turn uses OpenGL; and then everything is sent to some display. So that's the system we are going to build here.

So the first step is to go to your vendor's home page and download a BSP. You usually get a Yocto-packaged BSP with a Linux kernel and user space libraries, so basically everything you need. But the Linux kernel you get from the vendor is usually really old; on the i.MX6, for example, you either get a 3.14 or a 4.1, so you don't really want to use that anymore. What's even worse, for the GPU and the video decoding we only get binary blob drivers. We cannot look at the source code, we cannot really debug them, and we cannot fix them. And that's for the core parts of our system; I'm not sure you want that. These blobs impose obstacles for debugging and for the maintenance of our system. So can we do everything without these blobs and just use open source software from upstream?

Let's look at our system, starting at the user interface. We're using QML for that. It's a language that is a bit similar to HTML, and it's pretty easy to define user interfaces with it. It uses Qt in the background, and because of that it can use OpenGL to accelerate the compositing. We have a demo built with that, based on our mock-up from before.
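To give an idea of the scaffolding on the Qt side, here is a minimal sketch of the C++ entry point; the file name ui.qml is a placeholder, not the demo's actual code:

```cpp
// Minimal sketch of the Qt/QML application entry point (assumed names,
// not the demo's actual code). Qt Quick renders the QML scene through
// its OpenGL scene graph, so the compositing runs on the GPU.
#include <QGuiApplication>
#include <QQmlApplicationEngine>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    QQmlApplicationEngine engine;
    // "ui.qml" stands in for the roughly 150 lines of QML that
    // describe the preview grid and the full-screen playback view.
    engine.load(QUrl(QStringLiteral("qrc:/ui.qml")));
    if (engine.rootObjects().isEmpty())
        return 1;

    return app.exec();
}
```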
You can see a photo of the demo here, or the demo in action up front. I hope it's still playing, but I guess so. The whole application consists of 150 lines of QML code. That's really very little code, and it has some interaction and some features, so that's impressive. For the actual demo we need about 200 more lines of C++ code, which are necessary to control the video, so that we can stop the videos and mute them. That has to be done in C++.

As I said before, we are using OpenGL for this. The OpenGL driver is usually from Vivante and is a blob driver. We have an alternative here: the etnaviv driver, which is a reverse-engineered driver for the Vivante GPUs. It's available upstream in Mesa since version 17.0 and in the kernel since Linux 4.5. It implements OpenGL, of course, and therefore we can just use it from Qt and composite our user interface, and especially the video frames, in hardware. That's really valuable, because you don't want to do that copying in software.

So now comes the problem: how do we get the video frames into the etnaviv driver? There is no solution for this in GStreamer upstream yet, so we wrote it ourselves as the GstVideoItem you see down here. It does a zero-copy import from GStreamer to QML, or etnaviv, using DMA-BUF handles in the kernel. This is one very important part I'd like to emphasize once more: we do not need to copy when we go from GStreamer to QML.

Then we have autoplugging, which is a mechanism in GStreamer to build up the pipeline. Very simplified, it looks like this: we have a file source for reading the files from our file system; then we do some demuxing to get from the container format to the raw stream, and some parsing; and then we need a decoder for our video data. We want to do that in hardware as well, not in software. So what can we use for that?

There is the CODA driver in the Linux kernel; the config option is VIDEO_CODA. You enable it, and if you're running on an i.MX6 and everything is configured correctly, you will see a /dev/videoX device node from this driver. It implements a Video4Linux mem2mem device, and fortunately for these devices we have an element in GStreamer. So we use this GStreamer element, it uses the kernel driver, and everything magically works.

Then Freescale, or now NXP, puts some hardware customizations on their SoC. These are implemented in the IPU and do, for example, some untiling of the output of the CODA for the actual scan-out on the display. Drivers for that are about to be mainlined in the Linux kernel, which is pretty nice. Unfortunately, on the CODA we still have closed source firmware, so we cannot really look into the CODA, and the driver has to upload this firmware. Maybe someone wants to write a free firmware for it, but it's not there yet.

So if we go back to the system architecture, we have, again, our SoC down here. Let's start with the video files. The video files go into GStreamer. GStreamer uses the Video4Linux CODA driver in the Linux kernel to use the hardware decoder. Then we use the zero-copy sink to jump over to Qt, which uses Mesa and etnaviv for compositing the user interface and the video frames, and then forwards everything to the display. So everything in here is open source.

So what's next? We have to find an upstream solution for the GStreamer to etnaviv interface, so basically an upstream version of the GstVideoItem from before.
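To illustrate the zero-copy import described above, here is a sketch of how a DMA-BUF file descriptor from a GstBuffer can be wrapped into an OpenGL texture. It assumes a single-plane RGB format and an EGL implementation with the EGL_EXT_image_dma_buf_import extension; the GstVideoItem from the talk does the equivalent inside the Qt scene graph, and this is not its actual code:

```cpp
// Sketch of a zero-copy frame import, assuming a single-plane format
// such as DRM_FORMAT_XRGB8888. Error handling is omitted for brevity.
#include <gst/gst.h>
#include <gst/allocators/gstdmabuf.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <drm_fourcc.h>   // from libdrm

GLuint import_frame(EGLDisplay dpy, GstBuffer *buffer,
                    int width, int height, int stride)
{
    // The V4L2 decoder element hands us buffers backed by DMA-BUF
    // memory; the file descriptor identifies the frame in the kernel.
    GstMemory *mem = gst_buffer_peek_memory(buffer, 0);
    if (!gst_is_dmabuf_memory(mem))
        return 0;
    int fd = gst_dmabuf_memory_get_fd(mem);

    const EGLint attribs[] = {
        EGL_WIDTH, width,
        EGL_HEIGHT, height,
        EGL_LINUX_DRM_FOURCC_EXT, DRM_FORMAT_XRGB8888,
        EGL_DMA_BUF_PLANE0_FD_EXT, fd,
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
        EGL_DMA_BUF_PLANE0_PITCH_EXT, stride,
        EGL_NONE
    };

    // Wrap the DMA-BUF in an EGLImage; no pixel data is copied.
    PFNEGLCREATEIMAGEKHRPROC createImage =
        (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
    EGLImageKHR image = createImage(dpy, EGL_NO_CONTEXT,
                                    EGL_LINUX_DMA_BUF_EXT, NULL, attribs);

    // Bind the image to a texture that the scene graph can composite.
    PFNGLEGLIMAGETARGETTEXTURE2DOESPROC targetTexture =
        (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
            eglGetProcAddress("glEGLImageTargetTexture2DOES");
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_EXTERNAL_OES, tex);
    targetTexture(GL_TEXTURE_EXTERNAL_OES, image);
    return tex;
}
```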
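And for the decoding path, a sketch of the simplified pipeline written as a static gst_parse_launch string. The element names assume an H.264 stream in an MP4 container; the exact V4L2 decoder element name (here v4l2h264dec) depends on the platform, and the demo uses autoplugging and its own sink rather than autovideosink:

```cpp
// Sketch of the simplified pipeline: file source, demuxing, parsing,
// and hardware decoding through the V4L2 mem2mem element.
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    GError *error = NULL;
    GstElement *pipeline = gst_parse_launch(
        "filesrc location=/path/to/video.mp4 ! qtdemux ! h264parse "
        "! v4l2h264dec ! autovideosink",  // decoder uses /dev/videoX
        &error);
    if (!pipeline) {
        g_printerr("failed to build pipeline: %s\n", error->message);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Run until an error or end-of-stream message arrives on the bus.
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
    if (msg)
        gst_message_unref(msg);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(bus);
    gst_object_unref(pipeline);
    return 0;
}
```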
We might use other compositors instead of QML and Qt, for example some Wayland compositor. And another idea is to use adaptive streaming, with different bit rates and different video files for the preview videos and the full-screen video, so that we can play different video qualities there.

So I'm already at the conclusion. I first looked at the binary blob drivers and the issues with debugging and maintaining these blob drivers and the vendor kernels. Then I showed how to build a user interface with etnaviv and QML using open source. Then we looked at the video decoding, which is done by GStreamer using the Video4Linux CODA driver. And I had a short glimpse into future work using Wayland and adaptive streaming. So as a conclusion, I showed that embedded video playback does not require blob drivers anymore.

With that, I'd like to thank you all for your attention. If you want to have a look at the actual hardware and the demo that's up front, you can come here and play around with it. Thank you. If there are any questions, feel free to ask or come to me. Yep?

[Audience] You said that GStreamer has no QML integration available. But for a while now there has been an element called qmlglsink. Maybe you can use it here.

So the remark was that there is a QML sink. I wasn't aware of that sink, but I'm not sure whether it uses zero copy or an EGLImage upload.

[Audience] It does. It's based on GStreamer GL, and on Android and other platforms it's completely zero copy. So it should work.

OK. So if you're using gst-gl already, it should work. I'd like to thank you again, and if you have any further questions, come up to the front. Thank you.
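For reference, a minimal sketch of how the qmlglsink element mentioned in the question is typically wired up, following the pattern of the upstream GStreamer Qt examples; the pipeline string, file names, and object names are assumptions, not code from the talk:

```cpp
// Sketch of using qmlglsink: the sink renders into a GstGLVideoItem
// declared in QML (import org.freedesktop.gstreamer.GLVideoItem 1.0,
// with objectName "videoItem"). Names are assumptions.
#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QQuickItem>
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);
    QGuiApplication app(argc, argv);

    GstElement *pipeline = gst_parse_launch(
        "filesrc location=/path/to/video.mp4 ! decodebin "
        "! glupload ! qmlglsink name=sink", NULL);

    QQmlApplicationEngine engine;
    engine.load(QUrl(QStringLiteral("qrc:/ui.qml")));

    // Hand the QML item to the sink; qmlglsink draws the frames into
    // that item's node in the Qt scene graph, staying inside GL.
    QQuickItem *videoItem =
        engine.rootObjects().first()->findChild<QQuickItem *>("videoItem");
    GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
    g_object_set(sink, "widget", videoItem, NULL);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    int ret = app.exec();
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(sink);
    gst_object_unref(pipeline);
    return ret;
}
```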