OK, I think it's about time to start. Hi, I'm Robert Foss. I work for Collabora, as you may have guessed. And I was not named after free and open source software; it's actually my mother's last name, too. She does not write software, not at all. Either way, working for Collabora means that I work on open source for a living. I mostly do Linux graphics work, and lately that has led me to do a fair amount of Android work. This talk is going to be about where Android meets the mainline graphics stack.

We're going to cover a bit of ground here, but before we go into the details, let's have a look at what we're actually going to talk about: what the Android graphics stack looks like, how it compares to the mainline graphics stack, where we are now with this work, and why all of this stuff actually matters.

So what does the Android graphics stack look like? This is roughly it. It's a slight simplification, but these are the basic layers, and all the pieces that we care about are here. On top, of course, we have the apps. This is the stuff we really care about; it's the whole point, and it's what our users want. Below, we have SurfaceFlinger, which is the glue that lets different applications draw pixels to the same screen. It organizes the layers that the different applications submit to it.

This is an example of what it does. It's an ordinary Android desktop, and it contains a handful of different elements, each of them provided by a different application. On top there's the status bar, and to the right we have the navigation bar. If you look at that corner, they overlap a little bit: these layers are often transparent. The majority of what you're seeing is the background layer, on top of which all of the other layers are rendered. So the layers have a depth order, and they're just stacked on top of each other. The process of rendering this is called composing layers.
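The stacking described here is essentially the painter's algorithm: walk the layers in depth order and blend each one over the result so far. Here is a minimal sketch (not from the talk; every name is illustrative, and real compositors operate on whole buffers, usually on the GPU or display hardware, not pixel by pixel in Python):

```python
# Toy model of depth-ordered layer composition (painter's algorithm).
# Colors are (alpha, r, g, b) floats in [0, 1]; real compositors use
# fixed-point pixel formats and composite entire buffers at once.

def over(src, dst):
    """Porter-Duff 'over': blend src on top of dst."""
    sa, sr, sg, sb = src
    da, dr, dg, db = dst
    oa = sa + da * (1 - sa)
    if oa == 0:
        return (0.0, 0.0, 0.0, 0.0)
    return (oa,
            (sr * sa + dr * da * (1 - sa)) / oa,
            (sg * sa + dg * da * (1 - sa)) / oa,
            (sb * sa + db * da * (1 - sa)) / oa)

def compose(layers):
    """Compose layers by z-order, bottom first, starting from transparent."""
    result = (0.0, 0.0, 0.0, 0.0)
    for layer in sorted(layers, key=lambda l: l["z"]):
        result = over(layer["pixel"], result)
    return result

# Opaque blue background with a half-transparent red bar stacked on top.
background = {"z": 0, "pixel": (1.0, 0.0, 0.0, 1.0)}
status_bar = {"z": 1, "pixel": (0.5, 1.0, 0.0, 0.0)}
print(compose([status_bar, background]))  # (1.0, 0.5, 0.0, 0.5)
```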
And composing means just stacking layers on top of each other. These layers, and the composition of them, are communicated through the HWC2 API.

Below SurfaceFlinger, we have the non-kernel parts of the graphics driver. For various reasons like security and convenience, the graphics drivers don't live entirely in kernel space. This is also where OpenGL and Vulkan and memory allocators and stuff like that are implemented, and in the case of Android, it's also where the HWC2 API is implemented. And at the bottom of the stack, we find the kernel, of course.

So what's the HWC2 API then? It's the API used to communicate between SurfaceFlinger and the hardware drivers. It allows applications to submit layers to be composed onto the screen. Each of these layers is a buffer with some properties attached to it. It also provides some abstractions for graphical objects. It's maybe not all that interesting, but it's very useful. The whole idea of having the HWC2 API to begin with is to allow offloading work from the GPU onto specialized hardware. Display hardware has support for composing a handful of layers, maybe four; four is a typical number. More is certainly possible, but it's a matter of diminishing returns. This display hardware can compose layers faster and more energy efficiently than a GPU can, and that, of course, frees the GPU up to do actual GPU stuff like OpenGL or playing games or whatever you like. Doing this work more efficiently means that you can run your GPU faster or for longer, or your CPU faster or for longer for that matter, which really matters in the mobile space.

So if that's the Android stack, what does the mainline stack look like? It sort of looks like this. This is the proprietary part that you would expect to see on an Android device: these are the drivers from Qualcomm, ARM, or whoever your GPU vendor is. And for the Android platform, these drivers implement the HWC2 API.
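The SurfaceFlinger/HWC2 hand-off can be modeled roughly like this (illustrative Python; the real HWC2 interface is a C function table with entry points such as validateDisplay and presentDisplay, and the GPU fallback path is what HWC2 calls client composition):

```python
# Rough model of one SurfaceFlinger/HWC2 frame. All names here are made up;
# the real interface is a C function table (validateDisplay, presentDisplay, ...).

class FakeHwc:
    """Stand-in for display hardware that can only scan out 2 planes."""
    HW_CAPACITY = 2

    def validate(self, layers):
        # If the layers don't all fit, the bottom-most ones fall back to
        # CLIENT composition; their GPU-composited result itself needs one
        # plane, hence "capacity - 1" when we overflow.
        by_z = sorted(layers, key=lambda l: l["z"])
        if len(by_z) <= self.HW_CAPACITY:
            overflow = 0
        else:
            overflow = len(by_z) - (self.HW_CAPACITY - 1)
        return {l["name"]: ("CLIENT" if i < overflow else "DEVICE")
                for i, l in enumerate(by_z)}

def compose_frame(hwc, layers):
    """One frame: validate, GPU-compose the CLIENT layers, scan out the rest."""
    types = hwc.validate(layers)
    client = [l for l in layers if types[l["name"]] == "CLIENT"]
    device = sorted((l for l in layers if types[l["name"]] == "DEVICE"),
                    key=lambda l: l["z"])
    # In reality the CLIENT layers are squashed into a single "client target"
    # buffer with OpenGL, which then occupies one hardware plane.
    return (["client_target"] if client else []) + [l["name"] for l in device]

layers = [{"name": "background", "z": 0},
          {"name": "app", "z": 1},
          {"name": "status_bar", "z": 2}]
print(compose_frame(FakeHwc(), layers))  # ['client_target', 'status_bar']
```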
So what's the open source equivalent of that blob then? Apart from just being a GPU driver, it also has to implement the HWC2 API, and the HWC2 API is not used by most Linux graphics stacks. It's basically only used by the Chromium OS one and the Android one, so it's not what you'd see on your Ubuntu or your Fedora or whatever.

The part that implements this API on the open source side is called DRM Hardware Composer, and it's a shim between the drivers and SurfaceFlinger. All it really does is implement the HWC2 API on top of the mainline graphics stack. It was developed by Google, mainly by Sean Paul over there and Zach Reizner, and it was initially intended to be used only for Chromium OS, which is the OS that runs on your Chromebooks.

If we have a deeper look into the driver, we can see that it is implemented in a few parts: DRM, the kernel subsystem that handles graphics and display hardware; libdrm, the userspace library that simplifies talking to the DRM subsystem; and lastly Mesa, which implements OpenGL, OpenCL, and so on.

But there's one more thing: gralloc. We're getting into the details at this point, but it's an important part. This module handles buffer allocation and associating properties with buffers, properties like color format and buffer size, for example. There are a few implementations of gralloc, and they're slightly different. There's gbm_gralloc, which is fairly commonly used and was written by Rob Herring; there's drm_gralloc; and there's minigbm, minigbm being used in Chromium OS, I believe.

So hopefully this gives you an idea of where DRM Hardware Composer fits in, but what does it actually do?
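As a rough illustration of what a gralloc implementation tracks per buffer, here is a toy allocator (all names and the 64-byte row alignment are made up for the example; real alignment constraints come from the GPU and display hardware):

```python
# Toy model of the metadata a gralloc-style allocator associates with a
# buffer. The 64-byte stride alignment is only an example; real constraints
# are dictated by the hardware the buffer will be shared with.

BYTES_PER_PIXEL = {"ARGB8888": 4, "RGB565": 2}

def align(value, to):
    """Round value up to the next multiple of 'to'."""
    return (value + to - 1) // to * to

def allocate(width, height, fmt):
    """Return the properties gralloc would attach to a newly allocated buffer."""
    stride = align(width * BYTES_PER_PIXEL[fmt], 64)  # bytes per row, padded
    return {
        "format": fmt,
        "width": width,
        "height": height,
        "stride": stride,
        "size": stride * height,
    }

buf = allocate(1080, 1920, "ARGB8888")
print(buf["stride"], buf["size"])  # 4352 8355840
```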
It receives layers through the HWC2 API, and each layer contains a buffer and some properties, like position, cropping information, and damage information; damage being the parts of a layer that have changed since the last frame, which is useful to have if you want to optimize your display stack, since it lets you do less work. Basically, if there is no damage, you can just reuse the old contents. That's good.

So, when you have all these layers, you have to decide how to send them to the hardware, and display hardware is only able to support something like four layers, some more, some less, and adding more is a matter of diminishing returns. So what happens when we have more layers than the hardware supports? This is fairly common, and what we basically have to do is squash some layers together until we reach the number of layers that is supported by the hardware. This is done through OpenGL, or Vulkan, hopefully Vulkan soon. Choosing which layers to squash, and how, leaves you some room for optimizations. Maybe you'll want to squash the layers that are very small, so that you do the least amount of work possible on the less energy-efficient implementation of compositing.

Lastly, when we've organized all this stuff, we have to map each layer to a DRM plane. DRM planes are an abstraction for the same thing as layers; they're just called planes instead of layers for whatever reason. They do the exact same thing, so it's a buffer attached to some properties, but planes are what's fed to the graphics hardware for output. For example, you likely want to put a cursor layer on the DRM cursor plane, just so that it'll be handled in the most optimal way possible. Hardware oftentimes has fast paths for certain cases, and having a cursor plane is pretty common.

So, having covered what this stuff is, let's have a look at what the current status is. Why did this happen now?
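The final mapping step can be sketched as a greedy assignment of layers to planes (illustrative Python; a real implementation discovers the available planes and their capabilities through libdrm, and has far more constraints to satisfy, like supported pixel formats per plane):

```python
# Illustrative mapping of composed layers onto DRM planes. PRIMARY, OVERLAY
# and CURSOR are real DRM plane types, but the matching logic here is
# deliberately simplified.

def assign_planes(layers, planes):
    """Greedily map layers to planes: a cursor layer prefers the CURSOR
    plane (the hardware fast path), the bottom-most layer takes PRIMARY,
    and everything else takes OVERLAY planes."""
    free = list(planes)
    assignment = {}
    for layer in sorted(layers, key=lambda l: l["z"]):
        if layer.get("is_cursor"):
            wanted = "CURSOR"
        elif not assignment:
            wanted = "PRIMARY"
        else:
            wanted = "OVERLAY"
        plane = next((p for p in free if p["type"] == wanted), None)
        if plane is None:
            return None  # doesn't fit: squash more layers on the GPU first
        assignment[layer["name"]] = plane["id"]
        free.remove(plane)
    return assignment

planes = [{"id": 30, "type": "PRIMARY"},
          {"id": 31, "type": "OVERLAY"},
          {"id": 32, "type": "CURSOR"}]
layers = [{"name": "background", "z": 0},
          {"name": "app", "z": 1},
          {"name": "cursor", "z": 2, "is_cursor": True}]
print(assign_planes(layers, planes))  # {'background': 30, 'app': 31, 'cursor': 32}
```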
Well, there are a few reasons. DRM Hardware Composer has been around for a while, and about a year ago, no, almost two years ago, the revision was bumped from one to two, and revision two requires synchronization support. That means you can wait for operations to be carried out on a buffer, and when they're done, you're notified that the buffer is now ready for you to use, which simplifies things overall. In order to implement Hardware Composer 2, we needed buffer synchronization support, which was originally implemented in Android kernels and then, last year, moved to mainline Linux; that work was carried out by Gustavo Padovan, a colleague of mine, and it is now supported by some, not all, GPU drivers.

The second part as to why now is atomic modesetting. Hardware Composer 2 requires synchronization, and synchronization support is only provided by one of the Linux kernel display APIs: legacy KMS does not support it, but atomic modesetting does, and atomic is basically what all of the modern display drivers use, so it's not very uncommon.

So does this stuff even work? And the answer is yes, on a few platforms. For example, the i.MX6, which is a very common platform in the embedded space. It uses a Vivante GPU. There are a few flavors of these GPUs, but the GC3000 has full Android support, and the open source driver has been developed by Christian Gmeiner, Lucas Stach, and Wladimir van der Laan. So now there exists a high quality driver for Vivante GPUs, and it's called etnaviv, and if you're curious about it, you should try running it. It works really well, and if it doesn't, Christian, Lucas, or Wladimir are pretty great to talk to, so just file your complaints with them.

There's also the DB410c. This is a platform supplied by Qualcomm. It's a cell phone platform that has been repurposed.
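The producer/consumer synchronization model can be mimicked in a few lines (illustrative Python; mainline actually exposes fences as sync_file file descriptors that you can poll on, so a threading.Event here is only an analogy for the real kernel primitive):

```python
import threading

# Toy model of explicit fencing: the producer signals a fence when it has
# finished writing a buffer, and the consumer waits on that fence before
# reading. On mainline Linux a fence is a sync_file fd you can poll();
# threading.Event just stands in for that here.

class Fence:
    def __init__(self):
        self._event = threading.Event()

    def signal(self):
        """Mark the work on the buffer as finished."""
        self._event.set()

    def wait(self, timeout=None):
        """Block until signalled; returns False on timeout."""
        return self._event.wait(timeout)

def producer(buffer, fence):
    buffer["pixels"] = "rendered frame"  # pretend the GPU wrote the buffer
    fence.signal()                       # ... and signalled its out-fence

def consumer(buffer, fence):
    ok = fence.wait(timeout=1.0)         # the display waits on its in-fence
    return buffer["pixels"] if ok else None

buffer = {}
fence = Fence()
t = threading.Thread(target=producer, args=(buffer, fence))
t.start()
result = consumer(buffer, fence)
t.join()
print(result)  # rendered frame
```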
It has an Adreno 306 GPU, which runs the freedreno driver, which was written by the driver wizard Rob Clark. There's also the DB820c, which is another, more modern platform supplied by Qualcomm. It is under active development, and support for it is being mainlined, but it's maybe not fully there yet. It has an Adreno 530 GPU, and that is also supported by the freedreno driver.

And lastly, there's the HiKey 960. It's based on the Kirin 960 SoC, which has a Mali G71 GPU. The Mali G71 does not have any functional open source drivers yet. There's an effort to write these drivers, but it's still early days. The G71 hardware architecture is called Bifrost by ARM, and there's a project called BiOpenly that intends to open source drivers for that specific architecture. However, there not being any open source GPU drivers does not prevent us from using DRM Hardware Composer, so there's an effort towards bringing up a platform based on that.

So this is the news section of the talk. The DRM Hardware Composer project came about because the Chromium OS project needed it, and because of this, DRM Hardware Composer was hosted on the Chromium OS Gerrit, which was good, but maybe not ideal. With the help of Sean Paul, the project has now been moved to freedesktop.org, and development is now done on the dri-devel mailing list, which is how the other Linux graphics projects are developed. So thanks to Google and Sean Paul, but also to freedesktop.org for hosting us.

So this is the last point of the talk: why does any of this matter? We use open source software not because it feels good, but because it is better, and a very specific point as to it being better is long-term support. Some products, embedded devices for example, require serious long-term support. These devices aren't replaced every year or even every fifth year, so hardware support has to be available for a very long time.
20 years of support is not unusual or unreasonable for some applications; it really does depend on your field. And with open source drivers, support can be provided by anyone. When you develop one of these serious embedded products, you yourself will be obligated to support the product for a long time, which means that you have to make sure that you have the ability to fix issues for a long time. If you're using a proprietary driver, your only option when it comes to support is going to the vendor and asking for it. Maybe they will say yes, but they will surely charge you for it. Additionally, over the long term you run the risk of a vendor disappearing, or ending their support for a product for any number of reasons, like promoting a newer product or going out of business. Many things can happen, especially over such a long time span.

We also want to push the industry forward towards open source. I say we, and I think it's true for all of us in the open source community, both for personal reasons, like it being neat and giving us warm, fuzzy feelings, but also for professional reasons: it gives us more options and enables us to do more.

So if you're a GPU manufacturer, DRM Hardware Composer allows you to spend less on drivers. You won't have to do as much testing; your Linux driver will be your Android driver. Additionally, you'll be able to leverage common code that is already maintained and high quality. Mesa and the kernel DRM subsystem give you a lot of these things for free: the atomic modesetting framework in DRM gives you a really good base to build your driver upon, and Mesa gives you much of what you'll need for Vulkan or OpenGL support. Mesa and DRM are also very well tested. Many drivers are already based on both and have shipped for many years, so lots of even the rarest edge cases have been found and fixed. Optimizations that are applied to other drivers may also apply to yours; maybe you won't even have to commit any changes at all.
It might be in the common code. Or, in some cases, you'll be able to do very similar things to what the other drivers are doing and reap the benefits without having to do all that much work, just a smaller amount of work. Yeah, thank you. That's it. Does anyone have any questions?

So the evaluation boards you presented, do all of them now run Android on a mainline kernel?

The DB410c does. The i.MX6 does. The DB820c does not run mainline at all yet, as far as I understand it; it's getting there. And the HiKey 960, I'm a lot less sure about its status, honestly. I know that it's being actively worked on, so that's certainly the goal.

Okay, so definitely yes for the two mature ones, and the other ones are up and coming?

Yes. Great, thank you. You cannot buy the DB820c yet, but that's coming.

Hi, is there an overhead in going essentially through two graphics stacks, both in time and in memory usage?

None of these layers are copied; only references are moved. So I'm sure there's some amount of overhead, but I would call it negligible. It's not like copying entire buffers around. That's all handled by gralloc, which provides that for free.

And a follow-up question: Imagination Technologies GPUs, any progress on those that you know of?

Not that I know of. As far as I know, no one dares to try.

Yeah, I thought that would be the answer. Any other questions?

With the recent release of Android 8, the driver model has changed through the stuff that was announced in Project Treble. Does that affect this picture at all?

It does affect this solution a little bit. Most of the Project Treble stuff is above what we're doing; the layer split that it introduces is not in the middle of what we're doing, it's all on top of us. So it's not that bad. It's no worse than moving between any other Android versions. And I might add that I think Android 8 is up and running; Rob Herring, I think, has it up and running.
Would it work with closed-source OpenGL drivers?

Yes, there's nothing preventing you from using it with closed-source drivers.

Then I think it might work with Renesas hardware.

Okay. Yeah, I'd certainly be open to discussing that further. It's very interesting.

I have a question. Why did the Android community run away from X and Wayland?

I can't answer that for certain, but I think they wanted a quicker pace; basically, they wanted to ship stuff immediately. This hardware compositing stuff is a hugely power-saving feature, and it's something that you really want on Android, and I don't think anyone was prepared to wait for Wayland to be ready and have good hardware composition support.

All right. No more questions? Oh, one there.

What are the chances of us ever seeing a mobile phone with an open stack?

I think the Purism phone that just cleared crowdfunding will use something like this. I'm not entirely sure that they'll use DRM Hardware Composer, but I would assume so. Sean?

So we did ship DRM Hardware Composer on the Pixel C tablet, but it's using NVIDIA's vendor driver for GL. So, half.

Yeah, I think that's it for me. Thanks, everyone.