We can get started. So this talk is going to be about the embedded GPU space: what's going on, what's been happening, and what you can look forward to in the future.

But before we get into that, who am I? My name is Robert Foss. I live in Germany, and I'm a software engineer at Collabora, working in the open source graphics space. That means doing kernel work, Mesa work, Android work, Wayland work, that kind of stuff. It's all over the place. But this is going to be a talk about the different graphics vendors and what you can look forward to from each of them. So let's get started.

The first vendor I want to talk about is Intel. They have a very, very long history of being good, solid open source contributors. They started in 2004 with the i915 driver, and they've essentially never stopped. They're still doing super good work, and their driver supports the very latest OpenGL and Vulkan standards.

This is what their timeline of development looks like. It's a bit truncated, because none of the other vendors have been around for that long, so it starts at 2009. There are a few interesting features in this diagram. One is that all of their hardware is supported. That's not the case for every vendor, but since Intel has been around for such a long time, all of their hardware is very, very well supported.

Another interesting feature is the blue dot, the "Iris, Gen 8+" item. Iris is the name of Intel's new graphics driver. It is just as open source as the old one, but it uses the same driver framework that the other open source graphics drivers are using, called Gallium. Gallium is a framework for building a graphics driver, and it gives you a lot of stuff for free. Since about one or two years ago, Intel has chosen to look into building a Gallium-based driver, and it's been paying off. This driver is included in the latest version of Mesa.
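If you want to try Iris on an Intel machine yourself, Mesa lets you pick the driver at load time. A minimal sketch: `MESA_LOADER_DRIVER_OVERRIDE` is a real Mesa environment variable, but whether the `iris` driver is available depends on your Mesa build, and `glxinfo` has to be installed separately (usually via a mesa-utils package).

```shell
# Ask Mesa to load the Gallium-based "iris" driver instead of the
# classic i965 driver for anything started from this shell.
export MESA_LOADER_DRIVER_OVERRIDE=iris

# Then confirm which driver actually loaded (needs mesa-utils):
# glxinfo | grep -i "renderer"
```

Treat this as a per-session experiment rather than a system-wide setting: if the override names a driver your Mesa build doesn't ship, applications may fail to start.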
And it performs better in some circumstances than the previous driver, which has had 15 years of development poured into it. That's quite impressive. Specifically, Intel quoted lower CPU overhead as a large contributing factor in wanting to switch. And that's good news not just for Intel, but for all of the GPU driver vendors, because it means the resources Intel pours into testing and development benefit all of us. Whether you're on a Vivante GPU or an ARM GPU or whatever, you're also going to see some benefit, be it better stability through testing or better performance through optimizations. So that's very interesting.

The last, almost equally interesting part is the Gen 12 GPU. It looks like just another one of their integrated GPUs, but Gen 12 is, as far as I understand it, going to support being run as a dedicated GPU. So Intel is going to offer a higher-performance, dedicated, separate GPU. I would assume it's primarily going to be used for server and enterprise-type workloads, so it's probably not going to be in your next gaming rig. But who knows? The graphics support is there, anyway.

On to the next vendor: AMD. They're also a really good open source citizen. They've been around in this space since 2009, when they decided to start opening up the documentation for their GPUs. Providing this documentation essentially means you don't have to reverse engineer their hardware in order to start writing your own driver, and taking away the reverse engineering work really lowers the threshold for a driver to emerge. Since then, a lot of AMD drivers have emerged. There are almost too many; I'll go into it, but there's a forest of slightly different AMD drivers. All of the current ones support the latest OpenGL and Vulkan standards, so you can expect them to work with anything, essentially.
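As a side note, you can check which kernel driver your own AMD machine ended up with by asking sysfs. This is a sketch: the `card0` path is an assumption, and the node may be numbered differently on multi-GPU systems.

```shell
# Print the name of the kernel driver bound to the first DRM device.
# Older AMD hardware is driven by "radeon", newer hardware by "amdgpu";
# on machines without a card0 node we just say so instead of failing.
driver_link=$(readlink /sys/class/drm/card0/device/driver 2>/dev/null)
if [ -n "$driver_link" ]; then
    basename "$driver_link"
else
    echo "no /sys/class/drm/card0 device found"
fi
```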
And this is what the timeline looks like. If you look to the left, the blue dots are platforms supported by the kernel Radeon driver. The Radeon driver targets the really old hardware: think the span from 15 years ago up to about five years ago. The red dots on the timeline cover everything else, up to the very latest GPUs, the ones released a few months ago. They are all supported upstream, both in the kernel and in Mesa, which is very nice.

But apart from the Mesa drivers, AMD also supplies their own user space drivers, which are also open source: one is called AMDGPU, and the other is called AMDVLK, supporting OpenGL and Vulkan respectively. These are not the community drivers, but they are fully and totally open source, just maintained by AMD. AMD also provides resources for the community drivers, which seems like a crazy amount of extra work compared to only going with the community drivers. But I can also see their point of view of wanting to reuse one driver across all platforms, and a driver like that is not something that can be included in the Mesa or kernel projects.

Then there's NVIDIA, which is a very interesting story. They've been around for a long time. Since 2010, there's been an open source driver called Nouveau, but NVIDIA has never really contributed towards it. Well, they have in the very limited case of Tegra, their embedded platform, where they've contributed some code to the Nouveau project. Nouveau supports the latest OpenGL standards, but it doesn't really matter: it's still not usable, and we're going to get into why.

If you look at the reverse engineering timeline here, it covers essentially all of the GPUs that NVIDIA has released in this time span. And if we look at support for those GPUs in the upstream kernel and Mesa projects, it's also essentially all of them.
But it's still not usable, because in order to make the GPU go fast, you need to load a firmware blob provided by NVIDIA. This blob allows the GPU to raise its operating frequency from the baseline safety levels to the actual performance levels. The blob exists in the driver NVIDIA ships, and it is usable: you could load it with the open source driver. However, through legalese, NVIDIA prohibits this from being allowed. So while it's entirely technically possible, Debian can't ship this blob, for example. Nor can you or I. Only NVIDIA is allowed to ship it. That effectively means Nouveau is not useful: you can't use it. And this is something that would be super simple to fix if NVIDIA were interested in doing that.

Then there's Imagination. We used to see a lot of Imagination GPUs in a bunch of different places, from Apple devices, which is maybe not what this talk is targeting, to single board computers and a bunch of other things. They're not quite as common anymore. They also have essentially no upstream support at all. Imagination has written a sort of stub kernel driver, but it can't be accepted into the Linux kernel, since there's no user space 3D driver that actually uses it. They have a few Mesa patches floating around, but those don't even try to offer 3D support, so they can't be merged either. Until Imagination changes their mind, there's not a lot to be said for them.

Unlike with many of the other vendors, the community has shown very little interest in reverse engineering Imagination's GPUs. That's partly because they're very complicated, and slightly different from the other families of GPUs. But unfortunately it means that no one has really picked this up in a serious way. Imagination is the last vendor I know of that doesn't have any kind of open source GPU driver, which is interesting, and something they maybe should take note of.
There's a trajectory here, and the trajectory is that every GPU gets an open source driver, except for theirs.

On to Qualcomm, which is a much better story. The Freedreno driver, which targets Qualcomm's Adreno GPUs, has been developed since 2013. And Qualcomm has, at least lately, been supporting it directly and indirectly with actual developer time, which is really nice to see. There's a little piece of trivia in the name of their GPU, Adreno: Qualcomm bought AMD's mobile handset division in 2009, including their GPUs, which is why the GPUs are called Adreno. It's an anagram of Radeon.

This is what the reverse engineering process has looked like for the Qualcomm GPUs. These are all of their GPUs, so every single one is supported. The reverse engineering work was done by Rob Clark, Ilia Mirkin, and others in the community. And if we look at the results, you see that there's not a lot of lag between a driver being reverse engineered and it being supported in the upstream projects: maybe six months to a year at most. That's pretty incredible, given that reverse engineering a GPU is not a trivial task, and this is partly done just for fun, because people enjoy it. Very few are paid to do this reverse engineering work.

Then we have Broadcom, which is, I guess, a fairly recent citizen in the open source driver community. In 2015 they started developing the VC4 driver. VC4 is the GPU in the Raspberry Pi, or rather the Raspberry Pi 1 through 3; the Raspberry Pi 4 ships the VC6 GPU. There was essentially no reverse engineering done for this driver, since it was sponsored entirely by Broadcom, who clearly have the documentation in-house. They essentially hired a community person, Eric Anholt, to write this driver, and he did, until very recently. This is what the timeline looks like for Broadcom. There aren't a lot of GPUs in there.
There's the VC4 driver and the V3D driver; the V3D driver targets the VC5 and VC6 GPUs.

The next vendor is Vivante. A driver for their hardware, Etnaviv, started being developed in 2015. Originally it was entirely community-driven, based on reverse engineering that had been going on since 2012. Since then, some of the reverse engineering and development has been sponsored by aircraft suppliers. If you saw the talk before mine: there is a real problem with long-term support, GPUs, and proprietary drivers. If you want to offer long-term support, and that's not one year or five years or even ten years, but in the case of aircraft suppliers more like 20 years, you really can't rely on a vendor both still being around and being willing to supply you with a proprietary driver with the latest fixes. So some of the people in that industry have chosen to just forego the proprietary driver and sponsor the development of a new one. This is what the reverse engineering timeline looks like, and it resulted in a driver rather recently. As far as I know, this driver is shipping now, in actual aircraft, and that should be taken as a vote of confidence.

To further illustrate where this driver is, here's a benchmark. It's just a random benchmark, basically, and it shows that the open source driver has essentially 80% of the performance of the proprietary one. I would say this is close to a worst case: if your application performs poorly, the situation can surely be improved. Or if you have a weird use case that maybe isn't supported by the proprietary driver, that's something that could be added as well.

So let's look at the last vendor here. They're sort of the last ones to the party, and not entirely willingly at the party, but there's some really exciting stuff to be said about ARM. Since essentially 2012, there's been a very slow-burning reverse engineering effort.
In 2012, a guy called Luc Verhaegen started reverse engineering the Mali 400 series of GPUs. They're pretty low-end, relatively simple GPUs. He created a prototype, which unfortunately was never truly open sourced. He gave some talks about it, and it did work, at least in some cases. But since the code was never really published, not a lot happened, and the effort sort of died out until 2017, when Qiang Yu decided to pick up the work.

Qiang Yu is a developer at AMD, which is interesting. It's a thing you see quite often in this space: a developer at one company is prohibited from contributing to an open source driver for his own company's hardware, so he develops a driver for another company's hardware instead, because he has the knowledge and he wants to make an open source driver. So in 2017, Qiang Yu picked this up for the Mali 400 series. That driver is called Lima.

In 2018, a new driver appeared for the Mali T series and G series of GPUs, called Panfrost. It was created by Alyssa Rosenzweig and Connor Abbott. These are essentially the current mid-range and high-end ARM GPUs, and they've been reverse engineered from scratch. Very recently, both Panfrost and Lima have landed in the kernel and Mesa repositories. So both are supported by open source drivers now; not fully supported, but supported. Currently the Panfrost driver runs Wayland and runs 3D apps. Collabora has decided to contribute towards it too: we contribute two full-time engineers working on this, trying to push it forward. But this started as a community process, and without the community we wouldn't be anywhere, essentially. If you're curious about what this looks like, we have a demo at our booth. You can come play some SuperTuxKart with us. We're all pretty terrible, so I'm sure you'd win.

So that's very exciting. But there's some more stuff coming down the line, and a big thing is OpenCL support.
And it's been a big thing for a long time, because it's a large step from not supporting it to supporting it, and community interest, unlike for 3D, is a lot smaller. So we depend on client work, essentially: a client would have to come to us and say, we want OpenCL, make this happen, here's a bag of money, and then we could go into development mode. Currently, however, a few of the drivers are seeing some interest and some work being done to support OpenCL: the Freedreno driver, the Nouveau driver, and the Etnaviv driver. These are mostly intended to be used in the embedded space.

The work itself has come in the form of enabling the most modern compiler intermediate representation that Mesa supports, called NIR. NIR is one part of enabling OpenCL. The other part is having an OpenCL compiler front end that works and is compatible with NIR, and that essentially means LLVM, and getting LLVM to support recent OpenCL features. Some work is being done in this space too. It's also not done, but hopefully we could see that OpenCL support merged into LLVM soon, maybe this year, maybe next year.

Then there is Vulkan Compute, which is already working for the two big driver developers in the Mesa space: Intel and AMD have Vulkan drivers, and their Vulkan drivers already support Vulkan Compute. That's very interesting, especially in how it relates to OpenCL. If you ask Khronos, the standards body responsible for Vulkan, OpenCL, and OpenGL, they will tell you that Vulkan Compute is not an OpenCL replacement. It's not meant to be. So that might be an interesting data point: even if the Vulkan Compute support in some of these drivers is pretty good, it may not be something you should pin your hopes on for solving your embedded compute issues.

And then there's SYCL. SYCL is a layer that is intended to be built on top of OpenCL, essentially. So think of it as CUDA, essentially.
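To make that CUDA comparison concrete, here's roughly what SYCL code looks like. This is a sketch: compiling it requires a SYCL implementation (for example Intel's DPC++; the compile command below is an assumption and varies per implementation), so here we only write the source file out.

```shell
# Write out a minimal SYCL program: the host code and the compute kernel
# (the parallel_for lambda) live in the same C++ source file.
cat > vadd.cpp <<'EOF'
#include <sycl/sycl.hpp>

int main() {
    float data[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    sycl::queue q;  // picks a default device (GPU, or CPU fallback)
    {
        sycl::buffer<float, 1> buf(data, sycl::range<1>(4));
        q.submit([&](sycl::handler &h) {
            auto acc = buf.get_access<sycl::access::mode::read_write>(h);
            h.parallel_for(sycl::range<1>(4),
                           [=](sycl::id<1> i) { acc[i] *= 2.0f; });
        });
    }   // buffer destruction copies the results back into data[]
    return 0;
}
EOF

# Compiling needs a SYCL toolchain; the command name is implementation-
# specific, e.g. with Intel's DPC++ it might be:
# dpcpp vadd.cpp -o vadd
```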
It's a single-source model: it allows you to compile both the compute kernel and your application from a single source file. It also solves some other issues. So that's something to look forward to, and it's also a standard backed by Khronos.

As for the bigger picture: some drivers are extremely mature and have been around for a long, long time. Some are newer, yet still very mature. The community drivers now all share the same code base, especially with Intel moving to the Gallium framework. It's very much the case that what benefits one driver will likely benefit the others as well, be it stability through testing or better performance through optimization. There's a lot to be gained for everyone when one of the vendors makes a contribution.

As for reverse engineering: going from starting to reverse engineer a GPU to having something actually upstream takes something like one to seven years. That's pretty hand-wavy, but it's a number, and the average is probably a lot closer to one or two years. And compute is still on the way, as it always has been. We'll see. Maybe next year, maybe the year after that.

But maybe the more interesting question to ask yourself is: why do you even care about running open source drivers? Why does it matter? My proprietary NVIDIA driver works just fine. It's great, it's performant, it supports all the use cases I have. Well, there are some really important things to think about here. If you want to support your product for a seriously long time, be it one year or 20 years, getting a vendor to actually support their proprietary driver is either going to be hard or going to cost you a lot of money. So that's an important question to have in mind when choosing what software stack you want to use, especially if you're developing actual physical products. And the performance of the open source drivers is mostly on par with the proprietary ones, sometimes better, sometimes worse.
It really depends on which vendor we're talking about. For Intel and AMD, being on par is certainly the case. For Vivante, it depends on your application. For ARM, I don't know; if it's working, we're very happy. The performance is maybe good in some cases, but it's not yet competitive with the proprietary blob.

Another very important question to ask yourself is: how are we going to debug this thing? If you have simple debugging, your time to market is going to be lower. That's just a fact. Getting the insight you need to solve an issue immediately really matters, and it matters most when you're in the most critical phase of development, like bringing a device up. If you don't have any insight, it'll just take you longer.

And lastly, having old hardware supported for a long time means that maybe you'll see new features added to your old hardware, especially when it comes to the community graphics drivers that share the Gallium framework. You get a lot of stuff for free, just because it's the same code base: if you make an improvement to one driver, it may become available to the other ones. And yeah, what's not to like about that?

And that's essentially it. That's everything I wanted to say. Does anyone have any questions?

[Inaudible audience question about OpenCL debugging tools.] There's barely support for OpenCL as it is, so I would say no, not yet. But the intention is, of course, to do it as well as possible. When you're developing support for a big feature like OpenCL, having the debug tools yourself that other people will later need is something that you want, right? You want your own development to be easy, and as a result, other people's development process will be easy as well. But as for the actual way to do it, I can't tell you that. I think that's the general thing about the compute space, especially within open source: there's a lot to be done, some of the work is underway, and some is yet to start. Any other questions? Yeah. For Panfrost?
It's upstream. Oh, it's already upstream? Yes, we're running the upstream version at our booth. So it's in Mesa already? In Mesa and in the kernel. Yep, it's there. You can run your normal desktop; we run GNOME Shell on our demo. [Question: any modifications for the demo?] No, none. It just works. I mean, it's not flawless, but it does work. Any other questions?

Sorry, louder. [Question about tuning for the benchmark shown earlier.] Not for that specific benchmark, no. I didn't look into it. I just ran a benchmark to have some numbers to show you that the performance, while not always better, is competitive, or in the right ballpark. I'm sure if you wanted better performance, it could be improved, for that specific case or others. Any other questions?

So the Panfrost driver targets both the G series and the T series of ARM GPUs. However, we're currently only using and testing against the T series. The G series is further out, I think. It's been reverse engineered to a large extent, but very little actual development has been done towards supporting that platform.

[Question about a feature-tracking project.] I mean, there is a project like that, but unfortunately I don't think Panfrost is on there. It's called Mesa Tracker. For the other drivers, like Etnaviv, you can have a look at that. Panfrost should maybe at some point be listed there as well.

All right, any more questions? In that case, I think we're done. Thank you all for coming.