Hello — okay, so I'll start. I'm Neil Roberts, I work at Igalia, and I'm going to be talking about embedded graphics drivers in Mesa. I'm going to give a brief overview of graphics in general, then another overview of the graphics stack in Linux, then present Mesa and the architecture of Mesa, and finally some of the embedded drivers that are available in Mesa.

Okay, so GPUs. The basic idea of a GPU, obviously, is that it's a device where you give it some graphics memory with an image in it, and it scans that image out onto the display device. That's the minimum a GPU needs to do. Obviously GPUs have been becoming progressively more complicated, so these days they're basically general-purpose processors that you can use to run arbitrary programs — we call them shaders in the graphics world. So yes, the GPU can basically run arbitrary programs, but they're specialized; they're different from the programs you run on the CPU, because the GPU is designed to be highly threaded, so it can operate on multiple inputs at the same time, and each of those threads also uses SIMD — single instruction, multiple data. So you can have many, many threads running at the same time, and each of those threads can do, for example, the equivalent of 16 operations at a time, whereas on the CPU you might have, say, four cores, and generally each core would only be doing one operation per instruction.
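To make the SIMD idea concrete — this is just a conceptual sketch in Python, not how any real GPU or driver executes anything — think of one "instruction" that operates on 16 data lanes at once:

```python
# Conceptual sketch of a SIMD add: one instruction, 16 data lanes.
# (Illustrative only -- real GPU SIMD happens in hardware registers.)

LANES = 16

def simd_add(a, b):
    """One 'instruction' that adds 16 pairs of values at once."""
    assert len(a) == len(b) == LANES
    return [x + y for x, y in zip(a, b)]

# A fragment shader thread might hold one colour component
# for each of 16 fragments at once:
reds  = [0.1] * LANES
boost = [0.5] * LANES
print(simd_add(reds, boost))  # 16 results from a single conceptual op
```

One thread issuing one such instruction does the work of 16 scalar operations, which is where the GPU's throughput advantage over a four-core CPU comes from.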
So yes, the GPU is multi-purpose, but highly optimized for highly parallelizable tasks. And while the GPU these days is generally completely programmable, it still contains a lot of fixed-function, graphics-specific capabilities. Obviously it's got the graphics connector to connect to the display, and it also has hardware to do things like texture sampling: when you render some geometry and you want to put an image on that geometry, it has special hardware to rapidly access textures and do filtering and scaling on them. It has other specific hardware features like primitive assembly: if you give it a list of vertices, it knows how to convert them into separate triangles, and, for example, if you draw a triangle that is partially off the screen, it knows how to cut that triangle up and generate more triangles so that the only triangles left are the ones on the screen. So it still has a lot of fixed-function pieces as well.

Okay, so that's the general idea of a modern GPU, and now I'll talk about the graphics stack in Linux. This is a slightly modified diagram from Wikipedia. At the top left you have your application, which for interesting applications is usually a game. Any normal application, even if it's not using any 3D, would use either the Wayland protocol or the X11 protocol to create its windows and handle its input, so an application using 3D will still use one of those protocols to talk to the thing managing the display — traditionally that's the X server, but nowadays it can be a Wayland compositor as well. The application communicates with that directly over an IPC mechanism. Then it also talks to Mesa. Mesa is just a library, so when I say "talking" I just mean it's making API calls, and it will use one of the standard graphics APIs, which basically means either Vulkan or OpenGL. The idea is that Mesa is an implementation of those two graphics APIs, so the application is written for one of those APIs, not directly for Mesa, and it can work on other platforms as well, such as Windows, with another implementation of the API. Mesa does the work of translating those graphics API calls into system calls to talk to the kernel driver, and the part in the kernel handles allocating the actual buffers in graphics memory and programming the registers on the GPU. Then there's the other side, on the left of the diagram: KMS. That's kernel mode setting.
That's the bit that sets up the display with the right resolution and configuration and tells it where to read from. So when you've finished rendering into a buffer with the GPU, that buffer is handed off to KMS, and KMS tells the display hardware to scan out from that particular buffer. Okay, so that's an overview of the Linux graphics stack.

The main thing I want to talk about is those two APIs, because that's the bit that Mesa handles. Let's start with a short history of OpenGL. The first version of OpenGL, OpenGL 1.0, was released in 1992. It was a very different beast in those days. It was designed around the graphics hardware of the day, which was very much fixed functionality — not like what I was saying before, where the graphics device is highly programmable. It really was just a fixed piece of hardware to draw triangles, with programmable registers to set, for example, the color for the triangles. The API reflected that. This is a short code snippet showing how you would use the original OpenGL API: it's lots of API calls. You have an API call to set the color for the subsequent vertices, and then an API call to set the position, or other attributes, for each of the vertices. As far as I understand, on the original SGI hardware that more or less translated into poking registers whenever you called these fixed-function APIs. Okay, so that has changed in OpenGL.
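The slide snippet isn't reproduced in this transcript, but the pattern is easy to mock up. Here's a toy Python re-creation of the immediate-mode style — the names mimic GL 1.x calls like glColor3f/glVertex3f, but this is not the real API, just an illustration of "one function call per attribute, per vertex":

```python
# Toy mock of the OpenGL 1.x immediate-mode pattern: the API is a big
# state machine, and every attribute of every vertex is its own call.
# (Hypothetical sketch -- not the real GL API.)
current_color = (1.0, 1.0, 1.0)
vertices = []

def color3f(r, g, b):
    """Set the color used for all subsequent vertices."""
    global current_color
    current_color = (r, g, b)

def vertex3f(x, y, z):
    """Emit one vertex carrying whatever color is currently set."""
    vertices.append(((x, y, z), current_color))

# One red triangle: four calls, and on the old hardware the driver
# could poke registers for each one.
color3f(1.0, 0.0, 0.0)
vertex3f(0.0, 1.0, 0.0)
vertex3f(-1.0, -1.0, 0.0)
vertex3f(1.0, -1.0, 0.0)
```

Drawing a model with thousands of vertices means thousands of such calls — exactly the per-call overhead that later versions of the API remove with vertex buffers.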
So, way back in 2004, OpenGL 2.0 was released, and that changed everything by introducing the concept of shaders. That started down the path of making the API programmable, to take advantage of programmable graphics hardware. Now, instead of just saying "I want this triangle to be in this color" or "I want these vertices to be transformed with this matrix", you can write a short program, give it to the OpenGL implementation, and say: when you want to work out the color, do these calculations for me, and that will be your color. And the same for transforming the vertices: you can say, transform them in exactly this way. OpenGL was very progressive at the time, because GLSL is a really high-level language, close to C. It supports much higher-level constructs, such as ifs and while loops, that weren't necessarily easy to do on the hardware of the day. But it was great that they did that — really forward-thinking, for the hardware that can handle it well now.

Okay, so this is just an example of a GLSL shader, just one line. gl_FragColor is a variable that represents the color of the fragment we're calculating — this is a fragment shader, so it calculates the color of a fragment. It assigns to gl_FragColor to set the ultimate color of the fragment, and it's just doing a little calculation to work out a color based on the position of the fragment on the screen.

After GL 2.0, OpenGL has been continuously progressing and getting more and more biased towards the programmable part rather than the fixed-function part. More and more of the new functionality is implemented in GLSL — in the entirely programmable part — and more and more bits of the fixed-function API that were in OpenGL 1.1 are slowly being deprecated and disappearing. So it's really just becoming an API that helps you offload calculations onto the GPU.

The other major thing that's changed since OpenGL 1.0 is how you specify your input data. For example, when you specify your vertices, rather than calling a function for each vertex, you really want to minimize the number of function calls you make. So instead you tend to put everything in a buffer. You might have a really big buffer containing all the data for your vertices, and instead of calling a function for each one of those vertices, you just point OpenGL at the buffer and make a couple of API calls to describe the layout of the vertex data in that buffer. Then you can reuse that layout and that buffer multiple times. So when you want to draw a complicated model with thousands of vertices, instead of making thousands of calls, you just have an object representing that state and that buffer, and you say: okay, draw that thing again, like you did last time. So there's a lot less API overhead.

As well as desktop OpenGL — the original OpenGL — at some point OpenGL ES was released. That's for embedded devices. The idea is that because OpenGL has such a long history, it has built up a lot of stuff that doesn't really make sense on modern hardware, or never really made sense on hardware at all and is just a convenience library — which is a strange thing to have in a specification that's meant to have multiple implementations. So the idea with OpenGL ES is to remove all the stuff that's difficult to implement on embedded hardware, and all the legacy stuff that in general isn't really related to the hardware. There are two versions.
There's OpenGL ES 1.0, which is similar to the OpenGL 1.0 idea of having fixed-function hardware, and then there's GL ES 2 and 3 — they're basically the same idea — which use the programmable hardware. These days OpenGL ES and the modern versions of OpenGL really have the same ideas in mind, because OpenGL 4, up to 4.6, the latest version, is trying to get rid of the old ways of doing things in the same way that OpenGL ES removed them. So they're converging quite a lot. The difference is that OpenGL has a lot of things that are for high-powered GPUs, and OpenGL ES is more reluctant to add things to the API that are difficult to implement on low-power embedded devices. Okay, so that's the summary of OpenGL.

Now we have the shiny new API as well: Vulkan. Vulkan was released in 2016. It's basically a progression from the frustrations developers had using OpenGL. It's a completely clean break from the OpenGL API: it removes all the old things that don't make sense anymore, and the main idea is that it's as close to the hardware as possible. It really is just the minimum abstraction you can have over the different hardware. It tries not to have any intelligence, not to do things automatically for your application — it just presents the hardware as it is. But with great power comes great responsibility, so with Vulkan the application has the responsibility to manage the buffers and the synchronization by itself. For example, OpenGL does a lot of helpful magic for you: if you render to a texture and then later use that texture as a source in a subsequent render, GL will know that you were previously writing to it and that now you want to read from it.
So it will do whatever magic is necessary to make the reading block until the writing is finished. The problem with that, of course, is that there's a lot of magic going on behind the scenes in OpenGL, and it's difficult for an application to know exactly what's going on — so it's difficult to get the most efficiency out of the hardware. Vulkan, on the other hand, doesn't give you any guarantees: if you write to a texture and then try to read from it before it's ready, it will just go wrong. It's your problem. You have to explicitly tell it all the synchronization points, and when you allocate a buffer you have to make sure that buffer stays alive until the GPU is finished with it. So there's a lot more power, and of course Vulkan is harder to use, but it gives you a lot more opportunity to take advantage of the hardware. These days I think it's the right way to go, because when you're using OpenGL with a modern application, most of the time you're going to be using some upper layer, like Unity or whatever, that presents a simplified interface to the game developer. So it's better to put all the common management of buffers and so on into something like Unity, rather than making all the driver implementers implement it and taking the flexibility away from application developers.

The other thing about Vulkan is that it basically replaces both OpenGL and OpenGL ES. It was designed with embedded hardware in mind right from the outset. I think when they were designing Vulkan, they made sure it doesn't do anything that doesn't also make sense on embedded hardware, because a lot of embedded hardware works in a different way, with a very limited amount of memory — for example, rendering only a small section of the frame at a time, called a tile, to cope with the limited memory.
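That implicit-versus-explicit difference can be modelled with a toy sketch. This is NOT the Vulkan or OpenGL API — just hypothetical Python showing who is responsible for the wait:

```python
# Toy model of implicit (GL-style) vs explicit (Vulkan-style) sync.
# Hypothetical illustration -- not a real graphics API.

class Texture:
    def __init__(self):
        self.write_done = False

def gpu_render_to(tex):
    tex.write_done = False   # GPU starts writing asynchronously

def gpu_finish(tex):
    tex.write_done = True    # the write completes

# GL-style: the driver inserts the wait for you behind the scenes.
def gl_sample(tex):
    if not tex.write_done:
        gpu_finish(tex)      # hidden "magic": block until the write lands
    return "valid data"

# Vulkan-style: no magic. The app must have issued the barrier itself.
def vk_sample(tex, barrier_issued):
    if not barrier_issued and not tex.write_done:
        return "garbage"     # your problem: you read too early
    gpu_finish(tex)
    return "valid data"
```

The GL path always works but hides a potentially expensive stall; the Vulkan path only works if the application placed the barrier, which is exactly the trade of convenience for control described above.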
So Vulkan was designed from the outset to make sure that kind of thing is fine in the API. The other thing it does is this: when you implement a certain version of Vulkan, there's a minimum amount of support you have to provide, but even features that are in Vulkan core can be optional. So if there's some complicated hardware feature that's difficult to implement on an embedded device, you can still implement Vulkan — you just say, I don't support this particular thing. So it really is the future API, the one to rule them all. Okay, so those are the two main graphics APIs.

As I was saying, in Linux land these APIs are implemented by Mesa, so now I'm going to present Mesa. Mesa is an open-source implementation of the OpenGL and Vulkan specifications. It works on a variety of hardware, and it's a user-space library — as I was just saying, a library that interacts with the kernel driver via system calls. It was originally started in 1995 to implement OpenGL 1.0, and originally it only did software rendering, but it was designed in such a way that it could support hardware rendering in the future, and that has really paid off: in modern Mesa the hardware drivers are the most important thing, and the software implementations are more of a fallback if you can't get the hardware to work. Now, about 20 years later, we're around version 19.2. Mesa contains a lot of drivers for all sorts of different hardware. I think the main original open-source one is the Intel driver, but there are now open-source drivers for AMD Radeon hardware, NVIDIA, and embedded devices like Broadcom and Qualcomm. And of course it still has software renderers — multiple of them now, with different advantages.

Okay. For a long time Mesa was really behind the curve with the OpenGL spec — it took a long time to catch up to OpenGL 3 and to implement geometry shaders — but lately it really is on the cutting edge, and the developers who work on Mesa are directly involved in Khronos, the organization that looks after OpenGL, so they're helping to shape OpenGL as well. Mesa now supports OpenGL 4.6, the latest version, and the latest versions of OpenGL ES and Vulkan as well. When I say Mesa supports these, that doesn't necessarily mean that every driver in Mesa supports them. Each driver advertises which extensions it supports, and Mesa uses those capabilities to work out which version of OpenGL to advertise when you use that particular driver. So different drivers expose different versions of OpenGL. There's a handy website for this — it's called mesamatrix.net, I think. If you go there you can see which versions each driver in Mesa supports. You can see that Mesa core supports GL 4.6, and I think the Intel driver, i965, is currently the only one to support GL 4.6.

Okay, so now let's talk about the internal architecture of Mesa. You have your application on the left, using one of the graphics APIs — in this example I'm going to show the OpenGL API, so the application just makes OpenGL function calls. Then there's a state tracker in Mesa which does the initial tracking of the state for those function calls — for example, when you bind a texture, it keeps track of which texture was bound. I have to say, the OpenGL API is implemented like one big global blob of state.
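The "big global blob of state" model is easy to picture with a toy sketch — hypothetical Python, nothing like Mesa's actual data structures: each API call flips a switch on one shared blob, and a draw consumes a snapshot of whatever is currently set.

```python
# Toy sketch of a GL-style state tracker: every call flips a switch on
# one big global state blob, and a draw consumes whatever is set.
# (Hypothetical illustration -- not Mesa's real state tracker.)

state = {
    "bound_texture": None,
    "color": (1.0, 1.0, 1.0),
}

def bind_texture(tex_id):
    state["bound_texture"] = tex_id   # tracker remembers the binding

def set_color(r, g, b):
    state["color"] = (r, g, b)        # stays set for all later draws

def draw():
    # A driver callback would receive a snapshot of the current state.
    return dict(state)

bind_texture(42)
set_color(1.0, 0.0, 0.0)
first = draw()
second = draw()   # same switches still set -- nothing was re-specified
```

Snapshotting the blob into immutable sets of switches like this is also, roughly, what makes the state cacheable further down the stack.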
It's like: you set a color, and then that's the color for all the rest of the operations. It's really like flicking switches on a big graphics machine, and the initial Mesa state tracker keeps track of which switches you've switched on that big state device. Then there's the DRI — the direct rendering interface — which is like a big struct with a load of function pointers that call back into the driver. So the Mesa state tracker really just does minimal handling of the API, translating it down into something more manageable for the drivers — for example, it unifies OpenGL ES and OpenGL into the same callbacks. The Intel driver, because it was one of the first drivers in Mesa, directly implements that big table of function pointers, the DRI interface. But the more modern way to implement a driver in Mesa is to use a thing called Gallium. Gallium is meant to be a really low-level graphics API that you can implement: your driver implements this low-level API, and the Gallium module on top of that translates the DRI calls — effectively translating your upper-level API, your OpenGL — into this really low-level API, which is much more convenient to implement in your back-end driver. I think one of the things Gallium handles is that when you program all this state, that's not really how the hardware works.
What the hardware tends to want is a buffer with a structure containing all the state, so you just tell the hardware: now look at your state in this buffer. So whenever something programs some state, you want to be able to put that into a buffer, and you might have multiple different states in an application, which might end up being multiple different buffers. I think Gallium can help you cache the different sets of switches that have been programmed through the OpenGL API, to make it easier to translate them into the different sets of buffers that you tell the hardware to use. So there's a lot of common code for state caching that Gallium can handle for you — it's common code, so there's no need to implement it in every driver.

Mesa works out which driver to use with a kernel API that queries the PCI ID; it has a big table of the drivers and picks one based on that. If that fails, it can always fall back to one of the software renderers.

I think this slide covers what I said before: modern hardware is basically just a big programmable multi-purpose machine, so when you're working in graphics, a large part of what you're doing actually has nothing to do with graphics — you end up working on compilers instead. A major part of the driver is a compiler that compiles GLSL to your native hardware instruction set. This is an overview of how that works in Mesa. You have your GLSL shader coming in at the top left. Mesa immediately converts that into an abstract syntax tree, which is just an in-memory tree representation of the shader. It converts that down to GLSL IR, which is a high-level intermediate representation: it's still a tree, but it's instructions, and when you have a variable name it knows which variable you're actually referring to. That gets lowered again to something called NIR — I think that's "new intermediate representation" — which is much lower-level: it's no longer a tree, it's just a sequence of instructions, and, if you know some compiler theory, it can use SSA as well. So that's really quite a low-level representation of the shader, and many of the optimizations in Mesa happen at the NIR level. By the time you've finished with your NIR, you've got a really quite well-optimized representation of the shader. A lot of the drivers just take that NIR representation directly and translate it to their machine instructions — that happens, for example, in the Freedreno driver and the Intel driver. But I think a lot of the Gallium drivers — perhaps it's the old way of doing it — use another intermediate language called TGSI, which I guess is meant to be an intermediate representation just for Gallium, and the TGSI path can do even more optimizations on the shader. Some drivers translate directly from TGSI, and yet other drivers do even another step and pass the TGSI into LLVM. LLVM is a separate compiler project which is really meant for CPUs, but it obviously has a whole lot of work going on and a lot of optimizations, so if drivers can take advantage of that, I guess that's good.
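The tree-to-SSA lowering described above can be sketched with a toy example — hypothetical Python, nothing like Mesa's real IRs, just the shape of the transform: an expression tree is flattened into a linear list of operations, each defining a fresh SSA value that later instructions can reference.

```python
# Toy lowering pass: flatten an expression tree (GLSL-IR-like) into a
# linear SSA-style instruction list (NIR-like). Hypothetical sketch --
# Mesa's real IRs are far richer than this.

counter = 0

def fresh():
    """Allocate a new SSA value name; each result is defined exactly once."""
    global counter
    counter += 1
    return f"%{counter}"

def lower(node, out):
    """Post-order walk: lower children first, then emit this op."""
    if isinstance(node, (int, float, str)):
        return node                      # constants/inputs stay as-is
    op, lhs, rhs = node
    a = lower(lhs, out)
    b = lower(rhs, out)
    dest = fresh()
    out.append((dest, op, a, b))
    return dest

# A gl_FragColor-ish expression: (pos_x * 0.5) + (pos_y * 0.5)
tree = ("add", ("mul", "pos_x", 0.5), ("mul", "pos_y", 0.5))
insns = []
result = lower(tree, insns)
for dest, op, a, b in insns:
    print(f"{dest} = {op} {a}, {b}")
# %1 = mul pos_x, 0.5
# %2 = mul pos_y, 0.5
# %3 = add %1, %2
```

Note how the assignment from the source disappears: there are no variables left, just a sequence of operations whose results feed later operations — which is what makes the later optimization and register-allocation passes tractable.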
But the problem is that LLVM is meant for CPUs, so sometimes it does things that aren't appropriate for the GPU. Because of the way the GPU uses SIMD, it doesn't do loops and branches in the same way, so if LLVM assumes it can just do jumps, that's not going to work for the GPU. I think that approach is generally being phased out — the Radeon driver, for example, which uses LLVM, has a project for a new compiler that doesn't use LLVM.

Okay, here are some examples of those IR representations. If we take this simple GLSL shader, which calculates the color using a logarithm — I don't know why, but it does — and convert it into GLSL IR, it looks something like this. You can see it's still quite close to the GLSL: it's represented as a tree, so that addition operation is represented as a tree with its subexpressions, and so on. Eventually that gets converted down to the NIR representation, and now you can see it's no longer a tree — it's just a sequence of operations. Each operation has a result value, and subsequent instructions can use the results of other operations to do further operations. And you can see there's no assignment anymore — the assignment has disappeared. It tries to get rid of all the variables and just produce a linear sequence of operations as much as possible. Eventually that gets converted down to the hardware instruction set — this is an example from the Intel i965 driver. You can see that now there are actual register numbers in there: part of the work of converting from NIR is to allocate register numbers for each of those intermediate operations. And that 16 in brackets at the end of every opcode means: do this operation 16 times simultaneously.
So that's what we mean by a SIMD instruction: each add there does 16 adds simultaneously. If you imagine the fragment shader, it could work on a 4x4 grid of fragments and have a single thread calculate the color for all 16 of those fragments at the same time.

Okay, so on to the point of the talk, which was embedded drivers — a brief mention of the embedded drivers that are already in Mesa. We have the Freedreno driver, which is for the Qualcomm Adreno devices. It was started in 2012 by Rob Clark, and it's a reverse-engineered driver, which means that Rob had the existing closed-source binary driver, gave it example shaders, examined what compiled code came out of the binary driver and what registers it poked, and made enough tweaks to the inputs, comparing what happened on the outputs, to work out what the actual hardware does — and with that information implemented an entirely new driver. The driver is now in a really good state: it implements GL 3.1 and GLES 3.1, and it's in active development by Google and by us at Igalia. It's in quite a few devices — it's in the Nexus phones and the Pixel 3a, for example, and you can buy development boards with these Qualcomm chips in.

There's also the VC4 driver, which is for the Broadcom VideoCore IV GPUs. The main interesting use for that is in the Raspberry Pi. It was written by Eric Anholt when he was working at Broadcom, and unlike the Freedreno driver, this came out of Broadcom releasing the documentation for the GPU, so there was no need for reverse engineering — it was implemented from the spec. That one's also in a good state.
It supports OpenGL 3.1, and it's also under continuous development — Igalia is working on that as well. In the same vein, we have a driver for the VideoCore VI GPU, which is in the latest Raspberry Pi. Apparently it's a very different architecture from the VideoCore IV, so it really needs a new driver. That was also started by Eric Anholt, and it's being continued by Igalia.

And there's also the Panfrost driver, for the Arm Mali GPUs — it's used in some Chromebooks. This is another reverse-engineering effort, started by Alyssa Rosenzweig. It has recently been merged into Mesa master, and apparently Arm has made some contributions to it too now. At the last XDC — the X.Org Developers Conference, in 2019 — there was a lightning talk demoing Panfrost running desktop GL 2.0. So it's looking pretty good, and they're looking to support OpenGL 3 and Vulkan. And here's an image which obviously proves that Panfrost works, because there's a picture of SuperTuxKart — that's a picture I stole from the lightning talk. Okay, I think that's all. Thanks — are there any questions?

[Audience] You didn't mention Vivante in your list of drivers. Is that dead? Basically, what's the state of the Etnaviv driver?

[Neil] Yeah — I didn't omit it on purpose, I just don't know enough about it.

[Audience] Okay, thank you. Thanks for the talk. Can you go to slide five? Yeah, this one. Can you describe — let's say we have some Wayland compositor, and it uses hardware compositing. Do we have some connection between the Wayland compositor and, for example, Mesa? Or do we just communicate using this lower layer?

[Neil] Yes — the compositor is using Mesa as well. I guess I could have drawn a line from the compositor up to Mesa 3D too. The way that works is that with Mesa these days you can just render directly into a buffer, so it doesn't really need to be related to the display: you can say, allocate enough graphics memory to contain the image, and then render into that, please. So it's possible to have an application that doesn't rely on a compositor or display server. You could consider a Wayland compositor to be an application that, to begin with, doesn't necessarily use the display — it just renders into a buffer — but it also has the know-how to hand that buffer to the display controller and have it scanned out.

[Audience] Okay, thanks. What about Tegra — are we stuck with the blob, or has the documentation NVIDIA recently started to release helped?

[Neil] Again, I'm afraid I don't have enough information to give an update on that.

[Audience] Very quickly, to answer the question on Tegra: Nouveau does support, I think, Tegra K1 and Tegra X1 to their full extent, in the kernel and in Mesa as well. After that it's a little bit stale, sadly, but for the older SoCs you actually have good support.

[Audience] Thank you. What is the motivation behind Mesa 3D? For embedded systems we normally use the native drivers from the vendor — why should I use Mesa? What are the benefits?

[Neil] Oh, do you mean what are the benefits of Mesa compared to just using a binary driver?
[Audience] Yeah.

[Neil] Well, you could get into a philosophical question of whether you value open source — I'd say having an open-source implementation is great in itself. But also, if you're a game developer: I get the impression that there are often bugs in closed-source drivers, and it's difficult to get support if you're just an independent game developer. I hear stories that it's really helpful to be able to look into the code and see exactly what the driver is doing, to figure out "why isn't this working? I'm doing what the spec tells me to do". So I think having the source is a practical help as well, for working out what the driver is doing.

[Audience] Do you have benchmarks compared to the native drivers? Does it perform well?

[Neil] I don't have any benchmarks off the top of my head, but I guess it depends on the driver. I don't know about the embedded drivers, but the Intel driver, for example, is developed by Intel, so they know the hardware and it's really performant — and I guess the Broadcom driver too. So it varies with the hardware. The reverse-engineered projects are always going to be a step behind, just because of the way it works: you need to wait until the vendor driver is out, and it's really hard to do something better than the blob driver when you're just prodding the blob driver to see how it works. So it depends on the driver, but I think most of them are good.

[Audience] Thanks. Hi — the issue of open-source graphics drivers is something that's very close to my heart.
I work mostly with Android, doing Android porting, and the main advantage of having an open-source driver is that you get the source code: you can compile it for whichever kernel and whichever platform you want. You're not stuck with just the version that was compiled by the vendor. From my point of view, that means I can choose my kernel and then compile the graphics driver against whichever kernel I want — I'm not stuck with whatever the vendor gives me. And the other big news here, I think, is that Google is really behind this. As far as I understand, Google has mandated that all Pixel devices will run Mesa 3D — they don't want any closed-source graphics drivers on Pixel devices. That's certainly the case for the Pixel 3a, as you had on the slide, and as far as I understand — though I haven't checked — it's also the case for the Pixel 4. That'll be interesting to check out. So yeah, open-source graphics have come a long way in the last, whatever, seven years.

[Neil] Thank you. I think we've run out of time. Thank you.