Hey everyone, welcome to "Open Source Graphics: An Introduction to Converting Bits to Triangles". I'm going to be talking about 3D graphics for the next 13 minutes or so, so let's get started. First of all, I am not yet an experienced graphics developer, so take my words with a grain of salt and please correct me if I'm wrong. I have a lot to cover during the next 13 minutes, so I might be a little quick. Let me know in the chat below if I need to go slower and I'll try to pace myself.

All right, so this presentation is going to be a brief overview of the Linux open source graphics stack. I will not be covering how to develop a GPU driver or how to use the OpenGL APIs, and I won't be explaining what GPUs are and how they work. I will specifically be talking about the Linux space.

For the rest of the talk, I would like you to visualize 3D graphics as a pipeline. It really helps, because it essentially is a pipeline in the hardware, and all of the software built around it works like a pipeline too. Keep that visualization in your head as we go along.

So what are the parts of this pipeline? There are three big parts: the graphics APIs, the user space drivers, and the kernel space drivers.

So the graphics APIs, what are they? The graphics APIs are essentially the entry point for graphics applications and libraries, and they abstract the configuration and manipulation of the GPU pipeline. Parts of this 3D pipeline are configurable and some parts are programmable. We have a couple of libraries and APIs that allow you to configure and manipulate this pipeline, such as OpenGL, Vulkan and Direct3D. Like I said, parts of the pipeline are programmable, through a separate programming language such as GLSL or HLSL. These programs are passed as part of the pipeline configuration and are compiled by the drivers themselves to generate hardware-specific bytecode. This bytecode is similar to the machine code you would see on the CPU side, but it's specific to every single GPU out there.

We have two philosophies in terms of graphics APIs which are very commonly used these days: one of them is OpenGL and the other one is Vulkan. Vulkan is the new kid on the block, whereas OpenGL is the more mature API that's been around for a while. OpenGL tries to hide the GPU internals as much as possible, whereas Vulkan allows for much more fine-grained control. Vulkan moves a lot of the complexity from the driver into the Vulkan application, which means more work for you as an application developer but less work for the CPU, which essentially leads to improved performance on CPU-bound workloads. If you'd like to be a little lazier, I would recommend using OpenGL, but if you really want to go after every single frame, Vulkan is the way to go.

Since GPUs are complex beasts, the drivers are pretty complex as well. We do not want all of this complexity within the kernel, and we do not need to run all of it in a privileged context, which is why we tend to separate the responsibilities into user space and kernel space. First of all, I am going to be talking about kernel drivers. The kernel drivers specifically deal with memory management, command stream submission and scheduling, and interrupts and signaling. Kernel drivers that have an open source user space driver live in the Linux tree under drivers/gpu/drm.
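To make the "programmable pipeline" idea a bit more concrete before we move on, here is a minimal sketch of the kind of programs I just mentioned: a trivial GLSL vertex and fragment shader pair, embedded as C strings the way an OpenGL application typically carries them. This is my own illustration, not taken from any particular project.

```c
/* A trivial pair of GLSL shaders, embedded as C strings. An OpenGL
 * application passes these to the driver at runtime, and the driver
 * compiles them into GPU-specific bytecode. */

/* Vertex shader: runs once per vertex and produces its position. */
static const char *vertex_src =
    "#version 330 core\n"
    "layout(location = 0) in vec3 position;\n"
    "void main() {\n"
    "    gl_Position = vec4(position, 1.0);\n"
    "}\n";

/* Fragment shader: runs once per covered pixel and produces its color. */
static const char *fragment_src =
    "#version 330 core\n"
    "out vec4 color;\n"
    "void main() {\n"
    "    color = vec4(1.0, 0.5, 0.2, 1.0); /* a flat orange */\n"
    "}\n";
```

At runtime these strings are handed to glShaderSource() and glCompileShader(), and it is the user space driver underneath that turns them into the hardware-specific instructions we'll get to later in the talk.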
Kernel drivers that interact with a closed source user space driver, on the other hand, live out of tree. Those will never be accepted into the kernel, and I am not going to be talking about them.

One of the responsibilities of kernel drivers that I mentioned earlier is memory management. You have two frameworks within the kernel for this: one of them is GEM and the other one is TTM. GEM is what all the kernel drivers use, except for VMware's, which uses TTM. GPU drivers using GEM typically allocate buffer objects. But before we go further: what are buffer objects? Buffer objects are essentially memory regions within the GPU that allow you to upload graphics data such as textures and vertices. They allow for a lot of other things too, but for the sake of simplicity, for this talk we are just going to imagine them as memory regions where you can upload data.

Allocating and releasing these buffer objects is done through ioctls. An ioctl is essentially an API call into the kernel with multiple parameters, which allows interaction with kernel space drivers. These ioctls also provide for the submission of command streams. Command streams are essentially jobs that run on the GPU; they are full job descriptions, and one of the responsibilities of a kernel driver is to make sure that the buffer objects containing the data required by a job are properly mapped while the command stream runs.

So once you've submitted your job, it doesn't actually run immediately; it gets queued. This is primarily because several processes might be using the GPU in parallel, or the GPU might simply be busy when the request comes in. Each driver has an ioctl for command stream submission, but before you can submit command streams, you also need to describe their dependencies, since command streams can depend on each other. The user space driver knows the inter-command-stream dependencies, and the scheduler needs to know about these constraints as well. So DRM provides a generic scheduling framework called DRM sched, which lets you describe these dependencies in the kernel and then schedules the jobs onto the GPU in the right order.
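To make the buffer object and ioctl story a bit more concrete, here is a minimal sketch of allocating a buffer through the generic "dumb buffer" ioctl that modesetting-capable DRM drivers expose. Real 3D drivers use their own driver-specific allocation ioctls, but the pattern is the same: fill in a struct, make one call into the kernel, and get an opaque handle back. This example is my own, and the device node path is an assumption.

```c
/* Minimal sketch: allocate a "dumb" buffer object via a DRM ioctl.
 * Build with: cc bo.c $(pkg-config --cflags --libs libdrm) */
#include <fcntl.h>
#include <stdio.h>
#include <xf86drm.h>   /* drmIoctl() and the DRM uapi headers */

int main(void)
{
    /* Device node path is an assumption; it may differ per machine. */
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct drm_mode_create_dumb create = {
        .width  = 640,
        .height = 480,
        .bpp    = 32,   /* bits per pixel */
    };

    /* One ioctl into the kernel driver: it allocates GPU-accessible
     * memory and fills in an opaque handle plus the size and pitch. */
    if (drmIoctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create) < 0) {
        perror("DRM_IOCTL_MODE_CREATE_DUMB");
        return 1;
    }

    printf("buffer handle %u, size %llu bytes, pitch %u\n",
           create.handle, (unsigned long long)create.size, create.pitch);
    return 0;
}
```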
So the next thing we're going to talk about is user space drivers. The roles of a user space driver are exposing one or several graphics APIs, like we talked about earlier, such as OpenGL, OpenGL ES and Vulkan. They maintain the API-specific state machine: as we discussed, the graphics pipeline is configurable, and we need to keep this configuration in memory, which is one of the roles of the user space driver. They are also responsible for compiling shaders into hardware-specific bytecode, as well as creating, populating and submitting command streams. Command streams are, as we just talked about, the jobs that get run on the GPU, but their dependencies are actually specified in user space. One other role of a user space driver is interacting with the windowing system. This is exposed through APIs such as EGL and GLX: EGL is used by Wayland clients, and GLX is the API used by X11 clients. Everything good so far? Awesome.

We have a couple of approaches for implementing graphics APIs, the first one being OpenGL. The way it works in Mesa is that Mesa has a framework for writing drivers called Gallium, which acts as a state tracker. Gallium takes over all the responsibility for state tracking and leaves the responsibility of interacting with the kernel to the graphics driver, which means that as a graphics driver developer, you usually do not need to know anything about state tracking. Vulkan is different: Khronos, the consortium that maintains these APIs, has its own driver loader. Mesa just provides the Vulkan drivers, and they are loaded by this Khronos loader; there is no common abstraction for Vulkan drivers. I'm not going to talk about pre-Gallium drivers, simply because most modern drivers in Mesa use the Gallium framework.

All right, so let's dive into Gallium a little bit. Like I said earlier, 3D graphics is essentially a pipeline. Functionality exposed by the kernel through ioctls is consumed by drivers within Mesa, such as Panfrost, Etnaviv, Freedreno and Intel's Iris. This functionality is usually things like memory mapping, as well as figuring out command stream dependencies. All of it is exposed through virtual functions to Gallium3D, and Gallium3D leverages these virtual functions to provide a framework on top of which an OpenGL implementation can be built. So the OpenGL implementation consumes the Gallium framework, which consumes the user space drivers beneath it, which in turn consume the kernel space drivers beneath them. The user space drivers, the Gallium framework and the OpenGL implementation are all part of Mesa 3D, and everything else is part of the kernel. Those are the only two moving parts you have to keep in mind when it comes to the open source graphics stack.

So that was the OpenGL story. Moving on to the Vulkan story: like I said, there is no common framework, and the driver loader simply calls the right virtual functions in the driver itself. That is it; there is no state tracking functionality exposed on the Vulkan side of things.

One of the other crucial aspects of the user space drivers is shader compilation. As I said, part of the pipeline is programmable, and compiling these programs is a pretty crucial aspect. It usually involves transformations and optimization passes. What do I mean by that? Something like GLSL would be transformed to SPIR-V, which is an intermediate representation, which would then go through an optimization pass before being converted to NIR, yet another intermediate representation, which is then optimized once again before being consumed by the driver compilers. These driver compilers transform either TGSI or NIR into one of the instruction sets actually understood by the GPU we're targeting, optimize it once more, and finally compile it into bytecode. This bytecode is what actually gets run on the GPU at the end.

All right. So now we've submitted a job and it has run. What happens next? Once the job is finished on the GPU, the hardware reports the job status, "I'm done", through an interrupt. This information has to be propagated to user space, essentially to allow for synchronization between the hardware and the software. This is done through the concept of fences. Fences are exposed as sync objects to user space, and fences can be placed on buffer objects as well as command streams.
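Before we look at how jobs chain together, here is a minimal sketch of what a sync object looks like from user space, using libdrm's syncobj helpers. This is my own illustration: so that it runs standalone, the sync object is created already signaled; in real life the driver's command stream submission ioctl would attach the fence and the kernel would signal it when the GPU finishes the job. The render node path is an assumption.

```c
/* Minimal sketch: create and wait on a DRM sync object (a fence handle).
 * Build with: cc fence.c $(pkg-config --cflags --libs libdrm) */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <xf86drm.h>   /* drmSyncobjCreate()/Wait()/Destroy(), from libdrm */

int main(void)
{
    /* Render node path is an assumption; it may differ per machine. */
    int fd = open("/dev/dri/renderD128", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Create the sync object pre-signaled so this sketch completes.
     * A real driver would create it unsignaled, pass the handle to its
     * submission ioctl as an out-fence, and let the kernel signal it. */
    uint32_t syncobj;
    if (drmSyncobjCreate(fd, DRM_SYNCOBJ_CREATE_SIGNALED, &syncobj)) {
        perror("drmSyncobjCreate");
        return 1;
    }

    /* The wait timeout is an absolute CLOCK_MONOTONIC deadline. */
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    int64_t deadline = (int64_t)ts.tv_sec * 1000000000 + ts.tv_nsec
                     + 1000000000; /* give up after ~1 second */

    /* Blocks until the fence is signaled, i.e. the GPU job is done. */
    int ret = drmSyncobjWait(fd, &syncobj, 1, deadline, 0, NULL);
    printf("wait returned %d\n", ret);

    drmSyncobjDestroy(fd, syncobj);
    return 0;
}
```

The same handle that one job signals as an out-fence is what a second job would list as an in-fence, which is exactly the chaining described next.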
The way to look at fences is that there are essentially two kinds: in-fences and out-fences. When my job finishes, it can signal an out-fence, and another job can link to that out-fence as an in-fence. Once the in-fence is triggered, my second job can run.

So that's about it, really. That's the big picture when it comes to the Linux open source graphics stack. The GPU topic is quite vast, but if you would like to start working on this, I would recommend finding a driver, finding a feature that is missing or buggy, and sticking with it until you get it working. It takes a lot of patience, but fortunately Mesa has really good testing tooling, namely Piglit and dEQP. If you look for a test covering the functionality that's missing, you will almost always find one, and that allows you to basically do test-driven development, which makes your life a lot easier when writing new features for a driver. I'm going to link some resources in the slides; I would highly recommend you look them up and read through them. Since it is quite a vast topic, it can feel very overwhelming in the beginning, but a little bit of patience and a little bit of time really does help you grasp everything. And that's it. Does anyone have questions? I guess I'm done a little earlier than I thought I would be, but I can take questions for the next 10 minutes. Thank you so much.

First of all, thanks, Ron. Are there questions? I haven't seen any in the shared notes. We can clean up there, but I bet there's something in the chat. So, everybody, help us moderate. What questions do we have for Ron? Are you still living in Barcelona? Yes. That's where we last met, which is awesome. I'll come over again. Yes, please do. So, are there any real questions from the audience? Dear chat, over to you.

So, I think Jonathan asked where PipeWire fits into the pipeline. Okay, so I do not work on screen recording, but my understanding is that PipeWire would use a Wayland protocol, or would interact with Wayland, in order to do frame capture.

All right. And yeah, quick interruption, commercial break: right after this session we'll have the last two working group presentations. So please don't abandon us, stay around. It's going to stay exciting; not as exciting as the graphics bits, of course, but a different kind of exciting. More questions? Feel free to send me an email as well, in case there are questions later on, or feel free to hit me up on IRC; I'd be happy to take questions there. IRC doesn't bite.

There's something actually in the question document now. What about Android, is that the same? Hopefully in the future? Yes. Android has a different graphics stack overall, but I believe there are plans to bring Mesa to Android in order to be able to use something like Wayland on an Android device.

All right. And then the second question here, we still have almost five minutes: can we get those end-of-presentation links in a clickable format? Right. So the slides in that presentation were actually a little different, but I have uploaded the new slides now, and at the end, yes, there you go, I put them in a nice clickable format. So, when we upload the slides... Clicky, clicky. Awesome. Yes.

Then there's a question; maybe we'll take the fourth question first, changing the order: the drivers, are they in C or C++? Everything is in C. All right.
And then: exciting projects for the future? That one maybe deserves a longer answer. Or maybe not. I can't think of anything that I can discuss right now, so the future will be either boring or secret. All right.