All right, so let's continue with more kernel stuff, more driver stuff. Please welcome my colleague Paul on DRM/KMS driver-side APIs. Thank you guys. So contrary to what is on the slide, this is not FOSDEM 2033, but forgive me for that. Yeah, I will start with a little bit of introduction about myself. So I'm Paul Kocialkowski. I'm working at Bootlin. We are an engineering service company working mostly on Linux, and also on bootloader and build system aspects. I've been working on graphics and multimedia topics, essentially on the DRM and V4L2 frameworks of Linux. I live in the southwest of France. And before I continue with the slides, I will make a little bit of a disclaimer. These slides are a little bit of a reference manual to the DRM/KMS internal API. It's something I would have wanted to have around when I wrote my first DRM driver. So this is intended towards a specific audience, which is people who will actually work on DRM. I'm going to spend a little bit of time on general concepts and things like that, but it's mostly going to be a review of the API itself: lots of function calls, structures, and so on. So don't be surprised if you find this a little bit boring. I kind of apologize in advance, but I'm sure that this will be useful for some people at least. So let's begin with that introduction. And first, let's talk about the general display hardware pipeline, just to get clear ideas about that. We have a number of different things that are connected together in this pipeline. This really represents the data flow of a typical display chain. It starts with something we call the frame buffer, which represents the pixel data that you have in memory and want to show on the screen. It gets connected to a plane, which is really an association of the pixels with some attributes: how should the pixels be rotated? Should they be scaled?
What's the pixel format that the pixels are represented in? And if we have different frame buffers that we can show on the same screen, how should they be mixed together? Should they be stacked one on top of the other? Which one goes on top of which? Things like that. So that's the notion of a plane. Then after that, we have something called the CRTC, which comes from "cathode ray tube controller". It's obviously legacy wording, but we still use it to talk about the part of the display controller that generates timings. A display works with specific timings, which means that you have to send the pixels in a specific order and at a specific rate. So the CRTC is really the component that will grab the pixels from memory and stream them at the right rate, depending on what was configured. This notion of feeding a synchronous FIFO for transmitting the pixels, that's the job of the CRTC. Then after that, we have a component called the encoder. The encoder is there to receive those pixels that are correctly timed and to translate them into a particular physical encapsulation for a display interface. For example, if you're using HDMI, the physical encapsulation is called TMDS, for transition-minimized differential signaling. So the encoder will take the pixels and format them with the specific HDMI format and TMDS encoding on the actual physical lines. This is where you usually plug your cable. But you can also have another component between your encoder and your monitor, which is called a bridge, and the bridge is there to do some transcoding. Transcoding means that you go from a specific display interface to another one. For example, you might have a display controller that has an HDMI output, but you actually want to use DisplayPort. So in the middle, you can put a bridge that does the HDMI to DisplayPort translation, basically.
So you can have one bridge, but you can actually have many more bridges that are chained. It's not necessarily just one; it can be multiple bridges, one after the other. And at the end, you get a final connector where you expect people to plug a cable or have a panel connected. So that's the last part, which is the panel or the monitor. The difference I make between the two is that the panel is something that is always connected to your system. It's like your smartphone, your tablet, your laptop: you have a screen, and you're not expected to unplug that screen. It's just always there. On the contrary, a monitor works with a cable and a particular connector, so you can decide to switch the monitor and use a different one. And we're going to see that this has some implications. So that's the general pipeline. The components from plane to encoder are part of what we call the display controller. And the interface between, let's say, the encoder or the bridge and the panel or monitor is what we call the connector. We're going to see that these elements are represented in DRM/KMS. So in terms of actual hardware, there are basically two different groups. The first one is the typical graphics card, usually PCI Express, that probably everyone knows. That's what you get in your big PCs. It's usually used on x86, even though now you can find it on big powerful machines of other architectures as well. And in the embedded case, we usually find the display controller in the system-on-chip directly. Then we can have those bridges that I mentioned, which can either be units inside a system-on-chip or discrete components outside of it. So you can have dedicated bridge chips and you can also have bridge units inside the SoC. They will mostly be represented the same way; we don't make such a big difference between the two.
And at the end, we have our display connectors and panels, which are the physical connectors that we have on the boards or the graphics cards. Then I'm also giving some details about how memory is managed, because that really depends on the hardware that we're using. There are basically three different cases. The first one is dedicated memory, which is basically what you get on a graphics card: you have dedicated RAM that will be used just for graphics. It's not the same as the RAM that your system is using. So that's the first case. And then you have two different cases that use the same memory as the general system. This is what we call shared memory, and you can have two different types of it. The first one is when your display device has an IOMMU, so it's able to map pages. In that case, you can use anonymous pages from your system memory and use scatter-gather to create a virtually contiguous buffer with those pages. But if you don't have an IOMMU, then your display device cannot do any mapping, and you have to have a contiguous memory area. That's what we call reserved contiguous memory. There's also some concern about cache management, because when you're writing the pixels from your CPU into the shared memory, into the DRAM, you have to make sure that the pixels are actually pushed to memory and don't stay in the cache. And the same if you were to read something from that memory: you would also need to make sure that you're not reading old data from your cache. So there are some cache maintenance operations that sometimes need to be done. We're going to see later how DRM handles this. A final introduction slide about the Linux and user space support. This talk is about DRM/KMS, but there is another legacy interface called fbdev, which has lots of issues and generally quite bad performance. So the plan is to remove it eventually. But unfortunately, some people are still using it.
And there are still cases where it's actually hard to replace. But please refrain from using this API in the future. Instead, use DRM/KMS, which is the modern API with lots of flexibility. You can configure all the different elements that I mentioned from the display pipeline. It has lots of different APIs for lots of useful things. We'll talk about memory management using GEM and TTM. It can do zero-copy DMA-BUF, which means that you can import data from another device without copying it to another buffer: you can just reference an existing buffer and take the data in for your display. This is, for example, very useful for GPU rendering, where you don't want to copy the data from the GPU into the memory of your display engine. Instead, you use zero-copy and just reuse the same buffer. There are other features like fences for synchronization and the atomic UAPI, which is quite important; we'll have a few slides about it. Generally speaking, the atomic UAPI is there to allow you to group changes that you want to make to the display pipeline and apply them all at the same time, instead of doing it sequentially in multiple calls, which might result in intermediate states being shown on the screen. That's not something we want. So nowadays in user space, DRM/KMS is used by pretty much everything. fbdev is still supported as a fallback in many components and also in some, I would say, quick and dirty projects, which might actually be stuff in production. But yeah, it's, again, a bad idea, so please stop doing that. Most of the components that we want to use support DRM/KMS: libraries, display servers, tools, lots of things. So now the support is really, really good. No excuse, that's what I mean by that. Okay, so let's jump right into the DRM/KMS internals on the kernel side. This talk is not really about the UAPI; it's more about the internal kernel API.
So where do we start? Well, just like any driver in Linux, we have a bus infrastructure that is there to provide us with a device. This bus infrastructure really depends on the hardware that we're talking about. If we're talking about big graphics cards, we're probably using the PCI bus, so we write a PCI driver. For system-on-chip units, it's the platform bus that we use, so we get a platform device, and so on for a few different ones. Special mention for MIPI DSI, which is a display-specific type of bus. MIPI DSI is actually integrated as a bus infrastructure in Linux, so you get a MIPI DSI device, which you will use to create your DRM driver. By DRM driver, we can actually mean a few different things. There are essentially three types of devices that we are going to register from this bus infrastructure. The main one is the DRM device, which is there for display controllers, and it's the one that will actually expose the UAPI to software in user space. But we can also have bridge drivers, because bridges are independent from the DRM device and from the display controller in general. And we can also have DRM panels, which are also separate drivers and separate devices. The idea is that you're going to be able to connect those things together, and in order to do that, each component needs to be able to contact the others. So the first thing you need is some topology that tells you which component is connected to which. On embedded, this is usually done using device tree and what we call the device tree graph. This is a particular syntax in device tree that uses port and endpoint nodes. We have an example just here. On the left side, we have a display controller, and it has this ports node here. Then we have a number of ports that are described; I'm only showing one here. So that's a port, and that's the endpoint.
And we have this remote-endpoint property here, which designates another node, so another device. In this case, it's a bridge device. And in the node for this other device, we have the reciprocal remote-endpoint, which points back to this one. So it's a bi-directional link, and this is how we can create the topology between these different components. Here, we also have a panel, which is actually a sub-node of the DSI controller. That's because, like I mentioned, the DSI device actually creates a bus, so we can have panel devices on that bus, and this panel is an example of that. So you don't need to use the port and endpoint representation; you can just make it a child node, and then the DSI controller knows how to find it, again using device tree. Okay, so now let's focus on the display controller drivers. Let's take a look at how we create such a driver: what we need to do, what data structures we're going to deal with, and what key functions we're going to need to build up our driver. The main data structure that we need to declare to create a display controller driver is struct drm_driver. In there, we're going to populate the driver_features field, which is a bit field of a number of different flags. For a display controller driver, we want to set the DRIVER_MODESET flag to indicate that this device can actually perform a mode-set, so it can configure a CRTC and push pixels out, basically. DRIVER_ATOMIC means that it supports the atomic API, which not all drivers do, but nowadays it's pretty common, and it's a good thing because it's the new and advanced API. And there is the DRIVER_GEM flag, which is there to indicate that we're going to use the GEM memory manager that I'm going to talk about a little bit later. We have some file operations, which are basically callbacks for when user space calls into the device node that is created.
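To make that concrete, here is a sketch of what such a device tree graph can look like. All node names, unit addresses, and compatibles here are hypothetical; what matters is the ports/port/endpoint structure and the reciprocal remote-endpoint links:

```dts
/* Display controller node, with an output port linked to a
 * bridge through the device tree graph. */
display-controller@1c0c000 {
        compatible = "vendor,display-controller";

        ports {
                #address-cells = <1>;
                #size-cells = <0>;

                port@0 {
                        reg = <0>;

                        display_out: endpoint {
                                remote-endpoint = <&bridge_in>;
                        };
                };
        };
};

/* Bridge node, whose input endpoint points back at the display
 * controller endpoint: the link is bi-directional. */
hdmi-bridge@3d {
        compatible = "vendor,hdmi-bridge";

        port {
                bridge_in: endpoint {
                        remote-endpoint = <&display_out>;
                };
        };
};
```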
So we have defines for that, which make it easier. Then we have a bunch of information which is not very useful, and other callbacks which are also related to the GEM memory manager; again, I'm going to mention those a little bit later. This is a static declaration that we put at the top of the driver. And then using this static declaration, we are going to get a DRM device, which really identifies the device from the user space perspective. It's going to create nodes in /dev, one for the card especially, and this is what user space is able to use to make ioctls and configure the display pipeline. So how does it work at probe? We create this DRM device using the static drm_driver definition with drm_dev_alloc(). There is also a devm variant, so then you don't have to care about the cleanup part if it's done correctly. There is also something I wanted to mention that not a lot of drivers do, but it's actually a good thing to have: the drm_firmware_drivers_only() check, which will look at the kernel command line for the nomodeset parameter. If this parameter is set, it will just give up and not register the driver. nomodeset really just means that you don't want to have any driver that will change the display configuration. I'm not sure why you would want that, but this is something that exists, so you might as well honor this parameter. Then at the end, when we have allocated our device, we can register it, after we register the individual components involved in the pipeline. So I'm going to go over these different components, and at the end we want to call drm_dev_register() to say: okay, now we are ready to expose this device to user space, and from this point on we can have calls from user space. These components that we need to register are the ones that I mentioned in the global display pipeline: our plane, CRTC, encoder, and connector.
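As a rough sketch of this declaration and probe sequence, assuming a platform device using the GEM DMA helpers; all my_* names are hypothetical, and error handling is reduced to the minimum:

```c
/* Sketch: static struct drm_driver declaration and probe steps
 * for a hypothetical display controller platform driver. */

struct my_priv {
	struct drm_device drm;
};

static DEFINE_DRM_GEM_DMA_FOPS(my_fops);

static const struct drm_driver my_drm_driver = {
	.driver_features = DRIVER_MODESET | DRIVER_ATOMIC | DRIVER_GEM,
	.fops		 = &my_fops,
	DRM_GEM_DMA_DRIVER_OPS,
	.name		 = "my-display",
	.desc		 = "Hypothetical display controller",
	.major		 = 1,
	.minor		 = 0,
};

static int my_probe(struct platform_device *pdev)
{
	struct my_priv *priv;
	struct drm_device *drm;

	/* Honor the nomodeset kernel command-line parameter. */
	if (drm_firmware_drivers_only())
		return -ENODEV;

	/* devm variant: the DRM device is cleaned up automatically. */
	priv = devm_drm_dev_alloc(&pdev->dev, &my_drm_driver,
				  struct my_priv, drm);
	if (IS_ERR(priv))
		return PTR_ERR(priv);
	drm = &priv->drm;

	/* ... register planes, CRTCs, encoders and connectors here ... */

	/* Last step: expose the device to user space. */
	return drm_dev_register(drm, 0);
}
```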
The general order to register them is first the plane, then the CRTC, then the encoder, then the connector, but we actually have to create links back and forth to make sure that each component is connected to the other ones, because it's a pipeline. So the order is actually not super important, and you can do it in a slightly different order, but that's just generally how it's done. For the remove callback of the display controller driver, you can register with a helper, and there is also something important, which is drm_atomic_helper_shutdown(), which will de-configure all of the CRTCs and all of the pipelines that are running before you actually shut down the device. This is just to make sure that the hardware is not active when you decide to unload the driver. There are also similar helpers for suspend and resume. At suspend, it will create a copy of the current state of the pipeline configuration, keep it to the side, and disable everything. Then when you resume from sleep, the helper will restore the state and make sure that everything goes back to what it was when you suspended. So basically, in your driver you don't have to do this manually; you can just call these helpers and they will do it right for you. That's pretty nice. Okay, now I'm back to memory management in more detail, because that's something you have to take care of in the display controller driver. In DRM there are basically two memory managers: the first one is called TTM, the second one is called GEM. TTM is kind of a big and complex beast. It was designed to cover all possible use cases and to be extremely extensive, but in the end it turned out to be quite difficult to use. But it's the only one that supports dedicated video memory. So if you're writing a driver for a graphics card that has dedicated RAM, then you have to use TTM, the Translation Table Manager, because it will keep track of the state of memory on both sides.
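A minimal sketch of wiring up these shutdown and suspend/resume helpers in a hypothetical platform driver (the my_* names are made up):

```c
/* Sketch: shutdown and suspend/resume using the stock DRM helpers. */

static void my_shutdown(struct platform_device *pdev)
{
	struct drm_device *drm = platform_get_drvdata(pdev);

	/* De-configure all running CRTCs before the device goes away. */
	drm_atomic_helper_shutdown(drm);
}

static int my_suspend(struct device *dev)
{
	struct drm_device *drm = dev_get_drvdata(dev);

	/* Copies the current pipeline state, then disables everything. */
	return drm_mode_config_helper_suspend(drm);
}

static int my_resume(struct device *dev)
{
	struct drm_device *drm = dev_get_drvdata(dev);

	/* Restores the state that was saved at suspend time. */
	return drm_mode_config_helper_resume(drm);
}

static DEFINE_SIMPLE_DEV_PM_OPS(my_pm_ops, my_suspend, my_resume);
```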
On the contrary, the second memory manager, called GEM, the Graphics Execution Manager, which is used by almost every embedded DRM driver, is much simpler. It's more like a collection of helpers that drivers can use, but it only supports shared system memory. So you cannot use it for a graphics card driver that has attached memory. Like I said, to use GEM you just have to use these defines for the file operations and ops. You just put those in the definition of your struct drm_driver, and it will automatically bind GEM into your driver; you don't have to do more than that. There is a variant with the dumb_create operation, which is a callback that you can implement yourself to apply specific hardware constraints when creating the memory for a frame buffer. Typically there might be alignment constraints related to the stride and things like that. So this is how you can have your own. But other than that, GEM will manage the memory allocation by itself. Generally it will allocate DMA buffers with dma_alloc_wc(), which is write-combined; it's a form of coherent memory. It will check whether there is an IOMMU or not: if you have an IOMMU, it will allocate non-contiguous pages and create a virtual memory mapping, but if you don't have an IOMMU, it will actually use what we call contiguous memory. I'm going to talk about that just next. It also supports non-coherent allocations, meaning that you have to do the cache management yourself. There are some functions to help with that, but in the vast majority of cases people want coherent memory, because it's just easier to use and has more advantages. There is a helper function that you can use to get the DMA address for a specific frame buffer. This is what drivers will use to configure the physical or virtual address that the hardware will use to read the pixels for the planes. So that's one important helper.
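For example, that DMA-address helper, drm_fb_dma_get_gem_addr(), is typically called from a plane's atomic update callback to program the scan-out address. In this sketch, my_hw_write() and MY_PLANE_ADDR_REG are hypothetical register-access helpers:

```c
/* Sketch: retrieving the DMA address of a frame buffer and
 * pointing the hardware at the pixel data. */

static void my_plane_atomic_update(struct drm_plane *plane,
				   struct drm_atomic_state *state)
{
	struct drm_plane_state *new_state =
		drm_atomic_get_new_plane_state(state, plane);
	struct drm_framebuffer *fb = new_state->fb;
	dma_addr_t addr;

	/* DMA address of the first plane (index 0) of the frame buffer. */
	addr = drm_fb_dma_get_gem_addr(fb, new_state, 0);

	/* Hypothetical register write: set the scan-out base address. */
	my_hw_write(plane, MY_PLANE_ADDR_REG, addr);
}
```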
So let's focus a little bit on contiguous memory allocation. In Linux there is a framework called CMA, the Contiguous Memory Allocator. It is there to ensure that we can allocate large areas of memory that are contiguous: not pages all around the place, but one big buffer. And for multimedia, if you want high dimensions, even full HD will take up a few megabytes. Having an allocation of a few megabytes that always succeeds is quite a challenge, unless you have lots of memory, but in the embedded context you probably don't have that much. This is why the CMA API is actually going to reserve an area of memory that will be dedicated to this purpose. You can decide on the size: there is a default pool that is available to every device that needs CMA, and you can set its size either with a Kconfig option or with a kernel command-line parameter. You can say: I'm going to dedicate 200 megs to CMA, and then your display driver that needs CMA for allocation will have 200 megs that are kind of guaranteed to be available. But of course, if other devices use the same pool, they might also fill it up. This is why you can actually have a dedicated pool just for your device. This is something you can declare with device tree: you can say, I want to create a pool of this much memory that is only available to my display device. That's an example of how it goes in device tree: you have this reserved-memory node declaration with a particular node here with the shared-dma-pool compatible, and you link to that region using the memory-region property of your display engine node in device tree. All right, so that's pretty much it for memory. Now we're going to move on to the next step, which is the mode config. It's a general top-level DRM object that is there to ease frame buffer allocation.
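Going back to the dedicated pool for a moment, the reserved-memory declaration just described might look like this; addresses, sizes, and node names here are hypothetical:

```dts
/* Sketch: a dedicated CMA pool for the display device. */
reserved-memory {
        #address-cells = <1>;
        #size-cells = <1>;
        ranges;

        /* Hypothetical 128 MiB pool reserved for display buffers. */
        display_reserved: display@60000000 {
                compatible = "shared-dma-pool";
                reg = <0x60000000 0x8000000>;
                reusable;
        };
};

display-controller@1c0c000 {
        /* ... */
        memory-region = <&display_reserved>;
};
```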
We have to configure this mode config with a few parameters, essentially the maximum dimensions for the frame buffers and some fallback preferred depth. And we have some callback functions which are mostly boilerplate, meaning that as soon as you are using GEM, you can just fill the fields of these functions with existing functions provided by DRM. There is one to create a frame buffer, which will be called when user space requests a frame buffer to be created. There is one to validate the general atomic commits that user space creates when using the atomic API. And there is one to actually apply the atomic commit; this is the entry point that will trigger the whole atomic mechanism and go on to call similar functions on the different components. The mode config is one of the first things that you need to configure before registering your device, so you can call drmm_mode_config_init(). The extra "m" means DRM-managed, and this one will automatically call the cleanup counterpart, so you don't have to care about it explicitly. This will call the destroy functions of the different components when you are done using the DRM device. There is also a helper for reset, which will similarly call all the reset functions for all the components that you register subsequently. Okay, now a little bit more detail about atomic support. Like I said, atomic is an API that allows user space to group a number of changes to the display pipeline together and to apply them at exactly the same time. In order to do that, user space just provides a list of property changes, but the framework is actually going to derive a new state from that. The state is the collection of all the different properties of all the different components that we need to keep track of and use to configure the hardware.
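The mode config setup just described can be sketched like this, again with hypothetical my_* names, using the stock helpers for the boilerplate callbacks:

```c
/* Sketch: mode config setup at probe time. */

static const struct drm_mode_config_funcs my_mode_config_funcs = {
	/* Called when user space asks for a frame buffer. */
	.fb_create	= drm_gem_fb_create,
	/* Validates an atomic commit built by user space. */
	.atomic_check	= drm_atomic_helper_check,
	/* Applies the atomic commit to the pipeline. */
	.atomic_commit	= drm_atomic_helper_commit,
};

static int my_mode_config_init(struct drm_device *drm)
{
	int ret;

	/* drmm_: the cleanup counterpart is called automatically. */
	ret = drmm_mode_config_init(drm);
	if (ret)
		return ret;

	drm->mode_config.min_width  = 0;
	drm->mode_config.min_height = 0;
	drm->mode_config.max_width  = 4096;	/* hypothetical limits */
	drm->mode_config.max_height = 4096;
	drm->mode_config.funcs = &my_mode_config_funcs;

	return 0;
}
```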
Basically, whenever there is a commit, the framework is going to create a new state, but you also get access to the previous state, the old state. And you have a different atomic state structure for each component: you have the plane state, the CRTC state, and so on. Before atomic, there was the non-atomic API, which has dedicated callbacks in the drivers, but those shouldn't be used anymore because atomic is really the way to go. So I'm only going to mention the atomic callbacks here. Let's start with the planes. We have to explicitly create planes before we register our device; it's one of the components that we need to care about. Planes have types: primary, overlay, and cursor. The names are kind of obvious, at least for cursor. Primary is generally a plane that covers the whole active area of the screen, and an overlay is a plane that can be smaller and is placed above or under the primary one. But in practice, there is usually no big difference between primary and overlay. You just need to assign one as primary, but it's not super relevant; you can do everything with just overlays. You have to indicate which CRTCs the plane can be connected to. Sometimes a plane can be connected to multiple CRTCs, so this is actually a bit mask of CRTC indices. And for each plane, you indicate which pixel formats and which modifiers are supported. The modifiers are basically a way to say that the order in which the pixels are stored is not the usual linear or raster order, but something different. Then we have the plane functions as well, which are again callbacks that are just filled with boilerplate, so you can just use the stock functions. There are reset and destroy, which I mentioned are called by the mode config cleanup and reset. And then you have helpers to manage the atomic states.
So: duplicate and destroy the states, and finally update and disable the plane, which is what will start the mechanism for updating the hardware configuration of the plane. But it's not actually this callback that does it; this is really just a callback into the general logic, which is why it's boilerplate. The actual work happens in the helper functions. The helper functions get the DRM atomic state, so in those functions you can inspect the atomic state and check what changed and how you need to configure the hardware, essentially. The state has the currently attached CRTC for the plane and the frame buffer that we want to show on that plane, as well as a number of properties. In the helper functions, we have the check, update, and disable callbacks, which is where the driver implementer is actually going to configure the hardware using the new state, but they can also compare with the old state, which is also available. At probe, you register your planes with drm_universal_plane_init(), you register those helper functions with a specific call, and then you can configure specific plane properties. There are some generic plane properties which are already registered by the framework, and this is what user space is going to use to configure the plane: for example, the dimensions of the plane, the position on the screen, the rotation, things like that. But you can create more. You can indicate the plane-wide alpha property, the stacking order, the rotation, the blend mode, and the scaling filter, in case the plane is scaled. You can also add custom properties, and there are actually more than that available. So this is really a flexible way to configure things. Again, the values of these properties will be available in the atomic state, so you can grab those configuration elements and apply them to the hardware in those helper functions.
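The plane registration described above can be sketched as follows; the my_plane_atomic_* callbacks are hypothetical driver code, while the boilerplate funcs come straight from the atomic helpers:

```c
/* Sketch: boilerplate plane funcs, helper funcs, and registration. */

static const struct drm_plane_funcs my_plane_funcs = {
	.update_plane		= drm_atomic_helper_update_plane,
	.disable_plane		= drm_atomic_helper_disable_plane,
	.destroy		= drm_plane_cleanup,
	.reset			= drm_atomic_helper_plane_reset,
	.atomic_duplicate_state	= drm_atomic_helper_plane_duplicate_state,
	.atomic_destroy_state	= drm_atomic_helper_plane_destroy_state,
};

static const struct drm_plane_helper_funcs my_plane_helper_funcs = {
	.atomic_check	= my_plane_atomic_check,	/* hypothetical */
	.atomic_update	= my_plane_atomic_update,	/* hypothetical */
	.atomic_disable	= my_plane_atomic_disable,	/* hypothetical */
};

static const u32 my_plane_formats[] = {
	DRM_FORMAT_XRGB8888,
	DRM_FORMAT_RGB565,
};

static int my_plane_init(struct drm_device *drm, struct drm_plane *plane)
{
	int ret;

	/* BIT(0): this plane can only be bound to the first CRTC. */
	ret = drm_universal_plane_init(drm, plane, BIT(0), &my_plane_funcs,
				       my_plane_formats,
				       ARRAY_SIZE(my_plane_formats),
				       NULL /* linear only, no modifiers */,
				       DRM_PLANE_TYPE_PRIMARY, NULL);
	if (ret)
		return ret;

	drm_plane_helper_add(plane, &my_plane_helper_funcs);

	return 0;
}
```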
In the display controller driver, we're going to deal with a number of structures for metadata. The first one is the mode, which really has the timings that we want to configure on the CRTC, and it also has a number of elements characterizing the signal: for example, the polarity of the signal, which sampling edge should be used, things like that. So we have the mode just for the timings, and the display info, which kind of extends the timings with some flags for the signal characteristics and also for the bus format, which is actually used on the display interface. The mode and the display info are retrieved either statically, hard-coded by specific drivers, or read dynamically from the EDID, which is something that monitors have in an EEPROM on the monitor itself. The display controller will go and read this EDID and then derive these two pieces of information, which can then be used in the atomic states to configure things. Then we have the connector data; that's the connector part, after the plane. We have, again, a number of things: there's a connector type for the display interface indication, there's a status to know if the connector is connected or not, and there's the list of modes that were retrieved for this connector. So the list of modes is tied to the connector, and then a single mode will be applied to a CRTC. We also have the connector functions, which are mostly boilerplate, so I'm going to skip that. What's important is really the atomic state, which has the associated CRTC and encoder for this connector, and also properties, because connectors have properties too. In the helper functions, which is where we actually do things, we have a callback to get modes, which can use the EDID or call into a panel to get the modes from the panel.
There is also something to validate and fix up the modes that are retrieved, and a callback to detect the status of the connector, which will be called by the framework on various occasions. The probe sequence is really boilerplate, so I'm going to skip that and skip to hotplug. Generally, a connector can be hotplugged, because you want to be able to plug and unplug your HDMI cable, et cetera. So there's usually a line for detecting that, which will change state. It can be as easy as having a GPIO to read the state of that line and know if something is connected or not. Sometimes it can be a register that you read on the hardware; it kind of depends. What's important is that sometimes you have an interrupt associated with that, and sometimes not. When you do have an interrupt, it's easy: you have an interrupt handler, and you just have to report that there was a hotplug detect event. Then the framework will call back into the detect function to know the new connection state. But if you don't have an IRQ, you can also use active polling, with helpers that will periodically call the detect function, every 10 seconds, to check if the state has changed. So even if you don't have an IRQ, DRM makes it easy for you to still support hotplugging. That's it for connectors. Now moving on to the CRTC configuration, which is really where most of the work happens. The CRTC structure itself doesn't have much information, mostly some legacy stuff for compatibility, and we are really going to get the configuration of the CRTC from the atomic state. We also have the CRTC functions, which are essentially boilerplate. The only two important parts to implement for a display driver are enable_vblank and disable_vblank, which are about enabling and disabling the vblank interrupt, which signals the start of a frame.
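The interrupt-driven case can be sketched like this; when there is no IRQ line, drm_kms_helper_poll_init() sets up the active polling instead. The my_* names are hypothetical:

```c
/* Sketch: hotplug detect interrupt handler. */

static irqreturn_t my_hpd_irq_handler(int irq, void *data)
{
	struct drm_device *drm = data;

	/* Report the hotplug event; the framework then calls back
	 * into the connector's .detect() to get the new state. */
	drm_kms_helper_hotplug_event(drm);

	return IRQ_HANDLED;
}
```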
So whenever a new frame is being transmitted to the display, you get a vblank interrupt, and this will be used to perform what we call page flipping, which is about switching the frame buffer exactly at a time when it is not being sent, so that you avoid a problem called tearing: if you do that switch in the middle of transmission, then you will get half of the old frame and half of the new frame. We need that vblank interrupt to be able to synchronize to the beginning of a new frame to switch our buffers. So it is the responsibility of the CRTC to enable or disable this vblank interrupt. Like I said, we are going to use the atomic state to configure our CRTC, and it has all of the things that we need to know about, essentially the mode. There is the adjusted mode, which was, let's say, tweaked by the different components, and the mode, just "mode", which is the one that was requested by user space. User space will get the list of modes from the DRM connector, choose one of the modes, and push it to the CRTC. That's what we get as the mode, and then we have these mode_valid and mode_fixup callbacks, which are there to reject modes that cannot be supported by this display controller, and potentially, with fixup, to change the timings a little bit. That's what we get as the adjusted mode, and this is the one that should be used to actually configure the timings in the hardware. We use the fields of this structure to configure the hardware registers, and this is how the CRTC is going to apply the correct timings to the display flow. It also deals with the vblank event, which I think I'm mentioning just next. And very important are the helper functions again: I mentioned mode_valid and fixup, and there are also ones to check the atomic state.
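As a sketch, the vblank handling in the CRTC callbacks can look like this; the my_crtc_* callbacks are hypothetical, and the point is that vblank is turned on and off together with the CRTC itself:

```c
/* Sketch: CRTC funcs and helper funcs with vblank handling. */

static const struct drm_crtc_funcs my_crtc_funcs = {
	.reset			= drm_atomic_helper_crtc_reset,
	.destroy		= drm_crtc_cleanup,
	.set_config		= drm_atomic_helper_set_config,
	.page_flip		= drm_atomic_helper_page_flip,
	.atomic_duplicate_state	= drm_atomic_helper_crtc_duplicate_state,
	.atomic_destroy_state	= drm_atomic_helper_crtc_destroy_state,
	.enable_vblank		= my_crtc_enable_vblank,  /* hypothetical */
	.disable_vblank		= my_crtc_disable_vblank, /* hypothetical */
};

static void my_crtc_atomic_enable(struct drm_crtc *crtc,
				  struct drm_atomic_state *state)
{
	/* Configure timings from crtc->state->adjusted_mode here... */

	drm_crtc_vblank_on(crtc);
}

static void my_crtc_atomic_disable(struct drm_crtc *crtc,
				   struct drm_atomic_state *state)
{
	drm_crtc_vblank_off(crtc);

	/* ...then stop the hardware pixel stream. */
}

static const struct drm_crtc_helper_funcs my_crtc_helper_funcs = {
	.mode_valid	= my_crtc_mode_valid,	/* hypothetical */
	.atomic_check	= my_crtc_atomic_check,	/* hypothetical */
	.atomic_enable	= my_crtc_atomic_enable,
	.atomic_disable	= my_crtc_atomic_disable,
};
```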
Okay, so for example, it will be there to say whether what user space wants is correct or not, and it will enable vblank when you enable the CRTC, which is where the configuration happens, and disable it with vblank off. I'm going to skip the vblank reporting. This is just the process to register your CRTC; you can see it involves the planes as well, so you have to indicate which plane is primary and which is overlay. All right, next up is the encoder. The encoder is very simple: it doesn't actually have a state, so you just have to set the cleanup callback in the functions to properly destroy the encoder at the end. The helper functions are quite simple too: atomic enable and disable, to just enable and disable the encoder. Usually you don't need to configure anything, and if you do, you can just use the CRTC state, which is also available from these callbacks. So the encoder is usually quite simple. Of course, you need to attach your encoder to a connector and also to a CRTC. All right, that's it for the display controller setup, basically: you do the mode config, connector, encoder and CRTC, and from that you have support for your display controller. I also mentioned bridges and panels as extra components that can be supported. With the bridge, it works a little bit the same: you also have a notion of atomic state, and you have a number of fields that should be configured, but it's a separate driver; a bridge is a separate driver from the display controller driver. We have some functions; some are boilerplate and some are actually where we configure things. Attach and detach are how we connect a bridge to a specific encoder. We also have the mode validation and fixup. And there is a whole mechanism to negotiate the input and output bus formats for the bridge, which is mostly useful to chain multiple bridges together.
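The CRTC and encoder registration described above can be sketched like this; `mydrm_crtc_funcs` and `mydrm_crtc_helper_funcs` are assumed to be defined by the driver, while `drm_crtc_init_with_planes()`, `drm_simple_encoder_init()` and `drm_crtc_mask()` are real DRM API:

```c
/* Sketch only: priv/mydrm_* symbols are made up for illustration. */
#include <drm/drm_crtc.h>
#include <drm/drm_simple_kms_helper.h>

static int mydrm_pipe_init(struct drm_device *drm, struct mydrm *priv,
			   struct drm_plane *primary_plane)
{
	int ret;

	/* Register the CRTC with its primary plane (no cursor here);
	 * overlay planes are registered separately with the
	 * DRM_PLANE_TYPE_OVERLAY type. */
	ret = drm_crtc_init_with_planes(drm, &priv->crtc, primary_plane,
					NULL, &mydrm_crtc_funcs, NULL);
	if (ret)
		return ret;

	drm_crtc_helper_add(&priv->crtc, &mydrm_crtc_helper_funcs);

	/* The encoder has no state: the simple encoder helper covers
	 * the boilerplate, including cleanup. */
	ret = drm_simple_encoder_init(drm, &priv->encoder,
				      DRM_MODE_ENCODER_TMDS);
	if (ret)
		return ret;

	/* Attach the encoder to our CRTC. */
	priv->encoder.possible_crtcs = drm_crtc_mask(&priv->crtc);

	return 0;
}
```

The connector (or bridge chain) is then attached to this encoder in a later step.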
So actually, in most drivers, these are not really implemented. All right, so like I said, a bridge has a bridge state, which holds some information about the bus format, but again, that's mostly useful for chaining bridges. Okay, so this is how you configure and register your bridge from the bridge driver, and then from the display controller you call this drm_of_find_panel_or_bridge function, which will give you a handle to the bridge that is connected, using the device tree graph and endpoint topology like I mentioned earlier. For the bridge, the bridge driver will create the connector itself, so you don't need to do that in the display controller driver. And like I said, you can chain multiple bridges; in that case, it's the final bridge that registers the DRM connector and not the first one, because the connector is really at the edge of the bridge chain. For panels, very briefly, the interface is also quite simple. There's no atomic state, so you just have those callbacks to set up the panel. It's also attached to a backlight device. So backlight and panel are attached, and this is done, let's say, at the API level: in your panel driver, you don't need to explicitly enable and disable the backlight, you just need to attach a backlight device to your panel, and the KMS framework will automatically enable and disable it at the right time. Okay, so the integration for that also uses the drm_of_find_panel_or_bridge call on the display controller side, and this is how it gets a handle to the panel; then it can use various functions to prepare, enable, disable and unprepare the panel, and to get the modes from it. There is now a new abstraction in DRM called drm_panel_bridge, and the general idea is that instead of having two different APIs for the panel and for the bridge, we are going to represent everything as a bridge.
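On the display controller side, the lookup and attach described above can be sketched like this; the port/endpoint numbers are assumptions that depend on the device tree bindings, and `priv` is a hypothetical driver structure, while the function calls themselves are real DRM API:

```c
/* Sketch only: priv and the port/endpoint indices are illustrative. */
#include <drm/drm_bridge.h>
#include <drm/drm_of.h>
#include <drm/drm_panel.h>

static int mydrm_attach_output(struct mydrm *priv, struct device *dev)
{
	struct drm_panel *panel;
	struct drm_bridge *bridge;
	int ret;

	/* Follow the OF graph from our output port (port 1, endpoint 0
	 * assumed here) to whatever is connected: a panel or a bridge.
	 * Exactly one of the two pointers is set on success. */
	ret = drm_of_find_panel_or_bridge(dev->of_node, 1, 0,
					  &panel, &bridge);
	if (ret)
		return ret;

	if (bridge) {
		/* The bridge driver owns the connector in this case. */
		return drm_bridge_attach(&priv->encoder, bridge, NULL, 0);
	}

	/* For a panel, the display controller keeps a handle and lets
	 * the framework drive prepare/enable at modeset time. */
	priv->panel = panel;
	return 0;
}
```

With the newer drm_panel_bridge abstraction mentioned next, the panel branch disappears entirely, since the panel is wrapped as a bridge.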
So this is kind of an abstraction that wraps the panel API under the bridge API, and then in display controller drivers you can just use devm_drm_of_get_bridge instead, and you get a bridge regardless of whether it's an actual bridge or a panel. This is a lot easier. It will also manage the connector, so that means a lot less work for your display controller drivers. This is really the new API that everyone should use to deal with panels and bridges the same way. This slide is just a list of generic drivers that you can use for panels and bridges. Mostly, panel-simple is the one used by almost anyone who needs to support a panel that doesn't require a particular register configuration: it's just a static list of modes, because the panel will provide the modes to the connector. One thing I wanted to mention about specific drivers: if you're writing a panel driver, please be careful about how you name it, because there is often a confusion between the name of the panel and the name of the LCD controller that is driving the panel. An LCD controller can be used with multiple panels in multiple different configurations. So please do not create a device tree compatible for a specific LCD controller, because it doesn't identify a specific panel; it just identifies a chip that can be used in lots of different ways. It's perfectly fine to have a common driver for the same LCD controller, but it needs to have specific compatibles for each panel that uses this LCD controller. This is quite important, because this confusion exists in the tree as of today, and after the compatible is pushed, it's too late. So please be careful about that and don't make this confusion. This is an example of how you can support a panel in panel-simple: there is the static mode, and the panel description, which has a few more things like the media bus format and the compatible that links everything together.
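A panel-simple entry of that shape can be sketched as follows; the vendor, panel name and timing values here are invented for illustration, not taken from a real panel, while the structures and fields are the ones panel-simple actually uses:

```c
/* Sketch only: "acme,ab1024" and all timing values are hypothetical. */
static const struct drm_display_mode acme_ab1024_mode = {
	.clock = 51200,				/* kHz */
	.hdisplay = 1024,
	.hsync_start = 1024 + 160,
	.hsync_end = 1024 + 160 + 70,
	.htotal = 1024 + 160 + 70 + 90,
	.vdisplay = 600,
	.vsync_start = 600 + 12,
	.vsync_end = 600 + 12 + 4,
	.vtotal = 600 + 12 + 4 + 19,
};

static const struct panel_desc acme_ab1024 = {
	.modes = &acme_ab1024_mode,
	.num_modes = 1,
	.size = {
		.width = 154,			/* mm */
		.height = 90,			/* mm */
	},
	.bus_format = MEDIA_BUS_FMT_RGB888_1X24,
};

/* And the compatible tying it together, in the driver's OF match table:
 * { .compatible = "acme,ab1024", .data = &acme_ab1024 },
 */
```

Note that the compatible names the panel itself, not the LCD controller chip driving it, which is exactly the naming pitfall discussed above.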
So with just these three entries, you can support a panel with static modes, which is pretty nice. And that's the final slide, about the different repositories for DRM. Depending on which area you're working on, you have to submit patches to one of these trees; mostly, for embedded, we are using the drm-misc tree, so this is where you should send patches. That's it for me. I know this was quite a bit of a rush, with lots of information at the same time. Hopefully, if you are interested in these topics, you will go back and read the slides slowly, and hopefully that will be useful. So thanks, everybody, and if you have questions, I'll be happy to answer them.