Thanks for attending my talk. Today I am going to talk about the DRM subsystem in Linux, and especially about MIPI DSI; the main goal of this talk is to explain it in detail, starting with an introduction to the protocol.

I am an embedded Linux engineer, and I work for a company called Amarula Solutions. Our work focuses mainly on open source, starting from the bootloader up to the Linux kernel. I have been working for many years on embedded Linux, and I have contributions in the Linux kernel and the bootloader for SoCs ranging from Xilinx to Rockchip and Allwinner. I maintain some subsystems in U-Boot and some DSI drivers in Linux, and from the build-system point of view I usually use Buildroot, so I have done some work there as well.

The idea for this talk comes from the experience I have gathered over the last two years. Since we are embedded Linux consultants, we have faced many issues with DSI; not so much the protocol itself as the lack of resources in the industrial market. So I will start by briefly explaining the MIPI DSI protocol and how it is integrated in the Linux kernel: if someone wants to add a new MIPI DSI host controller or panel driver, how do you integrate it with Linux? Finally, I will share the knowledge and experience I have gained with MIPI DSI and Linux, with some real-world examples of how I brought up MIPI DSI panels in the Linux kernel. I am not a big expert on DRM, but I will try to make it as simple as possible.

This is the typical agenda of my talk. As I said, I will start with the MIPI DSI protocol, the second topic is how it is integrated into the Linux kernel, and the last topic covers the challenges I faced while working with MIPI DSI.

How many of you are aware of the display interfaces in PCs or embedded devices? I will start with display interfaces in a nutshell. This is how a display interface looked in the PC era, starting from the IBM PCs, where the graphics controller sits mainly on the north bridge. We have an interface medium that connects to a monitor or some kind of display panel, so the display hangs off the north bridge separately from the rest of the PC. The interface medium between the graphics controller and the display can be something like HDMI or VGA; that is the traditional PC-compatible world.

When we go to embedded, we have an internal bus, something SoC-specific like AXI, and a host display controller that interfaces with the display IC. The display IC can be anything: a monitor, or a panel that has some vendor panel controller on it. The difference in embedded is that the connection between the display IC and the host display controller is an interface whose specifications are designed to meet embedded requirements. Those interfaces are divided into serial and parallel. Whether you need to go serial or parallel mainly depends on the application and which market you are targeting with your embedded device, whether that is automotive, medical, or something else.
So based on your requirements, you need to choose which display interface is better for your purpose. These are the typical serial and parallel display interfaces we use in the embedded domain. We have parallel RGB, which is called DPI in Linux terms. Then we have an interface called LVDS; LVDS carries parallel RGB data, but it uses differential signalling between the host and the peripheral, so the same LVDS medium can act as parallel or serial depending on how the display interface between the host and the peripheral is designed.

The third one is MIPI DSI. MIPI is the mobile industry processor interface alliance, which has existed for some years now for camera and display interfaces, and DSI is part of that MIPI alliance. It is a purely serial interface: it needs only a small number of wires, which suits many industrial applications, and it is a high-performance, low-power protocol. Finally, HDMI and eDP are the conventional display interfaces, mostly used in high-end embedded devices where you want to build servers or laptop-class products.

This talk concentrates on MIPI DSI. As I said, we have the MIPI DSI controller on the host SoC side and a display controller on the panel side. The communication between the host controller and the display IC happens over a few wires: data and clock. Since it is a serial protocol, the handshake starts on the clock lane, and once the handshake is done, the communication between host and display goes over the data lanes. The physical layer here is defined by MIPI D-PHY. A lane is a differential pair of two pins, D+ and D-. The term "lane" comes from the D-PHY specification: if I want to communicate between host and peripheral with four data lanes, I actually have eight wires, a D+ and a D- for each lane, and that counts as four lanes.

How many lanes you go with depends on the particular embedded application. For example, if I want to build an automotive device with a seven-inch display and I need a fast interface for use in a car, I can go with four lanes. If it is a small display for something like a refrigerator, I can go with a two-lane device. Again, these lanes are differential pairs.

This is a typical MIPI DSI interface connecting a DSI panel. As DSI became more popular in the embedded market, DSI bridges came into the picture: SoC vendors or board vendors provide a DSI bridge so that end users can convert the bus to different interfaces. In this diagram, with the same bridge we can connect a DSI panel, and if you have a DSI-to-RGB bridge, you can connect an RGB panel, something like that. That makes maintenance easy and lets you reuse the same bus with two different interfaces. So this is the main DSI protocol.
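Before moving on to the protocol layering, here is a minimal sketch of how the lane count and operating mode we just discussed appear in a Linux driver, assuming the standard drm_mipi_dsi.h API; the probe function name and the particular values (four lanes, RGB888, burst) are illustrative only.

#include <drm/drm_mipi_dsi.h>

static int example_dsi_probe(struct mipi_dsi_device *dsi)
{
	/* Four differential data lanes: eight wires, one D+/D- pair per lane */
	dsi->lanes = 4;
	/* 24 bits per pixel on the wire */
	dsi->format = MIPI_DSI_FMT_RGB888;
	/* Video mode, burst variant */
	dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST;

	/* Announce this peripheral to the DSI host controller */
	return mipi_dsi_attach(dsi);
}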
As I said, DSI is a layered protocol: there is the PHY, the lane management layer, the low-level protocol, and the application layer. The communication between host and peripheral goes through this stack. On top of the physical layer and the lane management, the low-level protocol carries DSI commands, which is how host and peripheral exchange data. These commands, and the interaction between host and peripheral, are defined by the MIPI alliance as a set of DSI commands. If the host wants to send some buffer, say a frame buffer of some size, it issues commands; each command carries a payload, and host and peripheral communicate by means of those commands. It is much like a normal serial communication protocol.

There are two operating modes in DSI. One is command mode and the second is video mode. Command mode is bidirectional: when you turn on the system, the handshaking between the display panel and the host happens there, and that operating mode is called command mode. Once you are actually displaying something, say you open a web browser on the device, the pixel data between host and peripheral is streamed from a frame buffer in software; that is video mode.

Within video mode, based on the embedded application, we have three sub-modes: non-burst with sync pulses, non-burst with sync events, and burst. These video modes are mostly about how fast the communication between host and peripheral is. Burst mode is a kind of compression: the pixels from the frame buffer are time-compressed, a bit like JPEG in software, so the data between host and peripheral is compressed in time by the DSI controller itself, and the frame buffer gets from host to peripheral noticeably faster than in the other video modes.

These are the two packet types used in the DSI protocol: short packets and long packets. A short packet has a fixed two-byte payload, so you cannot send more than two bytes of data with it; a long packet can carry a variable amount of data. Each packet has a data ID, so when the peripheral or host receives a packet it can identify it, categorize it as short or long based on the data length, and interpret it from the data ID. All the communication happens on top of this packet management, as I described in the previous slide. That is what the main protocol looks like.
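As a rough illustration of short versus long packets from the driver side, here is a sketch using the kernel's DCS helpers; the vendor payload bytes are made up, and the point is only that the DSI core picks the packet type from the payload size.

#include <drm/drm_mipi_dsi.h>

static int example_send_commands(struct mipi_dsi_device *dsi)
{
	/* Hypothetical five-byte vendor payload: too big for a short
	 * packet, so the core transmits it as a long packet. */
	static const u8 vendor_init[] = { 0xb1, 0x00, 0x14, 0x0c, 0x0e };
	ssize_t ret;
	int err;

	/* A zero-parameter DCS command fits in a short packet */
	err = mipi_dsi_dcs_set_display_on(dsi);
	if (err < 0)
		return err;

	ret = mipi_dsi_dcs_write_buffer(dsi, vendor_init, sizeof(vendor_init));
	return ret < 0 ? ret : 0;
}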
Now I will briefly describe how this protocol is integrated into the Linux DRM subsystem. In a typical display controller design, we have a display engine, which is the main master of the display pipeline in an embedded SoC. The display engine can drive any of the controller interfaces: HDMI, MIPI DSI, LVDS, parallel, and so on. So the display engine is the key, the master of the display pipeline, and every display engine has its own display engine driver.

Those display engine drivers communicate with the DRM core, the Direct Rendering Manager core in Linux. They register with the DRM core through interface APIs like drm_dev_register(). If you take an HDMI controller, HDMI here is a slave of the display engine: the display pipeline starts with the display engine and then goes to HDMI, so one is the master and the other a slave. The vendor HDMI controller driver also registers with the DRM core. Things get more complex where we have a GPU: with a Mali GPU, for example, the GPU drivers also register with the DRM core. All these interfaces register with the DRM core so that upper layers like X11 or Wayland can talk to the display engines through it; it is an abstraction over all the APIs.

Then comes MIPI DSI. There is a DSI core in DRM where the MIPI DSI host controller registers, and through that it interacts with the DRM core. The attached panel can be anything, RGB, LVDS, or MIPI DSI, and panels have their own separate area, the DRM panel core. That is how the interaction between the DRM core, the panel, and the controller looks. And there is one more complex area where bridges come in: there is a DRM bridge core. If I have a bridge because I need to connect a parallel RGB panel, the bridge is a MIPI DSI bridge; it registers with the DRM bridge core, the panel goes through the DRM panel core, and you end up with a simple panel driver. That is what the topology looks like. All these interfaces then communicate with user space, something like this. The DRM DSI core and everything else live in the same picture; I just pulled them apart for understanding.

This is the big picture of the Linux DRM subsystem, classified by controller: starting with the display engine, then the display panel and the display bridge. I will briefly explain each part in the following slides, with examples.

And as I said about the Mali GPUs: some GPUs are not supported in the mainline Linux kernel yet, and Mali is one of them. If you want it on Linux, you have to build it separately with binary blobs; the Mali driver then creates a device node, something like /dev/mali, that interfaces to user space.

It is worth remembering the early days of Linux here. Back in the IBM PC era we had no DRM at all. The starting point was the common frame buffer: a frame buffer is essentially an allocated buffer that pumps pixels from user space to the display controller in the kernel. In those early days we had frame buffer drivers, and each frame buffer driver talked to the frame buffer core, which talked directly to, say, the Intel display controller. There was no dedicated path for extending that model with new functionality, and that is where DRM comes in.
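Coming back to the registration path mentioned a moment ago, here is a minimal sketch of it, assuming the drm_dev_alloc()/drm_dev_register() API; the exact drm_driver fields vary between kernel versions, and all names here are hypothetical.

#include <linux/device.h>
#include <drm/drm_drv.h>

static struct drm_driver example_de_driver = {
	.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_ATOMIC,
	.name = "example-de",
	.desc = "Example display engine",
	.major = 1,
	.minor = 0,
};

static int example_de_bind(struct device *dev)
{
	struct drm_device *drm;
	int ret;

	drm = drm_dev_alloc(&example_de_driver, dev);
	if (IS_ERR(drm))
		return PTR_ERR(drm);

	/* ... mode config, planes, CRTCs and encoders are set up here ... */

	ret = drm_dev_register(drm, 0);
	if (ret)
		drm_dev_put(drm);
	return ret;
}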
As I said, the DRM core has several parts. The first part is the display engine: at the top, the display engine exposes hardware blocks called planes. User space creates a frame buffer, and the planes produce the composited pixels, which are passed on to the final display controller, with the pixel data converted at each stage. Sometimes the pixel data needs an encoder. For example, if I want to display something over HDMI, I cannot push a penguin or some logo straight onto HDMI; we need an encoder that converts the raw image into an HDMI-formatted signal, so in that spot we need an HDMI encoder. At the end we need some kind of converter, and depending on the interface it can be an HDMI, DSI, DPI, or LVDS converter. All of this is part of the DRM core. So every driver has to provide these layers: if an SoC display controller wants support in Linux, you need to implement these layers in the driver, and only then can it be categorized as a DRM display engine driver. Based on the application and the interfaces, you create the matching converters and encoders.

As I said, we have DRM_MODE_ENCODER_TMDS for HDMI, an encoder type for DSI, ENCODER_NONE for parallel RGB, and an encoder type for LVDS. Similar to the encoders, we have connectors: a connector for HDMI, a connector for DSI, and DPI, which is the parallel RGB connector, plus an LVDS connector, and so on. All of this code needs to be written when you support a new display controller in Linux. This kind of framework is called KMS, kernel mode setting. It replaced the legacy frame buffer model, and as far as I know most of the current development is going on in this area. So this is the main DRM core, where all the encoders interact with their corresponding connectors.

The next topic in the core area is GEM, which is about memory allocation for frame buffers. Since the display needs large contiguous buffers, we cannot allocate them the way a simple character driver would; we need APIs that manage cached allocations or memory pools, and those buffers have to be shared with the display engine. That is where GEM comes in. GEM is essentially a DMA buffer manager that talks directly to the DRM core: just as KMS interacts with the DRM core, GEM does too. GEM maps straight to RAM, so when a frame buffer comes from user space, GEM creates the allocation pools and hands the buffers to KMS and the DRM core. The memory allocated this way is contiguous, and it is faster than the old frame buffer approach. That is what GEM looks like.

And as promised, here are sample drivers for the core, starting with the planes; this is the KMS part. This is a typical Allwinner display engine driver, where we create the planes. We have to call drm_dev_register(), which registers the device with the DRM core according to the DRM conventions; once that is fine, it registers the frame buffer emulation, so the upper layers can now talk through the DRM core. There are KMS helper functions, like drm_kms_helper_poll_init(), that you call so the core can manage the planes, and those planes are then processed in the next stages. This is how a typical display engine driver looks on Allwinner.
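As a sketch of the plane part, this is roughly how a display engine driver exposes a primary plane to the DRM core using the atomic helpers; the format list and all names are illustrative.

#include <linux/kernel.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_fourcc.h>
#include <drm/drm_plane.h>

static const u32 example_formats[] = {
	DRM_FORMAT_XRGB8888,
	DRM_FORMAT_RGB565,
};

static const struct drm_plane_funcs example_plane_funcs = {
	.update_plane = drm_atomic_helper_update_plane,
	.disable_plane = drm_atomic_helper_disable_plane,
	.destroy = drm_plane_cleanup,
	.reset = drm_atomic_helper_plane_reset,
	.atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state,
	.atomic_destroy_state = drm_atomic_helper_plane_destroy_state,
};

static int example_create_primary_plane(struct drm_device *drm,
					struct drm_plane *plane)
{
	/* Register the hardware layer as a primary plane on CRTC 0 */
	return drm_universal_plane_init(drm, plane, 1,
					&example_plane_funcs,
					example_formats,
					ARRAY_SIZE(example_formats),
					NULL, DRM_PLANE_TYPE_PRIMARY, NULL);
}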
So that is one part of KMS; the next part is the CRTC. CRTC stands for cathode ray tube controller; the name is historical, but in Linux we still have to implement that layer. Once the planes come out of the display engine's plane stage, the CRTC produces the scanout images, and those images are processed further in the next stages, so we need a driver for the CRTC as well. This is a typical CRTC driver on Allwinner, where there is a sunxi engine layer inside; these are controller-specific CRTCs. We take the input coming from the planes and hand it on to the next layers.

And this is a typical MIPI DSI controller driver, where I need to manage the encoder and the connector. The bind call creates and initializes an encoder; as I said, every encoder has a type macro, and here it is DRM_MODE_ENCODER_DSI. We also create a connector, with DRM_MODE_CONNECTOR_DSI. Once we have created the connector and the encoder, we hand them to the panel core, so that the panel driver can enumerate those connectors and encoders during Linux boot-up. If you have a bridge, you need to attach the bridge as well, assuming your MIPI DSI controller supports bridges.

Then there is the panel area. This is a simple panel; I have worked with a Banana Pi panel, a 40-pin FPC part. I need to write a panel driver that talks to the panel core, and once it is registered with the panel core, it goes directly to the DRM core, something like that. In this particular exercise I used the Mali 400 driver from ARM: I just reused it with the Linux kernel and the Linux DRM and tried a simple Qt application on top. It is just a normal OpenGL application, a Hello World I can run from the command line in Linux, and this is what the application looks like on the panel.

The panel driver registers itself with drm_panel_add(). Once it is registered with the DRM panel core, it can interact with DSI: the panel communicates with the host by means of short and long packets. The initialization code behind those packets is specific to the vendor panel; how the panel has to be programmed is specific to the panel driver. We need to rewrite all that panel initialization code in the format the DRM panel core understands, so that the host can send those commands and finally pump the frame buffer to the panel.

This is a simple panel driver in Linux from my panel work. The panel has a DSI probe where you attach the panel, and there you also specify the lane count, whether it is a four-lane panel, and which operating mode you want: burst mode, plain video mode, and so on. We specify all of that in the panel probe. The panel also has a set of callbacks, prepare, unprepare, enable, and disable, and we register all these panel operations with the Linux DRM panel core.
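To show what that probe and callback registration look like in practice, here is a trimmed-down sketch of a DSI panel driver, assuming a recent kernel where drm_panel_init() takes the connector type; all names and values are illustrative, this is not a real panel.

#include <linux/device.h>
#include <drm/drm_connector.h>
#include <drm/drm_mipi_dsi.h>
#include <drm/drm_panel.h>

struct example_panel {
	struct drm_panel panel;
	struct mipi_dsi_device *dsi;
};

static int example_panel_prepare(struct drm_panel *panel)
{
	/* Power up the panel and send the vendor init sequence here */
	return 0;
}

static int example_panel_enable(struct drm_panel *panel)
{
	/* Typically DCS exit-sleep plus display-on */
	return 0;
}

static const struct drm_panel_funcs example_panel_funcs = {
	.prepare = example_panel_prepare,
	.enable = example_panel_enable,
	/* .disable, .unprepare and .get_modes belong here too */
};

static int example_panel_probe(struct mipi_dsi_device *dsi)
{
	struct example_panel *ctx;

	ctx = devm_kzalloc(&dsi->dev, sizeof(*ctx), GFP_KERNEL);
	if (!ctx)
		return -ENOMEM;

	ctx->dsi = dsi;
	dsi->lanes = 4;				/* four-lane panel */
	dsi->format = MIPI_DSI_FMT_RGB888;
	dsi->mode_flags = MIPI_DSI_MODE_VIDEO;	/* plain video mode */

	drm_panel_init(&ctx->panel, &dsi->dev, &example_panel_funcs,
		       DRM_MODE_CONNECTOR_DSI);
	drm_panel_add(&ctx->panel);

	mipi_dsi_set_drvdata(dsi, ctx);
	return mipi_dsi_attach(dsi);
}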
Once that is done, we also provide the DSI command set. This is a typical panel command set sent to the host controller, where 0x20 is a DSI peripheral command ID: the host can identify that 0x20 means, in this sequence, turn on the display, and 0x21 relates to the vertical display, the vertical size. Those commands are specific to the panel's DSI datasheet. The v-sync, h-sync, and the other display parameters are specific to the characteristics of the display, what height and width of display we are using, things like that. And that is what the panel driver looks like.

We also have bridge drivers, so I will briefly explain what a bridge driver looks like. This is the same panel, but now used through a DSI-to-RGB bridge. The ICN6211 is a bridge controller from Chipone; there is a company called Chipone, and they produce the ICN6211. The bridge converts the DSI packets to RGB, so the end user can reuse RGB on their boards. We need to write a bridge driver that interacts with DRM, and we can also write a simple RGB panel driver, which can just be panel-simple. The two interact, and together they convert DSI to RGB, something like that.

This is what a typical DRM bridge driver looks like; the blue slice is exactly what the stack looks like. In this exercise I used glmark2-es2 with Mesa and libdrm. Once you have a MIPI bridge, it is wired into the DRM core, and the packet handling works the same simple way I described for the DRM panel. This is a simple application I ran with glmark2 on this bridge, the DSI-to-RGB bridge.

As I said, in the bridge driver we do the same things as in the panel driver for the format, the lanes, and the mode flags. And if you look at the connector type, I am using DPI: this particular bridge converts MIPI DSI to RGB, so I need to use the RGB connector type, DRM_MODE_CONNECTOR_DPI. Once the core identifies the conversion from DSI to DPI, the DRM bridge reports it to the DRM core, so during boot-up the core identifies the parallel bridge; based on the display pipeline management, it detects a parallel display interface rather than the raw DSI one.

This is the pipeline I have been talking about for the DSI panel so far: we have a DSI controller with a DSI output that connects to the panel, and I interact with the panel through this pipeline. The DSI controller in turn has a pipeline back to the display engine, something like that. And this is the typical pipeline with a bridge: I have a DSI controller, then a bridge, the Chipone one I mentioned, and one of the bridge outputs drives the panel. The Banana Pi S070WV20 is a parallel-compatible panel, so the pipeline ends at the panel, which interfaces to the bridge, which interfaces to the DSI controller, which in turn interfaces to the display engine. That is how the display pipeline looks.
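Going back to the bridge driver for a moment: this is a skeleton of what such a DSI-to-RGB bridge driver can look like, loosely in the spirit of the Chipone ICN6211, but with all names hypothetical and the hardware programming left as comments; it assumes a kernel where the bridge .attach callback takes the flags argument.

#include <linux/device.h>
#include <drm/drm_bridge.h>
#include <drm/drm_mipi_dsi.h>

struct example_bridge {
	struct drm_bridge bridge;
	struct mipi_dsi_device *dsi;
};

static int example_bridge_attach(struct drm_bridge *bridge,
				 enum drm_bridge_attach_flags flags)
{
	/* Attach the downstream RGB panel or connector here */
	return 0;
}

static void example_bridge_enable(struct drm_bridge *bridge)
{
	/* Program timings, lane count and the DSI-to-DPI conversion here */
}

static const struct drm_bridge_funcs example_bridge_funcs = {
	.attach = example_bridge_attach,
	.enable = example_bridge_enable,
};

static int example_bridge_probe(struct mipi_dsi_device *dsi)
{
	struct example_bridge *ctx;

	ctx = devm_kzalloc(&dsi->dev, sizeof(*ctx), GFP_KERNEL);
	if (!ctx)
		return -ENOMEM;

	ctx->dsi = dsi;
	ctx->bridge.funcs = &example_bridge_funcs;
	ctx->bridge.of_node = dsi->dev.of_node;

	/* Make this bridge findable by the display controller driver */
	drm_bridge_add(&ctx->bridge);

	return mipi_dsi_attach(dsi);
}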
For the last topic, I will briefly share my experiences with MIPI DSI; if anyone wants to work in this area in the future, maybe these slides can help. These are the typical hacks, based on my experience. We work on controllers in the Allwinner world where we basically have no datasheets at all, so we reverse-engineer the controller register spec.

So the first point is the controller side. Do you have a proper datasheet for the controller to write a driver against, or not? If not, you can go to the vendor BSP and cross-check all the details there. The most important detail on this slide: in DRM, the display timing parameters are expressed as back porch, front porch, sync, and sync start, and those are common across all SoCs and controllers. But some display controllers from some vendors come with different equations. If you look at the last two points, the LCD_X and LCD_HY registers: the HY there is different from the usual DRM terms, because Allwinner uses its own formulas to calculate the front porch and back porch. That is the main difference, basically. These are the typical areas you need to look into on the controller side.

Then we have the panel side. As I said, the panel has a panel IC and possibly a bridge. First, identify whether the panel really carries the IC you expect, because some vendors ship a panel under their own vendor ID but with another vendor's controller inside: the branding is vendor X while the actual panel controller is from vendor Y. So you need to check whether the panel and its controller are from the same vendor or not. You also need to identify whether the panel has a bridge; if it does, you need to write a separate bridge driver, otherwise the topology, the display pipeline, will not match your drivers.

These are sample drivers. The panel-feiyang driver is a pure panel-IC driver, where the vendor uses its own in-house controller in its own panel. The second one is the Sitronix ST7701: the panel controller is from Sitronix, but the actual panel is from Techstar, a different vendor. And the third one is Chipone, a bridge IC vendor with a DSI-to-RGB bridge, so the bridge can be used with any panel controller.

The last point is the vendor panel initialization code. This is the code where most of the information is not available publicly; we do not know why the panel programming code is kept private. But there are some ways to reverse-engineer it. You can boot the existing BSP where the panel works, and in Linux we have regmap, with which you can read back registers; the controller holds the panel initialization registers, so you can read those registers and recover the sequence. Or you can go through the BSP code and work out the sequence from there. Those are the main options if, and only if, you do not have a programming datasheet.
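When a sequence has been recovered this way, panel drivers usually encode it as a plain table of DCS writes, looped over in the prepare path. This is a hypothetical fragment: the vendor register values are invented, and only the two standard DCS commands at the end are real.

#include <linux/kernel.h>
#include <drm/drm_mipi_dsi.h>

struct panel_init_cmd {
	u8 len;
	u8 data[4];
};

static const struct panel_init_cmd example_init[] = {
	{ .len = 2, .data = { 0xb0, 0x10 } },		/* unknown vendor cmd */
	{ .len = 3, .data = { 0xb1, 0x0c, 0x14 } },	/* porch-related? */
	{ .len = 1, .data = { 0x11 } },			/* DCS exit sleep mode */
	{ .len = 1, .data = { 0x29 } },			/* DCS set display on */
};

static int example_panel_init(struct mipi_dsi_device *dsi)
{
	ssize_t ret;
	int i;

	for (i = 0; i < ARRAY_SIZE(example_init); i++) {
		ret = mipi_dsi_dcs_write_buffer(dsi, example_init[i].data,
						example_init[i].len);
		if (ret < 0)
			return ret;
	}
	return 0;
}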
Finally, a simple test with the GPU. Even after writing all these display and panel drivers, we need to drive them with some GPU workload, because only then does the stack do real work. The GPU has two main parts: the user-space libraries, like libMali, and the kernel-space GPU driver; we have Vivante and Mali, for example. Identify whether those GPU drivers are part of the Linux kernel or not; if not, you need to build them as separate out-of-tree modules, there is no other option. These are the GPU drivers used for Allwinner and Rockchip; if anyone is interested, you can use these directly. The Allwinner Mali 400 stack is compatible with mainline Linux, and there is a package in Buildroot, so you can run it on top of mainline. The Rockchip Mali used in the RK3399 does not have mainline kernel support as of now, but the libraries are available in Buildroot. I also tried some of ARM's Mali GPU drivers, and they more or less worked with Qt.

Once everything is set up and you want to start testing graphics, just start with CONFIG_LOGO, where you can see the penguins on the display. Next you can go with a Qt application, which only needs some small libraries. If you want to push further, go with a graphics benchmark like glmark2, which exercises the Mesa drivers and everything else. And finally, if you want a full-fledged windowing system, you go to X11 or Wayland.

These are the main references I used while preparing these slides, together with my own experience; almost all of the slides are based on my experience, because I have worked on several panels and controllers over the last three years. I used the specification datasheet from MIPI, the DRM KMS guide from the Linux kernel documentation, and a talk from Maxime Ripard.

This is my setup. In this setup you can see all the classes of panels. The left panel is a two-lane device in video mode; that class is used in refrigerators. The middle one is burst mode, four-lane. And the last one is non-burst, four-lane. These are the typical classes of MIPI DSI panels on the market right now.

Yes, if you have any questions? ... It is partly down to the panel, but sometimes the same panel can work with burst or non-burst. If you want to use that particular panel in an embedded system where you need fast data rates, a fast display path on the device, you go for burst mode. I would say that if the panel supports burst mode, it will also work in plain video mode; burst is simply faster than the normal video mode.

I have 125 commands in my panel init sequence, and I do not even know what all of them are; that is the main problem, we do not know exactly what those commands do. Even after reverse-engineering, I could identify some of the commands from my display parameters, the vertical back porch, the front porch, the active area, but the other commands I do not understand. If I comment those commands out, the display does not work, so I still have to keep them. Linux maintainers will still take those command sequences, even though nobody knows exactly what they do; this is the main gray area. And the other problem is that these panels come mostly from China or Taiwan, and even when you buy new panels, you cannot get any datasheets for them.

Yes? No, not exactly. Command mode and video mode can work together, I would say; they are not entirely separate things. In my experience, command mode is the interfacing part at the start, like a handshake, and the actual buffer transfer between host and peripheral happens in video mode. Burst mode is a different matter; it depends on the panel itself. Up to plain video mode you can get panels easily, but for burst mode you have to go to particular vendors, and those vendors sometimes offer burst and sometimes not; it is up to their design.
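Since the burst question keeps coming up: in a Linux panel driver the video-mode variants are selected through the DSI mode flags. A minimal sketch, assuming the flags from drm_mipi_dsi.h; the helper name is made up.

#include <drm/drm_mipi_dsi.h>

static void example_pick_video_mode(struct mipi_dsi_device *dsi, bool burst)
{
	if (burst)
		/* Burst: pixel data is sent time-compressed at full lane rate */
		dsi->mode_flags = MIPI_DSI_MODE_VIDEO |
				  MIPI_DSI_MODE_VIDEO_BURST;
	else
		/* Non-burst with sync pulses; plain MIPI_DSI_MODE_VIDEO
		 * alone gives non-burst with sync events */
		dsi->mode_flags = MIPI_DSI_MODE_VIDEO |
				  MIPI_DSI_MODE_VIDEO_SYNC_PULSE;
}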
And whether you want burst or non-burst is up to your design, basically. On this particular board, the one on the right that goes into refrigerators, they do not use a burst panel, but the middle one is a burst panel. So it is purely based on the application; at the same time, though, the controller must also support burst.

Any questions? Sorry, I did not catch that. Yes, we can do that; I wrote a bridge driver where the code talks directly to the DSI-to-RGB bridge. Most of the bridge code is at a starting point right now, meaning there are very few bridge drivers that convert between these interfaces. But Rockchip has a separate bridge driver that not only converts the interfaces, but also provides a path from the host to the peripheral where you can exchange data packets while converting the interfaces, something like that.

Okay, I think I am done. Thanks. Thank you.