So, welcome everybody to the last session of the day. I understand that you mostly came here to fall asleep; that's probably why you're in this room. Okay, I don't know if Thomas is in the room. Anyway, thank you for coming.

This is going to be a talk about CDF, and most probably the last talk about CDF, at least in its current form. The next one I'm going to propose will either be about how great it is that CDF has been accepted and that we're going to merge it in the kernel, or we'll do something else. It started too long ago: if I'm not mistaken, it was one year, two months and one week ago, exactly, that I posted the first version. And since then — I don't know if any of you know your Greek mythology, but I've felt a bit like that. Does anyone know what that is? No one? Come on. Yeah, well, okay. Those are the Danaides. In Greek mythology they are women who were sentenced to pour water into jars that had leaks in the bottom, for the rest of — well, I would say their lives, but I should say the rest of their deaths. And that's more or less how I feel about CDF. It's a never-ending story, and every time I talk about CDF with people, every time I have a meeting, every time I try to give a talk at a conference, people come up with new ideas, and instead of moving the project forward it sometimes feels like it just keeps going backward.

So we're going to try to summarize that today: see where we come from, what problems we're trying to solve, the different proposals that have been posted, and how, up until today, they have been accepted, rejected, or something in between. At the end of the talk I want to have a question and answer session that should be a bit more of a discussion session, because I want you to take this as an opportunity for me to get feedback from you, and to see how we can try, one more time, to get CDF — or something that looks like it — upstream.

Does any of you want me to start with a quick overview of display concepts? A couple of previous talks about KMS or the display started with an overview so that everybody would be on the same page. Are there people who know nothing about displays in the room? One in the back, two — don't be shy — three. Okay, there's a couple. So I'll briefly go through the key concepts; that won't take long. Everybody else can fall asleep and I'll signal when you have to wake up.

Displays. I'm not talking about 3D, I'm not talking about GPUs or hardware acceleration; I'm just talking about the display — getting pixels out to the screen, getting images out to the screen. And that's a really basic concept. There are actually two key concepts in there.

The first one is the concept of scan-out. Scan-out is taking pictures, taking images, that are stored in a memory buffer somewhere — that's called a frame buffer in the terminology we use — and there's a hardware component that will read that memory and do whatever is needed to get those pixels onto a physical bus. So you can think of the scan-out phase as going from memory, from a memory frame buffer, to a physical display bus. That can be any kind of bus: a parallel bus where you get your uncompressed pixels, HDMI, pretty much anything you want. But it's just the transition between memory and the physical world, where you get your electrical signals out of the system.
So that's the first key concept. The second key concept is composition. Composition, composing, is the act of taking several frames, several images, and putting them together to create a single one. That can involve scaling input images, rotating them, pretty much any operation you can do on the input images, and then blending them together, possibly with transparency, to create one output frame that is going to be sent out of the system. Usually the component that is responsible for the scan-out process is responsible for the composition process as well. Those are really the two key concepts in display; if you understand that, you understand more or less what display hardware does today.

If we look at what we have in the Linux kernel now — because we're talking about Linux — we have a couple of display APIs. I'm not even going to mention FB there. Well, too late, I've mentioned it, but I'm not going to talk about it. Today in the kernel the display API of choice is KMS, and that's what you need to use: if you write a new display driver, go for a KMS driver. If you need to write an entirely new display stack — any Ubuntu people in the room? — if you need to do that, go for KMS as well. Don't think about FBdev. Everybody is moving away from FBdev. Android doesn't require FBdev. There's no reason to use it in the embedded space, and there's no reason to use it in the desktop space either; it's a thing of the past. So we're going with KMS, and I'm briefly going to introduce the KMS concepts because they will be relevant to CDF.

KMS models the hardware as a couple of blocks. On your left-hand side you have the green boxes that represent the memory objects, the frame buffers — the images that are stored in memory. That's outside of the display hardware; it's just memory. Then you have what is called the CRTC. That's a CRT controller — a pretty old name that would be replaced by something else nowadays — but it's the piece of hardware that handles composition and the scan-out process. At the output of the CRTC you get pixels on a physical bus. That bus can be inside your SoC, or it can be outside your SoC on the board, but it's still a bus where you transmit pixels directly. That goes to encoders. An encoder is a chip that takes pixels as input and outputs pixels, and in between it translates, for instance between LVDS and HDMI, parallel RGB, DisplayPort — whatever conversion you need to do, that's handled by an encoder. The KMS model supports multiple CRTCs; I'm only showing one in this picture for the sake of simplicity, but you can have several CRTCs, and you can have several encoders connected to the same CRTC or to different CRTCs. At the output of the encoder you still have a physical bus with signals going over it, and that then goes to connectors.

Connectors — you have to remember that KMS comes from the desktop days, and in the desktop days, when you had a graphics card, you had connectors on it. You didn't have panels directly embedded inside your desktop computer case, which would have been a bit useless.
So you get connectors at the output of the encoders. Nowadays those correspond either to real connectors, or they could actually be the display panels themselves, in laptops or in embedded systems. And those are really the three objects that KMS is based on: the CRTC, the encoder and the connector. Together they create your whole display pipeline.

In the embedded world there's a bit of a split. Frame buffers are always in memory; that's a memory object. The CRTC is always inside your SoC. The encoders can be either inside the SoC as well, or located outside the SoC. The connector, obviously, is not part of the SoC; that's on the board. That's really the core KMS device model. We could talk about frame buffers and memory management, but that's a bit unrelated to CDF. I have a couple of slides on that and I can go back to them if there are questions at the end of the talk, but let's leave frame buffers, GEM objects and buffer sharing aside.

Modes are still a key concept that we need to go through, because they are really related to CDF as well. A display mode is just the set of parameters that configure the way your display will output the image physically. Part of the display mode is a really important pair of parameters, which is simply the width and the height of the image — the resolution of your display. That's the core parameter that users will see. But then we have a couple of other parameters related to the display timings, and those are related to horizontal and vertical blanking. When an image is output on a physical bus, the pixels are output one after the other, and at the end of a line you have what we call blanking. That's a time during which no active pixel is being sent, and it is usually used by the hardware that receives the image to prepare for the next line. To be able to tell the difference between the active part of the image and the blanking, and to know that we're moving to a new line, we have a synchronization pulse. So we have synchronization signals in the horizontal direction and in the vertical direction, and the timing of the pulse inside the blanking area is something that needs to be configured. That's a core parameter of the display mode. It's usually not something the end user will see, because they mostly only care about the resolution of the display, but the timings are really important for the developers, for the drivers.

So the timings are: the synchronization pulses, which have a width; an area called the back porch, which comes after the sync pulse; the active area of the image; and the front porch, which comes before the sync pulse. You shouldn't imagine just one image — you actually have several frames going out one after the other, so the front porch area right over here is also right before the sync pulse of the next frame. In KMS this is represented a bit differently — the values are computed a bit differently — but that doesn't matter much here; we can come back to it if we have questions.

The important operation in KMS related to the mode is — well, the name is pretty simple — the mode set operation. KMS stands for kernel mode setting, so the main purpose of it is to set the mode of the display. And it takes lots of parameters.
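To relate the terminology above to actual structures: a KMS mode stores the timings as "sync start", "sync end" and "total" positions rather than as porch widths, and the mode set itself really is a single call from user space. Below is a hedged sketch using the libdrm API; the fd, crtc_id, fb_id and connector_id values are assumed to have been obtained elsewhere.

```c
/*
 * Correspondence between the porch/sync terminology and the KMS mode fields
 * (horizontal case; vertical is analogous):
 *
 *   hsync_start = hdisplay    + hfront_porch
 *   hsync_end   = hsync_start + hsync_len
 *   htotal      = hsync_end   + hback_porch
 */
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Mode setting from user space is one call: take a connected connector,
 * pick its preferred mode, and ask for frame buffer 'fb_id' to be scanned
 * out by CRTC 'crtc_id' to that connector.
 */
static int set_mode(int fd, uint32_t crtc_id, uint32_t fb_id,
		    uint32_t connector_id)
{
	drmModeConnector *conn = drmModeGetConnector(fd, connector_id);
	int ret;

	if (!conn || conn->connection != DRM_MODE_CONNECTED ||
	    conn->count_modes == 0)
		return -1;

	ret = drmModeSetCrtc(fd, crtc_id, fb_id,
			     0, 0,		/* x/y offset into the frame buffer (cropping) */
			     &connector_id, 1,	/* connectors to drive (cloning is possible) */
			     &conn->modes[0]);	/* the mode, including all the timings above */

	drmModeFreeConnector(conn);
	return ret;
}
```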
Basically, with that call, user space tells the driver: to drive the display hardware I want to use this CRTC, I want to scan out this frame buffer, and I want the output to be available on a couple of connectors — well, at least one, otherwise nothing is displayed, but you can have several connectors when you want to clone your display — and I want to use a given mode. You may also want to crop the frame buffer, so those parameters are passed as well. So there's a set of parameters passed to the driver; it's a pretty simple API, just a single call. But behind the scenes there's a lot to configure, because you will want to configure your CRTCs and all the connectors and all the encoders you have in the system, and verify that the mode the user gives you will work on the monitors connected to your system. So there's lots of work there. KMS handles that, and there are lots of helper functions that make your life easier when you want to write a driver. But the basic concept of mode setting is just this: a single function call.

Another concept — one that's not really part of KMS at the moment — is the media controller, and that's something I need to go through quickly as well because it's a key part of CDF too, so I'm not going to skip it. The media controller aims at modeling complex media hardware and exposing the topology of the hardware to user space. That's its core purpose: a model of the hardware, and a way to expose the topology of the hardware to user space. It models the hardware as a graph, as you can see over here — and this one is actually pretty simple compared to what we have today in embedded devices. The graph is made of entities — the boxes — connected through links — the arrows between the entities — and on every entity you get connection points where you can attach the arrows; we call those pads. An entity has a couple of properties: it has a numerical ID, it has a name, it has a type — a few fields like that.
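To make those three objects a bit more tangible, here is a deliberately simplified sketch of what an entity, a pad and a link carry. These are not the exact kernel definitions (the real ones live in include/media/media-entity.h and have more fields), just the shape of the model:

```c
struct media_entity_sketch;

/* A pad is a connection point on an entity; essentially just a number. */
struct media_pad_sketch {
	struct media_entity_sketch *entity;	/* the entity this pad belongs to */
	unsigned int index;
	unsigned long flags;			/* source or sink */
};

/* A link connects the output pad of one entity to the input pad of another. */
struct media_link_sketch {
	struct media_pad_sketch *source;
	struct media_pad_sketch *sink;
	unsigned long flags;			/* e.g. enabled / immutable */
};

/* An entity is one hardware building block in the graph. */
struct media_entity_sketch {
	unsigned int id;			/* numerical ID */
	const char *name;
	unsigned int type;			/* what kind of block this is */
	unsigned int num_pads;
	struct media_pad_sketch *pads;		/* connection points */
	unsigned int num_links;
	struct media_link_sketch *links;	/* links starting or ending here */
};
```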
So an entity is a building block, a hardware building block; the media controller only models the hardware, there's no software concept in there. Then you have the pads: a pad is a connection point on an entity. If you have a display panel, it has a single input, so just one pad. If you have an HDMI encoder that takes RGB signals in and outputs HDMI, you get two pads. And you can have entities with more or fewer pads depending on what they do. A pad is just a number — it has an index and that's pretty much it. And then we connect entities through their pads, and for that we use links. A link object is just a source, a sink, and a bit of information about the link, for instance whether it's active or not, because you can break and recreate links, and the media controller will activate and deactivate them.

So that's really what the media controller is about: modeling hardware and exposing it to user space. It is a kernel-side API with objects that you can use to model the hardware, and then there is a user-space API that exposes that, so user space can see what's inside. The media controller is not used to configure the hardware underneath; that's the responsibility of, well, the KMS subsystem, the Video4Linux subsystem, the ALSA subsystem when we deal with audio. That's a totally separate problem. It's just about exposing things to user space, so that the other subsystems don't have to do that themselves and can build on top of it.

Now, getting to CDF, I want to go through the different problems that I had to solve. It started extremely simply. The first problem that I set out to solve was actually to create device tree bindings for my display hardware. Well, I'm not sure that's extremely simple — I should probably scratch that word — but back then I had an FBdev device, and I had to create device tree bindings for it. It was just a device inside the SoC, one output, a panel outside. So I thought, OK, that's not going to be difficult: I'm just going to create device tree bindings for my device and for the panels, and have a connection in between. And I realized that to be able to create device tree bindings, you actually have to standardize things a bit. Because on the panel side, if I want to describe a panel in the device tree, I have to describe the properties of the panel, and to be able to use that properly, I have to describe them in a more or less standard way.

So I looked in the kernel and tried to see how panels were handled by the display drivers. And I realized that we actually have lots of panel drivers that are more or less generic. There's a panel driver model, but it's really tied to FB, so it wouldn't work with KMS, which was a pretty bad thing. And we also had panel drivers that were specific to particular display drivers: TI had panel drivers for the OMAP, Samsung had panel drivers for the Exynos, and there were a couple of other ones. So if you had the same display panel in two different systems, one using a TI OMAP chip and one using a Samsung Exynos chip, you had to write two drivers for it. It was a bit of a mess. I thought, OK, can't we maybe create a single panel driver model that would work across the kernel? So that's the second problem I thought I would have to solve before creating the DT bindings. And I thought, OK, that's not going to be really difficult, because what's a panel? It's a simple piece of hardware. It takes a video stream as input.
Usually it's not configurable at all. It has a default mode, a native mode; you just need to query the panel driver to know that mode and use it as the only possible output resolution. And, well, maybe you've got two GPIOs to control the panel and the backlight, maybe a couple of regulators, but really simple stuff.

And then I realized that we actually have more complex panels. You have panels that you can control through I2C, for instance, or SPI. And you also have panels that you control through the video bus. There are standards called MIPI DSI and MIPI DBI: those are two video buses, a serial bus and a parallel bus, that send data — video data — to the panel, but that can also be used to control the panel. So they can be used to send control messages and to read information back. You've got a video bus that sends the video stream to the panel, but it can also be used to get information back. I had a couple of panels like that on the platforms I needed to support, and I realized that we didn't have any infrastructure in the kernel that could model that control bus. For the MIPI DSI bus there was no model. Once again, there was a model inside the TI OMAP display driver and inside the Samsung Exynos display driver, but nothing that was common. So I thought, OK, if I want to solve that problem, I have to create an infrastructure in the kernel that I can use to support the different MIPI buses for the panels.

Around that time I also got asked to write a KMS driver for a display device — actually for the same display hardware that I had an FBdev driver for. And I didn't want to have two panel drivers for the same panel, one to be used by FBdev and one to be used by KMS. So for that reason, I thought that part of the problem was also to create a framework that was tied neither to FBdev nor to KMS, but that could be shared between the two. Those are the four problems that I thought I needed to solve.

And then came round two. After talking with people and sending a couple of RFC patches upstream, well, to the mailing lists, a few other problems appeared. I realized that it's great to have a framework that can support panels, and to have DT bindings for that, but it's not only panels. I mentioned we have encoders; they can be inside the SoC or outside the SoC. You have, for instance, HDMI transmitters that you control through I2C, and we needed drivers for those. We had a couple of drivers, but they were really tied either to a specific subsystem API or to a specific driver, and that's something I felt I should try to solve. Since I had a panel framework that was modeling the panel as an object with an input bus and abstract operations, I thought, OK, it's not going to be difficult to extend that to also support encoders. Bridges, encoders, transmitters — it's more or less the same concept, just different words. And I wanted to create something that would support that as well, so that we wouldn't need one framework for the panels and one framework for the encoders, two completely separate things that would have to interact with each other; that would be more complex in the end. So I wanted to do that as well. And the problem with that is that, well, I had to write DT bindings. That was my first goal.
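As a quick aside on the MIPI DSI control-bus problem mentioned above: the kernel did eventually grow a DSI bus model, drm_mipi_dsi. Purely to illustrate what "controlling a panel over the video bus" means in practice, a driver skeleton on that model looks roughly like the sketch below; the panel, its name and its init sequence are invented, and a complete driver would issue the DCS commands from its enable path rather than leave them in an unused helper.

```c
#include <linux/module.h>
#include <drm/drm_mipi_dsi.h>

/* Illustrative only: "example-dsi-panel" is a made-up device. */
static int __maybe_unused example_panel_power_on(struct mipi_dsi_device *dsi)
{
	int ret;

	/* The same serial link that carries pixels also carries these
	 * standard DCS control messages. */
	ret = mipi_dsi_dcs_exit_sleep_mode(dsi);
	if (ret < 0)
		return ret;

	return mipi_dsi_dcs_set_display_on(dsi);
}

static int example_panel_probe(struct mipi_dsi_device *dsi)
{
	/* Describe how this panel uses the link... */
	dsi->lanes = 4;
	dsi->format = MIPI_DSI_FMT_RGB888;
	dsi->mode_flags = MIPI_DSI_MODE_VIDEO;

	/* ...and register with the DSI host that drives it. */
	return mipi_dsi_attach(dsi);
}

static struct mipi_dsi_driver example_panel_driver = {
	.probe = example_panel_probe,
	.driver = {
		.name = "example-dsi-panel",
	},
};
module_mipi_dsi_driver(example_panel_driver);

MODULE_LICENSE("GPL");
```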
And when you think about it, when you have encoders and panels you can create a pretty complex display pipeline. It's not just one CRTC going directly to a panel: you have one or more encoders in between, you can chain them, you can have multiple encoders connected to the same output, you can have multiple panels connected to the same encoder. It's a graph; it's not just a linear pipeline. And for that reason, the device tree bindings that I thought would be pretty simple in the first place needed to be able to describe a complete graph. So the problem became more complex.

With the device tree bindings I also realized that we had another issue. Before the device tree, when we were living with just platform data — and on the desktop side it's the same thing — what we usually had was one platform device that was your display device, with a big data structure of platform data for that device. And in that data structure there was, say, a list of encoders and a list of panels. So when the display driver probed the device, when it initialized the device, it would go through that information and say: oh, I've got a panel over there, so I'm going to create the panel device. It creates the panel device, the panel driver is loaded and gets bound to that device, and at that point you have a panel object, you have a driver bound to it, and you can work with it. With the device tree it's completely asynchronous. You can't rely on the display driver creating the devices, because the devices are located in different places in the device tree. If you have an I2C HDMI encoder, it's under the I2C master, the I2C controller node — that's right at the top here. So you get your I2C controller node, and your HDMI encoder is a child of that node. You also have your display controller node, which is inside the SoC, on the same bus as the I2C controller. And you can have your panel, and an HDMI connector, described in the device tree as well. So they're located in different places in the device tree, and there's no way you can predict the order in which they're going to be probed. When your display driver gets probed, the panel might not be there yet; the HDMI encoder might not be there yet; they will come later. So that's not really a DT issue by itself, but it showed that, when moving to the device tree, the model we had for initializing all the drivers broke. One more problem I had to solve.

And then — this is not really a secret, but it's not one of the core goals of CDF at the moment — one thing I want to do is to actually share the drivers, not really for the panels, but definitely for the encoders, for instance, between KMS, FBdev and Video4Linux. Actually, FBdev is out of the equation at the moment, so it's just KMS and Video4Linux. But we still have two completely separate subsystems that don't interact with each other, and we have chips, components, that are used by one or by the other, and we don't want to duplicate the drivers. So that's an additional problem. That's long term; it's not in the CDF proposal that has been sent to the list. If CDF is accepted, I might work on that as a next step.

Bonus problem: we actually have panels that sit on multiple control buses. You have panels that have an I2C interface and that can also receive commands through the video bus.
You have to configure part of the panel through I2C, then send configuration messages on the other bus, and then go back to I2C. And that's something we don't model right in Linux. It's not the core issue for CDF, but if we can solve it, that's great as well.

Now a couple of use cases — and I want to emphasize the fact that they are actually real use cases. These are not things I've made up that don't exist in hardware; they are things that exist, that I work on or that I will work on in the near future. For the first one I've put the company names on the slide so that you can see they're real. That's Renesas hardware: the display side, the DU, the display unit, so that's the Renesas display controller. It doesn't matter much if you can't really read the small print, but basically what's in blue is inside the SoC, and what's in green is outside the SoC. We have encoders that can be chained, we have an HDMI connector, a VGA connector and a panel on the output, and a whole topology inside. That's hardware I need to support. If you remember, in KMS we have a model with a CRTC, an encoder and a connector, and in a pipeline like this you have more than three blocks. So we have to more or less squeeze that into the KMS model and group blocks together. But at some point in the future we might actually want to expose this to user space somehow, because there are links in between that can be controlled — and if you group all the blue blocks into a single block, the links disappear all of a sudden: they get a default configuration at boot time and you can't change it.

The next one is also Renesas hardware; apologies for simplifying the diagram a bit, the hardware is more complex than that. That's a video processing unit. It takes images from memory, processes them — there's a scaler, there's a composer with alpha blending — and writes them back to memory. At the top you get your memory inputs, there are memory outputs over here, and a couple of processing blocks; some are missing in the middle. So it's totally unrelated to the display: it works from memory to memory. Except that the engineers thought it would be good if the images generated by that unit, instead of being output to memory, could go to the display as well. That's useful. So there's a small block over here, in the lower left corner, that has a pad on the bottom connected to nothing — because it goes to the display controller over here. So we get two separate devices. They're really two different IP cores inside the SoC, handled by completely separate drivers: one KMS driver for the display controller, one Video4Linux driver for the video processing engine. They don't interact with each other, but there's a link in the hardware that we have no way to model at the moment.

Next use case: Xilinx FPGAs. Well, it could be any FPGA, but I'm working with Xilinx, so that's the name I'm using. We have the capture pipeline in green: you get a camera sensor, like the one you could find in a cell phone or any kind of device, and you get an HDMI decoder, so there's an HDMI connector to input images. You get a couple of processing blocks, and there are muxes inside, so you can re-route the streams: you can capture one of the two streams directly to memory using DMA, or you can scale one of them, and you can also send one straight to an HDMI encoder and out to an HDMI connector.
So you get a complete pipeline that goes from the left, through the hardware, and out to the HDMI connector without going through memory at any point. There's no memory involved: you can take your sensor image, process it and send it to HDMI. That's video capture hardware — it has an output as well, but there's no way we're going to support that directly with a KMS driver, because KMS is modeled around the concept of a frame buffer that lives in memory, and we have no memory here. We have another pipeline in blue; that's a display pipeline. You have a scaler, you have a composer that can compose a full image or a scaled image, and output that over HDMI as well. The diagram is a bit simplified, but it's just to show you the key concepts. So we get one Video4Linux device, we get one KMS device, and we have two chips that can be used by either. Exactly the same piece of hardware, two instances on the board, but in one case it needs a Video4Linux driver to work with the Video4Linux subsystem, and in the other case it needs to work with the KMS subsystem. Same for the scaler: exactly the same scaler, two instances, one used on the camera side, one used on the display side. And it's an FPGA, so you can do pretty much whatever you want with the configuration: you can instantiate all the blocks you want and connect them any way you want, so you can actually do something like this. All of a sudden we have a single device; we don't have two totally separate devices anymore. You get something that can be considered a capture side and a display side, but there's a link between the two. So we're back to the previous use case with Renesas, where we have KMS devices and Video4Linux devices that need to talk to each other. They need to talk, and they need to share drivers.

Moving on now to what is nowadays called CDF. It actually started with a different name: the first version was called the Generic Panel Framework. That was posted, as I mentioned, a year, two months and one week ago — no, I don't keep count, it's just that I checked it today. That first version only aimed at supporting panels: the first round of problems I mentioned, just supporting display panels, and that's it. And I came up with a device model for that. Right in the middle, in blue, you have your panel driver. The panel driver is going to be configured: it takes either platform data or device tree data — that's on the right-hand side. So that's configuration data passed to the panel driver using the platform data mechanism, or parsed from the device tree by the driver. That's pretty common. Then the panel driver needs to talk to the panel using whatever control bus the panel uses. If the panel is an I2C or SPI device, it will use the I2C or SPI API. It could possibly be memory mapped, or it could be controlled through the DSI video bus. So you get a couple of options underneath, but that's pretty usual: if you have an I2C-controlled panel, it's going to be an I2C driver, it's going to work with the I2C subsystem as usual. No rocket science there. And then you get the display controller driver, which needs to interact with the panel. There are two interactions that are needed. The first one I called the configuration. For a really simple panel there's not much to do; it's really about querying information: I want to know what resolution, what mode you support, because I need to configure myself for that mode.
You also need to enable and disable the panel. So that's really control and configuration of the panel. But the panel also receives video data, so we have video operations that can be used to start and stop the video stream. That model is really pretty simple: you get your display controller driver, which gets called through the FBdev or the KMS API on top, and it then calls into the panel driver, which calls back into the display controller driver to control the video stream. That was the model.

Then, as I mentioned, I realized that this was too limited and I wanted to support more than just panels — namely encoders. So I renamed the proposal to the Common Display Framework: it's not just panels anymore, now we're talking about encoders as well. The first version of that — it was actually called V2, because there was the Generic Panel Framework before — used a control model that we referred to as the Russian dolls model. Basically you have a pipeline, just a linear pipeline; that's the only thing we set out to support: possibly several encoders, ending at some point with a panel. And to control that, the display controller driver on the left side just calls into the next entity. So we have the big green box that contains a transcoder, and the panel that's connected to it. The display controller calls into that driver, which knows it's connected to a panel, so it just forwards the call. The calls come from the display controller and go from driver to driver, from device to device, until they reach the last device. And at that point the panel driver calls back through the video control API — to ask for the video stream to be started or stopped — all the way up to the display controller driver.

There was quite a lot of positive feedback. There were people who took the patches, posted reworked versions and said: well, this looks good, but I don't really like this part or that part, or I've got a better proposal, a better idea for this. So we got the Common Display Framework -T, where the T stands for Tomi Valkeinen from TI, and the Common Display Framework -TF, for Tomasz Figa from Samsung. And then a couple of other patch sets that ported actual display drivers, panel drivers and encoder drivers to the Common Display Framework, to show that it worked. So that was quite positive. At the same time there was also negative feedback, but I'll get to that later.

So I took the feedback and worked on the third version. It kept the same name, still the Common Display Framework. And the third version, instead of modeling the hardware as just a linear pipeline — because we realized that we actually had hardware that wasn't linear — uses a model that's pretty similar to the media controller model. It models the hardware as entities, building blocks connected to each other. It's exactly what I showed you earlier about the media controller, the same key concepts, except that people didn't like the name "pad": they preferred "port" for some reason. It's just a naming issue, just bikeshedding as usual, but it's really exactly the same model. The display entity — if you look at struct display_entity, it actually embeds a media entity. So it's really an extension of the media entity object.
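To give a feel for what such an object could look like, here is a purely hypothetical sketch of a display entity carrying the control and video operations just described. CDF was never merged, so none of these names are real kernel API; they are reconstructed from the description in the talk:

```c
#include <linux/kref.h>
#include <linux/module.h>
#include <linux/types.h>
#include <media/media-entity.h>
#include <video/videomode.h>

struct display_entity;

/* Control side: query the (often fixed) modes, power the device up or down. */
struct display_entity_control_ops {
	int (*get_modes)(struct display_entity *ent,
			 const struct videomode **modes, unsigned int *count);
	int (*set_state)(struct display_entity *ent, bool enable);
};

/* Video side: start or stop the video stream on the entity's input bus. */
struct display_entity_video_ops {
	int (*set_stream)(struct display_entity *ent, bool enable);
};

struct display_entity {
	struct media_entity entity;	/* embeds a media_entity, as in CDF v3 */
	struct module *owner;		/* owner driver */
	struct kref ref;		/* reference counted */
	const char *name;
	const struct display_entity_control_ops *control_ops;
	const struct display_entity_video_ops *video_ops;
};
```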
A display entity has a couple of properties: it has an owner driver, it's reference counted, it has a name, and there is the couple of operations I mentioned, things like controlling the video buses. There's a state, and there's also an important piece at the end: what I call the display entity notifier. That aims at solving the probe ordering problem I mentioned before, the fact that probing is asynchronous. In the CDF model, when your display driver is registered, it builds a list of all the panels, all the encoders, all the pieces it needs. With that list it calls a CDF API, the CDF notifier API, passing the list and saying: I want to be notified when all those devices become available. As I mentioned, they probe asynchronously, you can't know when they will be available, so the driver asks to be notified. When the drivers associated with the panels, the encoders, whatever chips you have, probe their devices, they register themselves with a CDF call, and when all of them are available, the display driver gets called back.

By the way, if you want to interrupt with a question, that's totally fine — but let me answer the question I haven't received yet, which is: why don't I use deferred probing? That's a mechanism we have in the kernel: when a driver probes a device and the resources it needs are not ready yet — like the panel isn't there yet — the driver can simply request to be re-probed, to probe the device again later. The reason why we can't do that here is that we can actually have circular dependencies. You can have your display controller providing a clock that is used by the encoder; without that clock you can't access the encoder device, because, say, it's needed to access the I2C bus on which the encoder sits. And the display controller needs the panel, or it needs the encoder. So there's a loop in there. We can't use deferred probing on both sides, otherwise the display controller will say: well, I'll try again later when the panel is there; and the panel will say: well, I'll try again later when the display controller is there; and you're left with nothing. So in the CDF model, all the entities use deferred probing and the display controller uses the notifier.

And we express the topology of the hardware in DT, in the device tree. The topology that is modeled by the media controller is expressed in the device tree: we have, in the tree, all the links between the devices. In this example, I have an HDMI encoder that has two ports, an input and an output — it's a pretty simple chip — and there's a property in the device tree that says: this port, this input, is connected to the output of the display controller. You get properties in the other nodes as well, saying my output is connected to this, or connected to that. So all the connections are expressed in some kind of overlay on the device tree, because the device tree itself is organized around the representation of the control buses, around where each device sits from a control point of view. We overlay on top of that the representation of all the links between the devices. And there's code in CDF that a driver can use to say: here's my device node in the device tree, please go through all the links and find all the devices that I will need. That's how you build the list of devices that you need, and then register the notifier.
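For the curious, the walk a display driver would do over those bindings looks roughly like the sketch below. The of_graph_* helpers are real kernel API; what the driver does with each remote device — handing the list to the CDF notifier — is the hypothetical part.

```c
#include <linux/of.h>
#include <linux/of_graph.h>

/*
 * Walk the port/endpoint links declared under our device tree node and
 * collect the remote devices we depend on.
 */
static unsigned int count_remote_devices(struct device_node *dev_node)
{
	struct device_node *ep;
	struct device_node *remote;
	unsigned int count = 0;

	for_each_endpoint_of_node(dev_node, ep) {
		/* Follow the link to whatever sits on the other side:
		 * a panel, an encoder, another bridge, ... */
		remote = of_graph_get_remote_port_parent(ep);
		if (!remote)
			continue;

		/* A real driver would record 'remote' in the list handed to
		 * the CDF notifier, to be called back once every one of these
		 * devices has probed. */
		count++;
		of_node_put(remote);
	}

	return count;
}
```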
There was still positive feedback, but a bit less than before, with someone working on a MIPI DSI bus implementation. I mentioned that MIPI DSI and MIPI DBI were problems I needed to solve, but they're actually completely separate from the CDF concept. It's just adding a new control bus to the kernel to be able to control the panels; it's not tied to the model we have in CDF. I posted that as part of the CDF patches because I needed it for the hardware I was working on, and people thought it was part of CDF, but it's totally separate. I'm fine with pretty much any solution for that. What I'm really concerned about is the device model, the device tree bindings, the probing model, everything that's in CDF. So MIPI DSI and MIPI DBI can be considered out of the equation at the moment.

Then there are a couple of patch sets that have been posted to the list that actually solve part of the problems I had. There's the DRM bridge infrastructure, which adds support for a component that sits in the pipeline between the encoder and the connector, more or less. So if you have more than just one encoder, you can use a DRM bridge to model that. It solves just part of the problem. There's also display panel support — an RFC, patches that got posted and that got merged, if I'm not mistaken — with a proposal to just support panels in DRM. At first I wanted an API that could work with both DRM/KMS and FBdev; FBdev is out of the equation now, so a DRM-only model is fine.

The fourth RFC of CDF is still to be done. It's not there yet; I'm going to work on it. I'm going to take all the feedback I've received and the implementation I've started, complete it, and see how it flies, or doesn't. It's going to be based on a different configuration model. Instead of the Russian dolls model, with all the entities forwarding the calls and knowing about each other, there will be a central piece of code that knows about the whole pipeline. It's a linear pipeline in this case, but it could be tree-based. And that piece of code will handle the configuration. It can be a pretty complex process, but it's going to live outside of the entity drivers, because one of my goals is to keep the entity drivers as simple as possible: they only need to care about themselves, not about the connections between each other. So that will be an external piece of code. For controlling the video stream, it's the same as the previous model: at some point we go to the entity at the end of the pipeline and we just ask for the video bus to be enabled or disabled. That doesn't change much, and it will not change in the fourth version. I'm running a bit out of time, so — another thing that might be interesting for questions later is the set of operations implemented by the entity drivers, the control and video operations.

And finally, I got quite a lot of pushback. People told me: well, this is great, except that you're trying to solve too many problems at the same time. Can't you just cut that into small steps, solve the first step and push that to mainline, then go to the second step, and do that until you reach your goal? Well, here's a bit of a personal story. I'm going skiing next winter. I live in Belgium, right up there, and I want to go all the way to the Alps in France. What I would like to do is take my car and drive straight ahead — that's the shortest path, there we go. I'd like to have that road in front of me and just drive it.
And that's impossible, and I understand that. People ask me to cut it into steps and take a couple of side steps, and, well, that's fine. We're going to take a slightly more winding road — why not, the scenery looks nice. It's going to take a bit longer, but if it gets me there, that's fine. But the thing is, all the people I've talked to about the various steps told me: you know, for that particular step, well, that's a nice idea, but for just that step you should do it this way — which is actually not a step in that direction, but in the other direction. And they have a point. For that particular step it's really valid: I think my solution is good, but they think their solution is better, and their solution is definitely not bad. What happens, though, is that if you take all those steps, the way people would like me to take them, and add them up, I'm going to end up skiing in the Caribbean, and that's not going to work. So that's my problem at the moment: I've got a big-picture problem to solve that's made of several smaller problems, there are different people who care about different parts — or who don't care at all — and they want me to solve the part they care about, in a way that's not really compatible with the big problem.

Contact information, the same as usual. We'll get to the questions, but if you want to contact me, there's my email address and the mailing list, and I'm still here for the rest of the evening and tomorrow. So now, discussion time — don't be shy. There's a microphone in the front.

Can you hear me okay? I guess. My question is: what I normally deal with, long term, is bringing up new silicon and new displays with new hardware, and one of the major issues I've had doing that with KMS and related, without frame buffer, without FBdev, is that it can be very, very difficult to find out which part of the chain is not working properly, and it's very difficult to inject debug data in there. For instance, I simply want a red pixel at the first coordinate, at 0,0, so I can look at it under an oscilloscope. That has been a major problem for doing these types of diagnostics. Based on what I was seeing here, it should be much easier to do that. Has anything been accounted for, in trying to do that type of diagnostics early on in development?

Okay, there are two parts to that. The first one is that if you want to actually get a pixel on the screen, you have to have support for the whole chain, right? That's not something that's going to change: you will still need drivers for all the components in the system, and you will still have to have them working properly. There are also two parts to diagnosing this kind of problem. If you want to say, I want this red pixel on that part of the screen so that I can check, well, you need a user-space component to actually configure the hardware to do that. We don't have that many test tools for KMS; we have a couple of them, and the situation is improving, but that's definitely something we could address. It's not really related to CDF itself, because it's more of a user-space API problem — having the proper test tools to help you, so you don't need to spend a week understanding the API, writing your own test tool and then not knowing whether it works properly. That's definitely an issue, and that's something we can work on on the KMS side.
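For what it's worth, the minimal user-space test being asked about — one red pixel in a frame buffer that a CRTC can then scan out — looks roughly like this with the DRM dumb-buffer interface. Error handling is trimmed, and fd is assumed to be an already-open DRM device node (e.g. /dev/dri/card0):

```c
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Create a dumb buffer, write one red pixel at (0,0) and wrap it in a
 * KMS frame buffer; returns the frame buffer ID. */
static uint32_t make_test_fb(int fd, uint32_t width, uint32_t height)
{
	struct drm_mode_create_dumb creq = {
		.width = width,
		.height = height,
		.bpp = 32,			/* XRGB8888 */
	};
	struct drm_mode_map_dumb mreq = { 0 };
	uint32_t fb_id = 0;
	uint32_t *pixels;

	drmIoctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &creq);

	mreq.handle = creq.handle;
	drmIoctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &mreq);

	pixels = mmap(NULL, creq.size, PROT_READ | PROT_WRITE, MAP_SHARED,
		      fd, mreq.offset);

	memset(pixels, 0, creq.size);		/* black background */
	pixels[0] = 0x00ff0000;			/* one red pixel at (0,0) */

	/* Register the buffer as a frame buffer so a CRTC can scan it out. */
	drmModeAddFB(fd, width, height, 24, 32, creq.pitch, creq.handle, &fb_id);

	return fb_id;
}
```

The CRTC would then be pointed at the returned fb_id with the drmModeSetCrtc() call sketched earlier in this transcript.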
Then on the kernel side, as I mentioned, you still have to have support for your whole chain. One thing that modularizing the drivers, and the way we support the hardware, would bring is that if you reuse one component between different systems, or if you use a component that's already supported in mainline for a different system, then the driver might not be bug-free, obviously, but at least you'll have a starting point that you won't have to hack on — one that can hopefully be trusted, or at least one that wouldn't be the first suspect when you check for problems.

Well, my concern here is that for each part of the chain you're talking about, I'd like to see the ability to inject or get debug data at each of those levels, to make sure I can identify at which point it's failing. So that I can say: at this point, everything up to there in the chain is working properly, now I can go to the next step and see if it's working properly. Is that reasonable?

That's a bit difficult, because the pipelines I showed you are hardware pipelines, right? You inject the pixel on the memory side — you create a frame buffer with your red pixel — and then it goes through the hardware. There's no software way, in the middle, to inject something or read it back, because it goes over video buses that you don't have access to; they're just hardware buses.

So the entities that you modeled are control entities? Yes — on the software side, all the entities that you see in the graphs just control the corresponding hardware, and in the graph that shows the video streams, the stream goes through the hardware from chip to chip, or from IP core to IP core.

If you're talking about, for instance, a bridge chip on the outside, you can probe that and actually see it. One of the problems we have with a lot of the HDMI bridges going from parallel to HDMI is that when you set up the driver, you're sending one set of data and the bridge thinks it's grabbing another set of data, and it creates something totally different. So I have to have some sort of point to work with. And in the past, the only real way to do that has been to use FBdev, because of how simple that side of the driver is.

Well, the thing is, I don't think it's really an FBdev issue — FBdev versus KMS is just a user-space API question. If you have bugs on the driver side and you don't configure your HDMI transmitter properly, it doesn't matter whether you use the FBdev or the KMS API in user space, right?

Agreed on that aspect, but the difference is that with KMS, based on what I'm seeing here, you have many more items in the chain, many more places for a failure to occur.

Well, you don't have to. Both model the hardware. If you have simple hardware, then you have a simple chain. If you have really complex hardware — chained encoders, for instance — then you have to model that, because it needs to be controlled. But if you have a simple chain, we don't really add complexity on the software side. The model can express complex chains and complex hardware, but it doesn't make things more complex when the hardware is simple.

So you're saying that early on in the sequence, if I wanted to model something as simple as going directly out of the DPI to HDMI, with no other encoders involved, that's possible and easy to do, based on what you're saying?
Yes. In the KMS or the CDF model, we don't want to make that more complex than necessary. We just need a model that can express complex hardware, but that doesn't create a complex model when the hardware is simple.

Next one. Yeah, I'll give you the microphone — if you can just pass it around. I just wanted to know: how will this integrate with the existing KMS API?

So, an important point I actually haven't mentioned: CDF does not change the kernel-to-user-space API. It's just an internal model that can describe your hardware and that allows you to have modularized support with different drivers, with all the information in the kernel. It might expose that information to user space through the media controller API, but that's not even required; that would just be something on the side that you can use to have fun with, but it's definitely not required. So the kernel-to-user-space API will not be modified; at this point that's really important. The KMS API will need to be extended in the future, because we're getting more complex hardware that at some point we won't be able to support properly — there will be hardware features that have no support in the kernel-to-user-space API, and we will have to extend KMS for that — but that's totally orthogonal to the CDF development. I don't want to change the kernel-to-user-space API, otherwise there's no way it's going to be accepted.

So basically KMS will call CDF? Yes — well, it's not that there's a layer underneath that KMS calls; it's that the KMS driver will use the CDF helper functions to create and control a complex piece of hardware that's made of different entities and different chips. And it's not required: we will still be able to write a KMS driver that doesn't use CDF. You can think of it as a helper that can help you support something complex, especially in the embedded space, where you have lots of components handled by different drivers.

Yes, we maybe have time for one last question. Okay, so — the control model, right? I'll quickly go through that. The idea is that I want to move the control code out of the entity drivers into something called a pipeline controller, and there's no way we'll be able to create a pipeline controller that works with every piece of hardware that exists. So the idea is to first support linear pipelines, and have generic pipeline controller code for that, which will more or less propagate the configuration from entity to entity and from pad to pad. It will set the video mode on this pad, then get the corresponding video mode on the other pad of the entity, propagate that to the next entity, and do that until the end of the pipeline. That's more or less the idea. It's pretty similar to what we do in Video4Linux, except that there we don't have a pipeline controller in the kernel — it's in user space in that case. Controlling all the parameters of all the entities is available to user space in Video4Linux nowadays, but this pipeline controller code will be in the kernel. Then, if you have a more complex pipeline that's really specific to your system, you can write your own pipeline controller code. If you have a more complex pipeline that has, say, a branch, but that is still pretty generic, you can extend the pipeline controller, or create a different one, because you have a different pipeline model. I want that to be pretty dynamic, because there's no way I'm going to get it right on the first try.
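As a rough illustration of that idea — entirely hypothetical, since none of this is merged — a linear pipeline controller propagating a configuration from entity to entity could look like the sketch below, a stripped-down variant of the display entity sketched earlier with a single assumed operation:

```c
/* Resolution plus the timings discussed earlier; only used through a pointer
 * here, so an opaque forward declaration is enough for the sketch. */
struct videomode;

struct display_entity {
	/* Assumed operation: accept the format arriving on the input pad and
	 * rewrite *vm into whatever the entity produces on its output pad. */
	int (*set_format)(struct display_entity *ent, struct videomode *vm);
};

struct display_pipeline {
	unsigned int num_entities;
	struct display_entity **entities;	/* CRTC side first, panel last */
};

static int pipeline_set_mode(struct display_pipeline *pipe,
			     struct videomode *vm)
{
	unsigned int i;
	int ret;

	/* Propagate the configuration pad by pad: what one entity outputs
	 * becomes the input of the next one, until the end of the chain. */
	for (i = 0; i < pipe->num_entities; i++) {
		ret = pipe->entities[i]->set_format(pipe->entities[i], vm);
		if (ret < 0)
			return ret;
	}

	return 0;
}
```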
So I want to create one that's pretty simple, that just supports linear pipelines, because that's the majority of the hardware nowadays, and something that we can extend later. The idea is that I don't want to spread the complexity of the configuration around: I don't want code inside every entity driver that has to care about the other entities around it, because otherwise every entity driver becomes more complex. I want to isolate the complexity in a single place. But we can definitely discuss that afterwards; I think we're running a bit out of time. Yeah, it's 25 past, and I think the bus is leaving in about five minutes. So thank you.