So, my name is Daniel DeGrasse. I work at NXP Semiconductors, supporting our SoCs upstream in Zephyr. And today I want to talk about the state of graphics support in Zephyr RTOS and where we can go from where we are now.

First off, a quick overview of what we're going to cover today. First, we'll have a brief intro to 2D graphics for anyone in the room that's not familiar with graphics, just so we all have a baseline, and then we're going to look at what graphics support exists in Zephyr today. From there, we're going to look at what graphics hardware we want to support within Zephyr and use that to define some use cases. Then we're going to review a couple of existing APIs, just to see what's available in the landscape, because there's no need to reinvent the wheel if we don't have to, right? We'll look at the current working draft of the API proposal I have for Zephyr, discuss some of the future work that needs to be done to move this forwards, and from there we'll try to leave some time for open discussion.

So, let's go ahead and start with a basic intro to 2D graphics. There are two parts to 2D graphics: vector graphics and raster graphics. Vector graphics is a mathematical description of an image. The idea here is something like a font or a logo, anything that you can define with a series of curves and arcs and that can scale up and down. For example here, this is just "Zephyr" rendered in a nice cursive font. Raster graphics is just a 2D pixel array, so it's best for stuff like photographs and other images. If you can define an image with vector graphics, you're going to be able to scale it; you're not going to have issues with upscaling, downscaling, interpolation, stuff like that.

So now let's discuss blending versus clipping. These are two additional things to be aware of with graphics. Blending is the idea of taking two rasters and mixing them together. It's typically more useful when they have an alpha component. You can see an example here of a couple of different ways you can blend two shapes together, putting one over the other, and how the alpha components mix. You can also use that for more complicated clipping operations. Clipping is kind of a subset of blending: for example, when you're drawing a path, you can clip it to within a rectangle. Right here, you can see you've got a circle path and you're clipping it down to get a different shape out of it. So that's a basic overview of graphics terminology.

Now, let's jump into paths. This is the final thing I want to cover with graphics. It's useful for 2D graphics because this is the way you typically define a vector graphic; SVG, Scalable Vector Graphics, uses this, and 2D vector graphics engines use it as well. This is an example of a path here: move a virtual cursor to a coordinate, draw a line to a different one, then we have a quadratic curve, which has just one control point that denotes the shape the curve is going to have, and then a cubic curve, where the two control points allow us to have an inflection point. Additionally, you can use a path to define the way you're going to draw, say, a P. With a P, you have the outer path that defines the outer shape of the P, what a P looks like, I hope. And then you have the inner shape that's kind of just a semi-circle.
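As a concrete sketch of what such a path description looks like in code, here's a minimal C representation of path commands mirroring the SVG-style commands just described. The type and field names here are purely illustrative, not any particular engine's API:

```c
#include <stdint.h>

/* Hypothetical path opcodes, mirroring SVG-style path commands
 * (names are illustrative only). */
enum path_op {
	PATH_MOVE_TO,  /* move the virtual cursor; 1 coordinate pair  */
	PATH_LINE_TO,  /* straight line; 1 coordinate pair            */
	PATH_QUAD_TO,  /* quadratic curve; 1 control point + endpoint */
	PATH_CUBIC_TO, /* cubic curve; 2 control points + endpoint    */
	PATH_CLOSE,    /* close the current contour                   */
};

struct path_cmd {
	enum path_op op;
	int16_t pts[6]; /* up to three (x, y) pairs, depending on op */
};

/* A small shape: move, line, quadratic curve, cubic curve, close. */
static const struct path_cmd demo_path[] = {
	{ PATH_MOVE_TO,  { 10, 10 } },
	{ PATH_LINE_TO,  { 60, 10 } },
	{ PATH_QUAD_TO,  { 80, 30, 60, 50 } },         /* control, end */
	{ PATH_CUBIC_TO, { 40, 70, 20, 30, 10, 50 } }, /* c1, c2, end  */
	{ PATH_CLOSE,    { 0 } },
};
```

A glyph like that P would simply be two contours in one command list, the outer outline and the inner semi-circle, which is where the fill rule comes in next.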
And what you can do there is define how that path is filled. You can say: if my pixel lies within multiple paths, then I'm not going to fill it. That's how you get that gap in the middle of the shape. So this is the way you interact with a vector graphics engine generally, and it can help us define an API for how we would work with them in Zephyr.

So, now let's go over the current state of Zephyr graphics support. First off, the CFB subsystem. This is a subsystem we've had in Zephyr for quite a while. It generally allows you to use character frame buffer fonts. The idea is that you can take a font, convert it into a raster, and then render it onto a monochrome display. There was also recently a PR merged that allows you to do basic shape drawing. But the key with all of this is that it's monochrome only. It's not meant for more complicated multi-color displays, and there's no concept of things like blending or complex shapes.

LVGL is where we typically go for more advanced displays, displays with multiple colors. You can use it with monochrome as well, but if you're going for multi-color stuff, typically we're going towards LVGL. Upstream LVGL does have support for several graphics acceleration engines, but Zephyr currently doesn't enable this. Right now I believe we're on 8.2 in our module revision and upstream is at 8.3.x or so, but the Zephyr enablement doesn't have any of those graphics accelerators ported over. LVGL does have a rich set of APIs that graphics accelerators can implement; it's just that since we don't enable them, we're doing all drawing in software with Zephyr right now.

So with that, let's start discussing a couple of the hardware engines. The idea of going over these is just to give you a sense of what exists in the landscape, because obviously that's what we want to support; we want to try and define a generic set of APIs for Zephyr.

First off, two peripherals from NXP. The first one is the PXP; here's an architecture diagram of it on the side. It supports blending: we can take two different surfaces, an alpha surface and a process surface, and blend them together, with a subset of all the possible blending modes available for those two buffers. Beyond that, you can also do rotation after you've done that blending. These are just simple rotations, 90, 180, and 270 degrees, but you can use that, for example, if your output display has a different orientation than the buffer you're putting in. Additionally, you can scale, and you can do color space conversion. So if you've got an RGB565 display, but you have an alpha component in the buffers you're putting in, you can convert it before you output it to the final display.

Additionally, it has this feature where you can output directly to an LCD controller. That's really interesting because it allows you to reduce the net amount of SRAM used. Generally, with the PXP, the normal use case is: you've got a screen-size buffer, which on some of these larger screens can be quite a bit of SRAM, and then a second screen-size buffer, and you're telling the PXP, go from buffer one and output into buffer two with color space conversion or rotation or something like that. But it also has the ability to do a handshake process, via this double buffer you can see here, to handshake with the LCD controller and use a much smaller buffer to do that process of output to the display.
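To put rough numbers on that SRAM saving, here's the arithmetic, assuming, purely for illustration, a 720x1280 RGB565 panel and a 16-line stripe buffer for the handshake mode:

```c
#include <stdio.h>

int main(void)
{
	/* Illustrative panel only: 720 x 1280, RGB565 (2 bytes/pixel). */
	const unsigned int width = 720, height = 1280, bytes_pp = 2;

	/* Classic pipeline: two full screen-size buffers. */
	unsigned int full_frame = width * height * bytes_pp;
	printf("two full framebuffers: %u bytes\n", 2 * full_frame); /* 3686400 */

	/* Handshake mode: one framebuffer plus a small stripe, e.g. 16 lines. */
	unsigned int stripe = width * 16 * bytes_pp;
	printf("one framebuffer + stripe: %u bytes\n", full_frame + stripe); /* 1866240 */

	return 0;
}
```

On a part with a few megabytes of SRAM, eliminating that second screen-size buffer is a significant saving.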
The SmartDMA is the second NXP peripheral I'll cover here. This is a simpler peripheral, but it has some of the same features, particularly pixel format conversion and rotation, and then this key one of direct output to the display. For both of these peripherals, I wanted to highlight that they have this ability to pipe directly to the display. And for those who don't know, with Zephyr's graphics API, or I guess display API, right now, there's not really a way to write directly to hardware with something like rotation in the mix. You can say "write a whole buffer to a display", but there's no way to say, hey, when you write that buffer, go ahead and rotate it or convert the pixel format or something like that.

DMA2D is a peripheral from ST I wanted to highlight here. This is somewhat analogous to the PXP: it's a 2D DMA engine generally intended for graphics operations. It allows you to blend a foreground surface onto a background via a preset equation that uses the alpha components of each buffer. It also allows you to just set a buffer to a solid color; that's a fast clear. So if you just want a red background, you can tell the peripheral, give me a red background. You can also do color space conversions, and it has the ability to use color lookup tables. If you have, say, an RGB565 display, but you only want to use eight bits per pixel, you can use a color lookup table to map those eight-bit values onto a subset of all the RGB565 colors.

Then there are the more complicated graphics engines. The GC355 and GCNanoLiteV are two IP blocks that are integrated into NXP's SoCs, based on the Vivante GPU IP. These are both supported by the VGLite API, and they have quite a few features. They can do blending as well as vector-based paths. So I was highlighting earlier the concept of a vector-based path with SVG; this is where that shows up on these vector graphics engines. This is a render output, actually from, I think, the GC355, and you can see we've got a gradient right here on this little D, and a couple of other simple shapes. But you can see how you could extend this out into UI elements and things like that, drawing the rounding of a rectangle to make a rounded rectangle as a button, for example. Additionally, you can do things like clipping an output image within a path, and there's this concept of using a transformation matrix: you can do scaling and rotation on a path or a buffer, and you can translate it as well.
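As a quick illustration of what that transformation matrix does mathematically, here's a small C sketch of a 2D affine transform (scale, rotate, translate) applied to a point. This is just the standard math, not any engine's actual API:

```c
#include <math.h>
#include <stdio.h>

/* 2D affine transform: 2x2 linear part plus a translation. */
struct xform {
	float a, b, c, d; /* linear part (scale / rotate) */
	float tx, ty;     /* translation                  */
};

/* Scale by s, rotate by theta (radians), then translate by (tx, ty). */
static struct xform make_xform(float s, float theta, float tx, float ty)
{
	struct xform m = {
		.a =  s * cosf(theta), .b = s * sinf(theta),
		.c = -s * sinf(theta), .d = s * cosf(theta),
		.tx = tx, .ty = ty,
	};
	return m;
}

static void apply(const struct xform *m, float x, float y,
		  float *ox, float *oy)
{
	*ox = m->a * x + m->c * y + m->tx;
	*oy = m->b * x + m->d * y + m->ty;
}

int main(void)
{
	/* Scale 2x, rotate 90 degrees, shift right by 100 pixels. */
	struct xform m = make_xform(2.0f, 3.14159265f / 2.0f, 100.0f, 0.0f);
	float x, y;

	apply(&m, 10.0f, 0.0f, &x, &y);
	printf("(10, 0) -> (%.1f, %.1f)\n", x, y); /* -> (100.0, 20.0) */
	return 0;
}
```

The engine applies the same six coefficients to every path coordinate or buffer pixel, which is why one matrix covers scale, rotation, and translation at once.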
So with this, now I want to discuss the use cases that we have for Zephyr. The division I see is that we have more basic engines, things like the PXP and DMA2D, and then we have more complicated vector graphics engines that in some cases can also do raster graphics, like that clipping within paths.

The first use case is basic vector graphics. I think this is useful because there's the possibility to do some of this with accelerated software, and, for example, there may well be a hardware engine I'm not aware of (if someone has one, I'd love to hear about it after the presentation) that can do vector graphics rendering but doesn't use this kind of arbitrary path architecture. Having APIs that are specifically "draw a simple rectangle", "draw an ellipse", "draw an arc", stuff like that, allows you to potentially do software acceleration if you want to implement an optimized software renderer, say for Cortex-M. Then there are arbitrary vector paths. This is meant to take full advantage of things like that GC355, where you can pipe in a series of path commands that will draw you an arbitrary shape.

From there, basic raster graphics. This is for things like the PXP and the DMA2D. The reasoning here is essentially that we want to be able to support these with an API that works for them specifically, one that doesn't require them to implement part of the API in software. If you tried to do the more advanced clipping, or an arbitrary rotation, on the PXP, you'd essentially have to fall back to software rendering for anything besides its fixed rotation angles. So we also want to expose things like blending there. Then advanced raster graphics is, again, about taking advantage of these more advanced vector graphics engines. The idea is that we want to be able to leverage them fully: if you've got an application that wants to draw an image within a circle, you want to be able to do that type of clipping with the APIs we expose. And then finally, direct output to display. This is kind of an interesting one, but I think there's a lot of value in exposing it, because there are these two peripherals from NXP that are capable of going straight to a display and don't need some type of intermediate buffer to do it.

So now let's start looking at a couple of APIs. The first one I want to go over is the VGLite API. I mentioned this earlier; this is what's enabled for those Vivante GPUs, the GC355 and the GCNanoLiteV. What it's capable of is essentially path drawing based on draw commands. It's primarily, I mean, it's in the name, a vector graphics API, but you can do path drawing with the draw commands, gradient and solid color fills, rotation and scale. If this looks a lot like the GC355, that's because this API is in many ways meant to support those GPUs, right? But it also enables raster graphics, and then a subset of the Porter-Duff blending modes; that's just a list of the different ways you can blend two buffers together. So this was designed by VeriSilicon for use on those Vivante 2D GPUs.

The problem with this API, in my view, is that it doesn't have simple graphics functions. You can't really use it on something like the PXP or DMA2D; it's only capable of working with these more complicated engines. So we'd leave performance on the table for SoCs that only implement those simpler peripherals. Additionally, it has no way to write directly to displays, so we're also leaving on the table the ability to potentially save SRAM by piping directly to a display instead of rendering back into a buffer.
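Since Porter-Duff blending keeps coming up, here's what the most common mode, source-over, actually computes per pixel. This is a generic C sketch assuming premultiplied-alpha 8-bit channels, not the VGLite implementation:

```c
#include <stdint.h>

struct rgba { uint8_t r, g, b, a; }; /* premultiplied alpha assumed */

/* Porter-Duff "source over": out = src + dst * (1 - src.a) */
static struct rgba src_over(struct rgba src, struct rgba dst)
{
	/* inv is the coverage left over after the source is applied */
	uint32_t inv = 255u - src.a;
	struct rgba out = {
		.r = (uint8_t)(src.r + (dst.r * inv + 127u) / 255u),
		.g = (uint8_t)(src.g + (dst.g * inv + 127u) / 255u),
		.b = (uint8_t)(src.b + (dst.b * inv + 127u) / 255u),
		.a = (uint8_t)(src.a + (dst.a * inv + 127u) / 255u),
	};
	return out;
}
```

Engines like the PXP and DMA2D implement a subset of these equations in fixed function, which is exactly why blending shows up as a first-class operation in the use cases above.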
So, the LVGL backends. I want to highlight this one; it's not an API in the traditional sense, but LVGL has already done some of this work. LVGL has already worked with a lot of these IP blocks and supported them, and in doing that, they've had to define a way to interact with these IP blocks that is relatively generic. LVGL has vector graphics APIs for drawing a whole bunch of different shapes, including things like polygons, but there's no support for arbitrary paths. It has raster graphics, the ability to do an arbitrary transformation, essentially, with output coordinates, and then, again, a subset of the Porter-Duff blending modes.

Interestingly, you could do direct output to display with LVGL. There's this display flush callback you get, and there's no real requirement from LVGL on how you use it; you could leverage a peripheral to just output directly to the display in that callback. What you couldn't do there is also implement rotation, unless you were aware of specifically how your display pipeline looked. So we can't really do this in as generic a way; it's going to be a bit more hardware specific, and each display pipeline is going to need a little bit of code modification to support it. And it doesn't have that support for arbitrary paths. But I think it's worth highlighting because it can work on all of the listed hardware engines; this is the only API I'm aware of that could be leveraged that way.

ARM-2D is one I want to highlight as well. ARM-2D is in its early stages, but it's generally intended as an optimized software backend for Cortex-M MCUs, though it supports custom GPU backends as well. It has simple draw operations, some simple raster operations, and the ability to do color space conversion. But the drawing APIs are limited, and there's no direct output to display. The big thing here is that this is an example of doing software backends; it's something to look at for what can be done with just an optimized renderer on a Cortex-M MCU.

So with that, I'm going to go into an API proposal. It's worth noting this is certainly an initial draft, but I want to use it as a discussion point, a place to start from: what do we want to do from here? Do we want to define an API? Do we want to use something coming from LVGL or from ARM-2D? Essentially, we discussed the use cases, and this API is intended to align with them. So we have the ability to draw a couple of general shapes, and you can see on the side I've listed what each peripheral could actually do. Like the simple rectangle, with no rounded corners or anything like that: that can be implemented on quite a few peripherals, because a lot of them let you do something like a fast clear, and that's just a rectangle if you bound the fast clear within a certain area. Then we also have the ability to do arbitrary path drawing. The value there is that we can still take full advantage of the more complicated peripherals, because where this would likely be implemented, at least initially, if we use an API like this, is in LVGL, where you already have that backend. And then we could potentially use it for other graphics accelerators, or an application could use it directly.

Then there's this GPU flush; I'll discuss it more on the next slide, but this runs the queued GPU operations. All these GPUs have an architecture where you can queue up an operation, or multiple operations, and then flush them out and run them all. To take best advantage of the hardware, we want to queue everything up and then run it when we want to render at the end. And then the write-display call is intended to satisfy those peripherals like the SmartDMA and the PXP, where you have the ability to output directly to a display.
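To make the shape of that proposal concrete, here's a rough C sketch of what such an API could look like. To be clear, every name here is hypothetical, illustrative only of the queue/flush/write-display pattern just described, not the actual proposed headers:

```c
#include <zephyr/device.h>
#include <stdint.h>
#include <errno.h>

/* Hypothetical 2D GPU API sketch -- illustrative names only. */
struct gpu_rect {
	int16_t x, y, w, h;
};

/* Queue operations: each returns 0 on success, or a negative error
 * (e.g. -ENOSPC) when the queue is full and a flush is needed
 * before queuing more work. */
int gpu_queue_fill_rect(const struct device *gpu,
			const struct gpu_rect *rect, uint32_t argb);
int gpu_queue_blit(const struct device *gpu, const void *src,
		   const struct gpu_rect *dst);

/* Run everything queued so far, rendering into the target buffer. */
int gpu_flush(const struct device *gpu, void *target);

/* Alternative final step: pipe the result straight to a display
 * (for engines like the PXP or SmartDMA), applying rotation or
 * color space conversion on the way out, with no intermediate
 * screen-size buffer. */
int gpu_write_display(const struct device *gpu,
		      const struct device *display);
```

Usage would follow the flow on the slide: queue a handful of draws, then either flush into a buffer that goes out via the existing display write API, or make the write-display call the last queued step so the render lands on the panel directly.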
So let's go into a little more detail about how you would use the API. The key concept of this API proposal is the idea of a render operations queue. On these more complicated GPUs, you can queue up quite a few operations before you actually run them. I apologize if these APIs are a little small on the slide, so let me summarize. The idea is that you're queuing up a series of operations: a draw operation for an arc, a draw for a line, maybe a blit, a copy of an image. And then you're flushing those operations. If you queue up multiple operations, you get better performance out of your GPU, better utilization, like in the GPU talk right before this one. But yeah, you want to queue operations to get the best hardware utilization possible, and once you've queued up all these operations, you flush them out.

The typical use case is that when you flush the queued operations, a render comes out to another buffer, and then you take that buffer and stream it to your display with the display write API. But with this 2D write-display call, we also want the ability to output directly to a display. In that case, you would call that API last in your process of queuing up operations, call the flush API, and then you'd get output directly to the display hardware. That's really meant to satisfy things like the PXP, because we want to take advantage of the ability to do something like rotation or scaling or color space conversion right at the end of the pipeline.

Okay, so with that, the thing I want to highlight here is the idea of a virtual GPU driver. This is meant to enable the use case where maybe you have multiple peripherals, and each is better at something else. For example, on the RT1170, which is actually where this might be used, we have the GC355, a vector graphics engine that can also do some blending operations, and we have the PXP. Where the PXP can really be useful is something like display rotation at the end of your pipeline. So the idea of this GPU shared node is that you can define multiple graphics engines that all implement this GPU API, and the shared node exposes a wrapper around the API. It implements the same API, but under the hood, if you call the API against the GPU shared node, it routes to whichever engine you need for the specific operation.

So, for example, here you've got a call graph. There's a process using the LVGL task handler, and that calls a function we would implement within the LVGL backend to draw a rectangle. From there, that calls into the GPU API, and you can see we have two calls here, because we first go to that virtual, emulated GPU driver, and then that goes down to the lower level. And from there, we go into whatever hardware structure we have, and that interacts directly with the hardware.
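A minimal sketch of how that shared-node routing could look in C, assuming per-engine capability flags (again, hypothetical names; the real dispatch mechanism is an open question):

```c
#include <zephyr/device.h>
#include <zephyr/sys/util.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical capability bits an engine driver could advertise. */
#define GPU_CAP_BLEND        BIT(0) /* e.g. GC355, PXP, DMA2D */
#define GPU_CAP_VECTOR_PATH  BIT(1) /* e.g. GC355 only        */
#define GPU_CAP_ROTATE_OUT   BIT(2) /* e.g. PXP direct output */

struct gpu_engine {
	const struct device *dev;
	uint32_t caps;
};

struct gpu_shared {
	const struct gpu_engine *engines;
	size_t count;
};

/* Route an operation to the first engine advertising the capability. */
static const struct device *gpu_shared_route(const struct gpu_shared *s,
					     uint32_t required_cap)
{
	for (size_t i = 0; i < s->count; i++) {
		if (s->engines[i].caps & required_cap) {
			return s->engines[i].dev;
		}
	}
	return NULL; /* caller falls back to software rendering */
}
```

On an RT1170, for instance, a path draw would route to the GC355, while a rotated write-out at the end of the pipeline would route to the PXP.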
So from there, I want to discuss a couple of alternatives. The first alternative is LVGL, right? I highlighted this earlier, and it's really worth noting because there is a lot of support already in LVGL. Upstream LVGL, if we synchronize with it, has support for these hardware blocks; it just needs to be brought over to Zephyr. And there's the fact that it's a proven abstraction layer. But the big downside is that we lose portability. We're locked into LVGL if we do all our support there; there's no ability to use another drawing framework, another framework like Qt, and no ability for someone to develop directly against any GPU API, because we wouldn't have an API, we'd just be implementing a backend. But we'd get a lot of support for free that we otherwise wouldn't, so that's a big advantage there.

VGLite is another one I want to highlight, particularly because it's already a well-defined API that has some of the things a more mature API, like a 3D API, might have: things like conformance tests and existing implementations. But it is managed by one company, VeriSilicon, and it doesn't have APIs suited to things like the DMA2D or the PXP.

So with that, I want to discuss future work for a bit. The first thing here is LVGL module updates. Right now we have a kind of legacy version of LVGL in Zephyr, frankly. There is a PR to move that forwards to, I think, 8.3, which will get some of the graphics acceleration code, but right now that's kind of a process; we have no way to just stay synchronized with LVGL. I'm open to discussion, but personally, I don't think we want to keep holding our own downstream fork of LVGL; I think we want to try to upstream our Zephyr support into LVGL. For example, we do this with MCUboot: we just mirror the upstream repo, and if you open a PR against the Zephyr version of MCUboot, you get told, send that upstream. The advantage there is obviously that we'd track upstream LVGL; we wouldn't have this issue where we fall out of sync, and we'd get that support right away. And even if we don't go the route of implementing in LVGL, we still need these module updates to use LVGL, which is the primary graphics framework we have, because LVGL didn't really finalize their definition for a lot of this GPU backend until the more recent revisions. From there, the plan is an RFC to open discussion, and then, if we settle on something, actually working on API implementations.

And yeah, that concludes my presentation. I wanted to leave a lot of the time at the end here because, frankly, graphics hardware is kind of proprietary. I've looked around, I've looked at the parts that I'm aware of or that are available, but I wanted to leave time to discuss what other use cases we think there are and what we want to support, basically. So yeah, any questions, comments?

Thank you, Daniel, for putting that together. You highlighted a line item, if you go back to the previous slide. Can you create an initial RFC and actually just start it with some of the items you have in there?

Yeah, the plan is to do an initial RFC with that and then go from there. I mean, right now, if there are any discussions, if anyone sees immediate issues, I'd love to discuss them, because we're very much at the beginning here, and this graphics hardware is already starting to show up.

Okay, I have a follow-up question on the pipelining stuff, when you were talking about, on that slide, that one there, yeah. I'm trying to figure out the last command you send to get it to kick off. Is your intent to have the underlying drivers for the various hardware know whether they have to split the actions, because there are too many queued-up commands being put down?
Yeah, likely. The initial way I was envisioning it is that we would have return codes; essentially, you get a return code that says you need to flush the GPU, you're out of queued operations. We could also do this in a subsystem, if we're okay with the additional overhead there; that's an option I'm considering as well, putting more of this into a subsystem. I think one of the big drivers here will be performance: if we find we need a subsystem to get better performance out of the GPU versus just looking at return codes, then that's probably the way we'd go. The main thing is we want to try to queue as many operations as possible and avoid doing a flush until we absolutely have to. That also comes into play because, on some of these engines, only some of the operations are supported, so we're going to run into cases where you go to do a more complicated render and you need to render in software. Then you have to flush out the GPU, because you need to get your buffer up to date with whatever renders you've queued, basically.

That gives me one more question. Sure. On rendering in software versus using hardware: how do you anticipate the development flow here, as far as who would implement the software? Do you envision the first implementation just doing everything in hardware, and if the platform doesn't have it, then doing software? And then as people add drivers, they would have to add software if they don't cover it, or...?

I think the implementation will look more like: we have hardware renderers that implement what they can, and then, initially, it's just accepted that if an API isn't implemented, you fall back to whatever your framework's software rendering is. The reason is that LVGL, for example, will fall back to software rendering if you say an operation is not supported, right? So we have that for LVGL right now; we can implement just the operations the hardware supports. Long-term, it'd be great if we could get some type of optimized software backend specifically for Zephyr, one that takes better advantage of the architecture of a specific MCU to do the rendering on the Cortex-M core. But shorter term, because LVGL does support falling back to software rendering, we can just use that when you hit an operation that's not supported by the GPU. Okay, thank you.
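That "try hardware, fall back to software" pattern is simple enough to show in a few lines. This is a generic sketch of the dispatch idea, not LVGL's actual backend interface:

```c
#include <errno.h>

struct draw_op; /* whatever describes one render operation */

/* Hypothetical renderer hooks: the hardware hook returns -ENOTSUP
 * for operations the engine cannot accelerate. */
int hw_render(const struct draw_op *op);  /* GPU driver        */
int sw_render(const struct draw_op *op);  /* software fallback */

static int render(const struct draw_op *op)
{
	int ret = hw_render(op);

	if (ret == -ENOTSUP) {
		/* The engine can't do this one (e.g. arbitrary-angle
		 * rotation on the PXP): draw it in software instead. */
		ret = sw_render(op);
	}
	return ret;
}
```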
So, Carl is asking: thanks for the presentation; as you might know, Ambiq SoCs have been added upstream, and these embed a GPU from Think Silicon, which is quite an advanced GPU. Have you considered devices like that, or ST's NeoChrom 2.5D GPU?

I haven't, no, but this is exactly why I put this talk together, because there are a lot of GPUs moving forwards, and I'll take a look at those. I think we're seeing more and more stuff coming into Zephyr that has GPUs, and the idea here is to try and define an API for them as we get them in. Collecting use cases is the stage I'm at right now.

Okay, and we have another one from Maureen: have there been discussions in the LVGL community around the API shortcomings that you're trying to address?

I'm not aware of any. And I think, for LVGL, because of the way they designed their API, the shortcoming that I see is that you don't have the ability to write directly to a display; but LVGL just has this display flush callback, so there's no real problem for them. The way they envision it, and the way a lot of vendors integrate it, is that vendors just implement these GPU backends and any type of flush operation specifically for their hardware. So if you're an embedded developer doing an LVGL port, and you know you need to use the PXP to rotate by 90 degrees for your specific application, great: you can just change your display flush callback to invoke the PXP and do that rotation for you. But for Zephyr, I want to push something a little more generic. I think there's value in opening that discussion with LVGL; the main thing I want to look at is seeing if we can upstream Zephyr support there, because then we'd have the advantage of getting more of a conversation going. But I haven't seen any specific discussions on it, no.

So this is actually more for the audience here. Daniel's talk was also somewhat converted to a birds-of-a-feather, and I wanted to help facilitate something here. How many here actually have platforms that need to consider graphics support like this, in their products or in their SoCs? So, that's good to see. One of the things Daniel will get involved with is that the plan is to try to set up an ad hoc, or more official, working-group type of thing within the Zephyr framework, so that we can get these discussions going and get some code put together, so we can start trying these things out and see what's a good direction for Zephyr to go in. And he's very active in the Zephyr community, so you'll have no problem finding him and reaching him. But I would encourage each and all of you who have an interest, or have the expertise behind this, to get involved and help flesh these things out. The proposal he has up here, do you have any code yet on this? You have some code?

Just the API definition. Okay. But I mean, you know. He has another job too, so. Yeah, no, the stage I was at was: I didn't want to go write a whole bunch of backends and then have somebody raise their hand and say, well, this is not going to work at all, right? But I have done the initial API definition and put together the structs and things like that that you could pass in. So it's at the stage where, if this is the API we want to go with, we could start actually... Prototyping. We're somewhat close to being able to prototype a driver and start seeing what the performance actually is. Right. Yeah, I happen to know his manager, so.

Just real quick, given the growing interest and so forth: I don't know if there's an RFC for this yet, and whether we've raised it in arch review as well, just to garner, again, and to find who's interested. Yeah, that is the plan: to open an RFC and go from there.

So, are you also planning to do some more support for things like scheduling jobs as well? It's something that I have thought about. Like I was saying earlier, the thing that's going to drive whether we do things like job scheduling, more like what Linux does with managing that, is really going to be performance.
If we're seeing a lot of pipeline bubbles, a lot of performance issues we're running into, I think it would make sense to start putting in some type of subsystem that handles job scheduling. But this being embedded, I don't want to put anything in we don't need, right? If it can work without it, we should do it without it. If we're running into a bunch of issues with performance, then we should start considering that, yeah. Yeah, that would also be interesting to see. I use some other kinds of tools, so it would be really interesting to see. Thanks. Sure.

Yeah, thanks for the talk and for driving this initiative. I really want to emphasize that I like this virtual GPU idea, the capabilities thing; that will map pretty well to embedded hardware. I don't work closely with GPUs, but at this conference I've seen two examples where the API could be tested: smartwatches, two talks. One was shown at the booth, I think NXP showed it, and you could really test it, and it works. Yeah, the smartwatch demo, you mean? Yeah. That's using one of these GPUs, the GC NanoLite IP. Okay. And another was the Zephyr-smartwatch-in-nine-months project, I don't remember the talk. But yeah, maybe mapping this API to that; at least for the second project, the code is open source, and yeah, that would be great.

Yeah, no, I think the one that NXP has with the smartwatch demo will probably be one of the testbeds for this, to see how it actually functions, because it's a real use case, right? We can see how the acceleration actually works. There's also, as we work on API completeness here, we'll need tests that render not to a display but to a frame buffer, and then memory-compare the frame buffer against what we actually expect. But yeah.

So actually, I tried to use LVGL with the 1064 and the 1170, to try and, I don't know, see what we could do, and obviously it doesn't work, like you said. So I was kind of curious: you said you had a PR; is there something we could do, let's say, short-term? Sure, yeah. So you're using it with the 1064; were you trying to use the PXP or something like that? Yeah. I don't have an open PR, but I think short-term we could potentially use the PXP as a DMA engine if we needed to. If it's urgent, we can talk after and discuss it. There's also the fact that, for right now at least, there's always the option of just putting something in the LVGL flush callback, because that's still pretty extensible. But the PXP, I think, is one of those we could support without a full API, for a case like rotation, because it just sits at the end of the pipeline; we could hook it in with the eLCDIF driver or something like that if we needed to. Okay, is there some PR or something? I'd be interested to see if we could get this in the future. Not actively, no, but it's easy enough.

Okay, I don't think there are any more questions, Daniel. I think the action item I heard was for you to get the RFC out there. Okay, good. Thank you very much for your presentation. Thank y'all. Thank you very much for coming.