So here, can you do a quick demo of how it works?

Sure. Let's just take this as an example. Basically, we supply these application templates, which have all the necessary driver code to program your application onto one of the STM32 development boards. In this case, this is an application made for the STM32F769 Discovery, and we just have two screens. Screen one has a button and an image. What the designer does is generate the screen definition code for you, and you take this code into your favorite compiler or IDE, like Keil or IAR. For each of the images, the TouchGFX tools will do the conversion into bitmaps, pixel arrays in the particular image format, and into the particular glyphs of the font you're using, at that size and quality.

Then, just to add some life to this application, we use something we call interactions. For instance, this particular interaction will transfer control to screen two using a slide transition from the east, and that screen will do the same thing with another button that has a text, just going the other way.

If we take a look at some of the code generated by the designer, it's in here. This file is managed by the designer, so anything you write in it will just get overwritten. But you also get a concrete implementation of that particular class in here, which you can modify and customize with your own code; it will not get overwritten by the designer. So let's see what happens when we add this interaction. The interaction was doing the screen transition, right? What actually happens is that the designer generates a callback, and we can recognize in it what we did in the designer: it is going to screen two using a slide transition.
Something else we've done is add a new interaction that calls a virtual function. This is something you can implement as a developer to make the button call into your back end: make the button turn on an LED, send something through a queue to another peripheral task, or whatever else.

Let's see this running in the simulator. The designer itself has nothing to do with the target hardware. It's just a generator for screen definitions in C++, and a generator of C++ code for your image assets, which you can place in external QSPI memory. That data will get moved by the Chrom-ART accelerator to the framebuffer using hardware alpha blending.

So here's the demo we made, just a simple demo. When we press this button, Run Target, it will execute this particular command. You can hook into these commands and execute whatever you want. In our case, for the application templates, it will call a makefile that knows how to flash this particular board, because we know which QSPI chip is on there. So if we press Run Target, it will call make, compile using GCC, and program the board using ST-Link with the flash loader for the STM32F769 Discovery.

While it's doing that, let's have a look at some of the code generated by the image converter. What you can see here, for one state of the button, is that it generates a huge array of pixel data for this particular image format and these dimensions. That's one of the states of the button. And it's still programming the target.

So it's flashing the target right here?

Yeah. This cable is a bit wonky, so it will probably say it can't connect to the target now.

OK. So it's important to have a good cable.

Yeah.

And so is that the final step, or?
You can see it couldn't connect to the target, so I'll just do it again. What happens now is that the designer, the image converter, and the font converter have generated all the necessary C++ code for this particular application. Then you take your other C++ code, put it into your IAR project, start your operating system task, maybe FreeRTOS, and start your scheduler. Some external signal will then trigger the hardware abstraction layer and TouchGFX, which takes the application, keeps it running, and renders these screens to the framebuffer, which then gets clocked out by the LCD controller to the display.

Yeah. This is the easiest way to do advanced UIs.

So now it's done flashing, and we can see the application we made. It's pretty fast to get started with prototyping if you have one of these development boards, because you can just select these templates, and they're guaranteed to work.

You can also, since CubeMX 5.0, select TouchGFX as the graphics framework; here I'm using the board selector. If you then generate your project over here, generate your code, you get a TouchGFX project, so you might recognize the structure from the Cube firmware pack applications and demos. What's new is that you get this folder here, the TouchGFX folder, which has the same structure as usual. And you have some target-specific code here: the touch controller implementation for this particular screen, and DMA2D support for the DSI display. All of this will then get imported into this particular IAR project, in this case. We're constantly adding or improving support with CubeMX, so the integration is ongoing, but we are working constantly with the CubeMX team to improve it.