Let's start the actual session now. Welcome to all of you. My name is Jørgen Mugin, and I will be your host of today's webinar on hardware integration with TouchGFX. The team we have today, you can see it here. In a minute, Martin will take over. But besides Martin here in front, you see Jesper. Jesper is our product manager, heading our technical team. He's available for answering questions as we go along. And we have Søren taking care of all the technical stuff, making everything go smoothly, hopefully. Okay, thank you for joining everyone. And please bear in mind, put up questions as we go along. We will comment on them in writing, and also answer some of them verbally. Over to you, Martin, please. Thank you, Jørgen. Hi, guys. So my name is Martin. I'm a software developer with TouchGFX, and I'll be your guide in today's webinar. So what we'll be doing today is try to get a better understanding of the TouchGFX structure generally. And we will develop an application for the STM32F769 Discovery board, this board. And we'll be using STM32Cube drivers to interact with a few peripherals. And once we've completed all the board setup and verified that all the peripherals are working, we will actually create something that we call an application template in TouchGFX. It's basically a chunk of code that you can base all your graphical applications on, and that can be distributed and used from within the TouchGFX Designer. And as Jørgen just said, please ask questions as we go along. After each topic, I'll try to take a short break so we can take some questions. So, yeah, go ahead and do that. So let's see. Our goals today, as we covered just before, won't involve porting TouchGFX to new microcontrollers and new boards. We won't be covering stuff like general real-time operating system principles and primitives. We'll be using some, and we'll get into those, of course. We won't be getting into CubeMX application generation or project generation.
And we won't be getting into specific STM32Cube drivers. So what you can actually do is go to this link. This is for the STM32F7. You can go there, download the Cube package, and find a lot of projects and examples that relate to I2C, UART, QUADSPI, whatever. And you can use that C code along with what we're going to learn today, which is how to interact with peripherals and reflect those changes inside our graphical TouchGFX application. So that's a good idea to do. Okay, so are there any questions at this point, Jørgen? No? None at this time. Good. Let's carry on. So the agenda today is to try to get from A to C. Let's start by talking about relevant hardware. What does it take? What is the minimal hardware setup to run your TouchGFX applications? Then let's talk about software layers in TouchGFX and how STM32Cube drivers on ST devices affect those layers. And let's go into talking about application architecture when we start developing an application. So we'll be talking about the Model-View-Presenter pattern that we use for applications. We'll then get into the core topic for today, how to integrate with hardware. So what we'll be doing today is actually just integrating with a few hardware peripherals on this board, which is going to be just an LED and a button. And this is because then you can actually download the code that we have available today and run it on your own board and play around. And we'll finish up by creating the application template and trying to distribute that and use it in the Designer. And then we'll finish off with a summary and a Q&A. All right? So let's get into it. So this figure is actually something that you may have seen in a previous webinar. This tells us the basic hardware that TouchGFX needs to run. So we need some RAM for the frame buffer, internal or external.
We need some flash for our image assets. We need a microcontroller, of course, and we need a display that is typically touch enabled. We have customers that don't use touch displays, but you can still use TouchGFX, of course. So what we'll be adding on today is just interacting with some of the peripherals on this board, which is the LED and one of the buttons, as we mentioned briefly. And once we've done this, then we have a chunk of code and board setup or configuration code that tells us that this board is actually working, and then we can at some point in this webinar turn that into an application template that we can distribute to other developers, so they can use it to develop their graphical applications on top of this particular board, or any board. Any questions at this point, Jørgen? Well, only some general questions that we will answer here in writing. Okay, sure. Okay, so you might also have seen this particular slide in a previous webinar. So this is the general software layer architecture for TouchGFX. On top we have the application layer: you write your code, your screen definitions, and you have your graphics and your text definitions. Then we have the core, which is where all the rendering, timing and event handling occurs. Then you have the operating system abstraction layer, which is what we require developers to provide if it's an operating system we don't support, so that we can synchronize framework access through some OS primitives like semaphores. Then we have the hardware abstraction layer in TouchGFX, which basically covers the touch controller, interrupt handling, priorities, synchronizing with the LCD, handling DMA transfers and things like that. So generally this HAL layer is where we drive the application forward and protect the frame buffer. And this is something that we won't be covering today, but it will probably be a separate webinar because it's quite extensive.
So today we'll just be covering this particular board, and ST has already provided the board package for us through the link that I gave earlier. And we'll be using that to make an application come to life on this board so we can interact with some of the peripherals. So what parts of these layers are actually affected by Cube drivers? We have the board package, which is all the hardware bring-up: setting up the QUADSPI, setting up the RAM, setting up the LCD, et cetera. And then we have the driver package, which is more TouchGFX minded, which is the MCU part: we need to handle synchronization with the LCD, we need to specify how to initialize DMA transfers, and we need to use some Cube driver code to interact with the touch controller over I2C. So I'll just briefly show you something: if you're familiar with STM32Cube, you can see here that this folder is something that you may recognize from some of the STM32Cube driver packs. This is basically a copy of that, and it contains all the BSP files for this particular Discovery board. And here we have a folder which is more MCU specific. This is also something that you'd recognize if you knew the structure of the Cube driver packs. So this contains a lot of files related to the F7. So that's just a quick intro there. Okay. So one way to set up your board is to use something like CubeMX, which generates C projects. Or you can use some of the driver packs that they've already provided. And as I said, we won't be going into specific drivers today, but we'll be touching on some of them. So go through the Cube packs and take a look at those examples and projects to get specific examples for I2C, and then you can use them along with what we're going to learn today.
So TouchGFX is not a display driver; it's basically a piece of software that can render user-defined screens to a frame buffer, relying on hardware events to drive that process. The touch controller is also a part of that process. So every time I mention STM32Cube drivers today, it's going to be basically about communicating with peripherals only. All the porting stuff is not going to be mentioned today. We are working on improving the integration between STM32Cube and TouchGFX. Before we dive into the next topic, let's see if there are any questions. Well, yeah, let's take a few of them, but they are in a more general range here. Do you have available projects on touchgfx.com with communication tasks? We don't specifically, and as I mentioned before, that's because if you download the STM32Cube driver packs from ST, you have a wide range of different projects that touch on UART, I2C usage, QUADSPI, SDRAM setup, everything. So I recommend you check those out. I think you'll find all your answers inside those packs. What I'm going to show you today is how to use some of that C code that you'll find in the Cube driver packs, how you can use that code to integrate with or make your TouchGFX applications come to life. So no, we don't provide specific examples for different peripherals, because this is something that's very board specific. And for these ST boards, there are so many different examples already provided by ST. So let's dive into the application architecture before we start developing. A brief recap of this slide: this is something that you might have seen in a previous webinar. The right side is something that was covered in the first webinar last time. So I've tried to gray it out a bit because we won't be focusing on that much today. Instead, we'll be focusing on the gray box, the back-end control system and its integration with the model.
So the model is kind of the heart of TouchGFX applications. Basically, the model will get ticked at a certain rate; we're basically going to be synchronizing with the LCD, so the model is going to get ticked at maybe 60 Hertz. So you could do all your hardware integration here, but you might risk blocking the GUI task. So what you can do is, if you have something that doesn't take a long time to sample, you can do it in the GUI task, in the model tick. Maybe you don't want something to be polled at 60 Hertz, so you could skip some of the ticks and do it less frequently. But you're not going to get very fine-grained control of your polling or your interaction in this way. So what you could do is just add on some more tasks, and if it's not an important task, you could give it a lower priority, because the GUI task sleeps a lot while it's waiting for the DMA transfers to complete, so this might be a fine way to do it. And this is what we're going to be doing today. We're going to be creating three tasks: a GUI task, a task for the LED interaction and a task for the button interaction. So this is the application that we'll be making today. On your left you have a basic image. We're going to be using the button on the Discovery board to drive an animation based on this image. And on your right you have a simple toggle button which will enable and disable the green LED on the board. And we'll have a webcam that shows you what goes on when we program the board. So I've tried to highlight what we'll be interacting with: the user button and one of the LEDs. Okay, so we'll be creating an application on the STM32F769 Discovery board. We'll be interacting with the LED and the user button. We'll be doing it two ways. First we'll be configuring the button as a basic GPIO input.
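The tick-skipping idea above can be sketched in a few lines of host-runnable C++. This is an illustrative stand-in, not the real TouchGFX `Model` class: the class and method names here are hypothetical, and the 60 Hz tick is assumed from the LCD synchronization described above.

```cpp
#include <cassert>

// Hypothetical sketch: sample a peripheral every Nth model tick instead of
// at the full 60 Hz frame rate, so a slow peripheral read is done less often.
// Model, tick() and samplePeripheral() are illustrative names only.
class Model
{
public:
    void tick()
    {
        // Ticked at ~60 Hz by the GUI task; only sample every 6th tick (~10 Hz).
        if (++tickCount % 6 == 0)
        {
            samplePeripheral();
        }
    }
    int samples() const { return sampleCount; }

private:
    void samplePeripheral() { ++sampleCount; } // stand-in for a quick read
    int tickCount = 0;
    int sampleCount = 0;
};
```

As noted in the talk, this keeps the GUI task responsive but only gives coarse control; for anything finer, a separate lower-priority task is the better fit.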
And then we're going to be configuring it as an external interrupt, connecting it to one of the interrupt lines. We'll be using multiple FreeRTOS tasks, and we'll be using queues for the inter-task communication. And we'll try to show the relevant code changes for each of these. And I'll try to take it slow, because the update frequency of the webinar might not be fast enough for everybody, so that we won't miss any code. Okay, so we'll start off by creating a simulator version. The peripherals we'll be using here are the keyboard as the input peripheral, as the button, and the console window as an output peripheral, just to see that we can turn an LED on and off. For the target version, we'll be touching on this just to show you that we'll be using some of the STM32Cube files here. We'll be using the GPIO C file, which is MCU specific and relates to interrupt handlers, among other things. And then the discovery.c file, which is board specific and allows us to set up and interact with the LED and the user button. The user button is connected to one of the external interrupt lines, line zero, and we'll be defining an interrupt handler for this purpose. So what we'll be doing is, through the Designer, you can run the application in a simulated environment, and you can also run your application on target. And what we'll be doing actually is we'll be using ST-Link, this utility here. So we have an external flash loader configured here, and we'll connect to our target and we can program this board using this. Or we can use the makefiles that we generate using the Designer, which already use this tool in a command line version. Briefly on the Designer. We had some questions in the first webinar that we thought it would be a good idea to address in this one.
So the Designer actually just generates code based on the contents of your project, and it only touches the generated folder. You work in screen definition classes that are derived from the generated ones, which means that the Designer is not going to be overriding any of your work. You can add any additional target specific code files to your compiler projects and they won't get overwritten. So we'll just be using the Designer to generate some code for us, and then we'll be modifying some of the inherited versions of those classes. So before we dive into that, let's see if there are any questions, Jørgen. Yeah, we have one; I think you have to open the editor. What is the name of the editor Martin is using? Oh, this one? Yeah. Yeah, that's Sublime Text. We actually got the same question in the last webinar. So some of us use this editor. It's kind of like TextMate if you use Macs. It's a pretty fast editor. It has a good fuzzy search. If I want to find something like the model, I'll just type something and it'll give me some suggestions. I can type anything and it'll give me some suggestions. It's pretty easy to navigate. Well, actually, we got an input here that it's really hard for them to actually see; the font is too small. I don't know if you can make it bigger. How about that? Yeah, perfect. Can we get some input on this size? Yeah, please come back on whether you're able to actually read it now. Yeah, okay. Thank you. Okay. Any other questions, Jørgen? No. Okay. Okay, so let's open the Designer and start. So this is the application that we saw previously on a slide. We'll be using this particular image to drive an animation here. So you can see here it's a basic image. The Designer also supports something called an animated image.
But basically we want to control this image, and the way it looks, by controlling the button on the board. And this particular button is what we'll be using to control the LED. It's a toggle button, so it's either on or off. And we have one single interaction defined in this design: when this button is clicked, we call a new virtual function whose name we define here. And that is something that we can then override inside our derived classes and add some custom functionality. Okay, so let's have a look. So what I'm going to open now is the base class for screen one. I'm not sure if you all saw. One comment, Martin: the screen is too small to see. Let me just open this so it's full screen. How's that? Okay. So the screen one base .hpp and .cpp files are generated by the Designer, as you can see up here; the path is generated/gui_generated. So these are the files that we're looking at now, and we can check out what the Designer has generated for us. It has generated a button handler for the toggle button, and it has created the virtual method that we asked it to, the update LED state method. And what it says here is: override and implement this function in your concrete view. Okay. And if we check the code for this, still inside the generated gui_generated folder, here the callback handler for the button is defined like this. This is also generated by the Designer. It checks: is this the button that we pressed? Then call this virtual function, right? So now we can get to work. We're actually done with the Designer now. We've created our layout and we've told the button that it should be calling a virtual method when it's pressed. So let's just minimize this for now. So if we check the screen one view, this is the inherited version of the base class that was generated by the Designer. You can see this is located inside gui/include and gui/src.
And the other generated screens were located inside generated/gui_generated. So if we check out this file here, basically what we've done is override the virtual method, update LED state. For now we're just discussing the LED flow through the application. And if we check out the view, which is located inside the gui/src folder, what we're going to be doing when this method is called is talk to our presenter and tell it the current state of the button, which might be on or off. And let me just go back to one of the slides here. So the flow is basically: the presenter updates the view; the presenter can update the model and receive state changes from the model; the view can talk to the presenter and receive events from the presenter. And we'll be using the model to make all this happen. So let's have a look at the model for a second. So what happens here? Basically, what we talked about before was that the tick method of the model is kind of the heart of the application. It gets ticked based on the synchronization with the LCD. So here, instead of doing our sampling directly, we're going to be talking to some FreeRTOS message queues. And I'll be getting into that as well. But if we take a look at the screen one presenter: the view just called this particular method with the state of the button. The presenter, as we just saw on the architecture slide, is going to call the model with the state. So if we check out this method on the model, which is located inside the gui/src/model folder, for now, since we're just using the console as a peripheral, we'll just be printing out the state, right? And we'll be saving the state and updating the LED state, but only if this is not the simulator; that part is for the target version. So for now we're just verifying that our behavior in the simulator is correct.
So what are we going to use for simulating the button? We're going to be using a method called handle key event, which we can override inside what we call the front end application. This is located inside the common folder, right? So if the key is one, we'll be calling a method on the model called button pressed. And we'll see that this is the same method that will be called, let's go back to the model, the same method that will be called if we receive something from the button task. So that's just a teaser. Okay, so I think we're ready to try this application. We've just verified the flow of the button from the view into the model, and we've verified what happens from the peripheral: the front end application handles the key event from the keyboard, and it's going to drive this application; it's going to drive the animation. And if we just take a quick look at that, what it's going to be doing is calling a method called advance animation. It's going to change the bitmap ID of this image, and it's just going to keep cycling that until it hits the maximum ID, then it's going to reverse the direction, and then reverse again when it hits the base ID. So let's just try to build this application and run it. This is actually something that you can achieve from the Designer by just clicking run simulator, but I like to use the environment here, because then we can see what's going on. The environment is something that you probably have on your desktop. It looks like this. So let's see it. So here we go. So this is our peripheral, which is the LED for now. So when I press 1, this is going to simulate a hardware button press, right? So it's going to advance the animation. We can keep it pressed, so it's going to continuously update like this. We can single-step it. We can click the button and it's going to update the LED state, right? Okay.
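The Model-View-Presenter flow walked through above can be condensed into a small host-side sketch. All class and method names below are illustrative stand-ins for the generated TouchGFX classes, not the real framework API; the point is the direction of the calls: view to presenter to model, and model back to the active presenter through a listener interface.

```cpp
#include <cassert>

// Interface that every presenter implements; an empty default body means a
// presenter that isn't interested in an event simply doesn't override it.
struct ModelListener
{
    virtual void buttonPressed() {}
    virtual ~ModelListener() = default;
};

class Model
{
public:
    void bind(ModelListener* l) { listener = l; }
    void setLedState(bool on) { ledOn = on; }   // called via the presenter
    void hardwareButtonPressed()                // e.g. from the button queue
    {
        if (listener)
        {
            listener->buttonPressed();          // notify the active presenter
        }
    }
    bool ledState() const { return ledOn; }

private:
    ModelListener* listener = nullptr;
    bool ledOn = false;
};

class Screen1Presenter : public ModelListener
{
public:
    explicit Screen1Presenter(Model& m) : model(m) { model.bind(this); }
    void updateLedState(bool on) { model.setLedState(on); }  // from the view
    void buttonPressed() override { ++animationSteps; }      // would drive the view
    int animationSteps = 0;

private:
    Model& model;
};
```

Pressing the on-screen toggle flows down through `updateLedState` into the model; a hardware button press flows up through `hardwareButtonPressed` into the presenter, mirroring the two directions described in the talk.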
So now that we've taken care of that, let's go into talking about how to actually integrate with our hardware. This is the most important, most interesting thing today, I think. So basically we'll be defining two more tasks. Let's have a look at how those are defined. We'll just go to the target main.cpp. We have three tasks: the GUI task, which you will always have, and two additional tasks called the LED task and the button task. And something that I mentioned we wouldn't be covering today is the hardware initialization and the TouchGFX initialization. This hardware initialization is basically where all the STM32Cube driver code is working its magic: setting up the peripherals, setting up the QUADSPI, the RAM, the LCD, et cetera. And this is where all the TouchGFX magic happens, which is what we call the TouchGFX port. This defines which bit depth to use, which DMA class to use, and how those DMA transfers are going to happen. And then we're just going to start the scheduler. So if we go back to the model, what we can see here is that the only input to the application is the button press. The LED interaction is happening from the view: we're receiving a press on the screen and then we want to activate one of the LEDs. And for the button, we're going to be continuously checking if a particular queue has a message for us, which means the button has been pressed. And for the LED state, when we click the toggle button in the view, this method is going to be called, and then we're going to send a message to a different message queue, which is read in the LED task. So let's have a look at the LED task. Here we're going to create a message queue. This is actually an STM32Cube call. And we're going to initialize one of the LEDs.
Then, every 200 milliseconds, we're going to be checking this message queue to see if the GUI task has sent us a message, and we're going to be turning the LED on or off depending on the value. These on and off methods are also STM32Cube calls. Now for the button task. These files are just located inside the target folder; you can place them anywhere you want. These are not files that will be overwritten by the designer, as long as they're a part of your makefile or your project. So here we're going to be initializing the button in GPIO mode, and we're going to be sampling the state of that button every 50 milliseconds. This call and this call are both STM32Cube driver calls. We're going to be creating a button message queue that we'll use to communicate from this task if we experience a button press. And this is the message queue that the GUI task is evaluating continuously in its tick method. Okay, so I think we're ready to actually try this application, right? So instead of calling make with the simulator target, we're going to be calling make with the target makefile. And I've already programmed the external flash, because there are a lot of images involved in this application, so we can just flash the internal flash with the code changes. So, and do we have, yeah. Okay, so it's programming the board. Okay, so you should be seeing it now. Basically what happens here is that when we press this button, the green LED is actually turning on and off. And this is because of the flow that we talked about: from the view to the presenter to the model, and from the model through a FreeRTOS message queue to a task that continually polls this message queue for information. For the button, we can single-step as we did with the keyboard, or we can continuously advance the animation.
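The LED flow just described can be sketched as a host-runnable analogue. On target this would use an RTOS queue and the BSP LED calls; here `std::queue` stands in for the RTOS queue and a `bool` stands in for the LED, and all names are illustrative assumptions, not the webinar's actual code.

```cpp
#include <cassert>
#include <queue>

// Host-side analogue of the GUI-task -> LED-task flow: the GUI task sends a
// message when the on-screen toggle flips, and the LED task periodically
// drains the queue and drives the LED accordingly.
struct LedMessage
{
    bool on;
};

std::queue<LedMessage> ledQueue;  // created once, owned by the LED task
bool greenLed = false;            // stand-in for the BSP LED on/off calls

// Called from the model when the toggle button changes state.
void guiTaskSendLedState(bool on)
{
    ledQueue.push(LedMessage{on});
}

// Body of the LED task's periodic loop (every 200 ms on target).
void ledTaskPoll()
{
    while (!ledQueue.empty())
    {
        greenLed = ledQueue.front().on;  // BSP LED on/off on real hardware
        ledQueue.pop();
    }
}
```

The button direction works the same way in reverse: the button task pushes into its own queue every 50 ms when it samples a press, and the model's tick method drains that queue.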
So what happens here is that the button task has a FreeRTOS queue that it uses to communicate with the GUI task, and the GUI task continuously checks if there is any information inside this message queue and then calls, let's see here, from the model again: button pressed. We call a method on the model listener, and the model listener is an interface that all presenters implement. And if we follow that flow, what we can see here is that we override this method because we're interested in the signal, and if we go to the code, we'll actually just be calling the view and telling it to advance the animation; we saw the code for that already. So this is just to show you: now we've talked about the flow all the way from a peripheral, through a message queue in a task, to another task, from that task to the active presenter and then to the view, and the other way around, from the view to the presenter to the model, through a message queue to a different task that evaluates it and then talks to the peripheral. Okay? So something else I've prepared here: this was a GPIO configuration of the button. What we can do instead is make it interrupt based, and this changes a little bit how the button task looks. Now we're not doing anything inside the task; we can use it for something else, or we can remove it, and then define the message queue somewhere else. And because the button is connected to external line 0, we'll define this interrupt handler, which is defined inside the interrupt vector table as well. We'll be checking if this message queue already has a message waiting that hasn't been taken by the model yet, and then we'll be sending a message from this interrupt handler. And just notice that we'll be sending it using a different method, the version that is called from an ISR.
So this ISR version of the queue send call is basically just a version that does not contain code to block or halt your task. You can actually also call it from a task, but you might not get the expected behavior. In the end we'll be calling... this is actually a call into the STM32Cube drivers as well. What happens in this method, which I won't be showing, is that it clears the interrupt so that it won't keep firing all the time. So again, this is something STM32Cube specific that we can use from this driver package. So let's just see if the behavior has changed now. We'll compile it and program the board again. So the behavior has changed, because we'll only be getting calls to this interrupt handler every time we press the button. The LED is still working, and the button is now only single-stepping the animation, because we're only going to be receiving one interrupt every time it's pressed. So those are just two different ways of using STM32Cube driver code to perform this peripheral interaction. So just to summarize: you could use your GUI task's model tick if your sample time was small, maybe a millisecond, and we can use tasks to have that more fine-grained. And since the GUI task is sleeping a lot while it's waiting for the DMA transfers to the frame buffer to complete, our peripheral tasks will get scheduled in, so that's fine. So let's turn back to the slideshow and see if there are any questions. Yeah, thank you, Martin. Actually we have a lot of questions, and this is great, so please keep them coming. One about the Model-View-Presenter: why use this, and what are the advantages of using the Model-View-Presenter? Well, it's a nice abstraction.
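The interrupt-based variant just described can also be sketched as a host-runnable analogue. On target, the handler for external line 0 would use the RTOS's non-blocking from-ISR send and then clear the pending interrupt flag; here `std::queue` is a stand-in and every name is an illustrative assumption rather than the webinar's actual code.

```cpp
#include <cassert>
#include <queue>

// Analogue of the interrupt-driven button: the handler only queues a new
// event if the previous one has been consumed, so a fast-firing interrupt
// can never block and never floods the GUI task with stale presses.
std::queue<int> buttonQueue;

// Stand-in for the external-line-0 interrupt handler.
void buttonExtiHandler()
{
    if (buttonQueue.empty())   // previous press not taken by the model yet?
    {
        buttonQueue.push(1);   // non-blocking, as a from-ISR send must be
    }
    // On target: also clear the pending interrupt flag here.
}

// Polled from the model's tick method; true means "a press happened".
bool guiTaskReceiveButtonPress()
{
    if (buttonQueue.empty())
    {
        return false;
    }
    buttonQueue.pop();
    return true;
}
```

This matches the behavior seen on the board: one queued event per physical press, so the animation single-steps instead of free-running.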
Well, it's a good abstraction pattern. Basically, the Model-View-Presenter pattern, as compared to the MVC pattern, is one where the view actually has an active role in the application, and the presenter is the middle man, so it talks to both the model and the view. And since we just talked about hardware integration and peripheral integration, this is a good way to have asynchronous access from the model to an active presenter: the model can just ask, are there any presenters that are interested in this particular message, and if the active presenter isn't, it simply won't have that method implemented. So for us, and for this general flow of how we do these applications, this is a nice architecture. Yeah, thank you, and we can add that it's a well recognized way of doing UI applications. We also have some general questions; some are pointing at supported IDEs and what is actually available in the TouchGFX evaluation. Can you comment on this? Yeah, so... And one is specific, asking whether Keil µVision 5 is supported. Sure, so the TouchGFX evaluation version has no functional limitations. The only limitation, as you can read in the license agreement when you install the Designer, is that you cannot use it to create projects or products; so there are no limitations except that you won't have the ability to create products, and you will have a watermark that appears randomly. What we're doing actually when we use the Designer is we are creating Keil and IAR projects and Visual Studio projects and GCC makefiles for a particular application, with Keil 5 support as well. So what you're saying is that Keil µVision 5 and IAR are fully supported; projects come prepared for these IDEs. Exactly, but for some supported boards the Keil version is still Keil 4. We haven't gone back and upgraded all the different projects every time a new IDE version comes out. And we have an additional comment to this: what about MCUXpresso for i.MX?
Yeah, so I'm not quite sure about the exact roadmap for this, but this is something that we are looking at and have tested inside the lab. And I can add that for some boards it is supported; MCUXpresso projects are created for some of the NXP boards. Okay, so let's continue. So now that we've verified that the hardware is working, the peripherals are working, and we're happy with this board bring-up, let's pretend we've done a complete board port of TouchGFX, written some drivers and made some stuff work. We want to distribute this to whoever else wants to develop for this board. So using one of the tools that we have in the Designer, we can pack all this up, and then we can distribute the file we call an application template, and people can use this from inside their own Designers. So let's just have a quick look at that. So there is actually, let me show this browser here, an article on creating a custom application template. And what it tells us is that now that we have all this code, all these drivers, et cetera, in here, what we should do is clean it up so it has a minimal size. And then we can call the TouchGFX pack tool to generate an application template. So let's just try to do that, and I'll show you how to get that into the Designer. So here is the command. It's located inside the designer folder inside your TouchGFX installation, and then tgfx.exe pack. We'll start with just the d argument, which is what generates the pack for us and prepares a JSON file that we can use to describe this pack to everyone else. And before we do that, we're just going to close down the Designer, because it's locking some of the files that we need. Let's just try to run it. So I haven't really cleaned anything, so it might be a lot larger than it should be.
So now it's found the .touchgfx file, which is the designer file, and it starts processing that and packing up all the files. While it's doing that, I can show you what I've already done. It has created a JSON file for me, and I've called this the webinar template. The board name is obviously the STM32F769 Discovery, the type is a TouchGFX application template, and I provide a custom image that will be displayed inside the designer. So now it has created the zip file for us. The article then tells us to fill in all this metadata: describe the application, provide the proper type, TouchGFX application template, and then use the RC argument to create the final application template, which is the TPA file that we will be distributing. So let's have a look at that. I've already done this, so while it's doing its work, okay, so it finished before me. I'll take this file and, according to the article, install it inside the packages folder of the designer. So let's go to the TouchGFX 4.9.3 folder, then designer, app, packages, and put it in here. Then, when we restart our designer, we should have access to this application template. What basically happened is that the pack stripped out all the GUI we used for testing, so when we create a new application based on this webinar template, we start with a blank UI, because it's a new project. What happens now is that it creates the new application, and it has all the porting code and all the STM32Cube driver code that we generated using CubeMX or took from a driver pack. And now it's ready for people to use to create their own applications. Let's just let it finish. So here we are, staring at a blank canvas, and we can start creating our own application now. So that's it, basically.
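To give an idea of what the JSON file mentioned above holds, a descriptor along these lines would cover the fields named in the demo (template name, board name, type, image). To be clear, the field names and structure below are purely illustrative; the real schema is whatever the pack tool emits, so treat this as a sketch of the content, not the actual format.

```json
{
  "name": "Webinar Template",
  "boardName": "STM32F769 Discovery",
  "type": "TouchGFX Application Template",
  "image": "assets/board.png",
  "description": "F769 Discovery bring-up with LED and button drivers"
}
```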
So now we've gone all the way: we've talked about some of the hardware requirements, about the different layers of TouchGFX, and about the Cube drivers and how they fit into those layers. We've developed a few concrete applications that test this hardware and these drivers, and we've packed and distributed the application template and used it to create a new application. So this is kind of all the way from A to C, without the porting specifics. Those will be covered in a different webinar, as will protecting the frame buffer using real-time operating system primitives, which is the operating system abstraction layer we saw on a previous slide. Here we simply pretended that we already had the port and the drivers, communicated with some of the peripherals, and showed you how to make your application come to life, both from the GUI to the peripherals and from the peripherals to the GUI. So, are there any questions at this point? This is the end of my talk, and I just want to say thank you for listening. I hope you enjoyed it and got something out of it. Yeah, we have some questions; most of them have already been covered. We have one asking about the IDEs, specifically the plans for supporting Atollic TrueStudio, and I can answer this one right away. We do not officially support Atollic TrueStudio, though a lot of customers are using it. We have a guide, and there are also forum guidelines from other customers to help you set up TouchGFX projects in Atollic TrueStudio. We are working closely with ST, both on integration with STM32Cube and with ST IDEs such as Atollic TrueStudio, so this is definitely something that will be more integrated and supported going forward. And I just want to add that this is a question we get a lot: what about Atollic integration? What about Eclipse? Anything Eclipse-based, System Workbench for STM32 as well.
And this is something we will be creating an article about, because the process is actually fairly simple. If you generate an Atollic project from CubeMX, for instance, it basically generates a C project for you. You then just rename main.c to main.cpp, call a few methods, and basically you have TouchGFX integrated. So the process is really simple, and we'll be creating an article that shows you how to do that. Yes, yes, Chrom-ART is actually... Please repeat the question, I didn't... Oh, so yeah, using Chrom-ART is at the very core of how we perform well on ST devices, so that is extremely central. Okay, good. Before we close, I'll ask you to keep putting in questions while I launch a poll to look into new topics for upcoming webinars. So please put in your vote, and we will take it into account; we are in the process of planning webinars for the rest of the year. We are considering technical webinars like this one, but also webinars on graphical design, how to do good graphical designs, which is also an important part of a good UI solution. Yeah, please keep them coming, and I will close it in a few seconds. We can see the results here. Yeah, and you are saying that the general porting process is of high interest to you, and we will definitely do this. I think there were also some questions about configuring the DMA and so on; that was not a topic of this webinar, but we will cover it in the future. Also more advanced application development using the TouchGFX Designer, and more sophisticated hand-written C++ code. Great. Regarding RTOSes, let me ask you about your usage of an OS; please put in your vote here. We are using FreeRTOS in our examples, and it comes with TouchGFX. But TouchGFX is independent of the RTOS, so you can choose any one.
And there is a guide to help you update the OS wrapper and port to another RTOS. Just give it a few more seconds here. I will share it. Oh, it's quite clear: FreeRTOS. The rest of you, please put in writing which RTOS you are using, and we will take that into consideration when planning future webinars on RTOSes. Let's go back to the slides, Martin, please. We will stay online for some minutes after the webinar to answer questions, and I'm just checking whether any are coming in. And please, when you leave, go through the survey and give us your input on this webinar; it will be much appreciated. Several of you are asking how the video will be made available. Søren will send you a link to the recording tomorrow, so you can download and share it as you like. We will also keep the recordings on our website, so you can always go back and get them there. Next slide. I just have one comment here. Someone says: I don't use an RTOS. Fine, an RTOS is not required; you can run on bare metal. And yes, if this is what you want, no problem, TouchGFX does not need an RTOS. Do you have a comment on this? No, that's correct. You are limited in where you can do all your calculations, for example in the porch periods, but yes, you can run without an RTOS just fine. Okay, thank you everyone for attending today's webinar. We will be back with a list of upcoming webinars, and I hope you have enjoyed it and learned some of the things you were looking for. Please fill in your evaluation in the survey, and have a great day. Thank you for now. Bye.