Welcome to this presentation on the STM32 MCUs in a Graphical User Interface context. A graphical application implies defining a Graphical User Interface layout and performing its rendering by processing all the graphical elements that compose it, but it also requires sending the output of this rendering to the display. In the first section, we will focus on the specific IPs included in some STM32 microcontrollers that are dedicated to the rendering process. Then we will review the display interfaces supported by STM32 microcontrollers. The last two sections will focus on the ST ecosystem offer in the context of graphical applications, and more specifically on how the TouchGFX solution fits into it.

This is an overview of the graphic subsystem. The inputs are bitmaps, and also some text that is converted into bitmaps anyway; they are stored in flash memory. From this flash memory, the content is put together to fill the frame buffer, which is stored in RAM. Finally, this frame buffer is transferred to the display panel. So we have two processes here: first the update, the build, of the frame buffer, and then the partial or full refresh of the display from this frame buffer. In this first section we will focus on the graphics enablers that accelerate the update of the frame buffer.

What is the frame buffer filling complexity? Frame drawing is a very CPU-demanding operation: it requires processing each and every pixel of the frame buffer several times per second, where "several times" typically means 60 times per second. For SVGA definition, it means that 480,000 pixels must be processed at that rate. The drawing involves mainly numerical operations, additions and multiplications, rather than logical operations. These operations are very repetitive, since they have to be done on each pixel, and they require many memory accesses: reading from the input bitmaps and writing to the frame buffer. So this is very well suited to a dedicated IP, instead of the general-purpose Cortex core, which may be needed for other tasks in the application.

That's why ST introduced the Chrom-ART Accelerator (DMA2D). It is much more efficient than the Cortex core for this job: it has direct DMA access to memory and is dedicated to pixel processing. Compared to a graphical-processing-unit-based approach, the GPU being found in most smartphones, for example, or in heavier systems such as PCs, it is much smaller, which gives a much cheaper silicon, and it doesn't need extra RAM to operate. Finally, it covers 90% of top-of-the-line graphical human-machine interface use cases.

Here is a video that shows the difference when rendering a simple layout, with images moving in front of a background, involving transparency and blending computations. We will see in this video the difference in CPU load when the Chrom-ART is enabled and when it's not. This demo is available in the STM32Cube firmware package. Here the Chrom-ART is enabled, and we are blending: you saw the fading effect of the ST logo appearing. The TouchGFX logo also carries some transparency information, and both are blended in front of, or behind, a given image, the tree, which also has transparency. When the Chrom-ART is disabled, watch how the MCU load moves: when it's enabled, the load is 4%; when it's disabled, it goes up to 80% of CPU load.
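To make this concrete, here is a minimal sketch of such a blending operation using the STM32Cube HAL DMA2D driver. The buffer names, the 800-pixel-wide ARGB8888 frame buffer and the 128x64 logo size are assumptions for illustration, and error handling is omitted.

```cpp
#include "stm32f4xx_hal.h" // adjust to your STM32 family

// Hypothetical buffers for this sketch: a 128x64 ARGB8888 logo with
// per-pixel alpha, blended into the top-left corner of an 800-pixel-wide
// ARGB8888 frame buffer.
extern uint32_t logo_argb8888[];
extern uint32_t framebuffer[];

static DMA2D_HandleTypeDef hdma2d;

void blend_logo(void)
{
    hdma2d.Instance = DMA2D;
    hdma2d.Init.Mode = DMA2D_M2M_BLEND;            // memory-to-memory with blending
    hdma2d.Init.ColorMode = DMA2D_OUTPUT_ARGB8888; // output pixel format
    hdma2d.Init.OutputOffset = 800 - 128;          // jump to the next frame buffer line

    // Foreground layer (index 1): the logo, with its per-pixel alpha kept as-is.
    hdma2d.LayerCfg[1].InputColorMode = DMA2D_INPUT_ARGB8888;
    hdma2d.LayerCfg[1].AlphaMode = DMA2D_NO_MODIF_ALPHA;
    hdma2d.LayerCfg[1].InputAlpha = 0xFF;
    hdma2d.LayerCfg[1].InputOffset = 0;            // logo lines are contiguous

    // Background layer (index 0): the frame buffer region we blend onto.
    hdma2d.LayerCfg[0].InputColorMode = DMA2D_INPUT_ARGB8888;
    hdma2d.LayerCfg[0].AlphaMode = DMA2D_NO_MODIF_ALPHA;
    hdma2d.LayerCfg[0].InputAlpha = 0xFF;
    hdma2d.LayerCfg[0].InputOffset = 800 - 128;

    HAL_DMA2D_Init(&hdma2d);
    HAL_DMA2D_ConfigLayer(&hdma2d, 0);
    HAL_DMA2D_ConfigLayer(&hdma2d, 1);

    // Blend foreground (src1) over background (src2) into the destination,
    // 128 pixels per line, 64 lines.
    HAL_DMA2D_BlendingStart(&hdma2d,
                            (uint32_t)logo_argb8888,
                            (uint32_t)framebuffer,
                            (uint32_t)framebuffer,
                            128, 64);
    HAL_DMA2D_PollForTransfer(&hdma2d, 100); // blocking wait, for simplicity
}
```

An interrupt-driven variant (HAL_DMA2D_BlendingStart_IT) would let the Cortex core run other tasks during the transfer, which is exactly the difference the CPU load comparison above illustrates.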
The Chrom-ART features are, of course, memory-to-memory DMA transfers with a programmable rectangle area, which allows per-object alpha blending but also per-pixel alpha blending. So here we only update a part of the frame buffer with the logo, taking into account a global alpha, or a per-pixel alpha in the case of glyphs. Just as a reminder, a glyph is the image representation of a character. The pixel format converter supports, in input and output, all the regular RGB formats, from 888 to 565, but also smaller ones like the 1555 format. On the other hand, it does not support lookup table formats in output, because this would imply the calculation of a color lookup table. But the memory-to-memory operation without pixel format conversion can copy data independently of the format, so we can imagine input lookup-table-format images being copied directly to a frame buffer that is also in an L8 lookup table format. So it's quite a flexible approach, even if the lookup table formats are not supported in output. It also supports the YCbCr format, for a higher Motion JPEG frame rate. All the details on these formats can be found in application note AN4943.

The hardware JPEG accelerator is a fully hardware JPEG compressor and decompressor, and it supports Motion JPEG videos. It can offload the CPU for Motion JPEG processing. Motion JPEG is mostly used for short videos, such as tutorials showing how to replace an ink cartridge in a printer. From 8 frames per second with software decoding, you can go up to 20 frames per second using the hardware JPEG decoder. Again, an application note is dedicated to this JPEG accelerator; we are not going through all the details of every IP here, just mentioning what we mean by supporting graphic acceleration in our MCU families. The application note is AN4996.

The Chrom-GRC is also an optimization; it's not really an accelerator. It allows supporting non-square displays and it's fully configurable. The idea is to have all accesses to the frame buffer controlled by a memory management unit that decides whether a pixel is actually stored in memory. If it's not stored, it returns a default value, or the write is discarded. The gain is around 20% of RAM: you only store the dark blue zone here in actual memory. The rest is virtual; it can still be accessed by the Chrom-ART, for example, but only by address. So if the Chrom-ART wants to access an address in this pink zone, it will be given a virtual value. This is most suitable for wearable devices, and smartwatches notably. The application note dedicated to the Chrom-GRC is AN5051.

In summary, the rendering process split is as follows. The Chrom-ART is designed for most of the rendering operations: data copy with or without blending, rectangle fill with or without transparency, and pixel format conversion. The hardware JPEG decoder is in charge of decoding JPEG images, and encoding if needed. All other operations are handled by the Cortex core. This includes the smart rendering processing (the algorithm that determines which zones of the frame buffer need to be refreshed), some primitive drawing like circles or polygons, and all the geometric transformations such as rotations or scaling. So when designing a graphical interface layout, you have to keep in mind that not all the processing will necessarily be accelerated by the Chrom-ART. Some transformations are handled by the CPU, so you may see a rise in CPU usage in those cases, and all the advanced color effects are also handled by the Cortex core.
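Before moving on, here is a hedged sketch of the pixel format conversion feature mentioned above, using the HAL DMA2D driver in memory-to-memory mode with PFC. The 320x240 ARGB8888 source image and the RGB565 destination buffer are hypothetical.

```cpp
#include "stm32f4xx_hal.h" // adjust to your STM32 family

extern uint32_t src_argb8888[]; // hypothetical 320x240 source image
extern uint16_t dst_rgb565[];   // hypothetical RGB565 destination buffer

static DMA2D_HandleTypeDef hdma2d_pfc;

void copy_with_pixel_format_conversion(void)
{
    hdma2d_pfc.Instance = DMA2D;
    hdma2d_pfc.Init.Mode = DMA2D_M2M_PFC;            // copy + pixel format conversion
    hdma2d_pfc.Init.ColorMode = DMA2D_OUTPUT_RGB565; // converted output format
    hdma2d_pfc.Init.OutputOffset = 0;                // destination lines are contiguous

    // Foreground layer (index 1) describes the input format.
    hdma2d_pfc.LayerCfg[1].InputColorMode = DMA2D_INPUT_ARGB8888;
    hdma2d_pfc.LayerCfg[1].AlphaMode = DMA2D_NO_MODIF_ALPHA;
    hdma2d_pfc.LayerCfg[1].InputAlpha = 0xFF;
    hdma2d_pfc.LayerCfg[1].InputOffset = 0;

    HAL_DMA2D_Init(&hdma2d_pfc);
    HAL_DMA2D_ConfigLayer(&hdma2d_pfc, 1);

    HAL_DMA2D_Start(&hdma2d_pfc,
                    (uint32_t)src_argb8888, // source
                    (uint32_t)dst_rgb565,   // destination
                    320, 240);              // pixels per line, number of lines
    HAL_DMA2D_PollForTransfer(&hdma2d_pfc, 100);
}
```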
So now we are done with the frame buffer update, and we will focus on the display panel refresh. First, a reminder of display architecture. The main panel of the display, containing all the pixel values, is controlled by two drivers, the source and gate drivers, for columns and rows, and these are controlled by the timing controller. Any display has such a timing controller, as well as, usually, a backlight. The other two parts, the display controller and the frame buffer, are not in every display, and that's why ST decided to introduce the LTDC controller that we will see in the next slide. In case the chosen display doesn't have an embedded display controller, we provide the LTDC component, which is a TFT LCD controller. The frame buffer is then stored on the MCU side as well, either in internal RAM or in external RAM.

Now the LTDC key features. The LTDC controller has a 24-bit RGB parallel pixel output, up to XGA definition. This is not a physical limitation; it's a practical one, because when refreshing the screen, we are not dealing only with the memory bandwidth used by the LTDC to access the frame buffer. We also have to consider that, within the same pipe that goes to the frame buffer, the Chrom-ART, for example, will also update another frame buffer in the same memory, so it will go over the same bus. In practice, we ran some internal benchmarks, and this XGA size was the one most suitable to support most use cases. But on paper, in the LTDC application note, you will see that the LTDC is able to address much larger screen definitions; so depending on the use case, on the complexity of your user interface, you may address larger displays, larger definitions at least. The LTDC is programmable in terms of display size and position, so it's able to refresh only part of the screen. We can configure, of course, all the timings and polarities, to address the widest range of displays. The supported input pixel formats are the RGB formats, but also lookup table formats. It has access to the AHB or AXI interconnect, depending on the MCU, to reach internal and external memories, so it has direct access to these memories. It also integrates a dithering module. In a few words, dithering is a way to give the illusion that a screen with a given color depth has a higher one; it reduces the visible effect of a low color depth by creating extra pixel colors, but within the screen color palette.

The MIPI DSI Host is also a graphic enabler in the sense that, even though it relies on the LTDC component, it allows using a MIPI DSI connection. This MIPI DSI connection supports a much higher data rate, but above all it allows a low pin count for the display interface, only 4 to 6 pins: you don't have all the parallel lines for each RGB component, you have only 4 to 6, because the data is serialized. For that, you of course need a DSI display that supports this kind of data reception, but you also need a DSI host on the MCU side, and that's what this component is for. It also includes the D-PHY directly, so you do not have to add an extra component to convert the data at the output of the DSI host. The application note for this DSI Host component is AN4860.
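As an illustration of this programmability, here is a minimal sketch of an LTDC layer configuration using the STM32Cube HAL. The 480x272 RGB565 panel is an assumption, and the LTDC handle, timings and polarities are supposed to be configured elsewhere, typically by STM32CubeMX.

```cpp
#include "stm32f4xx_hal.h" // adjust to your STM32 family

extern LTDC_HandleTypeDef hltdc; // assumed initialized with your panel's timings
extern uint16_t framebuffer[];   // hypothetical RGB565 frame buffer in (S)RAM

// Configure LTDC layer 0 to scan out a full 480x272 RGB565 frame buffer.
void configure_ltdc_layer(void)
{
    LTDC_LayerCfgTypeDef layer = {0};
    layer.WindowX0 = 0;                      // programmable window position/size:
    layer.WindowX1 = 480;                    // the layer may also cover only part
    layer.WindowY0 = 0;                      // of the screen
    layer.WindowY1 = 272;
    layer.PixelFormat = LTDC_PIXEL_FORMAT_RGB565;
    layer.FBStartAdress = (uint32_t)framebuffer;
    layer.ImageWidth = 480;
    layer.ImageHeight = 272;
    layer.Alpha = 255;                       // layer globally opaque
    layer.Alpha0 = 0;
    layer.BlendingFactor1 = LTDC_BLENDING_FACTOR1_CA;
    layer.BlendingFactor2 = LTDC_BLENDING_FACTOR2_CA;
    layer.Backcolor.Red = 0;
    layer.Backcolor.Green = 0;
    layer.Backcolor.Blue = 0;

    HAL_LTDC_ConfigLayer(&hltdc, &layer, 0); // layer index 0
}
```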
So the STM32 family supports all the main display interfaces, starting with the Intel 8080 and Motorola 6800 LCD interfaces for small definitions. In this case, the FMC is used to access the display, to send the commands to the display; the display then has an internal controller and an internal frame buffer. The Chrom-ART is in charge of filling the back frame buffer here, and when it's ready, it is sent through the FMC to the display. For medium definitions, up to XGA displays, the LTDC is used. In this case, the Chrom-ART is still in charge of filling the frame buffers, and the LTDC constantly sends one of the two frame buffers, whichever is ready, to the display; the display this time has no internal controller. Another use case is when the chosen STM32 does not have enough SRAM to store the two frame buffers. In this case, the FMC is used again, not to connect the display, but to access the external SDRAM containing the frame buffers. And that's exactly the use case, quite a popular one, where we can see the limitation in terms of LTDC supported definition: since both the Chrom-ART and the LTDC access the frame buffers through this pipe, even if the LTDC is able to transfer very large definitions, there would be no room left for the Chrom-ART to access the frame buffer and update it. So we have to find a compromise between the constant flow of the LTDC reading and sending the frame buffer to the display, keeping some room for the Chrom-ART to update this frame buffer, and, possibly, the application itself accessing this external SDRAM for extra data. The MIPI DSI also targets medium definitions, with high-pixel-density graphical interfaces, and supports the three scenarios we just listed, either using the LTDC or the FMC; the only difference here is the connection between the MCU and the display, which is reduced to a few pins.
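Here is a hedged sketch of the double frame buffer mechanism described above, where the LTDC is switched to the freshly rendered buffer on a line event so that the Chrom-ART can keep filling the other one. The buffer names are hypothetical and the LTDC handle is assumed to be initialized.

```cpp
#include "stm32f4xx_hal.h" // adjust to your STM32 family

extern LTDC_HandleTypeDef hltdc; // assumed initialized elsewhere
extern uint16_t framebuffer0[];  // two frame buffers, e.g. in external SDRAM
extern uint16_t framebuffer1[];

static uint16_t* volatile pending = nullptr; // rendered, waiting to be shown
static uint16_t* drawing = framebuffer1;     // the one the Chrom-ART fills

// Called by the rendering code once a frame is complete.
void frame_ready(void)
{
    pending = drawing;
    drawing = (drawing == framebuffer0) ? framebuffer1 : framebuffer0;
    HAL_LTDC_ProgramLineEvent(&hltdc, 0); // request an interrupt at line 0
}

// Weak HAL callback, invoked from the LTDC interrupt at the programmed line:
// swapping here, between two frames, avoids tearing on screen.
void HAL_LTDC_LineEventCallback(LTDC_HandleTypeDef* ltdc)
{
    if (pending != nullptr)
    {
        HAL_LTDC_SetAddress(ltdc, (uint32_t)pending, 0); // show the new buffer
        pending = nullptr;
    }
}
```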
Let's now have a look at the STM32 ecosystem. We first give a global reminder, and then we will focus on the TouchGFX integration. The STM32 ecosystem is first composed of hardware development tools, a set of evaluation boards and programming probes. It is also composed of the corresponding software development tools, including configuration, debugging and programming tools. It comes as well with embedded software, such as drivers for the integrated devices (a microphone, displays or a touch panel), but also operating system stacks and USB stacks. The last block of the STM32 ecosystem is the support part, composed of the website containing all the product definitions and the documentation of each product, but also the support sites, such as the ST Community and the online support system. All four blocks are open and can be complemented by partners (parts of the embedded software are provided by partners, for example, as well as some of the support), and everything is designed for the ST products.

In terms of hardware development tools, ST proposes first the STM32 Nucleo boards, then the Discovery kits, and also the evaluation boards. The difference between the three is that the Nucleo boards are dedicated to flexible prototyping, in the sense that a large number of MCU pins are routed directly to connectors, so they can be wired to external devices, and they are highly configurable. On the contrary, the Discovery kits already have some devices connected that cannot be changed easily, and they are usually dedicated to a specific feature. For the Discovery kits with a graphics-enabled MCU inside, we usually find a display with a touch controller, and also some buttons to navigate through the interface if using the touch panel is not desired. Finally, the evaluation boards are much more expensive boards, but they have all the possible devices connected: a microphone, a temperature sensor, buttons, a display; every device is connected, and that's also why they are more expensive. These kits are proposed for all the STM32 families, from F0 to MP1. The hardware development tools also include the STM32 Nucleo expansion boards for a specific functionality, such as wireless connectivity, Bluetooth Low Energy or USB. These are sold separately and can be plugged, usually through Arduino connectors, onto the Nucleo boards, or onto some evaluation boards. We can also find third-party boards that are fully packaged for a specific feature.

Regarding the software development tools, we first have STM32CubeMX, which is used for the full MCU configuration: you have access to every pin of the MCU and can configure the proper settings per peripheral. From this visual configuration, it also automatically generates the source code. This source code can then be used in an IDE, an integrated development environment. One is proposed for free by ST, STM32CubeIDE, which is Eclipse-based, but we also support other IDEs such as IAR or Keil; this allows compiling and debugging the application you generated. STM32CubeProgrammer is the tool to update the MCU, when you flash a given binary to the MCU, but also to modify the option bytes to enable specific features of the MCU. And STM32CubeMonitor is a quite recently released tool to monitor, in a visual way, most of the time on a graph, specific variables of your application: on a given binary version of your application, you select with this tool which variables you will watch on a graph view, or compare to others, so it's a really interesting way to perform some debugging through the monitoring of variables.

So what about the TouchGFX integration in this ST ecosystem? What does developing with TouchGFX mean? Enabling TouchGFX involves first creating a user interface application. This is done using the TouchGFX Designer, which allows creating a user interface combining graphical elements, handling user events, and also communicating with the non-UI part of the system. Once you have this application, you must plug the TouchGFX library into the actual hardware. This is done with the abstraction layer development. It is not only a hardware abstraction layer, because the TouchGFX application also requires some software components, such as locking mechanisms, which are usually provided by a real-time operating system but can also be implemented in software; so it's more an AL than a HAL. The idea is to build a software layer that connects the TouchGFX library with the hardware. For the ST development kits' peripherals, so the screens that are on the Discovery kits, and the external SDRAM found on our kits, the board support package, the BSP, is provided by ST. All the application templates contained in the TouchGFX Designer include this board support package, so you do not have to care about it as long as you work on an ST development kit.
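To give an idea of what this abstraction layer development looks like, here is a hedged sketch of the touch part: TouchGFX polls for touch coordinates through a small class generated by the TouchGFX Generator, which you fill in for your hardware. The BSP calls below are the ones found in ST's Discovery kit BSPs (field names vary between BSP versions); on a custom board you would replace them with your own touch controller driver.

```cpp
#include "STM32TouchController.hpp"  // class generated by the TouchGFX Generator
#include "stm32f429i_discovery_ts.h" // ST BSP touch driver; replace on a custom board

void STM32TouchController::init()
{
    // Initialize the touch controller for the panel size, here 240x320.
    BSP_TS_Init(240, 320);
}

bool STM32TouchController::sampleTouch(int32_t& x, int32_t& y)
{
    TS_StateTypeDef state;
    BSP_TS_GetState(&state);
    if (state.TouchDetected) // field names vary between BSP versions
    {
        x = state.X;
        y = state.Y;
        return true;         // report a pressed coordinate to the engine
    }
    return false;            // no touch during this frame
}
```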
When you are working on a specific custom design, with a screen that is not known by ST, and with an external RAM outside the set of external RAMs ST uses, you need to configure it with STM32CubeMX along with the TouchGFX plugin. So that's the way TouchGFX is integrated into the ST ecosystem: first through the standalone TouchGFX Designer, and then in STM32CubeMX, as a plugin, for a specific configuration.

We saw in the previous presentation that TouchGFX is composed of three blocks: the TouchGFX Designer, the TouchGFX Engine and the TouchGFX Generator. We now focus on this last one. The TouchGFX Generator comes as a plugin of STM32CubeMX, so you will find the terms TouchGFX Generator, TouchGFX plugin or X-CUBE-TOUCHGFX: they all point to the same block, the TouchGFX plugin, which simply comes under several names. It brings a better interaction with the Designer tool, because you can generate the code from STM32CubeMX taking into account the TouchGFX configuration, but you can still generate the code from the TouchGFX Designer without breaking the link with STM32CubeMX. It allows independent version management: before this plugin approach, the TouchGFX version was tied to the STM32CubeMX version, and we could not add extra features to the TouchGFX integration until the next STM32CubeMX release; it is now possible. And it's completely MCU-independent: you are free to enable the TouchGFX plugin on any MCU you choose; it's then up to you to make sure you can connect the display properly, through the LTDC, through the FMC, through SPI, or maybe using GPIOs. The STM32CubeMX plugin will help customers define the TouchGFX abstraction layer for their custom board, but it will not do it for them: it will only give some guidelines, some empty functions that need to be filled depending on the chosen hardware. The guidance also consists in enabling the required IPs: for example, the CRC is mandatory when using TouchGFX; while it's not a graphical component, the CRC module is used for licensing reasons, to ensure that only STM32 parts run the TouchGFX library. The plugin will also help configure these IPs: for example, if in the TouchGFX plugin you set a given definition that does not correspond to the LTDC definition, it will give you a warning; we will see that during the lab. And it will help configure the TouchGFX framework in general, setting the frame buffer management strategy and the display definition.

We identified that developing with TouchGFX can be done through two approaches. The first use case, use case A, is when someone wants to do some prototyping, a proof of concept, on STM32 display kits, so an STM32 Discovery kit, for example, or an evaluation kit. Then only the TouchGFX Designer is required, because it contains the templates for most of the development kits, at least all the ones that have a graphic accelerator inside. Use case B is real product development on an STM32-based custom board. In this case, you will first use STM32CubeMX to configure your MCU, along with the TouchGFX Generator to configure the TouchGFX abstraction layer, and then use the TouchGFX Designer to edit your user interface layout.

More details on use case A: you use the TouchGFX Designer to start your project and select a pre-made application template, usually called an AT, for the specific kit that you chose. Then you can create your interface, generate the code, build the application, and flash it directly from the TouchGFX Designer: for a simple application, you don't have to go outside it to flash the board.
Then, if you have to adapt the hardware configuration, for example enabling another IP such as an ADC or another timer, or if you have to tune the logic behind the graphical user interface, you will open the associated IOC file. The IOC file is the STM32CubeMX project that is provided with most of the application templates dedicated to our development kits. Using this IOC file, you will edit the STM32CubeMX project, in which the TouchGFX Generator is already enabled and configured according to your board, and you can even change the IDE, among STM32CubeIDE, IAR or Keil; the default one is STM32CubeIDE, but it can be changed in the STM32CubeMX project. Then you modify, build and flash this application using the selected toolchain. Just be careful: this IOC file, compatible with the STM32CubeMX version that supports the TouchGFX plugin, is not yet available for every application template. There are still some application templates, for specific development kits, that are in a legacy version, not compatible with the latest STM32CubeMX version 5.5, the one that supports the TouchGFX Generator. But the target, of course, is that all application templates come in version 3.0 and thus contain an STM32CubeMX project compatible with the TouchGFX plugin.

In use case B, you use a custom STM32-based board. You have to start from STM32CubeMX, select the MCU in the MCU list, and then enable the TouchGFX Generator in this new project; we will see in the next slide how to enable it. After the first code generation of your project, you will have a .part file created, and this .part file is the one to open with the TouchGFX Designer to start developing your user interface layout. It's different from the .touchgfx file, because it takes as input the configuration done in the TouchGFX Generator, in terms of display definition and chosen IDE. Once you open this .part file, the Designer creates a .touchgfx project file, which is an official TouchGFX Designer project file. In the STM32CubeMX project, you are free to select your preferred IDE, either STM32CubeIDE, IAR or Keil, and you will need to write the board bring-up code and the specific code needed for your hardware that cannot be managed by STM32CubeMX; we are talking about the BSP part, the board support package of your setup. Finally, you can build and flash using your selected IDE. Note that the TouchGFX Designer is capable of retrieving the selected IDE, so all the code updates coming from the TouchGFX Designer (when you add a screen, for example, new source files are created) will be added to the IDE project automatically, because the TouchGFX Designer knows what has been selected in STM32CubeMX.

So how do we practically enable the plugin in STM32CubeMX? You have to go to the Additional Software button, which opens a dialog box, and at the end of this dialog box (it's at the end today because the list is in alphabetical order; that may change) you can find the X-CUBE-TOUCHGFX expansion pack, where you have to select the TouchGFX Generator; it is not selected by default. Once you have done that, there is a new entry in the IP list, the X-CUBE-TOUCHGFX plugin. You just have to enable the Graphics Application checkbox, and you will then be able to set up the settings, the important settings that the TouchGFX library needs to generate the initialization code of the TouchGFX abstraction layer.
Just note that the way to enable the TouchGFX plugin will not change, but this list of settings may evolve: there may be other settings in a future release, and they may be organized differently too, but the documentation will be modified accordingly, of course. This is just a view to show you how to enable the plugin in the tool.

So this plugin will generate some code, and this generated code comes in several parts. First you have the HAL part, the hardware abstraction layer part. The HAL part contains a mandatory class and the virtual functions that are needed by the library to work properly. Also part of the generated code is some customization of this HAL, based on the TouchGFX Generator settings: the frame buffer size, for example, as soon as you have set it in the TouchGFX plugin, is taken into account by the generated code, as well as the frame buffer management, whether you have a single or a double frame buffer. All the settings you set in the plugin lead to specific generated source files, which are also read-only. The last part is the part of the source code that is generated only once by the plugin and must then be maintained, or edited, by the user. Most of the time, this is where you will add the specific initialization of your own hardware. It inherits from the generated HAL part (the automatically generated classes are the parent classes of this class), it is generated only once, and it needs to be adapted according to your devices' datasheets: for example, transferring the pixels of a frame buffer over SPI, or enabling a PWM for the LCD backlight. This cannot be set in the STM32CubeMX project; it has to be written manually, depending on your display datasheet. And in this TouchGFX HAL user part, you are free to override some functions defined in the generated part if they are not relevant in your case.

This is, in fact, the overview of this approach. The core part is the read-only part of the framework: this is where the HAL class is defined. The HAL class is a very generic class that contains some fields for display width and display height, but they are not set in this class, and its functions are all virtual, as are those of the DMA class, some of the functions the library uses to do the data copies and fills. They are then customized by the Generator, if you enable the Chrom-ART for example: this port part is customized partly automatically, by the Generator, and partly manually, for specific initialization. So it is in the STM32F4HAL class, which inherits from this generic HAL class, that the SPI calls for the display refresh, or the LTDC interrupt in case of LTDC communication with the display, are implemented; and the user STM32F4DMA class is where you will find the Chrom-ART calls, if you enable the Chrom-ART.

Another way to show the TouchGFX integration is to see it as software layers. First you have the user application: it consists of what you define in the user interface, in terms of images, of content, of logic and of the organization of the views of the interface, plus the TouchGFX configuration. Then you have the TouchGFX core, on which the user application is based; the core is the part that comes with the source code, the TouchGFX framework: all the predefined widgets, the rendering engine, and the event handling of this library. This core part is in turn based on the OS wrapper element and on the TouchGFX generated abstraction layer, which is the interaction with the actual hardware.
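As a hedged illustration of this split, here is a minimal sketch of what the user-maintained HAL class might look like; the class and function names follow the TouchGFX Generator convention, while the SPI flush and PWM backlight helpers are hypothetical and depend entirely on your display's datasheet.

```cpp
#include <TouchGFXHAL.hpp> // user-maintained class, generated once

// Hypothetical board-specific helpers; their implementation depends on
// your display and backlight circuitry, as described in the datasheets.
extern void display_spi_send_region(const uint8_t* fb, int16_t x, int16_t y,
                                    int16_t w, int16_t h);
extern void backlight_pwm_start(uint8_t duty_percent);

using namespace touchgfx;

void TouchGFXHAL::initialize()
{
    // Let the generated parent set up the configured frame buffer
    // strategy and driver hooks first.
    TouchGFXGeneratedHAL::initialize();

    backlight_pwm_start(80); // board-specific: enable the LCD backlight
}

void TouchGFXHAL::flushFrameBuffer(const touchgfx::Rect& rect)
{
    TouchGFXGeneratedHAL::flushFrameBuffer(rect);

    // Board-specific: push the dirty rectangle to the display over SPI.
    // This is the part STM32CubeMX cannot generate for you.
    const uint8_t* fb = reinterpret_cast<const uint8_t*>(getTFTFrameBuffer());
    display_spi_send_region(fb, rect.x, rect.y, rect.width, rect.height);
}
```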
The OS wrapper is how you manage the synchronization of the library with the rendering process. It can be FreeRTOS, for example, but it can be any OS, as long as you provide the proper functions, the proper semaphore mechanisms, when using another OS. And it can also be without an OS, as long as you provide a locking mechanism: that can be a volatile variable, set to 0 or 1 by an interrupt, a timer interrupt every 16 ms, so that the TouchGFX library has a signal to start the rendering process. So it's possible to use it without an OS, but then you have to be very careful with the synchronization of the rendering of the frame against the refresh of the screen. For the TouchGFX communication with the hardware, you have several possibilities: either you use the ST development kit HAL, which is provided in the STM32Cube firmware, and thus in the application templates of the Designer, with the underlying low-level drivers for each of the devices, or you use your custom board HAL, with your own low-level drivers.

Practically, in the case of an ST development kit, you configure all of this using the TouchGFX Designer only: the layout is configured within it, the TouchGFX configuration is set by default through the application template you chose for the development kit, and the same goes for FreeRTOS and the development kit low-level drivers; by default, all the application templates use FreeRTOS. For the custom design use case, it's a bit different. The TouchGFX Designer allows you to configure the user application part, as far as the layout and part of the logic are concerned, and it also relies on the underlying TouchGFX core classes. But for the TouchGFX configuration, you need to use the TouchGFX Generator, and the TouchGFX Generator will also help you configure the TouchGFX communication with the system and the OS wrapper part, because it's within the TouchGFX Generator that you configure whether you use FreeRTOS or not. The last missing part: if you choose another OS, for example, it is then up to you, within STM32CubeIDE, to set up the OS wrapper implementation correctly, just as, if you use your own custom devices, you have to provide and adapt the TouchGFX abstraction layer using STM32CubeIDE.

As a summary, there are two main use cases: either you work with the TouchGFX Designer, for ST development kit prototyping, or you use the ST ecosystem for custom boards, which consists of the STM32CubeMX configuration tool and the X-CUBE-TOUCHGFX expansion pack to enable TouchGFX. We only covered the main principles in this presentation, because there is still much to say about TouchGFX, and fortunately there is today a very in-depth documentation of TouchGFX, which can be found at this address. It contains the basic concepts: what a frame buffer is, what a graphic engine is, what the main loop of the frame rendering looks like. It also contains a user guide on how to use and configure each TouchGFX Generator parameter. It contains a section on the abstraction layer, to understand what is covered by the TouchGFX plugin and what should be done by the user, and also the general issues that can be encountered when bringing up a board, and what to check first: for example, if you have no display, you have to make sure you can set the display to a plain color before trying to refresh it with the TouchGFX library. There are some basic concepts on known issues when developing on a custom board, and you can also find some rendering flow examples, to understand what is needed in terms of synchronization, and why you need a semaphore locking system for the library to work properly.
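To make the no-OS case concrete, here is a hedged sketch of such a locking mechanism implemented behind the TouchGFX OSWrappers porting interface. Only the central functions are shown (the real interface has a few more, such as taskDelay), and the busy-wait loops assume a bare-metal, single-threaded setup where interrupts set the flags.

```cpp
#include <touchgfx/hal/OSWrappers.hpp>

using namespace touchgfx;

// Volatile flags standing in for RTOS semaphores in a bare-metal setup.
static volatile bool vsync_seen = false;
static volatile bool fb_locked = false;

void OSWrappers::initialize()
{
}

// Called by the display driver (e.g. from a 16 ms timer or an LTDC/TE
// interrupt) to release the rendering loop for the next frame.
void OSWrappers::signalVSync()
{
    vsync_seen = true;
}

void OSWrappers::waitForVSync()
{
    vsync_seen = false;     // consume the previous signal
    while (!vsync_seen)
    {
        // Busy-wait until the interrupt fires; a real port could
        // enter a low-power wait-for-interrupt state here instead.
    }
}

void OSWrappers::takeFrameBufferSemaphore()
{
    while (fb_locked)
    {
        // Single-threaded bare-metal: spin until the buffer is free.
    }
    fb_locked = true;
}

void OSWrappers::giveFrameBufferSemaphore()
{
    fb_locked = false;
}
```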
Additionally, to help you develop with TouchGFX, whether for the design of the user interface or up to the hardware configuration of a specific custom design, we have a list of partners that can give you some training, or some support on a given topic, and as you can see, Mjølner is still very active in supporting TouchGFX applications. Thank you for your attention; this is the end of this presentation.