Welcome back here at the exhibition forum in Hall 3 after the lunch break, and thanks for stopping by. We have an afternoon full of information and presentations ahead of us until 6pm. Our first topic this afternoon is implementing high-end graphical user interfaces on entry-level microcontrollers, and our speaker is Nicolas Santini from STMicroelectronics.

Good afternoon everyone. Welcome to this session. I will start with a quick survey, so please raise your hand: how many of you are already working on a graphical interface on a microcontroller? Okay, nice. So a good third, let's say. Then you are aware that doing that is not just plugging a display into an existing board, right? It's much more complicated than that. It usually requires additional memory, either RAM or flash, and it also requires a software stack to manage all the interaction and the life cycle of the graphical interface.

So I'm Nicolas Santini, and today I will give you some tips and strategies to approach high-end graphical user interfaces on entry-level microcontrollers. The challenge on entry-level microcontrollers is that we have almost nothing, as we will see. First, I will do a quick review of what it means to handle an advanced graphical user interface on a microcontroller, high-end or low-end, to set a common vocabulary, and also to give you the chance to understand exactly what the rendering process means and how it can be tricked. Then we will focus on entry-level MCUs and the tricks that we give to customers at STMicroelectronics to help them implement their graphical user interfaces, and I will then conclude this session.

So first, what is a high-end graphical user interface? What makes it nice? This is a graphical user interface. It's functional. It's a home thermostat application. It gives you all the information you need, but is it what we call an advanced user interface?
We do not think so. So what makes a nicer user interface? This is the same information, but presented in a different way. First, with a larger range of colors: in the first case we have only 16 possible colors, while here we can have a much larger range using 16-bit or 24-bit color depths. An advanced user interface also has non-square graphic elements. Here we have only square elements. That's functional, but imagine that every morning, every afternoon, and every evening you check the temperature and you see that. Come on. I certainly prefer to see something smoother, a smartphone-like interface.

Bitmap support is also a great way to add some texture to your user interface. Here it's a gray background; it's quite dull. But some bitmaps — only one for the background, not many bitmaps, just one for the background — can give some texture to the user interface and make it feel more like home. And of course, some smooth animations. When you open a menu, you don't just want it to pop up on the screen; you want to see some animation opening the menu. That's not an exhaustive list of what makes an advanced graphical interface, but these are the kinds of things we have to keep in mind.

What is the typical architecture for controlling a graphical interface? There's a display, but not only that, as we will see, with a regular microcontroller. First you have the microcontroller itself, of course, and it's connected to a display. That's the very first step: no display, no graphical interface. Let's agree on that. Then you have to store what we call the assets. The assets are the set of bitmaps that you will use in your user interface. For example, for a button you can have two images — these images will be converted into bitmaps — one for the pressed state and one for the released state. All these bitmaps need to be stored in a memory that is read-only and persistent.
When you restart your device, you don't want to lose all your assets. So what we usually do is connect an external flash. You can of course use the internal flash if it's sufficient on your microcontroller, but usually we connect an external one. And then we have to store the frame buffer. The frame buffer holds all the pixel values that you will send constantly to the display so that it shows something — usually at 60 hertz, you have to send a huge amount of data to the display so that you see something. The memory needed for the frame buffer is a read-write area, because you will read it to put it on the display and write to it to update the next frame to be displayed. Here again we could use the internal RAM for the frame buffer, but usually it's not that big, so we connect an external one.

So the rendering process, which we will detail just after, is to read the assets from flash, compose your scene — your next frame to be displayed — put it into the frame buffer in RAM, and then transfer the frame buffer to the display. That's the typical architecture. And keep this interface in mind for the next slide: it's the usual one with the two turning circles — progress circles, let's say.

This is the rendering process. The rendering process is the action of gathering all the bitmaps that compose a given screen, gathering all the user interactions that have been processed at some point — pressing a button or moving a widget — and preparing the next frame buffer. So we have first the input bitmaps. We also have some shape descriptions; these will be the two circles that we can see moving on our interface. We have the frame buffer, and we have the display. As we saw, the input bitmaps are in flash and the frame buffer is in RAM. The shape descriptions are not stored as bitmaps; they are processed by the CPU at runtime. So here is our background image.
Some buttons, with two bitmaps for each button, one per state. Some decorations. And some numbers in two colors. You also have to store all the fonts, all the characters that you need to display — all the possible values. I was only displaying 25 or 26, but you can imagine that the temperature can go very high or very low. So you have to store all the fonts as bitmaps in the flash as well. And then you have the shape descriptions. They are shown as visuals here, but really it's just a description — a circle, a yellow color — computed at runtime.

So the first step of the rendering process, after gathering all the information to build the scene, is to update the frame buffer and finally transfer it to the display. In the usual case, the microcontroller embeds a TFT controller and the frame buffer is constantly sent to the display. But we will see that this is not possible on an entry-level microcontroller. And keep in mind what this frame buffer means: it is the resolution of the screen multiplied by the color depth. For this small screen — I'm talking about a screen of this size — it's 150 kilobytes. That is 150 kilobytes of RAM that you will not be able to use for your application, only for the graphical interface, of course.

So what about entry-level MCUs? What are the constraints of such MCUs? First, we have limited RAM. Remember the 150 kilobytes: on the STM32G0, which is one of STMicroelectronics' entry-level MCUs, we have only 36 kilobytes of RAM. And you still want, I'm sure, to do something else than displaying your user interface. So it's not possible to store a full frame buffer in the internal RAM. And on top of that, you have no way to extend this RAM: on this device, on the G0 again, there is no way to connect external SDRAM. You may have a way to connect an external flash, but no SDRAM — and you need it for the frame buffer. And on top of that, the only way you have to connect a display is an SPI interface.
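As a rough sanity check of that 150-kilobyte figure, here is a small sketch. The 240x320 panel at 16 bits per pixel (RGB565) is my assumption — it is one common combination that matches the number quoted in the talk; the actual panel resolution is not stated.

```c
#include <stdint.h>

/* Frame buffer footprint in bytes: width x height x bytes per pixel.
 * 240x320 @ 16 bpp (RGB565) is an assumption that happens to match
 * the ~150 KB figure from the talk. */
static uint32_t framebuffer_bytes(uint32_t width, uint32_t height,
                                  uint32_t bits_per_pixel)
{
    return width * height * (bits_per_pixel / 8u);
}
```

With those values, 240 x 320 x 2 = 153,600 bytes, i.e. exactly 150 KB — more than four times the 36 KB of internal RAM available on the STM32G0 mentioned just after.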
An SPI interface is fine for a display that has internal RAM, but it can lead to a curtain effect. The curtain effect is exaggerated here, but it means that you can see with your eyes that the screen is updated from top to bottom, or maybe left to right. You can see it when you change from one screen to another, and this is not a nice effect to show on your device.

But the advantage, on the other side, of these SPI displays is that they have their own graphical RAM, what is called GRAM. The GRAM is in fact a full frame buffer inside the display. It makes the display a bit more expensive, but it allows the display to refresh itself: when you turn on the display, it automatically reads its internal memory to display what's in it (by default, there is nothing). That's a very good point, because remember that we have no way to store the frame buffer on the microcontroller — but now I'm telling you that the frame buffer is, in fact, already in the display. We just have to send updates of this frame buffer to update the display.

So now we will switch to some basic rules combined with a smart rendering strategy. The basic rules apply when designing your interface, and the smart rendering will be up to you — but we have some tools that will help you do that, and you do not have to reinvent the wheel on this topic. Believe me, it can take all your development time, so please use what already exists; we will see that at the end.

Here are some of the design recommendations we have found. The general idea behind them is to reduce the pixel processing as much as possible. The Cortex-M is not meant for pixel processing: it's a very repetitive task, not the kind of computation the core is meant for. So reduce the pixel processing, first by reducing the size of your assets. You can still use some bitmaps, but you can adapt the color format.
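To make the "send updates of the frame buffer" idea concrete, here is a minimal sketch of a windowed GRAM update, assuming an ST7789/ILI9341-style controller (command 0x2A sets the column window, 0x2B the row window, 0x2C starts the pixel write). The `spi_write_cmd`/`spi_write_data` hooks are stand-ins for a real SPI driver; here they just record the traffic so it can be inspected.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Stand-in SPI hooks: a real port would drive the SPI peripheral and
 * the data/command line. Here they only record the bytes sent. */
static uint8_t spi_log[64];
static size_t  spi_log_len;

static void spi_write_cmd(uint8_t cmd) { spi_log[spi_log_len++] = cmd; }

static void spi_write_data(const uint8_t *buf, size_t len)
{
    memcpy(&spi_log[spi_log_len], buf, len);
    spi_log_len += len;
}

/* Update only a rectangular window of the display's internal GRAM,
 * so the untouched areas keep refreshing from the old contents. */
static void display_update_window(uint16_t x0, uint16_t y0,
                                  uint16_t x1, uint16_t y1,
                                  const uint8_t *pixels, size_t len)
{
    uint8_t col[4] = { (uint8_t)(x0 >> 8), (uint8_t)x0,
                       (uint8_t)(x1 >> 8), (uint8_t)x1 };
    uint8_t row[4] = { (uint8_t)(y0 >> 8), (uint8_t)y0,
                       (uint8_t)(y1 >> 8), (uint8_t)y1 };

    spi_write_cmd(0x2A); spi_write_data(col, 4);      /* column window  */
    spi_write_cmd(0x2B); spi_write_data(row, 4);      /* row window     */
    spi_write_cmd(0x2C); spi_write_data(pixels, len); /* pixel payload  */
}
```

The key point is that only the pixels inside the window cross the SPI bus; the display keeps showing the rest of its GRAM untouched.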
For example, for a bitmap like this, does it make sense to use a 16-bit color format? No — you can use a much smaller format, like a lookup-table format, to store these bitmaps, and it will be converted at runtime to be displayed in 16 bits.

Limit transparency, also. What you can see here — the chessboard-like gray and white texture — means that this area is transparent. Transparency means that when you are rendering your frame buffer, you must read the bitmap with the transparent area, you must also read the bitmap that may be behind it — the background bitmap — and then combine the two and update the frame buffer. So you do three times more processing than if you used a bitmap without transparency. I am not saying remove transparent bitmaps, because that's the only way to get round shapes, but only use them when it is necessary. And for this very example — this is a button that we had in the previous interface — these are basically just shapes: round circles and lines. So why not use this kind of shape description instead of a bitmap to display this button? And this applies to many other simple icons that you can have on a user interface.

The other point, also linked to pixel processing, is to prefer small-area updates. We are talking about this display with an internal GRAM. You cannot see it with your eyes when you update only this small area of the display. But let's say you have split your interface into this bar of buttons, this area with the numbers, and this other bar of buttons: if you update all of that at once while the widget is only this size, you will see the update — you will have the curtain effect I was talking about previously. If you split your interface — and that's exactly the way it is actually split — you will be able to update only this part without the user noticing that you are on an entry-level MCU. Because the user does not care about that; the user wants a nice experience, a rich user experience.
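As an illustration of the lookup-table idea, here is a sketch of decoding an 8-bit indexed bitmap to RGB565 at render time. The format and names here are mine, not TouchGFX's: the asset stores one index byte per pixel plus a palette, roughly halving flash cost versus raw 16-bit pixels for assets with few colors.

```c
#include <stdint.h>
#include <stddef.h>

/* Decode an 8-bit indexed (lookup-table) bitmap into RGB565 pixels.
 * Asset cost: 1 byte per pixel + a palette of up to 512 bytes,
 * versus 2 bytes per pixel for a raw RGB565 bitmap. */
static void l8_to_rgb565(const uint8_t *indices, size_t count,
                         const uint16_t *palette, uint16_t *out)
{
    for (size_t i = 0; i < count; ++i)
        out[i] = palette[indices[i]];
}
```

For a hypothetical 100x100 icon, the indexed form needs 10,000 bytes of indices plus at most 512 bytes of palette, instead of 20,000 bytes of raw RGB565 — the per-pixel lookup is the runtime conversion mentioned above.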
The user does not care how hard it was for you to connect the display and so on; what matters is the experience. Use small widgets. Of course, avoid movable widgets. And prefer a single-screen application. "But Nicolas, you just told me that you could do some screen transitions for a nicer interface." That's true. But if you have, let's say, 10 screens in your application, maybe only five are required. So you will save a lot of space, and you will also save some screen transitions. That doesn't mean that you cannot do a full-screen transition in a nice way on this device — you just have to use adapted screen transitions. We will see that just after.

Before that, I need to talk to you about the frame buffer strategy. This is the most common case, which I was describing for the high-end and mid-range microcontrollers. The MCU updates the frame buffer — sometimes you have a graphics accelerator, but let's not talk about that for now — with the area that needs to be updated, and then the frame buffer is sent constantly to the display so the display gets refreshed. This is the case where we have a full frame buffer. This is not our case on an entry-level MCU.

So we can think about another frame buffer strategy, called the partial frame buffer. In this case, we need a display with GRAM — that's our case — so it has an internal frame buffer. But still, we have to prepare the data that will be sent to the display; there will be some computation needed. For example, for the circle, you have to compute the pixels that will be drawn when drawing a circle. So they have to be prepared in RAM, but not in a full frame buffer. In this case we use only a few blocks of RAM — this is an example with three blocks — and each block will hold a few lines, maybe; we can configure that. And in this case, we can render the first block. Your engine has to be smart enough to know: okay, I need to update, let's say, this part of the screen.
But I have only three blocks, and none of these blocks is exactly that size. So I will split the rendering into several blocks and update the screen part by part. Using the first block, we render it and transmit it to the display over the serial peripheral interface. In the meantime, you can update the second block, which will contain the second part of the area you need to update. And in the same way, while it is transferring, you can render the next part.

So this partial frame buffer allows you, in fact, to update your display sequentially. And this is very important on an entry-level MCU, mostly because of full-screen transitions, because it allows you to hide the curtain effect. Using the partial frame buffer, you can say: when I send a full screen to my display, it will sweep like this anyway, because the SPI is slow — well, you may have a fast SPI, but sometimes I get customers that have a slow SPI, I don't know why, and so it is slow. But in this way we are able to hide the fact that the refresh always comes from top to bottom, because we are updating the display sequentially: starting from the middle of the screen, each time drawing only one or two lines, until the display is fully refreshed. That's a way to hide the misery, somehow — but in the end, it looks like this transition is on purpose. And you can of course adapt that to other situations, updating from left to right or block by block. You can customize it the way you want; the idea is to have a sequential update of this internal buffer.

The other important point is to only send the area that needs to be updated. In this example, we have the feeling that these two circles are rotating. In fact, they are not rotating at all; only the edges of each circle are being updated. This is a simulation of what is done by the rendering engine.
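The block-splitting step described above can be sketched as follows. This is a toy version with names and a block height chosen by me; a real engine (TouchGFX's partial frame buffer mode does this for you) also overlaps the work, rendering block k while block k-1 is still going out over SPI/DMA.

```c
#include <stdint.h>

#define BLOCK_LINES 8u  /* display lines per partial-buffer block (assumption) */

typedef struct {
    uint16_t y_start;   /* first display line covered by this block */
    uint16_t lines;     /* number of lines rendered into the block  */
} block_t;

/* Split a dirty region of `height` lines starting at line `y0` into
 * partial-buffer-sized chunks, to be rendered and sent one by one.
 * Returns the number of chunks produced. */
static int split_dirty_region(uint16_t y0, uint16_t height,
                              block_t *blocks, int max_blocks)
{
    int n = 0;
    while (height > 0 && n < max_blocks) {
        uint16_t chunk = (height < BLOCK_LINES) ? height : (uint16_t)BLOCK_LINES;
        blocks[n].y_start = y0;
        blocks[n].lines   = chunk;
        y0     += chunk;
        height -= chunk;
        ++n;
    }
    return n;
}
```

A 20-line dirty area with 8-line blocks, for instance, becomes three chunks of 8, 8, and 4 lines, each rendered into a small RAM block and pushed to the display's GRAM window in turn.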
And it is very clear that the blinking gray areas are the only pixels that are sent to the display every 16 milliseconds. That's done by what we call a smart rendering engine, which you would then have to implement — or maybe not, as we will see.

So, as a conclusion, we can see that advanced graphical user interfaces are not reserved for high-end microcontrollers. You can think about an existing product and just add a display — it's possible. You have to think about a few things, but it's still possible. It's also possible to switch to another microcontroller, but not necessarily the most expensive one, just because you need a user interface. There are some levers that can be used at graphical user interface design time — when you are selecting the objects that will be displayed, the transitions and the animations — and also at graphical user interface rendering time. That is, of course, the most important part, because building a graphics engine can get very complicated.

So follow these basic rules. Accept trade-offs: I was talking about design time — maybe you have a design team that says, okay, it will be beautiful with this sliding animation, and as a developer you may say, that's not impossible, but let's consider the entire cost of our product. If you want to do that, we will have to add some extra memory, maybe it will not fit, and the customer will not be ready to pay that much more just for an advanced user interface. So you can reduce and adapt your requirements with some trade-offs. And of course, the most important thing is to use the right tools. I was telling you not to reinvent the wheel: at STMicroelectronics we have a solution for that, which comes for free on any STM32 microcontroller, and it has a cutting-edge graphics engine. What you saw on the previous slide, with the gray areas, is in fact the PC simulator of the TouchGFX Designer.
When you create an interface, you create all of its logic, and you can emulate it on PC. And you have a specific function that will display exactly which part of the display — which part of the frame buffer, sorry — is updated. That's the non-visible part of the TouchGFX framework. The visible part is the Designer. So I really invite you, at the end, to come to our booth to meet me and ask any question about the Designer itself. It allows you, without any specific knowledge of design, to build an interface and test its logic directly in the tool. And it's completely open for customization. The graphics engine is, let's say, not quite a black box, but it is delivered in binary; all the rest — the widgets — are delivered in source code, so you can adapt them to your exact needs.

And it's functional on all ST evaluation kits, starting from the G0 Nucleo board with a shield that contains a display and some external flash, and up to the latest STM32U5 Discovery kit that is really dedicated to graphics applications. In our case, in this session, the most important point is that this one has 2.5 megabytes of RAM available and 4 megabytes of flash available. Maybe you would say, okay, it's more expensive than an H7, for example. I would say yes, but think about the entire system: in this case you do not have to add external RAM or external flash. It's all in the microcontroller.

So that's the end for me. If you have any questions, I invite you first to take the microphone that is just there, open to you. And if you are shy, just come to our booth in Hall 4A, stand 148, and I will be there to answer any of your questions.