Hello everybody, and welcome to today's presentation. My name is Emil Pedersen, and I'm part of the STM32 graphics team, which is responsible for TouchGFX. Today we're going to dig into some basic concepts of embedded graphics. We'll start with an introduction before moving into the generic embedded graphics topics such as hardware, color formats and the frame buffer. Then we'll look at these in relation to the TouchGFX graphics engine: we'll talk about something we call the main loop, revisit the frame buffer, discuss how to impact performance when doing embedded graphics, and lastly talk about what an operating system is useful for in embedded graphics. The goal of this presentation is to give you some general knowledge about embedded graphics (the hardware, color and the frame buffer, as mentioned in the agenda) and then to build on that basic knowledge by looking at how these concepts relate to the TouchGFX graphics engine. For further reading, you can find a lot of help and information on our documentation site, support.touchgfx.com. The slides in this presentation also have a small link in the right-hand corner pointing to the pages on our documentation site related to the slide I'm talking about. Finally, this presentation is based on the section of our documentation called Embedded Graphics Basic Concepts, which should be easy to remember, since it's the title of this presentation. My colleague Romain has previously introduced you to the TouchGFX development process, with its main components and main activities. Throughout our presentations and documentation we normally relate to one of these points, but since this is about the basic concepts of embedded graphics, we're going to touch on all the components and activities, from the hardware, the display and the board, all the way up to UI development and the TouchGFX UI application. But what is embedded graphics? Embedded graphics can be many different things: more or less any embedded device with a display showing some sort of graphics, all the way from a modern smartphone with a high resolution and fancy 3D animations, down to an old 8-bit MCU driving a segment display or a 16x2 character LCD showing just simple text. What it definitely is not is computers and tablets with dedicated GPUs, dedicated processors to run the graphics. More important for TouchGFX, it means UIs running on an STM32 microcontroller: interactive applications in 2D, or 2.5D when a little bit of depth is added, as you can see in some of our demos; in short, user interfaces running smoothly, which usually means 30 to 60 frames per second. Just to give a quick idea of what we mean when we say embedded graphics, I have taken this video of the STM32H735G Discovery kit running a demo that I actually made. We can see that we have a user interface, so we are able to interact, get information from the screen, and display graphics and images; in general, an application you can interact with, control something through, and get information out of.
Moving into the hardware part: creating an embedded graphics solution takes more than four pieces of hardware, but from our point of view there are four critical components that are closely related and necessary when doing embedded graphics. Of course there will be a lot more hardware involved in getting everything up and running, but these are the four key components as we see them: the MCU, the RAM, the flash and the display. The flash is where we store all the static data (images, text, fonts) that we're going to use throughout the application; things we can prepare before running, not things that are manipulated at runtime, so to speak. The RAM is where we have the frame buffer, which I'm going to talk more about, but generally it is where we store the image we are preparing to show on the display. The display is a bit self-explanatory: it's where the graphics are shown to the user, where we can see what our application has created and is creating. And of course the MCU is the one doing the heavy lifting, so to speak: transferring the data from flash to RAM and from RAM to the display, and doing all the manipulation and rendering throughout the process. Sometimes we can offload some of the MCU's work to the hardware accelerators that some MCUs have; for example, many of our STM32 microcontrollers that are useful for graphics have the Chrom-ART accelerator, which can help give better performance when doing embedded graphics. This is a very brief introduction to the hardware part, so if you want to find out more, I have attached a link to the hardware selection part of our documentation in this slide, where you can learn a lot more about the hardware and the interfaces that you need, or can get and use, when doing embedded graphics. Moving to the next concept, color formats: when we're creating graphics, we're creating digital images, and to create these images we break them down into small single components called pixels. These pixels combined then make up the image that we show on the display. To give a good example of what we mean when we say a pixel is a single color component, look at this picture of a display: I have drawn a red square around a small part of it, which I have zoomed into over here. We can see a lot of small squares, or dots, each with a single color, and combined these make the image that we see. We define the single dots and put them together into an image where we can't really see the edges anymore; everything looks like a smooth half circle, for example, as we see here. To define the color of these pixels we use something called an RGB value, for red, green and blue, where we give the pixel a value from 0 to 255 for each of the red, green and blue channels. Depending on how much red, green and blue we put in, we get different colors. For example, if we give everything a zero value, putting no red, green or blue in, we get black; if we max out the red, green and blue, we get white; and so forth, as we get a solid green here. If we have a little bit of red and none of the others, we get a darker red, but still not a mixed color as we have here with purple, where we have both red and some blue in it.
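To make these examples concrete, here is a minimal sketch of such RGB triplets in C++ (the struct and names are just for illustration, not a TouchGFX type):

```cpp
#include <cstdint>

// Each channel ranges from 0 (none of that color) to 255 (maximum).
struct Rgb
{
    uint8_t r, g, b;
};

// The example colors from above:
constexpr Rgb black   = {0,   0,   0  }; // no red, green or blue
constexpr Rgb white   = {255, 255, 255}; // all channels at maximum
constexpr Rgb green   = {0,   255, 0  }; // only the green channel
constexpr Rgb darkRed = {128, 0,   0  }; // some red, nothing else
constexpr Rgb purple  = {128, 0,   128}; // a mix of red and blue
```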
Another important value to attach to these numbers is the alpha value, which describes the opacity of a pixel: how transparent it is, whether we can see the things behind it. This is something we use when doing UI development. We can attach it to our RGB value, giving us RGBA, where the A stands for alpha and describes how transparent our pixel is. With an alpha of 255 the pixel is completely solid and we cannot see through it, but as this number becomes lower and closer to zero, we can see through more and more of our pixel, as you can see here: we have a gray bar behind, and the lower the alpha, the more of the gray bar is visible and the less the pixel's own color dominates, so this one reads as more red and this one as more green. Another important element when describing pixels is the concept of color depth, which is basically the amount of information we use to describe a pixel, measured in bits, and therefore also called bits per pixel. This can range from 1 to 32, where 1 means we simply have a one or a zero to describe our color, for example 0 is black and 1 is white, all the way up to 32, where we have eight bits for describing red, eight bits for green, eight bits for blue, and also eight bits for the alpha. Just to give an idea of what having a lot of bits to describe the colors means, look at these two examples: a 24 bits per pixel application, and the same application converted to 8 bits per pixel. The transition from dark green to lighter green is way smoother in the 24-bit version, simply because we can use many more shades of green; we have far more color combinations than with only 8 bits per pixel, where we have just a few bits for each of red, green and blue. To compensate for a low number of bits per pixel we can use dithering, where we add noise to the pixels to smooth out the differences between them. Here is a good example of 8 bits per pixel with dithering, which looks a bit smoother and a bit closer to the high-end image, as you can see. The cost of using more bits per pixel is memory, so we cannot always just pick 24. Sometimes, to save memory, we have to go lower, and in a lot of cases using, for example, 16 bits per pixel, with five bits for red, six for green and five for blue, can be just as good if you create the right graphics and your application is designed for 16 bits per pixel. An important point regarding TouchGFX is that we always use values from 0 to 255, and the TouchGFX graphics engine then transforms these values into, for example, a 16 bits per pixel value. So from the user's standpoint, other than taking into account that the graphics should not be too complex, developing the application is the same whether the target is 16 or 24 bits per pixel.
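As an illustration of that kind of conversion, here is a minimal sketch (assumed, not the actual TouchGFX implementation) of reducing a 24-bit RGB888 color to 16-bit RGB565 by keeping the most significant bits of each channel:

```cpp
#include <cstdint>

// Keep the top 5 bits of red, 6 of green and 5 of blue, then pack them
// into one 16-bit value: rrrrrggggggbbbbb.
uint16_t rgb888ToRgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return static_cast<uint16_t>(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}
```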
Next, moving on to the frame buffer, which I have mentioned a couple of times. The frame buffer, as I said, is where we store the image that we are going to transfer to the display and show to the user. Therefore the frame buffer contains pixel information, as mentioned, where all the different colors combine into an image. To create this image we store the pixels in a 2D memory block, arranged the way we want our picture to look. The 2D memory block is accessible like a coordinate system, or a graph, with points from (0, 0) up to the resolution, which could be something like 800 x 480; but since we start counting at (0, 0), the last point would be (799, 479). In each of these locations in our 2D memory block we store a pixel value: as we can see down here, at the (0, 0) corner of our frame buffer we can store the value (0, 0, 0) to get black. You might notice that we don't store our alpha value, and that is because the frame buffer is what we finally show on the display: there is nothing behind it, so we don't really want it to be transparent, because this is going to be one solid image. I'll describe later what the alpha value can be used for when talking about developing the UI. So the value black at (0, 0), or the value red at (1, 0), is transferred into an actual color when we move it to the display, as visualized over here with the black square and the red square; and again we have a green square at (0, 1), corresponding to the pixel value that we added over here. Talking about frame buffers, we can have three different strategies, which are going to impact performance: something we call more than one frame buffer, which for TouchGFX more or less always means two; the one frame buffer strategy; and the less than one frame buffer strategy. As I just mentioned, the selection of frame buffer strategy and the number of frame buffers is going to impact your performance, so selecting the more-than-one strategy will give you the best performance, but, as with bits per pixel, this also comes at a cost of memory. To give you an idea of how this impacts your memory, we can pretty easily calculate the frame buffer cost from what we know now: it is the width of your resolution, times the height of your resolution, times the bits per pixel (the amount of information describing the color of each pixel), times the number of frame buffers, divided by eight to convert bits to bytes. For example, an application with a resolution of 800 wide by 480 high, a color depth of 24 bits per pixel and two frame buffers is going to require about 2.3 megabytes of RAM. That can be a pretty steep cost for your application, so you need to take into account that the best-performing setup comes at a cost of memory.
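To make the formula concrete, here is a minimal sketch that computes the cost (the function name is just for illustration):

```cpp
#include <cstdint>
#include <cstdio>

// Frame buffer cost = width * height * bits-per-pixel * number-of-buffers,
// divided by 8 to convert bits to bytes.
uint32_t frameBufferBytes(uint32_t width, uint32_t height,
                          uint32_t bitsPerPixel, uint32_t numBuffers)
{
    return width * height * bitsPerPixel * numBuffers / 8;
}

int main()
{
    // The example from this slide: 800x480, 24 bpp, two frame buffers.
    printf("%u bytes\n", frameBufferBytes(800, 480, 24, 2)); // 2304000, ~2.3 MB
    return 0;
}
```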
Moving on to the graphics engine: within the concepts of embedded graphics, the TouchGFX graphics engine is what we call a retained-mode graphics engine. What that means is that a retained-mode graphics engine lets the user manipulate an abstract model, which the engine then translates into the real drawing operations that end up on the display. Thereby we help the user with a more abstract definition that eases the process of defining elements. For example, to draw a box we can define the area the box is drawn in and its color, and the engine will translate all this into what actually happens, so the user doesn't have to specify each pixel that is drawn onto the display. The components in this engine that we normally use to draw are called widgets. Widgets are what the user can interact with, modify, manipulate and set up to display the graphics they want. These widgets can be worked with both in the Designer, where we have a lot of predefined widgets and predefined things that can be done with them, and in C++ code, so we can customize them and do exactly what we want with user code. The user can even create their own widgets, if they are comfortable going that deep into the user application. By using these widgets, we enable the TouchGFX engine to handle all the drawing and to ensure that drawing is done in the best possible way, which is something I'm going to come back to a couple of times in this presentation. The way the TouchGFX engine works is that it runs in what you can call the main loop. The main loop is a loop that runs infinitely and has basically three steps: collect, update and render. The collect step is where we collect all types of events: user input, which can be interaction with the touch screen, so clicking or pressing on the touch screen, or dragging a finger across it, and also the various buttons you can have attached to your application, where for example an up and a down button let you move up and down in the application. In this step we also collect events from the back end, usually via a function called tick that we have in the TouchGFX engine. Tick is called every time we start this loop, every time we start collecting our events, and this is where we can ask the back end, or other modules attached to our graphics application, whether there is any information we want to use in the application. Then, in the update step, we take all these events and change the state of the widgets based on the information we got, interacting with the components in our user application. Finally, when we have updated all our widgets, the TouchGFX engine handles all the rendering. This is a very important part: TouchGFX runs through the rendering of these widgets and finds out which areas need to be drawn, so we don't spend too much time updating the frame buffer. For example, if half the screen is not going to show anything new, we only update the other half of the screen where something is actually going on; we don't draw things that are not necessary.
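A simplified sketch of that loop could look like this (the function names are assumed for illustration; this is not the actual TouchGFX source):

```cpp
void collectEvents()      { /* touch input, buttons, back-end data via tick() */ }
void updateWidgets()      { /* change widget state from the collected events */ }
void render()             { /* redraw only invalidated areas into the frame buffer */ }
void waitForDisplaySync() { /* block until the display signals it is ready */ }

int main()
{
    for (;;) // the main loop runs forever, once per frame
    {
        collectEvents();
        updateWidgets();
        render();
        waitForDisplaySync();
    }
}
```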
When a complete loop is done, TouchGFX will wait for a signal, usually from the display, before it starts running the loop again. This is done to ensure that we are synchronized with the display: if the display is running at a certain speed, we must not transfer two frame buffers before the display is ready to show a new one. If we were not synchronized with the display, we could start transferring a frame buffer before the display is actually ready to receive it, meaning it would not show that frame buffer on the screen. Having a fixed rate at which we transfer frames to the screen, showing the images we put in the frame buffer, also allows us to use this synchronization to define animation speeds. For example, if we have an animation going from one side of the screen to the other, we can say that this should be done in 60 frames; with an application running at 60 Hz, the animation then takes one second. To showcase how this works, I have made a little example, as we see here, where we press a button and the TouchGFX logo is displayed. To walk you through how this relates to the main loop in TouchGFX, I will drag the simulator of this little example into the presentation so we can see how the steps relate. As explained, we have a PC simulator that enables us to run our UI application on the computer, for quick prototyping, to see whether our elements are placed correctly and act the way we expect. Of course this is not going to be the same as on the hardware, but it can still give you a good idea of what your application looks like. Instead of a finger on a touch screen, I have the mouse, which simulates my finger, and the first thing I'm going to do is click on this button. As we can see, if we go through the loop, TouchGFX has collected a click event, a press event, on this button, and it informs the button widget that it is being pressed. The button widget then knows it should display an image showing that the button is pressed, as happens here. When I click, the widget is updated to show the pressed image, this is noted in the rendering process, and it is rendered into the frame buffer. Notice that this is only the press event: I'm still holding the mouse button down, still pressing this button. When I release, the button is informed that it is no longer being pressed, and it shows the image of a released button, as you can see. I'm also using this button interaction to inform another part of my application that it should show the TouchGFX logo when this button is released. So first we are told to render the area around the button, and when it is released, the image widget displaying the TouchGFX logo is made visible, which results in it being updated and rendered into the frame buffer, showing the TouchGFX logo on the screen. It's only in relevant cases that these inputs are actually used for something: for example, if I press another place on our application here, nothing happens, because nothing in our application is connected to that point of the touch screen. So of course, all this collected input is only directed to the widgets for which it is relevant.
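As a rough illustration of that delivery, here is a hedged sketch (assumed names and types, not the actual TouchGFX API) of a button widget reacting to a click event:

```cpp
struct ClickEvent
{
    int x, y;     // touch coordinates
    bool pressed; // true on press, false on release
};

class ButtonWidget
{
public:
    void handleClickEvent(const ClickEvent& evt)
    {
        pressed = evt.pressed; // switch between the pressed and released bitmap
        invalidate();          // mark only this widget's area for re-rendering
    }

private:
    void invalidate() { /* request a redraw of this widget's rectangle */ }
    bool pressed = false;
};
```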
Now, the frame buffer and the main loop. It is essential to understand how the main loop and the frame buffer work together: as I said, the main loop is where we do all the collection of events, update our graphics and render, and when we render, it all goes into the frame buffer. When something has been rendered into the frame buffer, then the next time we get a signal, as described in the main loop, the frame buffer we have prepared is transferred to the screen. We are going to start by looking at a setup with two frame buffers, because this is the most optimal setup you can have for your application. We can see on this diagram that we can transfer a frame buffer at the same time as our main loop does all the rendering, collecting and updating of our application. So here we see how the main loop collects, updates and renders while we transfer one frame buffer: because we have two frame buffers, we can do all this in the other frame buffer. In this situation we are transferring frame buffer one, which means we can render into frame buffer two; when we get a new signal, frame buffer two is ready to be transferred to the display, and we can then do all our collecting, updating and rendering into frame buffer one. This happens in a lot of applications, most commonly at 60 Hz. Since this is done so often, there are situations where the application is just standing still and doing nothing, because nothing is changing; and if nothing is collected, nothing is updated and nothing is rendered, we just retransmit the frame buffer. So here we can see that we are doing our loop and transferring frame buffer one while rendering into frame buffer two, but then, since nothing is going on in our application, we do nothing and retransmit frame buffer two, while this time we have detected some changes in our application and render and prepare frame buffer one. Now, if you have some complex graphics, for example where we have to calculate changes in the image at runtime, there are a lot of ways your graphics can become too complicated (a subject I'm going to talk about later on). Then the rendering is going to be so slow that within the time limit our display gives us, we are not done with the rendering, meaning we haven't fully prepared the next frame buffer. This is resolved with a retransmission of the old frame buffer: since we have two frame buffers, the one we are transferring to the screen is untouched, and we can transfer it again. So as we see here, the rendering phase is way too long, and we are not done when we get to the point where the signal asks for a new frame buffer, a new image, and therefore we just retransmit frame buffer one. This will in some cases make your application go slower, for example an animation will move more slowly, and it can have an impact on performance; in some cases it can even go undetected if it's not happening all the time.
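Here is a minimal sketch (assumed names, heavily simplified) of this double-buffer flow, swapping roles on the display signal:

```cpp
#include <cstdint>

static uint16_t bufferA[800 * 480], bufferB[800 * 480]; // two RGB565 frame buffers
static uint16_t* displayed = bufferA; // currently being transferred to the display
static uint16_t* drawing   = bufferB; // the main loop renders into this one

// Collect, update and render into 'target'; returns false if nothing changed.
bool renderFrame(uint16_t* target) { /* ... */ return true; }

void onDisplaySignal() // the display is ready to accept the next frame
{
    if (renderFrame(drawing))
    {
        uint16_t* tmp = displayed; // swap roles: the freshly rendered buffer
        displayed = drawing;       // is transferred next, while the previously
        drawing = tmp;             // displayed one becomes the new draw target
    }
    // else: nothing was rendered, so 'displayed' is simply retransmitted.
}
```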
If we look at the one frame buffer strategy, we are now not able to draw into a frame buffer that we are not displaying, meaning that the frame buffer we are transmitting to the display is the same frame buffer we are rendering into. So as we see here, we are transferring frame buffer one while doing the collect-update-render loop, but then we have to transfer frame buffer one again; we are not able to have a backup frame buffer that always contains the image that is on the screen. Because we are transferring the same frame buffer we are drawing into, this creates a risk of transferring part of an old frame: as we see here, again we are not done rendering before the frame buffer is transferred again, meaning that maybe only half of our frame buffer has been updated and the rest still contains the old image that we were showing on the screen. This can create an effect called tearing, which we can see in the image down here. We have transferred the top of our application: if we start transferring from (0, 0) and also render from (0, 0), the top of our application shows the new frame, the new image we want on the screen, but since we are not done, the bottom of our application is still stuck in the old frame, the old image we sent to the display. In this UI, taken from the Designer, the box is moving from side to side; the top part of the image has moved a bit to the right, but the old part is still stuck in the previous position, and we get this effect of the image slowly being torn apart, which is why it is called tearing. There are different solutions to ensure this does not happen, different algorithms you can use: of course, you want to draw only when a transfer is done and you are sure you can draw everything before the next transfer starts, or draw only into the part of the frame buffer that has already been sent. So there are different ways, different elements, that can help us ensure tearing is not going to happen, but it still creates a higher risk, because we cannot just retransmit an old frame; we only have the one frame buffer that we are working in as the thing to send. As you can see, this does have some performance impacts. So in general, what is good performance, and how do we ensure we get the performance that is desired? To define what good performance is: as we see it in TouchGFX, good performance is basically when we get the desired graphics, with the animations moving as we want, at a high rate where we don't experience lag or it being kind of janky; basically getting the application with the graphics and animations that we expect and desire on the screen.
Usually, when working with embedded graphics, as I've mentioned a couple of times, we have a frame rate of 60 Hz, meaning that the loop we are doing has 16.67 milliseconds (1000 ms / 60) to complete. As explained earlier, if we are not fast enough, if we cannot finish the main loop within those 16.67 milliseconds, we will in some cases retransmit the old frame buffer, or at least the old image, which lowers the frame rate. How often we do this determines the impact on the overall frame rate: if we, for example, always skip one frame, meaning we are always one frame too slow, we halve the frame rate, and the frames per second in our application will be at 30. I have taken a small video of the same application running on two different boards, to give you an idea of the impact and of what reads as good and not-so-good performance. Here we have two different boards, our older STM32F746 and the newer STM32H735, running the same application, which is easy to do with TouchGFX since these two have the same resolution. This is a menu launcher that I created for our newer out-of-the-box demos: we have these spheres moving around in a sort of 3D perspective, where we can select different sub-demos within the out-of-the-box demo. This is a quite heavy application with a lot of elements in it, and it can be quite heavy to run. As we can see here, the older board runs it quite a bit slower and is much more choppy in its movement, which will be perceived as not-so-good performance; you can see how it kind of moves in steps compared to the one on the right, where we have the same application on our newer and faster board, and everything moves way more smoothly. This could be running a little slower than 60 Hz, because it is still quite heavy, but since it is moving smoothly, we are not really that concerned about not hitting 60 Hz: it still gives a good perception, and you still perceive it as good performance. Of course, it is important to remember that you don't necessarily need a high-performance MCU to run a good graphics application. It depends on the application you are designing and the graphics you want, so you are still able to run beautiful graphics on a lower-performance MCU; you just need to design your application in a way that is suitable for the hardware you run it on. Therefore, it is important to discuss what actually has an effect on the rendering time, and thereby on the performance, of your application. The first thing is the amount of the screen that you are updating, and thereby also the resolution of your application: if you have a low resolution on your display, you are not updating as large an area as with a higher resolution. But also, no matter the resolution, the larger the area on the screen that you update, the more pixels we need to render, the more information we need to process, and the longer it takes, which has an effect on the rendering time.
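A hedged sketch of what this means in practice (the names are assumed, in the spirit of how a graphics engine invalidates areas): updating only the small rectangle that changed means far fewer pixels to render each frame.

```cpp
struct Rect { int x, y, width, height; };

void invalidateRect(const Rect& r) { /* ask the engine to redraw only this area */ }

void onTextChanged()
{
    // Redrawing a 200x50 text area costs 10,000 pixels, against the
    // 800x480 = 384,000 pixels of a full-screen update: roughly 38x less work.
    invalidateRect({300, 215, 200, 50});
}
```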
Another element is the layering in the graphics that you are using. Coming back to the concept of widgets: if we have a lot of widgets on top of each other, every time we have to draw a widget, we draw it into the frame buffer, and if you then have another widget on top of that, we take the image already in the frame buffer and draw the new widget on top of all the information in the frame buffer. For example, if you have three widgets, as we see here, a background image, a frame and a text: every time we change something in the text, since it is close to our frame, it will have some impact on the frame as well (the image of the frame), and also on the background, because that is of course behind, and what we see behind the text. In this process we would first draw the background into the frame buffer, then take the frame and draw that into the frame buffer, and then, to know what we are drawing our text onto, use the image in the frame buffer, which is of the background and the frame, and draw the text onto that image in the frame buffer. That is three drawing operations. And as I just said, we are drawing the text onto the image of the frame and the background, so why not just combine the frame and the background into one image? Have a single PNG file, for example, that we can prepare before we run our application, with the background and the frame on it. Of course, in some cases you want to move the frame and the text around, and then we cannot combine them into the background image; but if you are able to, we draw the background with the frame as one operation into the frame buffer, and the text on top of that, thereby skipping a drawing operation. And coming back to the amount of screen we are updating: if, when changing the text, you update the whole area with the frame, that is a larger area than just updating the text, and would require more computation when drawing. It could seem like a good idea, because that is where the text is placed, but just updating the text is a way to enhance performance without any impact on the way our application will look in the end. So there are some small tricks, and there is definitely a lot more to read if you go into the UI development section, which is not going to be covered in this basic concepts section; but this should give you a good idea that there are things we can do to gain performance without even impacting how the application will look and feel in the end. Another very important thing to take into consideration regarding the rendering time is the widgets we choose in our application, because some widgets require a lot more computation, a lot more rendering, than other widgets, and those widgets will of course have a bigger impact on our performance. As you see here, we have four different widgets: a box widget, an image, something called a texture mapper, and a circle, and I will try to describe in just a second why the complexity of a widget can have an impact on the rendering time. But another important element to take into account is the transparency a widget has, because if it has some level of transparency, we need to calculate the color that is shown in our application by combining the background, or whatever is in the frame buffer, with the widget that has transparency. TouchGFX needs to determine at runtime how this widget is going to look, because we need to calculate how much of the green is going to be shown and how much of the gray background is going to be visible through our widget.
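That per-pixel calculation is essentially the standard alpha blend; a minimal sketch:

```cpp
#include <cstdint>

// Blend one color channel of a semi-transparent widget (fg) over the
// background already in the frame buffer (bg). alpha = 255 shows only the
// widget's color, alpha = 0 only the background.
uint8_t blendChannel(uint8_t fg, uint8_t bg, uint8_t alpha)
{
    return static_cast<uint8_t>((fg * alpha + bg * (255 - alpha)) / 255);
}
```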
Of course, this also requires some calculation during rendering, which will have an impact on our rendering time and thereby our performance. A good way to achieve better performance is to utilize your hardware, and this is not only by getting an MCU with a high processing speed, but also by using some of the hardware support for rendering that a lot of MCUs come with. A really good example is that most of the STM32 microcontrollers that we use for graphics, and that the graphics team has prepared application templates for on the Discovery kits, come with the Chrom-ART accelerator, which helps the MCU by offloading some of the processing when doing graphics, helping transfer the images onto the screen and so on, so the MCU can for example use its computing power on something else at the same time as Chrom-ART handles some of these rendering and graphics-related operations at runtime. So yes, as I said, I will come back to discussing how different widgets can have a different impact on rendering. As we can see here, I have prepared a small application using different widgets, which I am able to show you in the TouchGFX simulator, so I will just quickly drag that into view. Again we have the four types of widgets: a box, an image, a texture mapper and a circle. The box is a very simple widget to draw and doesn't have that much of an impact on performance, because the box just consists of pixels in a rectangle set to a certain color. Basically what we are doing, in a simplified way of course, is taking the color that we have defined for the box and placing it within a certain area of the frame buffer. So this is a quick operation, where for example we take this blue color and tell which coordinates in our frame buffer to fill with blue, and around it which coordinates to fill with yellow. The image widget performs quite similarly to the box, but instead of just taking a defined color, we take the information about our image, which is stored in flash, and put it into the frame buffer. Reading that information, we know where to draw these blue colors, and then we just need to define where to put it: the widget simply takes the information about the image from flash and puts it in the area we have defined, which is also just a quick get-the-information-and-put-it-in-the-frame-buffer operation; not much else has to be done. As we saw in the pixel slide, the thing about these pixels, especially when we are trying to do sharp edges, not the ones aligned with how the pixels are laid out, but the diagonal ones here on the sides, is that the image can carry some alpha to ensure this X doesn't show hard, jagged edges, as it would if we just had the blue color all the way to the edge of the image; so we have a bit of alpha along the edges to create these smooth edges, rather than something where you can see the change from yellow to blue in the pixels.
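A simplified sketch of why the box and the image widgets are cheap: both are essentially plain memory writes, with no per-pixel math (names and frame buffer layout are assumed):

```cpp
#include <cstdint>
#include <cstring>

// Box widget: fill a rectangle of the frame buffer with one predefined color.
void drawBox(uint16_t* fb, int fbWidth, int x, int y, int w, int h, uint16_t color)
{
    for (int row = y; row < y + h; ++row)
        for (int col = x; col < x + w; ++col)
            fb[row * fbWidth + col] = color;
}

// Image widget: copy rows of pre-rendered pixels (stored in flash) into place.
void drawImage(uint16_t* fb, int fbWidth, int x, int y, int w, int h,
               const uint16_t* pixels)
{
    for (int row = 0; row < h; ++row)
        memcpy(&fb[(y + row) * fbWidth + x], &pixels[row * w],
               w * sizeof(uint16_t));
}
```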
But it is minimal blending that we are doing on the sides, and again, all the information about the transparency and so forth is stored in the image in flash, which we just fetch and put into the frame buffer; so we have to do a quick blend, but we are not calculating much more than the blending at the edges of this X. Moving on to a widget called the texture mapper: the texture mapper is again a widget that takes an image, but instead of just moving the image around as it looked in flash, the predefined image from before we run our application, the texture mapper is able to do some manipulation of the image. For example, here we are rotating the image; we can also zoom, or scale the image up or down, thereby manipulating the image at runtime. So it is not predefined how this image should look, and therefore we need to calculate how the image should look when we rotate it a bit, for example. This calculation requires a lot more from the MCU than just copying, and therefore the rendering time for drawing this widget, for calculating how it should look, where the blue pixels should be put and where it should be completely transparent, requires more of our performance and takes longer in the rendering process than just drawing an image, where we are more or less simply copying information into the frame buffer (with some alpha blending, as I said). The circle widget, which again is just one color, could be perceived as being as simple as the box, but as we can see here, this circle is rotating at runtime: we haven't predefined how it should look, and we can also enlarge the circle, change the width of the arc, or make it completely solid. So we need to calculate again how to do these edges, how much alpha blending there should be (as with the edges of an image), what the color of these pixels should be, and all that. It is quite similar to the texture mapper, where we need to calculate which color the different pixels get, and this calculation process again takes more time, so it has a bigger toll on the rendering. Therefore, using widgets where we need to do some calculation at runtime can have a bigger impact on our performance. Of course, we could also just have a series of images, like in a GIF, to rotate our X, but that would require a lot more images and therefore have a cost in our flash: we would require a bigger flash than with the texture mapper, but then maybe achieve better performance. Those trade-offs have to be taken into account when we are doing the development of the application.
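To illustrate why runtime rotation is so much more expensive than copying, here is a simplified sketch (assumed, not the TouchGFX texture mapper itself): every destination pixel needs trigonometry and a sampling step instead of a plain copy.

```cpp
#include <cmath>
#include <cstdint>

// Look up (and possibly filter) a pixel of the source image; stubbed here.
uint16_t sampleSource(float srcX, float srcY) { return 0; }

// Rotate an image around its center by inverse-mapping each destination
// pixel back into the source image.
void drawRotated(uint16_t* fb, int fbWidth, int w, int h, float angle)
{
    const float c = cosf(angle), s = sinf(angle);
    const float cx = w / 2.0f, cy = h / 2.0f;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            float srcX = c * (x - cx) + s * (y - cy) + cx;
            float srcY = -s * (x - cx) + c * (y - cy) + cy;
            fb[y * fbWidth + x] = sampleSource(srcX, srcY);
        }
}
```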
Let's talk a bit about why an operating system can be useful when doing embedded graphics, because embedded devices usually handle more than just a UI. For example, a weather application could have a temperature sensor measuring the indoor temperature, and also a connection to a web API from which you get the weather information that you will display on the screen. So we have several different things going on in our embedded device, which can make it quite important to have the option of interleaving tasks. This ensures that the UI is neither blocked by a process nor blocking one. For example, getting the weather information from the web API may take some time, and if we just waited for that information, the UI could be unresponsive; we wouldn't even be able to show an animation indicating that we are downloading information from the internet. Therefore it is important to switch between our UI task and the Wi-Fi task getting this information, so that it doesn't have any impact on the performance of our application. In some cases we are also doing tasks more crucial than showing a good UI, and we don't want a situation where those tasks are waiting for the UI, or where the UI is blocking tasks that are more important than it. Using a real-time operating system enables us to prioritize these tasks and ensure that we have the best-performing UI, but not at the cost of something more important in our embedded application. Also, going back to the weather application: we want the information from the temperature sensor and the downloaded weather data to be shown in our UI, and using a real-time operating system we can communicate between these tasks by utilizing the RTOS message queues, or whatever messaging setup the real-time operating system offers. In the TouchGFX application templates we have prepared TouchGFX to run with FreeRTOS, and good examples of how this can be used can be found in the Designer; you can also go to our documentation for more information about how to set up FreeRTOS. Of course you can run with other real-time operating systems, and you can also find information about how to do that on our documentation page. Some applications are small, or have such low complexity, that we don't need to add the extra complexity of an operating system, and therefore you are also able to run embedded graphics, and more importantly TouchGFX, on a setup without a real-time operating system; you can find information about how to set this up in our documentation as well. So, if you want more information about how to get started with TouchGFX development, you can go to our documentation, and here are a couple of links where it would be natural to continue. If you want to dig into the hardware part and how to prepare your embedded application hardware-wise, go to our hardware selection introduction, where you can also find more about what we at STMicroelectronics can offer you regarding the different MCUs, interfaces and so forth for embedded applications. If you want to get started with the UI development part, go to our UI development introduction; we also have a presentation and a workshop that you can watch for more information about how to do this. I want to say thank you for following this presentation. I hope you found it useful and that it helps you get started with your embedded application development.