Hi everybody, and welcome to the UI Development Fundamentals presentation. My name is Romain Dielman, and my name is Emil Pedersen. We are part of the STM32 graphics team, which is responsible for TouchGFX, and it is going to be us taking you through today's presentation.

The agenda for today is that we start with a short introduction before going into the software architecture behind UI applications with TouchGFX. Then we will talk a bit about the Designer in three sections: working with the TouchGFX Designer, the Designer user guide, and the UI components and how they are used in the Designer. Finally, we will talk about how to work with user code when doing TouchGFX.

With this presentation, we hope you can get some fundamental knowledge about UI development with TouchGFX: the basic software architecture, getting started with the Designer and its UI components, and also discovering some of the engine features and how to work with them in user code. If you want to read more, you can of course always go to our documentation at support.touchgfx.com. All the slides in this presentation have a link in the bottom right corner where you can access the documentation relevant to the slide we are looking at. You can also go to the UI development introduction and the UI development section, which is generally the documentation section that all our slides and this presentation are based on. Within the whole TouchGFX development concept, we will be focusing on the TouchGFX UI application component and the UI development activity, since this is a UI development fundamentals presentation.

As mentioned in the agenda and the introduction, we will start out by talking about the software architecture, so you can get some knowledge about how to set up your UI application and how to interact with the different elements when doing a TouchGFX application. The design pattern that we work with when doing TouchGFX applications is
the Model-View-Presenter pattern, or MVP as we call it. The MVP consists of three elements: the View, the Presenter, and the Model. To quickly give you an overview of what these three elements do: the View is where we do everything directly related to the UI, the Presenter is where we handle all the business logic behind the UI, and the Model is where we handle the data that we use in our application.

The benefit of this is that we are able to better separate the different elements in our code and decouple them. This enables better unit testing, so we can test the Model, the Presenter, and the View separately, and it also makes it easier to change elements in our application when needed. For example, if we want to update the UI with some new graphics, we can change the View, or maybe a bit of the Presenter as well, without having to change everything in our application. Or if we update the backend or change how we collect data, it is only the Model, or maybe a bit of the Presenter, that we need to update.

Digging a bit more into these three elements of the MVP, let us start with the View. The View is where we create the UI elements, adding the widgets that make up our UI application, and it is also where we manipulate those widgets, those UI elements. For example, it is where we start something animating across the screen, or handle the change of a button when it is pressed; all of that is done in the View. Design-pattern-wise, the View is where we receive our user input.
Of course, from a purely technical standpoint, the inputs are received in the hardware abstraction layer when the touch screen is pressed, but when we design our application, we get these inputs in the View. So for example, if we press a button, the View is where we are informed that the button is being pressed, and then we can forward this information to the Presenter, or for example start some animation in the View. As I just said, the View is also where information that is received or changed in the View is sent on to the Presenter, enabling the Presenter to use this data to do its job. The View will also be updated based on data that it receives from the Presenter.

Moving on to the Presenter: as I just talked about, the Presenter receives information from and sends information to the View, but the Presenter will also receive and send information to and from the Model. The Presenter therefore works as a mediator between the Model and the View. It sometimes receives data from the Model that we want to show in our UI, but in some cases it has to translate that data into something that is relevant for the View to show. The View and the Presenter also work together, just those two, by handling all the business logic that we want in our View inside the Presenter. So if there are calculations or other processing related to the UI, we do that in the Presenter, and not in the View, where we only handle the updating and changing of our graphics. The Presenter therefore does not have any real knowledge about the UI implementation; it just knows about the data that we want to show in our View.

Finally the Model, the third element in our MVP pattern, is, as mentioned a couple of times, where we store the data in a UI application. But take a look at our diagram here, which shows how the MVP pattern works and how the different elements communicate.
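To make this division of responsibilities concrete, here is a minimal, framework-free C++ sketch of the MVP idea. Note that the class and method names here are my own illustrations, not the generated TouchGFX API:

```cpp
#include <cassert>
#include <string>

// Model: owns the application data, knows nothing about the UI.
class Model {
public:
    int temperatureC() const { return temperatureC_; }
    void setTemperatureC(int t) { temperatureC_ = t; }
private:
    int temperatureC_ = 20;
};

// View interface: the Presenter only sees this abstraction,
// never the concrete widgets, so it stays UI-agnostic.
class ViewInterface {
public:
    virtual ~ViewInterface() = default;
    virtual void showTemperature(const std::string& text) = 0;
};

// Presenter: mediates between Model and View and holds the business
// logic (here: formatting the raw integer for display).
class Presenter {
public:
    Presenter(Model& model, ViewInterface& view) : model_(model), view_(view) {}
    void refreshRequested() {  // called by the View on user input
        view_.showTemperature(std::to_string(model_.temperatureC()) + " C");
    }
private:
    Model& model_;
    ViewInterface& view_;
};

// Concrete View: only updates "widgets" (a plain string stands in for one).
class View : public ViewInterface {
public:
    void showTemperature(const std::string& text) override { label = text; }
    std::string label;
};
```

Because the Presenter talks to the View only through `ViewInterface`, it can be unit tested with a fake View, which is exactly the decoupling benefit described above.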
We have added a fourth element here, called the backend model. That is because in a lot of cases you are not only handling the UI on the microcontroller you are running your TouchGFX application on; you are also doing other things, and it is in the Model that we communicate with the rest of the system running on your microcontroller. As a quick example, if you have a temperature sensor whose value you want to show in the View, it is only the Model that communicates with the temperature sensor about the value you want to show. Therefore, if you change the temperature sensor, or actually even the microcontroller you are running on, it is only the Model you need to modify to handle the communication with the backend where the temperature sensor or the new microcontroller has new settings. This is of course done by passing the data on to the Presenter. So when we get a new temperature measurement, we can send it to the Presenter, which can convert it into Celsius, Fahrenheit, Kelvin, whatever we need. Even if the UI application can show all three values, we select which value to convert the data to in the Presenter, thereby handling the business logic of what we show in the View.

To help with the understanding of this MVP concept, I have created a small demo to show how the MVP pattern can be used in an application, as you can see here.
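The Celsius/Fahrenheit/Kelvin conversion mentioned above is exactly the kind of logic that belongs in the Presenter. A small sketch of what that could look like (names and structure are illustrative, not taken from the demo):

```cpp
#include <cassert>

// Presenter-side business logic: convert the raw sensor reading
// (assumed to arrive in Celsius) into whichever unit the View
// is currently set to display.
enum class Unit { Celsius, Fahrenheit, Kelvin };

double convertFromCelsius(double celsius, Unit unit) {
    switch (unit) {
    case Unit::Fahrenheit: return celsius * 9.0 / 5.0 + 32.0;
    case Unit::Kelvin:     return celsius + 273.15;
    default:               return celsius;  // already in Celsius
    }
}
```

The View never sees this arithmetic; it only receives the final value to draw, so switching the displayed unit requires no change to the View at all.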
There is a small GIF of it here, showing an application with a clock and a button you can update the clock from, and also a command prompt where we show some of the data being used. This uses a function within TouchGFX, available when building for the simulator, that can print information throughout the application when you are running it on your PC. I will quickly drag the application in, so we can discuss how it works while interacting with it.

So, here we have the simulator and the command prompt. The concept of this demo is that when we press the Update Clock button, we go, as just mentioned, through the Presenter to the Model and request the current time. Since we are running on a PC, I am using the time.h functionality to get a time string, and this string I send to the Presenter, which then extracts the information that we need to update our clock widgets: an analog clock widget and a digital clock widget, which we have here.
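The Presenter's job of splitting that time string into the integers the clock widgets need could look something like this. This is a sketch under the assumption of an "HH:MM:SS" string; the demo's actual code may differ:

```cpp
#include <cassert>
#include <cstdio>

// Split an "HH:MM:SS" time string into the three integers the analog
// and digital clock widgets need. Returns false if the string does
// not match the expected format.
bool parseTime(const char* text, int& hours, int& minutes, int& seconds) {
    return std::sscanf(text, "%d:%d:%d", &hours, &minutes, &seconds) == 3;
}
```

Keeping this parsing in the Presenter means the Model can hand over the raw string unchanged, and the View only ever receives ready-to-display integers.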
So when I press Update Clock, we can see that the View requests a time update from the Presenter, which then requests the time string it needs to extract the information from. The Model gets the time string via the built-in function, and the Presenter then takes this string, divides it into hours, minutes, and seconds, and finally updates the UI with the new time, as integers in this case. If you press Update Clock again, the same process happens. This is basically how the MVP concept works: in the UI we detect an interaction (Update Clock in this case), then through the mediator, the Presenter, we request something from the Model or tell it to do something (in this case, get a new time string). The Presenter then does the conversion, handling the business part of extracting the information, and finally we update the UI with the information that the Presenter has converted the string into.

Another architectural concept that we work with when doing TouchGFX applications is the screen concept. The screen concept means that we have a logical grouping of the UI and the UI logic, giving the application a number of screens, each consisting of a View class and a Presenter class: a View and a Presenter as we know them from the MVP pattern. As we can see here, an application can consist of a number of screens, for example Screen 1, Screen 2, and Screen 3, each with its own View and Presenter. What they share is the Model that we know from the MVP pattern, but only one screen communicates with the Model at a time, as also illustrated here. This is because only one screen is active at a time when an application is running. Therefore, we also need to store our data in the Model, as mentioned in the MVP section, if we want to use that data again or share it with another screen. One of the reasons why it is beneficial to have this screen concept is that we are able to maintain our
application more easily, because we do not have one big element for the UI; we can divide it into small chunks, which can be maintained individually, because the screens do not directly know anything about each other, only through the Model. Therefore, it is also easy to change or remove screens from your project, again because screens are not directly related to each other, only via the information they share through the Model. Another important benefit of the screen concept is that it minimizes the amount of RAM, and thereby the cost of the RAM we need in our application. Since we only have one active screen at a time, we only need to allocate the amount of memory required by the largest screen in our application.

To help with the understanding of the screen concept, I have again created a small example application, to show how the screen concept can be used in a TouchGFX application. Again, we see a small video of the application here, but I will quickly drag in the application I made, so I can explain how it works and how it relates to the screen concept while interacting with it.

As we see here, we now have an application quite similar to the one from the MVP example, with our analog clock, digital clock, and Update Clock button, but we also have this cog wheel here, which is a button that takes us to a settings screen where we can choose between 12-hour mode and 24-hour mode for our digital clock. If we press Update Clock, we again request the time in the Model, and the Model sends the string to the Presenter, which converts it into the data that we see on our screen. But the clock will also ask for 24-hour or 12-hour mode, depending on what we have selected. So now we are going to another
screen. So the clock was its own screen, and now the settings screen is the active one. Therefore, all the information about how the clock looked, since it was not something that was there when we started the demo, is actually forgotten. If we then select the 12-hour clock mode, we can see that we send the selected mode to the Presenter, and the Presenter sends it to the Model. If you remember what I said in the MVP section: even though it is the backend, the hardware abstraction layer, that handles the details of how I interact with the touch screen, it is through the View that the information about me interacting with the application is sent.

Now when we go back, we actually already have an example here of what happens when we do not share data through the Model. When we left the screen, the clock time we had updated was forgotten, because it was not something we initialized in the UI beforehand, and since we did not save it, we go back to the clock as it was when we started the UI and set up all our UI elements. But if we press Update Clock now, because we did save the clock format, we get the updated time in our two clocks, and we can see that the format has been changed to 12-hour mode. And if some of you noticed, when I entered this screen, we actually also requested the clock mode again, because we want to make sure the correct setting is shown in our radio buttons. So when initializing the data, we ask the Model which clock mode we are in, so we can tell the button which clock mode has been selected: the View in the settings screen asks the Presenter for the clock format, and all the way back, the Model sends this to the View.

The third concept that we work with when doing UI development
for TouchGFX is the concept of generated code versus user code. As we can see here, we have three different layers of code: the code in the engine, the code that is generated, and the code created by the user. The engine code is the code in the library that we use; it is the basis for TouchGFX applications, but it is not something we can touch when creating our applications. So this is vital code, but it is not something we can change. The generated code and the user code are what we create when doing our application; they are the code added to the application throughout its creation process.

The generated code is code generated by the TouchGFX Designer. When you add widgets and components to the screens of your application in the TouchGFX Designer, the C++ code that we finally compile and run in our application is generated by the Designer. This code is regenerated every time you press the Generate Code option, run the simulator, run on target, and so on. The code in this section is therefore read-only, because it is reserved for the code generated by TouchGFX.

The user code, on the other hand, is the section of the code designed for users to put in their own handwritten code, to utilize what has been generated by TouchGFX, either to create custom functions or to build their
application and interact with TouchGFX through C++ code. If we look here, we can see that we have the view base class, which is where the code generated by the Designer is put; the generated classes are in general named something with Base, as we also see with the FrontendApplicationBase and FrontendHeapBase classes. Again, these classes are read-only and should not be modified; at the very least, they will be overwritten every time you generate code from the TouchGFX Designer. The user code files, however, are only generated once, when you create a new screen for example. These places are designed for you to put in your own code, and they are in fact children of the base classes, and are therefore able to interact with the code, with the things you have created in the Designer. But as we see here, we only do graphical elements in the Designer; so if we go back to the MVP concept, it is in the user code that we define our Presenter, and therefore it is not generated by itself. You can read more about FrontendApplication and FrontendHeap in our documentation, because these are classes you will deal with if you are doing more advanced work with TouchGFX; roughly speaking, they help you define memory and change between screens.

To summarize: the user code section is designed for the user to add their own handwritten code, and therefore the files within this section are only generated once, when you create a new screen for your project. As we can see, all the generated code files have Base at the end of their names, which marks the read-only files where the code generated by the Designer is put. Directly related to these are files with the same names but without Base at the end, and these are the files where the user can put in their code. Because of this relation, we are able to interact with the elements added in the generated code. So if you
have added a box, we can actually manipulate it from user code as well, because of this relation. As we remember from the MVP concept, the Presenter is where we handle the business logic, and that is also why we do not have generated classes or files for the Presenter: only the UI elements that we add in the Designer are generated, and the Presenter is instead directly related to the Presenter class in our engine.

To get an understanding of how all this generated code versus user code works, I will walk you through a couple of screenshots to give you an idea of the process. If we start here in the Designer, we create a circle with the Circle widget, as we can see here, and set up some properties for it in the Designer. What the properties do is not that important at the moment; it is more about the process of how the Designer creates this. Here we can see the properties that you can set in the Designer, and the circle on the canvas of the screen we have created. Now, the Designer generates a lot of code for you, so you do not have to write it yourself, as we can see here, and it relates directly to the properties you have set within the Designer. Finally, we have a user code section where you can do your own manipulation of the code; this example is actually rotating the circle, so we write some code here that rotates it. Just to recap from the start: we add this circle to the canvas of our screen and give it some properties, which will be generated by the Designer into a read-only file (as it says, you should not modify this). Then, finally, we add some extra capabilities to the widget with some custom, handwritten user code, and we end up with an application with a spinning circle.

The TouchGFX Designer is the main tool used in the UI development activity. With this tool, users set up, design, and create the look of the project. The TouchGFX Designer supplies multiple widgets
like buttons, images, or graphs. A widget is an abstract definition of something that can be drawn on the screen and interacted with. Users can add widgets to their UI and customize them with the supplied properties. The order in which they are added determines the order in which they are displayed. The TouchGFX-generated code for the widgets can be used as a source of inspiration, or as a base for users to create custom widgets.

Containers are components containing child nodes, which can be widgets or other containers. In the TouchGFX Designer, the children are added to a container by dragging them within the container in the tree view. Containers can be used for many things, for example moving multiple elements at a time, since the positions of the child elements are relative to the parent container, as you can see in the GIF. Elements within a container will be displayed if their coordinates fall within its display window.

The tool includes a simulator to replicate the running UI prototype, instead of flashing the project on target every time. This can also be used for projects without hardware configured yet. There are two ways to run the simulator: using the TouchGFX Designer, or using the TouchGFX environment. For testing, it is possible to write user code that will only run when using the simulator.

TouchGFX projects are compiled using GCC or Visual Studio, or by using the toolchain of your choice. Be aware that the initial active toolchain is STM32CubeIDE; another toolchain can be chosen within STM32CubeMX. Building a project generates a binary file, which is flashed on target using STM32CubeProgrammer, or ST-Link for older projects. GCC is used by default to run a project on target from the TouchGFX Designer. The generate and run commands can be overridden by users.

A TouchGFX project is a C++ project, meaning it can be debugged as any regular C++ project. Debugging by running on target is useful for performance testing, such as checking the animation speed, the update frequency, and the responsiveness of the UI
elements. But debugging with the simulator is much faster and more efficient for testing the look and logic of the UI. You can also debug using the debug printer option, which prints information on the display.

The TouchGFX Designer offers multiple UI templates which showcase the functionalities of widgets and the options of the Designer. The code for those examples and demos can be used as a source of inspiration to get started with a custom project. Some ST evaluation kits also have dedicated demos, called online applications, which explore advanced concepts and features.

The startup window is the landing page when opening the TouchGFX Designer. This is where new projects can be configured and created. Users can start projects from scratch, or select an application template for an STM32-based platform, or UI examples or demos. Those can be useful to understand how to get started with the TouchGFX Designer and how some of the widgets work.

After creating a project from the startup window, users can start developing their UI, but let us first go through the tabs and options. The main window consists of a navigation bar, command buttons, a notification bar, and a detailed log. Navigation between the four main pages is done through the navigation bar. The pages are: Canvas, for the drag-and-drop application building; Images, for management of the images needed for a project; Texts, for the management of the texts and typographies; and Config, to configure the various project settings. Within the TouchGFX Designer, the code for the UI can be generated by pressing the Generate Code command or with the key F7. It is also possible to run the simulator or flash projects on the board using the dedicated buttons, or by pressing F5 or F6 respectively. The status of a running command is shown in the notification bar; clicking on it opens a detailed log with more information on what went right or wrong. The menu bar is made of three items: File, Edit, and Help. The File menu is used for creating,
editing, and saving projects. A link to the official forum is also given if you need support or want to ask other questions.

The Images view is where users can add and edit the images needed for the project. It consists of three columns. The tree view on the left gives an overview of the images available and which folders they are stored in. The table view gives more information on the images used in the UI. The properties view is where users can edit the properties of an image, like the bits-per-pixel format or where it is stored on the board.

The Texts view is where users can handle all the texts used in the UI. It is also where the fonts and typographies can be edited. Using texts in the TouchGFX Designer can be quite complicated; therefore, this section is explained in more detail in the related articles in the documentation.

The Config view is where users can configure advanced settings for the project, like for example overriding the compiling and flashing commands.

The Canvas is the view used for building the graphical parts of an application, by providing a visual representation of the interface as it will look while running. The dynamic aspects, like animations and interactions between parts of the system, are also described here. This view consists of three sections. The left section is the tree view, which displays the widgets used in the UI. Additional screens and custom containers are added through here, and the screen and widget order can be changed by dragging them below or above others in the tree. The Add Widget button opens the widget menu, where users can select which widget they want to add to their UI. Those widgets will be displayed in the canvas view, and their size and position can be manipulated by users within the center section, which represents the look of the selected screen. The right bar contains the Properties and Interactions tabs. The Properties tab is where the properties of screens and widgets are set, like their position, width, and height. The
Interactions tab is used to define widget and screen interactions, for example an image fading in when entering the main screen at startup. Interactions are often the heart of a UI project. The interactions set within this tab in the TouchGFX Designer will generate all the code necessary to have them running. Users can modify the logic of those interactions, or add new ones through user code, to add more complexity to the animations and interactions.

The widget menu organizes the widgets available in the TouchGFX Designer by categories, based on their properties and usage. For example, the different types of button widgets are under a category called Buttons. Additional widgets may be added with future releases of the TouchGFX Designer. Clicking on the icon of a widget adds that widget, with a name and initial settings, to the selected screen. Creating custom containers spawns a new category at the bottom of the widget menu, where they are made available to be selected for the UI. They can be used for different scenarios and are very useful for advanced UIs; some widgets, like scroll lists or scroll wheels, need a custom container to function. It is also possible for users to create widgets through code for specific needs in the project; however, these will not be shown, nor can they be configured, in the canvas view of the TouchGFX Designer, but they will be displayed in the simulator or on the board.

I will now show you two widget examples, but be aware that every widget has a dedicated documentation article explaining how to configure it and how to work with it. Each article also points to a UI template using the widget described, to showcase its usage.

The Button With Label widget is a widget aware of touch events. It is used as a button, sending a callback when released. It has two states, pressed and released, each associated with an image and a text. The images should not look the same, but should be of the same size, in order to show users when the widget is pressed. It is a commonly used widget, and as it does not
involve complex animations or calculations, it is considered a light widget with good performance in most use cases.

Custom containers can be used for a wide range of applications. The coordinates of child elements are relative to the container's coordinates, which makes it useful for moving multiple elements at once. It is also useful when a recurring group of elements is displayed in different screens, like a menu bar consisting of three buttons made of boxes and text areas, as shown on this slide. Depending on the widgets used within the container, the performance can vary.

Thanks, Romain. You should now have a good understanding of how to build a UI with the TouchGFX Designer and how to utilize some of the components within the Designer. Now I will start talking a bit about user code. As I mentioned in the software architecture part, there is this thing we call generated code versus user code, where the generated code is what we do in the Designer and the user code is the handwritten C++ code you write yourself. In this section, I will look into how to bridge the gap between the generated code from the Designer and your own user code: how to interface between them, some best practices when writing your own user code, and how to get more knowledge when working with user code in TouchGFX.

An efficient way to interact between what you have done in the Designer and user code is the use of our custom interactions: Actions and Triggers. Actions and Triggers, as I just said, create an interface between the Designer and user code, and are custom interactions that we can add to the interaction system that Romain told you about in the Designer. They are created under the Screen Properties or, for Triggers, the Custom Container Properties, when you are working on your Designer project.

So let us start with the Triggers. Triggers are elements that enable a screen to react to events coming from a custom container. Therefore, a custom container is able to emit a
custom trigger, doing this through the concept of an Action, while the screen is able to react to the custom trigger as a Trigger. As we can see here in this diagram, I have tried to show you the flow of how the trigger works. For example, we can have a trigger that fires on something happening in a widget that we have set up in the Designer; for instance, on the ButtonPressed trigger, we can emit our custom trigger from the interaction system. This custom trigger can then be reacted upon in the Designer, as a trigger in the interaction system of the screen, and thereby we can create a new action, for example show a text widget or move a widget, and set that to happen from the interaction system.

To get a bit more technical: the trigger is a generated C++ callback function from the custom container, which we can set up our user code to react to. Again, after creation, the trigger is added to the interaction system, but the thing about triggers is that they can only be created within the custom container, and then be reacted to from our screen. And lastly, if you are writing more custom user code than what you can do in the Designer, a trigger can also be emitted from your user code.

To help you get a better idea of how to create these triggers and use them in your interaction system, this slide has some small screenshots from the Properties section in the Designer. First, from the custom container, we can create our trigger, here Trigger1, and within the trigger we are also able to pass some data of different types; in this one we have selected a boolean, a bool. Then, over in the interaction system, in our Interactions element, we can select to emit the trigger based on a trigger from the Designer; so here we have selected that when a button is clicked, from the
Button1 widget, we shall emit our Trigger1 as an action, and as we can see here, we can add the value we have selected; since we selected a boolean, we can say that when this trigger is emitted, it should emit the value false. Finally, we can react to this trigger in our screen, setting it up from the custom container we are reacting to, as usual.

I have created a little example UI application to show you how a trigger can be used and how it works. As we can see in the small movie here, we have a custom container which passes some information to the screen, but again, I am going to quickly pop the application into view, and then I can tell you a bit more about how this custom container uses the trigger to interact with the screen. So, as we can see here, I have this custom example of how to use the trigger, which I have dragged in front of the video in the background, so we will not be disturbed by it. Here we have our custom container, which is placed on the screen we have here, and the idea is that when we press one of these buttons in the custom container, it emits a trigger telling the screen to show the text related to that button. So if we press On, we have a trigger that informs our screen to show the On text; but if we press Off, we have a trigger to show the Off text. And as you know, in our interaction system we can react multiple times to the same trigger, so we are also telling it to hide the On text, and if we go back to the On text, vice versa. So this is one of the ways we can use our trigger to inform our screen and make it do things within the interaction system.

So, where triggers offer the ability of a custom container to interface or interact with a screen, and can do this in user code as well, Actions are where we really get the interface between the screen, or a custom container, and the user code, because an action allows us both to call an action from
Thereby we can do something in the Designer based upon a Trigger happening from the user code. I have tried to visualize it here: our widget, for example a button, can again trigger the interaction system and call an Action in our user code, but the user code can also emit a Custom Action as a Trigger, which will work as an Action for a widget, for example updating the text in a widget. Again, we are able to pass data between the screen, or the Custom Container, and the user code within this Action; but instead of being a generated callback, these are simply virtual methods that we are able to override in the user code, thereby communicating between our user code and the generated Designer code. As with the Trigger, Actions are added to the interaction system, but in this case it is something we can set up both in the screen properties and in our Custom Container properties. Again, I have tried to illustrate how to create this Action, and as you can see it is similar to the Trigger: we add an Action with a name, and on the type we can add some data. As we can see, we can have a Trigger called, for example, 'screen is entered', and call our Action from Screen1, but we can also use our Action as a Trigger. If we dig a bit into how this works for user code, we can see that we are able to call our Action1 from Screen1 and thereby use the Action as a Trigger: when we call Action1, and we have Action1 set up as a Trigger, this will trigger something in the code that we are generating with the Designer. But also, by overriding this virtual method in our user code, as we see here, we can have our Action as an Action and trigger something down in the user code. Again, I have created a small example to showcase how we can use Actions within our application.
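The Action-as-virtual-method pattern can be sketched in plain C++ like this. The names are illustrative; the real Designer generates a view base class with one empty virtual method per Action, and the signature depends on the data type you chose for the Action.

```cpp
// Roughly what the generated view base looks like: the interaction
// system calls a virtual method whose default implementation is empty.
class Screen1ViewBase
{
public:
    virtual ~Screen1ViewBase() = default;

    // The Action: empty by default, meant to be overridden in user code.
    virtual void action1(int value) { (void)value; }

    // A Designer interaction (e.g. "slider value changed -> call Action1")
    // ends up invoking the virtual method.
    void sliderValueChanged(int value) { action1(value); }
};

// The user's view overrides the Action to add behavior in user code.
class Screen1View : public Screen1ViewBase
{
public:
    void action1(int value) override { lastValue = value; }

    int lastValue = -1;
};
```

Because the dispatch is virtual, the generated code never needs to know about the user's class, which is what keeps the generated and user code cleanly separated.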
As we see here, we have different elements reacting upon what we are doing on the screen and what we are doing in code, but I am going to drag the Simulator into view again, so we can see what is going on while interacting with the example. So here we can see our Action example, which consists of a slider and this text in a box. As we drag the slider, we use a Trigger in the Designer called 'slider value is changed' and there activate an Action where we pass the value of this slider into the user code; from the user code we then update this number based on the value we are getting from the slider. So here we are using the Designer-generated code, where we are getting the value from the slider, and then we are taking that value and passing it into the user code, so we can update the value in this text area. The slider also has another function: if we release our press on the slider, it will confirm the value, and if this confirmed value reaches a hundred, we will call an Action in the user code telling our application to change its screen. As we see here, where I released it at 62, nothing happens; but then I drag it to a hundred and release, and we go to Screen2, because we call an Action which works as a Trigger in our interaction system, which then calls the screen-change Action. Another concept where you, in the Designer, can add functionality to use in the user code is mixins. Mixins extend the functionality of widgets; they are functionality that we can interact with in user code when applied to different widgets. In TouchGFX, from the Designer, we have four different mixins: the MoveAnimator, the FadeAnimator, the Draggable mixin and the ClickListener. The MoveAnimator enables us to define a movement for a widget over time, going from its current position to defined x and y coordinates over a defined amount of time.
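The slider example's user-code logic could look roughly like this standalone sketch. The names are hypothetical; in a real TouchGFX view you would format the value into the text area's wildcard buffer (typically with `touchgfx::Unicode::snprintf`) and invalidate the widget, and the screen change would go through the generated screen-transition Action.

```cpp
#include <cstdio>
#include <string>

class SliderScreenView
{
public:
    // Action called while dragging: update the number shown in the text area.
    void sliderValueUpdated(int value)
    {
        char buf[8];
        std::snprintf(buf, sizeof(buf), "%d", value);
        shownValue = buf; // stands in for writing the wildcard buffer + invalidate()
    }

    // Action called on release: confirm the value, change screen at 100.
    void sliderValueConfirmed(int value)
    {
        if (value == 100)
            currentScreen = "Screen2"; // stands in for the change-screen Action
    }

    std::string shownValue = "0";
    std::string currentScreen = "Screen1";
};
```

The two methods mirror the two interactions in the example: one fires continuously while dragging, the other only when the press is released.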
With the FadeAnimator, we are able to change the alpha value of a widget over time, fading the widget out or in over a defined amount of time. Draggable enables us to drag the widget in real time while the application is running, and ClickListener enables us to attach a callback to the widget, even though it is not something like a button, and thereby detect if this widget is being clicked. A mixin is something we add to the widget in the Designer; usually it is placed at the bottom of the widget's property list and looks like this. Here, for example, we have added the MoveAnimator, but we can select multiple mixins for a widget, though it can be confusing if a widget has, for example, both Draggable and a ClickListener. When the code is generated, this is how it looks for our Box widget where we have applied the MoveAnimator. Not all widgets are able to use all mixins: for container widgets, the FadeAnimator is not possible to use, because a container doesn't have elements to show by itself but only shows the elements inside it. Therefore, we can move a container around by moving its placement, but we can't fade the container itself; we can, however, fade the individual elements in that container. If you have added a widget in user code, you can also add mixins, by doing it this way, as we see here. Again, I have created a small example to show how we can use these four mixins within our application. As we see in this small video, we have four images, each with one of the four different mixins added, and by interacting with the application we can see what the different mixins do. But I will just open the Simulator, so I can explain a bit more while interacting with the application. Yes, so as we see, we have the MoveAnimator, FadeAnimator, ClickListener and the Draggable mixin here.
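The mixin idea, a class template wrapped around a widget type such as `MoveAnimator<Image>`, can be modeled in a simplified standalone form like this. This is not the real TouchGFX implementation (the framework's mixins live in the `touchgfx` namespace and use its tick and easing machinery); it is only meant to show the decoration pattern.

```cpp
// A bare-bones widget with a position, standing in for touchgfx::Image.
class Image
{
public:
    void setXY(int x, int y) { posX = x; posY = y; }
    int getX() const { return posX; }
    int getY() const { return posY; }

private:
    int posX = 0, posY = 0;
};

// Simplified MoveAnimator mixin: adds movement-over-time to any widget
// type it wraps, without changing the widget itself.
template <typename Widget>
class MoveAnimator : public Widget
{
public:
    void startMoveAnimation(int endX, int endY, int durationTicks)
    {
        startX = this->getX();
        startY = this->getY();
        targetX = endX;
        targetY = endY;
        duration = durationTicks;
        tick = 0;
    }

    // Called once per frame; linearly interpolates toward the target
    // (the real mixin supports configurable easing equations).
    void handleTickEvent()
    {
        if (tick >= duration)
            return;
        ++tick;
        this->setXY(startX + (targetX - startX) * tick / duration,
                    startY + (targetY - startY) * tick / duration);
    }

private:
    int startX = 0, startY = 0, targetX = 0, targetY = 0;
    int duration = 0, tick = 0;
};
```

Declaring `MoveAnimator<Image> image;` instead of `Image image;` is essentially what the "add mixins in user code" step shown on the slide amounts to.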
As said, with the MoveAnimator, when clicking this button I am now able to start a move animation from the top here to the bottom here, because of the mixin that has been added to the image. Again, with the FadeAnimator, we are able to start a fade animation when I click this button, because of the mixin added to it. The ClickListener is where we are able to react, through something called a callback that can be added in user code, when clicking on a widget that has the ClickListener; as we see here, this ClickListener's callback makes the text 'clicked' become visible. And finally, the Draggable enables us to drag our image around. Now, some final do's and don'ts when writing user code together with the generated code, and a bit of inspiration for how to get going. When you are doing your user code, you have to remember that everything that is done in the Designer can also be done in user code, because what the Designer does is simply generate C++ code, which we of course can also write ourselves. But it is important to remember that, just because you can do it in user code, the Designer can still help with a lot of things, for example getting a good idea of how your UI looks. So always start by adding the simple UI elements that you need; in some cases, even though you are not going to use an exact button but, for example, a special button that you are creating yourself, adding a standard button to start with can help you get the look and feel of your application before switching it out with something in user code. Also, when you are adding elements, it can be fine to add just images of them; then you can take all the properties, such as the location in your application, from the image and use them for your own custom widget. That way you don't have to spend time defining coordinates for where to put it, compiling, finding out that it's not in the correct place, and doing it all over again, which can happen a couple of times if you don't use the Designer to get an idea of where to put things in your application.
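A simplified model of the ClickListener idea, attaching a callback to a widget that is not itself a button, could look like this. The code is standalone and illustrative; the real mixin is `touchgfx::ClickListener<W>`, whose callback also receives the click event details.

```cpp
#include <functional>
#include <utility>

class Image
{
    // bare widget stand-in
};

// Simplified ClickListener mixin: lets user code attach a click callback
// to any widget type it wraps.
template <typename Widget>
class ClickListener : public Widget
{
public:
    using ClickAction = std::function<void(const Widget&)>;

    void setClickAction(ClickAction action) { clickAction = std::move(action); }

    // The framework would call this when the widget is touched.
    void handleClickEvent()
    {
        if (clickAction)
            clickAction(*this);
    }

private:
    ClickAction clickAction;
};
```

In the example from the video, the callback's body would simply make the 'clicked' text visible and invalidate it.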
One of the things we need to avoid, which can make your application quite messy, is going back and forth between the user code and the generated code. The Actions and Triggers help you interact between user code and generated code, but you can quickly start going from the user code, calling an Action up in the generated code or the Designer interaction system, and then calling a new Action down in the user code, back and forth many times, which can make your code very messy. You can quickly lose track of where things are done, and when you have to go through the code later, you can't really find out where something is happening, because it is done up in the interaction system, or vice versa. Another important thing is to remember to utilize the MVP pattern: when you are writing user code for the widgets, try to see whether these things are directly related to the widget UI, or whether they are more business logic and can be put in the Presenter. You can quickly do a lot of things in the View, where it again gets messy and you end up with a lot of code, for example switches on how your application should act in different states, which could be moved into the Presenter. This also goes for the screen concept: don't do all your code in one screen, but try to divide it out, so that with both the screen concept and MVP we have divided code that is easy to maintain and decoupled, so it can be changed out if needed later on. A good way to figure out how to write user code is to inspect the generated code sometimes. I have taken some screen dumps of the generated code from the animated-image example, and as we can see, there are some very generic things that are done for most widgets, such as setPosition, setColor and setVisible, or setBitmap. These are common things for all widgets, and it is very good to remember, when writing code, that you can always go into the generated code and see how something is done there if you have to do something similar in your own code.
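The MVP advice above can be sketched as follows. This is a minimal, illustrative split with made-up names; in a full TouchGFX application the Presenter additionally mediates between the View and the Model.

```cpp
#include <string>

// Business logic lives in the Presenter: it decides what the UI should
// show for a given application state, with no widget code involved.
class Screen1Presenter
{
public:
    std::string labelForState(int state) const
    {
        return state > 0 ? "Running" : "Stopped";
    }
};

// The View only handles UI concerns: it asks the Presenter what to show
// and pushes the result into the widget.
class Screen1View
{
public:
    explicit Screen1View(Screen1Presenter& p) : presenter(p) {}

    void stateChanged(int state)
    {
        labelText = presenter.labelForState(state); // stands in for updating a text area
    }

    std::string labelText;

private:
    Screen1Presenter& presenter;
};
```

Moving the state switches into the Presenter like this keeps the View thin and makes the decision logic testable without any UI at all.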
There are also the callbacks, which can be quite difficult to get a grasp on in the beginning, but here you can see how they are done. In general, if there is something similar you have to do in user code, you can always go to the generated code that you know is doing more or less the same, to see how to do it. You can of course also reuse the code from the examples; it is there for the taking, as is all the code in the UI templates that we have in the Designer. Look at those, and if there is something you find useful, reuse it, as it goes with code in general, as we all know. And of course, don't forget our API reference on our documentation web page, where you can find out which capabilities our widgets have, because there are some capabilities that you have to interact with in user code that you can't get from the Designer, and in the API you can find a lot more information about how to utilize our widgets, and TouchGFX in general, in the best way. Since this presentation is on UI fundamentals, there is a lot more information you can find in our documentation, in the UI Development section, both about the general stuff and about some of the more advanced things you can use when doing TouchGFX application development. Also, to get started on UI development, you can go to our hands-on workshop, UI Development Getting Started, where you can follow along and get started with UI projects. We also have some tutorials that can help you get started, where for example you can find a tutorial on how to use Actions and Triggers in your application. So, on behalf of Romain and I, I want to say thank you for following along with this presentation, and I hope you found a lot of information about how to do TouchGFX development and are ready to get started with your own TouchGFX application.