We now move to a short ThreadX feature overview. We are going to see Threads, Memory Pools, Semaphores, Mutexes, Queues, Event Flags, and Timers. On threads, I can only add that you should go through the Microsoft documentation, docs.microsoft.com, to learn more. There we have preemption, priority, preemption-threshold, and time slicing, so I'm not going to go into details in this specific session.

Instead, I can introduce you to the memory pools, the different memory pool strategies we have: byte pools and block pools. A byte pool is a sequence of bytes allocated in RAM, very similar to the heap in C. Typically, a thread fills this pool in what is called a first-fit manner: we use the first free block of memory that is large enough, and if we get a memory request bigger than any available contiguous space, we run into fragmentation issues. So with byte pools, you can have fragmentation. Just to add, every byte pool comes with its own control block holding the start address, the name, and the size.

On the other hand, you can go for a block pool. It consists of memory blocks of fixed length, so we do not run into fragmentation issues. Of course, I can create pools with different block sizes, and typically block pool access is quicker than byte pool access. Every block pool also has a control block indicating the name, the starting address, and the number of bytes.

Semaphores, you have probably seen them before. They are used to control access to common resources, and also for thread synchronization and mutual exclusion. So a task like T1 here could have to wait for a semaphore from T2 before going on with execution. We can imagine a situation where we return from an interrupt service routine and, for example, give a semaphore to indicate that data is ready. There are actually two types of semaphores: binary and counting semaphores.
A binary semaphore counts only zero or one, as the name says; one meaning access to the resource can be granted. A binary semaphore is actually very similar to a mutex, and we will see later what the minimal differences are. You can also have counting semaphores. We use them normally to handle a peripheral that supports only a limited number of simultaneous accesses and users. For example, in NetX Duo we have a semaphore to control access to sockets, since only a limited number of them can be open at a time. ThreadX provides a 32-bit counting semaphore, and the typical operations are tx_semaphore_get and tx_semaphore_put. The get operation decreases the semaphore by one, and if the semaphore is already zero, the operation is not successful. The inverse of the get operation is the put, which increases the semaphore by one.

We have seen the semaphore, and we now go through the mutex, which means mutual exclusion. Mutexes are useful to indicate the availability of a resource and to control access to a critical resource. They are similar to binary semaphores, but the main difference is that a mutex is typically taken and released by the same thread.

Message queues are the primary means of inter-thread communication, and they are used to exchange data. They are typically implemented as a FIFO, which means the first element to be added is the first to be read: messages are inserted at the front and read from the tail of the queue. When I insert a message, it is called an enqueue, and when I read, it is a dequeue. A queue that holds a single message is commonly called a mailbox. Messages are normally copied into the queue using tx_queue_send and copied out of the queue using tx_queue_receive. The only exception to this is when a thread is suspended while waiting for a message on an empty queue.
In this case, the next message sent to the queue is placed directly into the thread's destination area.

Event flags are basically a way to synchronize threads, and they are normally arranged in groups of 32 bits. As you see here from these slides, it is possible to set or clear event flags using AND/OR logic operations, and you can condition the operation of a certain thread on events coming from different threads. In general, like in this example, you can have a thread that is put in the running state only if both the B1 and B0 bits are set. You see different combinations which do not put the thread in the running state, because the only combination that enables thread one to run correctly is the one in which both B1 and B0 in the event register are set; this triggers the execution of thread one. So it's a good strategy for synchronizing different threads in more complex applications.

Then we have software timers, which give the application the ability to execute a function at specific intervals of time. Normally we have hardware timers in the STM32, but you may want to use a software timer instead. We can have one-shot timers or periodic timers. Again, I link you here to the docs.microsoft.com documentation, where you can definitely learn more.

Another thing ThreadX provides is the Modules component. It provides an infrastructure for applications to dynamically load modules that are built separately from the resident portion of the application. These are especially useful in situations where, let's say, the application code size exceeds the available memory. You can also use modules when new features need to be added after the core image is deployed, or dynamically load a module when, for example, you need a firmware update.
And with that we move to something that may be of interest to you: the ThreadX footprint. Here we see the footprint in bytes of the core services and of all the elements that we have discussed: the queues, the event flags, the semaphores, the mutexes, the block memory services, and the byte memory services. This is just to give you an idea of the footprint of ThreadX.