But we can move now to step number three. Here I have to show you a new concept, which is concurrent threads. So far we have seen just one thread, but ThreadX is an operating system in which we can add multiple threads with different priorities. The priority number used by the operating system is similar to the priority numbering used for IRQs: the thread with higher priority is actually the thread with the lower absolute number. So meaning that priority 0 is the highest, and the maximum number of ThreadX priorities is configurable from 32 through 1024 in increments of 32. Typically, for each group of 32 priorities, you need an additional 128 bytes of RAM. You can imagine that a thread with higher priority can interrupt a thread with lower priority. We say that the thread with higher priority is preempting the thread with lower priority, and this is in fact what is happening here. So here we have an example with three threads. Priority 15 is the lowest, so it gets preempted by the thread with priority 14 once that one is ready, and the thread with priority 13 can preempt the threads with priority 15 and 14 once it is ready. So now we are going to build an example to show how to handle these priorities, and how to handle the preemption of the various threads. We are going to use two concurrent threads. So let's go back to our CubeIDE, we terminate the debug session, and we are now at slide number 67. So the target is to see two concurrent threads, so it's clear that I will need to add a new thread. To do so, I basically need to define a new thread handle and stack, and I will basically replicate the same approach we used for thread one. The only thing is that I'm going to have a new thread handle named thread two, and I'm going to define a new thread stack named thread stack two.
Of course, I need to define a new function prototype for the thread entry, and this can be done right after my thread entry one. So I'm going to define my thread entry two. Please remember to always stay inside the USER CODE BEGIN and USER CODE END sections, so what you write here will be preserved upon a new regeneration of the code. Right after the tx_thread_create for thread one, we are going to add a new tx_thread_create call for the second thread, to which we will pass the pointer to thread two. We're going to give it the entry function, which is entry number two. The stack is stack number two, and you see that the priority has changed. So the first number is the priority; the second number is called the preemption threshold. I'm going to explain this in a few slides; for now, let's consider that we are only dealing with priority 14. So the only thing we're missing now is the entry function for thread two, and the entry function for thread two is basically identical to the one we used for thread one; the only difference is that here we are going to toggle the yellow GPIO instead of the green. For the rest it is identical, meaning same delay time and same sleep time. We are also going to see a new interesting feature: we are going to enable RTOS kernel awareness. So let's go to Debug, let's click on the arrow and go into Debug Configurations. We click on Debugger, and under RTOS Kernel Awareness we enable the RTOS proxy. Of course, you can do the same with FreeRTOS, but this time we are using ThreadX, so you click on ThreadX and you select the Cortex-M7, which is the one we are using. Please disable the "Enable live expressions" option, click on Apply, and then we move directly to Debug. Let's click OK, and now I'm going to show you what this RTOS kernel awareness setting has done in practice. So let's run the code.
You should see the two LEDs blinking, the green for thread one, and the yellow or orange for thread two. Okay, so very interesting. On the left, you now have three threads listed. One is the scheduler, and the other two are my thread two and my thread. So those are our two threads, and what's even more interesting is that you get the stack frame of each thread. So you have the call stack for each thread, and of course you can see the status by clicking on the thread list. You see my thread two is running, because we now spend more time inside that thread, so it's more probable for us to stop in it. So now we have run the code for some time. We can go back to the Memory Browser, display as hex integer, and you see that the trace buffer has been filled, so we can again export it as raw binary, 64 KB. We give it another name; for me it is 2_2, because it's part two of the TraceX captures, let's say view number two. I click OK, and now, before going straight to TraceX, let's see what the expected behavior should be. So first of all, for those who are going to review the slides by themselves after the workshop, I have to explain how to read this sequence. You have to read the picture moving up and down along the vertical columns, meaning that the first thread running will be the high-priority my thread two, where at first we do a GPIO toggle, then we do a HAL delay, and then we go to sleep. As soon as we start the sleep, the scheduler gives control to my thread with priority 15. So there is a context switch, and the priority 15 thread does the same with the green LED: we toggle the GPIO and then we move to the HAL delay. But what happens here is that, in the meantime, my thread two becomes ready. So there is a new context switch: the priority 14 thread preempts the priority 15 thread, and it starts the same sequence again: GPIO toggle, HAL delay, and it goes to sleep. What happens next is very interesting, so let's see it together.
Actually, we are restarting from the HAL delay and we move to sleep, because the HAL delay timer reference keeps on running even while the priority 14 thread is in execution. So this basically results in a quick pass through the priority 15 thread, and a new context switch to my thread two with priority 14. So this is what we expect. Let's see if the expected behavior is mirrored by the actual TraceX view. So let's go back to our TraceX, and let's click on Open; you can click either here on Open, or on Open File. Of course, for us it's better to go into the Time View, and we need to zoom out a little bit. So maybe let's zoom in just a little bit more. You see exactly the behavior that we were expecting, because here in this case thread two is executed, and we have the LED toggle plus a delay of 500 milliseconds, and you see here that this part lasts around 500 milliseconds. Then we basically switch to thread number one, and here what happens is the LED toggle, and we start to execute part of the delay. But in the meantime, the thread with priority 14 preempts the thread with priority 15, and we start a new execution, which again lasts the same 500 milliseconds. At this stage here, what happens is that the timer for the HAL delay has elapsed, and my thread, in the new context switch, goes directly to sleep, so control here passes to the idle task. So you see here that the time we are spending in thread one and in thread two is very different. So in this first section of step number three of our workshop, we have introduced the concept of concurrent threads. Now I would like to introduce you to two concepts which are typical of real-time operating systems, and one of them is specific to ThreadX in particular. The two topics I'd like to introduce are priority inversion and preemption threshold, and I'm going to explain how these two topics are linked to one another.
So priority inversion arises when a higher-priority thread is suspended because a lower-priority thread still holds a resource needed by the higher-priority thread. What happens here is that you can have a situation in which a thread with low priority, let's say T3, gets a resource. You see here, it obtains a mutex. We're going to explain later on what mutexes are, but let's say that T3 starts its execution and, to complete its processing, it needs to hold a certain resource. While it holds the resource, T2, with higher priority, becomes ready and preempts T3. The same happens with T1: T1 becomes ready and preempts T2. Now, T1 has the particularity that it needs the resource owned by T3 to complete its execution, but T3 was suspended by T2 while still holding that resource. So what happens in practice is that T1 stays suspended until T3 releases the resource T1 needs. To overcome this priority inversion, there are different strategies. One strategy is using the ThreadX mutex object to protect the system resources and avoid this priority inversion. Another very interesting way to avoid this issue is using the so-called preemption threshold, which is a trademark of ThreadX. The preemption threshold is a sort of hysteresis, let's say, in a few words. When creating a thread, the user can specify the threshold above which the thread can be preempted. So it's a very good strategy to avoid what we've seen before. In some ways, it's something very similar to interrupt sub-priorities. Here we have an example, and now you will finally understand the reason behind that second number we put in the tx_thread_create API. The first number is the priority, and the second number is the preemption threshold. I'm going to explain how it works. So let's imagine two threads: thread two with priority 15 and preemption threshold 14, and thread one with priority 14 and preemption threshold 14.
In this case, the thread with higher priority is executed and goes to sleep for a certain time. While it sleeps, it is moved by the scheduler into the suspended state, and thread number two with priority 15 gets to run. What happens is that thread one must then wait until thread two finishes, because preempting it is not possible: its preemption threshold is 14, which means it can only be preempted by a thread with priority at least 13, one-three. This is exactly what happens in the right picture. So in the right picture, thread number one has priority 13, and it can preempt thread number two during execution. So again, the preemption threshold is a mechanism that introduces a sort of hysteresis into the preemption, and it is specific to ThreadX. We are now going to enable this feature in our code. So we come back to our CubeIDE project. We exit the debug session and we go to the .ioc file. We go there and we click on STMicroelectronics, Software Packs, Core. What we want to do is enable the preemption threshold, so we have to turn off the "disable preemption threshold" option, and we click on the wheel to have the code regenerated. So now the advantage is that we already have two threads in our code, and I just want to show you the preemption threshold. So the change will be minimal and will consist in changing the preemption threshold of thread number one. When we created the first thread with tx_thread_create, initially we put 15 for both numbers. This time we turn the second number into 13, meaning a preemption threshold of 13, one-three. Now that it's all set, we click on Debug, we click OK. I think we have now got the point on how to handle the debug view when dealing with TraceX. So we need to run the code for some time, and you should see the two LEDs blinking alternately with more or less the same period.
Now you suspend the execution and go back to your memory monitor, display as hex integer, and you see that the buffer has been filled, the same thing we did before. You click on export, remember, raw binary 64 KB, and we call it log 2_3. And we click on OK. Let's go back to what we should expect. So before, we saw that without the preemption threshold, the high-priority thread was taking the majority of the execution time. With the preemption threshold, we can allow the two threads to share the schedule. So the two LEDs should be blinking alternately on your board. And in fact, this is what happens here, because in this case the thread with priority 14 and preemption threshold 14 cannot preempt my thread, since my thread can only be preempted by threads with priority at least 13. Let's see if we are seeing the correct behavior in TraceX. So let's come back to TraceX and let's open the log 2_3 file. Time View; remember we set the ticks per microsecond to 550. Let's zoom out, and you see it is in fact behaving as expected. So you spend 500 milliseconds in thread one and 500 milliseconds in the other one. So this is definitely behaving as expected, and even in the Execution Profile we get exactly the expected result. So I will move to the conclusion of step number three. We have seen how TraceX is integrated into our environment. We started from scratch with a project using initially just one thread. Then we saw concurrent threads. We saw different concepts like priority inversion and preemption threshold. We have also used the ThreadX-specific debugging views, and we got familiar with TraceX usage and all the RTOS-aware debugging features of our CubeIDE. So I hope this step number three was interesting to you.