All right. Can you hear me? Good? Yay. All right. Today I'm going to talk about multitasking on Cortex-M, or more specifically Cortex-M0 class, MCUs. I gave a talk last year about the Chromium EC, which is the firmware that we run on Chromebooks and a bunch of Google devices. This is a disclaimer: I don't work for Google. I work for Ettus Research, part of National Instruments. And yeah, I'm an embedded software engineer at National Instruments. I do all kinds of embedded things; firmware is one of them. Last year I was speaking about how you could use Chromium EC for your own projects, and this year I'm here and we just finished our first product, which will release soonish, that actually uses Chromium EC as firmware for temperature control and fan control. And yeah, we're probably going to use it for future ones. Other stuff I do: I'm a co-maintainer for the FPGA Manager framework, and I do random drive-by contributions to stuff we use or stuff that I'm interested in. I'll jump directly to the meat. So the microcontroller that I picked for my presentation is a Cortex-M0, and it's ARMv6-M, which is kind of old-ish already. But it's also very nice because it's very simple, and it's a good example to walk through multitasking and how we get there by using all the features that this core gives us. So the registers are not a lot; we get 16 registers. The blue ones are low registers, and they're a bit special. The Cortex-M0 runs only Thumb mode, so there's limited space in the instruction encodings, and some instructions will only let us access the lower registers, which are the blue ones. We'll see that later in action where we actually run into that issue. R13 is our stack pointer, which can also be called SP, so you can use R13 or SP in your assembler. There's a program status register that, depending on the mode you're in and depending which name you use when you try to read it, will give you different things. We'll see all that later when we look at the registers individually.
There's a priority mask register, which we'll also talk about later, and a control register, which is very simple on the Cortex-M0 specifically. Let's talk a bit about the stack pointer; that's the thing we're going to play around with the most in the code we're looking at. Basically, you use a stack if you have a bunch of stuff that doesn't fit into your registers anymore: you push it onto the stack, which is RAM, and then later on you pop it off again. So in this example here, I completely made up the values, but you have a stack pointer that points to an address, and we have a register R7 that we want to store. So we do a push of R7, which puts it on the stack and decreases our stack pointer by exactly 4 bytes. On the Cortex-M0, our stack is always word aligned, so always 4-byte aligned; the lowest 2 bits can therefore be assumed to be 0, because we know they'll be aligned. And the stack is always full descending: the stack pointer value decreases whenever we push something, and it increases when we pop something. Then we have a link register and a program counter register. The link register is basically, if you call a subroutine, it holds the value of the next program counter to run when you get back from there. The link register is also a bit special; we're going to talk about it later when we look at the Thumb state diagram and the different states that our processor can be in. Bit 0 indicates that we return to Thumb state when we use the link register to return. Some instructions will need that bit to be set, but we'll see that later when we actually make use of all these things that I outlined here. The program counter, when you read it, gives you the current instruction plus 4 because it's a pipelined processor. And the program counter bit 0 should be 0, but some instructions like BX and BLX require bit 0 of the target address to be set, because we want to stay in Thumb mode.
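The push/pop mechanics from a moment ago can be sketched with a tiny host-side toy (a sketch, not real hardware; the names are made up): on a full-descending stack the pointer is decremented by 4 before the store, and a pop reads first and then increments.

```c
#include <stdint.h>
#include <string.h>

/* Toy model of a full-descending, word-aligned stack like the
 * Cortex-M0's. "RAM" is a byte array; sp is an offset into it. */
static uint8_t ram[64];
static uint32_t sp = sizeof(ram);   /* the stack grows downwards */

static void push(uint32_t value) {
    sp -= 4;                        /* decrement first: full descending */
    memcpy(&ram[sp], &value, 4);
}

static uint32_t pop(void) {
    uint32_t value;
    memcpy(&value, &ram[sp], 4);
    sp += 4;                        /* then increment on pop */
    return value;
}
```

Pushing one word moves sp from 64 down to 60; popping it moves sp back up, mirroring the slide's example.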
The combined status register, depending on which mode you're in and how you read it, will give you different values. The first one is the Application Program Status Register (APSR), which contains your ALU flags like negative, zero, carry and overflow. For example, if the instruction before created an overflow, or if you did a compare and the result was zero, or you did an addition and there was a carry, those bits get set in that part; you see that the rest is reserved. There's the IPSR, the Interrupt Program Status Register, whose lower six bits give you the exception number that caused you to get there, so it's pretty handy if you need to figure out what happened. For example, say you have a default handler: you jump there and you want to know which exception got you into the default handler; you could look at that and it would tell you. The EPSR, the Execution Program Status Register, is not so important for what we're going to do; just for completeness, it's here. All right, calling convention. Assuming you usually program in C, the compiler will use certain registers for certain things when calling a function. So R0 through R3 are used to pass arguments; R0 and R1 are also the result registers, so depending on the size of your result, it will spill over into R1 and so on if your return value is bigger. R4 through R11 are callee-saved registers (R9 is a bit special, but we can ignore that for what we're doing here), which means your function needs to restore them back to the value they had before you jumped into the function. R12 through R15 are the special registers which we saw already before, so stack pointer and so on. Yeah, so the priority mask register (PRIMASK) is the one we're going to use to disable the exceptions that are programmable: you write a 1, then no interrupts happen; you write a 0, and they're not masked. All right. Am I going too fast?
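As a small illustration of the result spill-over just described: a function returning a 64-bit value on a 32-bit ARM target hands back the low half in R0 and the high half in R1 per the AAPCS; on the host, of course, this is just an ordinary function.

```c
#include <stdint.h>

/* On Cortex-M0, a arrives in r0 and b in r1; the 64-bit result travels
 * back in the r0/r1 pair, low word in r0, high word in r1. */
static uint64_t widening_mul(uint32_t a, uint32_t b) {
    return (uint64_t)a * b;
}
```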
Can people still follow? It's a lot of things. Good. There's a control register, something we're going to play with a lot. What does the control register do on the Cortex-M0? Bit zero is reserved; that's for upwards compatibility with other Cortex-M class CPUs. We don't have privileged and unprivileged mode on the Cortex-M0, but on an M3 that bit would be your privilege level. Bit one in the control register lets you select which one is your current stack pointer. As we saw before when I showed the registers, there's a PSP and an MSP: the PSP is the process stack pointer, while the MSP is your main stack pointer. Writing that bit while you're in thread mode, which we'll see in a second, will switch between the two, and that's really cool for implementing task switching, which we'll also see later. All right, this is the Thumb state diagram, basically; it shows you the states you could be in. There's handler mode, which is basically where you run all your OS things, and there's thread mode, which is where all your normal task code runs, and you get from thread mode to handler mode by taking an exception. And depending on whether the control bit one is set, that determines where you're going to stack your registers, which we'll see on the stacking slide that follows. But first, okay, what's an exception? In short, it's basically an event that changes the program flow. You jump to the exception handler, which suspends the code that is currently running; then you run the exception handler, and then you resume where you left off. Some exceptions on the Cortex-M0 have fixed priorities, like reset, non-maskable interrupt, and HardFault; those are negative priorities. Some exceptions have programmable priority, and that lets you arrange things in the system as you need, so you could prioritize different interrupts differently, programmatically.
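Going back to the control register for a second, the bit just described can be written down concretely (the bit position is architectural on ARMv6-M; the helper name is made up for illustration):

```c
#include <stdint.h>
#include <stdbool.h>

/* CONTROL register on Cortex-M0: bit 0 is reserved (it is the privilege
 * level on bigger cores), bit 1 selects the thread-mode stack pointer. */
#define CONTROL_SPSEL (1u << 1)   /* 0 = use MSP, 1 = use PSP */

static bool thread_mode_uses_psp(uint32_t control) {
    return (control & CONTROL_SPSEL) != 0;
}
```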
Zero is the highest programmable priority. Of course, there are the negative ones that are higher, but you can't mask those. PRIMASK, as we said before, can be used to mask interrupts, and interrupts that can't immediately get handled can be left pending. All right. The Cortex-M0 has vectored interrupts, which means you set up a vector table of interrupt vectors, a table of pointers to where you want to jump for certain exceptions. That table contains the addresses, and the processor will automatically jump to the handler if one is set up. And it's usually a good idea, for the ones you don't have a handler set up for, to use a default handler with something like an infinite loop to trap it there, so when you debug it's easy to figure out what happened. All right. So what happens on exception entry, stacking with the main stack pointer? There are two ways, as I said, depending on the control register, that your registers can get stacked when you take an exception. The first one is the simple one, when you're not using the process stack pointer but just the main stack pointer. In that case, you run here in thread mode, then you take your exception, and the processor will automatically push the exception context, which is a subset of all the registers, onto the current stack. The registers in the exception context are R0 to R3, R12, the link register, the program counter, and the current status register, and the stacking always happens on your current stack, whatever is selected when you're in thread mode, all right? And that makes nesting possible, which we'll see later. And the unstacking happens based on the link register value: when you enter the exception, based on what mode you're in and what setup you had for your stack pointer, there will be a different value in the link register, which we'll also see soon.
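The automatically stacked exception context can be written down as a struct (a sketch of the layout; on entry the active stack pointer ends up pointing at the R0 slot, the lowest address):

```c
#include <stdint.h>
#include <stddef.h>

/* Hardware-stacked exception frame on ARMv6-M, lowest address first.
 * Eight words, 32 bytes, pushed by the processor on exception entry. */
struct exception_frame {
    uint32_t r0, r1, r2, r3;
    uint32_t r12;
    uint32_t lr;      /* link register of the interrupted code */
    uint32_t pc;      /* where execution resumes on return */
    uint32_t xpsr;    /* the combined program status register */
};
```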
The other case, which is slightly more complex, is when you actually use the process stack pointer, which is what you're going to do when you actually do multitasking, all right? The main stack pointer is going to be used when you're in handler mode, because handler mode always uses the main stack pointer; in thread mode, you select which one gets used. So you're here in thread mode, and you take your exception. You're using the process stack pointer, remember, because that's the point of this example. So your stuff gets pushed onto the stack the process stack pointer points to, and then you go to handler mode, which automatically switches your stack pointer to the main stack. Then you do whatever you do in your handler, and then the unstacking, because of the link register value, will happen again from your process stack pointer, and this is going to be basically how we do multitasking later, but more on that later. Okay, exceptions. Tail chaining: there are a bunch of neat features that allow the processor to be more efficient when handling exceptions. The first one is that if you take an exception and another one becomes pending that doesn't preempt it (same or lower priority), then when the first handler finishes you'll not unstack, but directly go to handler B and reuse that stacking, so you save this entire part where you would otherwise have to pop stuff and push stuff again. Instead you use the first exception context for the second one too. Then there's the late arrival case, where basically you take an exception A, and a higher priority exception B arrives before the handler for A would run; then you can just directly run the handler for B first, and then the handler for A. And then there's the more complex case of nested exceptions, where you again take your exception, your handler starts running, you take another one, you go up, and the stacking happens twice.
So let's say here you had the process stack pointer selected: you stack on the process stack pointer here; here you're in handler mode already, so you're going to use the main stack pointer and stack another exception context, and then you go back. All right, if that wasn't complex enough, here's another picture that I drew; I know, it's beautiful. Basically again, thread mode, handler mode, and depending on your stack pointers, there are all these different ways you could go. I'm not going to go into details. One thing to note, however, is that on taking the exception, the value that gets put into the link register will determine our path back. So it will determine basically whether we unstack from the process stack pointer or the main stack pointer. For the nested case, for example, we take the exception, we use the process stack pointer, and now we're in handler mode already; if we get another exception, now we stack on the main stack pointer, and we could nest several levels of stacking like that, and then the unstacking happens based on the link register values, which got put into the link register based on how we got there, so there's always a way out. All right, so now we've talked about all the stacks, exception stacking, so now we're going to look at how we get from reset to actually running C code, right? Because that's what we want to do. On reset, execution basically jumps to the reset vector, and we do a bunch of stuff before we can run normal C. What we're going to do is make sure we're in the right state: we want to set our main stack pointer, and we want to make sure that we're in the right state, Thumb and privileged (or not; in the Cortex-M0 case we don't care about the privilege); we want to initialize our BSS section to zero; and we want to copy the exception vectors to SRAM, which is a bit specific to the Cortex-M0 part that I was using, an STM32, where you'd copy them over to SRAM to speed up the fetches.
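As an aside on the exception-return paths just described: the magic link register values are architectural on ARMv6-M, so they can be listed concretely (the helper function is only for illustration):

```c
#include <stdint.h>
#include <stdbool.h>

/* EXC_RETURN values placed in LR on exception entry (ARMv6-M). Loading
 * one of these into PC on exception return picks the path back. */
#define EXC_RETURN_HANDLER_MSP 0xFFFFFFF1u /* back to handler mode, MSP */
#define EXC_RETURN_THREAD_MSP  0xFFFFFFF9u /* back to thread mode, MSP  */
#define EXC_RETURN_THREAD_PSP  0xFFFFFFFDu /* back to thread mode, PSP  */

static bool unstacks_from_psp(uint32_t exc_return) {
    return exc_return == EXC_RETURN_THREAD_PSP;
}
```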
All right, and then we copy the initialized data section, so that would be global variables that have a value; we copy that to SRAM, we set our initial stack pointer, and finally we jump to main. So how does that look in the Chromium EC code? Well, after reset we make sure that our control register is zero; remember, control register zero means we're privileged (not that the M0 cares) and we're using the main stack pointer. Then we wait for that to actually happen, so the ISB will just wait for things to go all the way through. And then we have a bunch of loops: first we zero the BSS section, then we have a vtable loop where we copy over the exception vectors to SRAM, then we tell our microcontroller to please use exceptions from the newly copied vector table from now on, then we have a data loop which goes and copies the initialized data from flash to SRAM, and then we can finally jump to main. So, not so complex, right? Everyone is still on board? Did I lose everyone? No? Good. Sorry? Oh, all right. Sorry. All right, so multitasking, again, that's what we want to do, right? So we're trying to context switch. The idea is we have multiple tasks, we want to decouple them as much as possible from each other, and we want to be able to run task A, then OS, then task B, then task C, in whatever order, but tasks should not have to know about each other unless they actually do want to interact, all right? That makes writing code really nice, because as the programmer writing code for that task, I don't have to be concerned with what other people do, because while I run, it looks to me as if I own the whole thing. There are also cooperative approaches, where the tasks actually have to say "I'm done, please take the next one", but we're not talking about that here. As seen before, the context is basically a set of registers and a stack, and the OS will decide who goes next, all right?
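Going back to the reset sequence for a moment, those steps can be sketched in C. This is a host-runnable toy: the flash and SRAM regions are faked with arrays, and the linker symbols and vector-table remap of a real startup file are only hinted at in comments.

```c
#include <stdint.h>
#include <string.h>

/* Stand-ins for the regions the linker would define on real hardware. */
static uint32_t bss[4] = {1, 2, 3, 4};            /* .bss, must become 0  */
static const uint32_t data_rom[2] = {0xdead, 0xbeef}; /* .data image, flash */
static uint32_t data_ram[2];                          /* .data home, SRAM  */

static void startup(void) {
    memset(bss, 0, sizeof(bss));                  /* 1. zero .bss          */
    memcpy(data_ram, data_rom, sizeof(data_rom)); /* 2. copy .data to SRAM */
    /* 3. on the real chip: copy vectors to SRAM, remap the vector table,
     *    set the initial stack pointer, then jump to main() */
}
```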
So to do that, we basically need an OS stack and one stack for each task, and then maybe a heap, or not; in our case, we don't have a heap, we don't do malloc or free in the Chromium EC firmware. It's also useful to have some kind of time base. On ARMv6-M, there is an optional SysTick, but most of the vendors actually implement it; if not, you can just use a normal timer. What it does is give you a periodic tick that allows you to take an exception that then gets you into your OS, and you can use those events to run the scheduler to make changes on who runs. So how does Chromium EC do that? Well, I again picked the Cortex-M0 for this, because that seemed the simplest one to me of the different architectures they support. You have a struct task, which basically contains everything you need to know about the task: there's a stack pointer, there's a bit mask of events (we'll talk about that in a second), and there's a pointer to the stack, which we can later use for statistics to see the stack usage. We don't have a heap; we have fixed priorities, so at build time we know which task runs with which priority. We have different events, like timers, mutexes, wake-up events, peripherals, and we're using a 32 or 16 bit hardware timer instead of the SysTick, and that's going to be handy, which we'll see on the last slide. All right, another picture to hopefully clarify things. The task states: basically, either you're disabled, which means there's a global array of enabled tasks, and at your index there's a zero, so your task is disabled and can't run. By writing a one there the task gets enabled, so now it's ready. So how do I get out of the ready state into the running state? Well, that's easy: we always run the highest priority task first, so you get out of the ready state into the running state by being ready and being the task with the highest priority.
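That per-task bookkeeping can be sketched roughly like this (the field names approximate the idea, they are not copied from the Chromium EC source). Keeping the saved stack pointer as the first member matters: the context-switch code stores SP through the task pointer directly, relying on the first member sharing the struct's address.

```c
#include <stdint.h>
#include <stddef.h>

struct task {
    uint32_t sp;       /* saved process stack pointer; MUST stay first so
                          a store through the struct pointer hits it */
    uint32_t events;   /* bit mask of pending events (timer, mutex, ...) */
    uint32_t *stack;   /* bottom of this task's stack, for usage stats */
};
```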
So this FLS function basically gives you the first set bit in an integer; since we know at build time how many tasks we have, and there are usually not too many, they just use a uint32 and figure out which the first set bit is, which is the task with the highest priority. All right, so from ready to running: be the highest priority task. And how do we stop running? Well, there are two cases. Either we wait for an event, which could be a timer, could be waiting for a mutex, things like that, and all the event waits also have a timeout. So I say task_wait_event and then I end up in the state "waiting for any event", and I tell the scheduler to please do something so another task can run. The other case is wait for an event mask, where you say "I'm interested in those few events that I have in the mask that I passed to the call", and then any of those will wake me up, and I get as a return value what happened. Yeah, so that would look like this in a very simple case. Say we have a high priority console task, we have a hooks task, which is basically a thing that deals with all kinds of miscellaneous things in the Chromium EC, and we have an idle task. The idle task is always ready to run and runs whenever nothing else is ready to run. Usually the idle task is just something like a wait-for-interrupt, and the thing goes to sleep; when we take an interrupt the idle task wakes up, and that interrupt may also set an event for another task, which then might become ready and run. Let's look at this example. It starts out with the hooks task, which then enables all the other tasks; remember, that's how tasks get enabled, by writing to that array at that index. So first only the hooks task is okay to run, and then that one turns on all the others. Well, now the console task has a higher priority.
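The "highest-priority ready task" selection can be sketched as a pure function. This is a sketch, not Chromium EC's actual helper, and it assumes the convention that a higher bit index means higher priority, so "first set bit" means the most significant one.

```c
#include <stdint.h>

/* Return the index of the most significant set bit of the ready mask,
 * i.e. the highest-priority task that is ready, or -1 if none is. */
static int highest_ready(uint32_t ready_mask) {
    int i = -1;
    while (ready_mask) {
        i++;
        ready_mask >>= 1;
    }
    return i;
}
```

With tasks 1 and 3 ready (mask 0b1010), task 3 wins; with nothing ready the caller would fall back to the idle task.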
Our scheduler kicks in and says, okay, now we're going to run the console task, and that runs until it's waiting for an event, at which point the second highest priority task that's ready to run, which will be the hooks task, will start running. Then that one also waits for an event, we go to idle, and so on and so on. Whenever an event happens we call the scheduler to figure out if something changed; if not, we keep running what we were running. Good, back to code. How do we do that? In Chromium EC there's a wrapper function for the supervisor call (SVC), an instruction that will raise an exception. So what does an exception do again? Remember, it switches us to handler mode and it stacks some registers. We looked at the calling convention before, so we know that parameters to function calls get passed in registers R0 through R3. We have two parameters to this function: desched, which is a boolean that says "please deschedule me, I'm no longer ready to run", and resched, which is like "please take this other task instead and make that run". We pass those in those registers, and we do a supervisor call, which then jumps into the handler. In that handler we push the link register and R3; that keeps our stack aligned and saves the link register, and then we branch directly to a C function, because that's easier to write. This C function already makes our scheduling decision right there. So remember how we passed the desched and the resched in registers R0 and R1; our decision is easy. We remember the current task; if desched is set and we don't have any pending events for our task, we mark ourselves as no longer ready; then we mark the resched task as ready; then, since another task is now ready, we need to check again which task is the one with the highest priority, and that one becomes the next task; then we set the current task to next, and we return what the old one was.
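That scheduling decision can be condensed into a toy C version (a sketch with made-up names, not the real handler; it reuses the most-significant-bit convention from before, with task 0 as the always-ready idle task):

```c
#include <stdint.h>

static uint32_t ready_mask = 1;   /* bit i set: task i is ready; bit 0 = idle */
static uint32_t task_events[32];  /* pending event bits per task */
static int current = 0;

static int msb(uint32_t m) { int i = -1; while (m) { i++; m >>= 1; } return i; }

/* desched: caller gives up the CPU, unless it already has pending events.
 * resched: mark this other task ready (-1 for none). Returns the previous
 * task, the value that would travel back to the caller in r0. */
static int svc_schedule(int desched, int resched) {
    int prev = current;
    if (desched && task_events[current] == 0)
        ready_mask &= ~(1u << current);   /* we are no longer ready */
    if (resched >= 0)
        ready_mask |= 1u << resched;      /* wake the other task */
    current = msb(ready_mask);            /* highest priority ready task */
    return prev;
}
```

If the returned previous task equals the new current task, nothing changed and the handler can return straight away; otherwise the assembly context switch described next kicks in.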
Remember, return values go into R0 and R1. So now we're back; we had only one return value, a uint32, so that goes into R0. We load the current task pointer into R3, we dereference it into R1, and we compare with R0, which was the return value of the function from before; if they're the same, we don't have to do anything, so we just return from our exception handler. If they're not the same, that means we need to context switch; that's when a new task needs to run that wasn't running before, so we need to swap out the context before going back to thread level. All right, this one is the most complicated part; if you can follow this, then everything else is easy. We get the process stack pointer into register R2. Then we move our current stack pointer, which, remember, we're in handler mode, so it will be the main stack pointer, into R3. Then we move R2, which is our process stack pointer, into the stack pointer, so we switch, in handler mode, our stack pointer to point at the stack of the old task, the one that we're going to deschedule now. We push R4 through R7 to remember them on the process stack of the old task. Now, as I said before, we're size limited because of the Thumb instructions, so we can't directly push R8 through R11; we need to copy R8 through R11 into R4 through R7 and then push those also onto the old process stack. Okay, so now we're halfway done. We copy the stack pointer into R2; we copy R3, which, remember, held the main stack pointer, back into the stack pointer, so now our stack pointer is again the main stack pointer, as normal for handler mode. Now we store R2, the old task's stack pointer, into the old task descriptor, and that works out because, remember, in C the first element in a struct has the same address as the struct itself, right? So we store the old task's stack pointer into the old task descriptor. Then we load the new stack pointer from R1, which points to the next task, and then we un-stack
all those registers: first the copies of R8 through R11, which we pop into R4 through R7 and move back up into R8 through R11, and then R4 through R7 themselves, because it's the reverse order of what we pushed before. And now we've switched the context: all the registers look like they did when that task last stopped running. Then we just make that stack our process stack pointer, we return back to the handler, and there we're done; we pop our program counter, which was backed up, and we're back in the next task. So that seemed pretty complicated, but if you sit down with a piece of paper (I know this was probably really fast when I went through it) it's not all that complicated. So how do we get this thing started? Because initially we need at least one task to switch from. The way they do it is they have a scratchpad, which is just a 17 times 4 byte array; 4 because every register is 4 bytes, 17 for one full context. We make that our R2, and so on, and then we go through those same steps to finally call our scheduler, which will detect that there's a new task to run, and we switch to one of the real tasks. All right, tasks again; we've seen that graphic before. So now we've figured out how we can switch contexts, but we also need a way to actually make those changes: we need to be able to make a task ready. So how do we make a task ready? Well, we use a function called task_set_event. It takes a task ID and an event. There are two different cases to look at: either the thing that changes that task's status originated in IRQ context, or it originated in task context. The task context case is the easy one, because we just atomically set the flag in the receiver task and then we call the scheduler; that's the bottom one, a mutex for example. The more complicated one is when you originate in exception context. The problem is, while you handle one interrupt you might get another one, and so on, and every
single time the priorities might change, because each of those exceptions might unblock a different task. So you want to make sure to only call the scheduler when you're done handling the interrupts, and there's a nice exception to do exactly that, which is PendSV, which is something like a software interrupt, basically. The way you make that work is you set its priority low enough that all the other interrupts have a higher priority, so once you're done handling all the other interrupts, PendSV gets executed, still in handler mode, and calls the scheduler; and that's how you set an event from interrupt context. Then there's a wait event, which is the opposite: a task needs to wait for something, and it takes a timeout. It must not be called in interrupt context. The way it works is it arms a timer (we'll see later how arming timers works), and then it basically checks: if my events are zero, then we deschedule ourselves, by calling the scheduler with the deschedule flag set, and we reschedule whatever was passed in with resched. If the timer expires, that sets a timeout event and it returns; if not, then the actual event gets returned eventually. And here you can see how you can use that, wrapped up in a helper function: basically you have your timeout and you have the task that you want to reschedule, which would be the idle task in that case. So if we go back now, we have basically a way to go from "waiting for an event" to ready, and we have a way from running to "waiting for an event", so now we're almost done, right? Another example would be implementing usleep, which really sleeps instead of busy waiting: it puts the task to sleep. The way it works is you read your hardware clock source and then you wait for the events, and you remember all the events you get, and then you check the event flag mask for whether there was a timer event in it, and whether you didn't have a timeout yet, meaning the time that you actually
waited is smaller than the timeout you were supposed to wait; you just keep doing that, and you OR the events together, because they're flags and you need to remember all the other events that also happened. Then at the end you mask out the timer event, because you don't really care about it, and return all the others. All right, cool. So one thing I used and didn't talk about is how atomic operations work. The Cortex-M0 doesn't have exclusive loads or stores, so all we really do is disable interrupts while we do our thing, and that's basically how it works: disable interrupts, load whatever thing you want to modify, do your operation on it, store it back, turn the interrupts back on. All right, timers are the only thing that's really missing to make this whole thing work. The way this works in Chromium EC is they use one of the hardware timers, which is microcontroller specific, so that's not a Cortex-M0 level solution; it could also be a SysTick, but in this case it's nice because they use a timer that also has a compare unit. Each task can use a timer by calling timer_arm: you pass in a timestamp and your task ID. So basically there's an array of deadlines, and every time there's an interrupt, we compare the time that we read from the timer with all the deadlines and see which ones have expired, and for the expired ones we can set the timer flag. And yeah, you can also cancel timers if you no longer need them, and yeah, that's basically it. All right, so that was super fast, and maybe I was faster than I should have been. Yeah, I have like half an hour left. Anyone, any questions so far? I bet this wasn't all super clear, or maybe I was wrong about things, so please shout if I said things wrong. You could, so, for non-Cortex-M0 parts, for an M4 or something, which they also support, you could use the MPU; they have some targets for Chromium EC that use the MPU, I think, at least I saw some configuration flags. Also, of course, if you have a
microcontroller that has floating point, then whenever you task switch you also have to take care of all the floating point state, which I left out on purpose because it would make this even longer and more complex. Yeah, any other questions? Yes? In that case there's a single hardware timer, and then per task you can have one timer. The way this works is you have basically an array with deadlines, and you always set the compare. As I said, the hardware timer has a normal counter that will overflow; if it's 16 bit, for example, running at 1 MHz, say 65 milliseconds would always be your overflow period. And then you also have the case where you want a short duration, say a timeout of a couple of microseconds; then you always set your timer's compare unit to the closest deadline and use that to trigger the process-timers function, and inside that process-timers function you always set the compare to the next deadline, or, if there's no deadline, you set it to 0xFFFF, which will just wait for the full overflow. All right, yes? No, the Cortex-M0 doesn't have a cache, to my knowledge; the question was whether I have to set up caches. I just remembered I have to repeat the questions. Yes, but you could do that if you know that doesn't happen; the question was whether you could just tell your compiler to not use the extra registers. I guess you could; I'm not a compiler person, I don't know. Anyone else? Yes? I looked around a bit; I mean, I basically picked this one, if you saw my talk from last year, because I thought there would be already working U-Boot and kernel integration which lets me talk to it, which turned out, at least the U-Boot part, to be broken, so I had to fix that first. But I think FreeRTOS is more generic; this one was a good choice for me because it was basically meant to be used for what I needed. I mean, FreeRTOS has fancier mutexes, to my understanding; for example, they do priority inheritance; the mutexes here
don't, so you could create bad situations if you're not careful, but this one seems to be very well tailored to just being, you know, board level control, for fans, for LEDs. Have I compared it to others? No, I haven't compared. All right, other stuff that I've been working on for this: I have my own little branch of OpenOCD that has thread awareness; it kind of works, it needs some cleanup before I can send out the patches. I wanted to look at porting it to RISC-V, just because, and a port to MicroBlaze, because that might come in handy for another product. So yeah, questions on that; is anyone interested in having thread visibility in OpenOCD? I don't know. Yes? So the question was, at what point does an RTOS make sense versus running bare metal, just having a while loop that goes through everything. All right, that's a bit of a difficult one; it depends on the complexity of the system, I would say. It matters whether you're working all alone, without colleagues, or whether you share the work, for example; it gets easier if you have an RTOS because you can parcel out work packages between people. Power-wise it might definitely be worth it to use an RTOS, because it's really easy to get to sleep, right? You basically just have to make sure the idle task gets run at the right moment, as opposed to making sure that from every possible state you go to sleep the correct way. I just personally found it easier to write code like that; I've done bare metal firmware that doesn't have task switching before, and an RTOS makes it easier to reuse things. There are pros and cons, of course: you have a size overhead for having all these extra stacks, so if you're size constrained, that might be something you want to consider. If you have a bigger system that has an MPU, or even an MMU, it would definitely be worth considering an RTOS. So yeah, I would look at that. Yes? They have some code in there that checks the stacks, but I haven't looked at how it works exactly, but it's a valid
point: you want to make sure your stack is large enough. Yes? Well, the idle task is literally one line of code in that case, and that's asm wfi, wait for interrupt. Then you go to sleep and wait for an interrupt, because that's the only thing that could make any other task runnable, right? A change in condition that triggers an exception. That is not entirely true, though, because the microcontroller might need a certain amount of time to go to sleep, so going to sleep might not actually be the right decision, depending on how long you're going to wait in the idle task. So there's also a bit more clever way of doing that in Chromium EC, but I haven't investigated how exactly it works, because my device is plugged into the wall, so I don't actually care that much about power. Between the idle task and... in theory, yeah, you could probably take a shortcut there. Yes? That's a good point. So the question was whether the idle task actually needs a context switch. You probably wouldn't need one, because you're not using any registers in there. So the question was how much overhead you get by using Chromium EC. Okay, I can tell you for my case: the microcontroller I'm using has 128k flash and 16k SRAM, and that's enough for two full copies of the firmware. I mean, Chromium EC is probably not a good call for just any project. It's very good if you need this kind of board-level control that does power sequencing, power-button control, LEDs, fans, those kinds of things, because that's what it was designed for. Other RTOSes that, for example, let you create tasks at runtime might be more generic than Chromium EC; that one just worked really well for my use case, where I basically built a Franken-Chromebook. Okay, so if you buy a Chromebook, they have a microcontroller on there, a Cortex-M0 or M4, a Nordic, there's a bunch of them, and it's the firmware that runs on there to do power sequencing, the thing that's on when everything else is off. So when you
open your laptop, to turn everything on again, those kinds of things. And that's open source; Google open-sourced it. And how would I make sure that no low-priority task... I didn't get the last part. That is a very good point; I thought about that too. There's also the opposite case: what happens if I end up in a while(1) in my high-priority task? The scheduler is definitely not smart enough to keep track of the run times of tasks and then say, like, oh, that guy's running all the time and my low-priority task is starving, or the opposite; it doesn't do that. Yes? A tracer? I haven't looked at it. It's not for safety-critical things; it's for your laptop, sort of turning power on and off. You could certainly put in hooks that trace events in there, but if your project becomes more safety-critical, you'd probably want to look at one of the more established RTOSes where you get guarantees about things. I mean, it has basic output where it can, for example, print events, like printing timestamps in an interrupt. That's useful when doing board bring-up or something: if a power-supply rail doesn't come up, or it times out or takes too long, you get a list of when things happened. But nothing safety-critical. Anyone else?
So, with the SysTick: basically I didn't write that code, so I used what they did, because that worked. I thought about why they don't use the SysTick, and I think it's because they're doing this thing where you set the timer compare to the next deadline while you keep the timer running continuously. That gives you basically a continuous time base plus finer-grained control over shorter deadlines, in one thing. I mean, I made it run with the SysTick too: for testing purposes, and to make sure I understood what's happening, I stripped out all the assembly files into a little project and just wrote a bunch of print-debug tasks that would run around. So you can definitely run it with the SysTick, but then you need a workaround for the set-it-to-the-closest-deadline thing, unless you're okay with basically only getting whole-SysTick-cycle accuracy. So with the deadline approach you can switch tasks faster and also keep the long wrap-around tick going. So yes, the scheduler itself, I think, does exactly the same thing. I think the stacking and unstacking would look different, because you might have to take care of floating-point registers or something like that. But yeah, I think that's it. If there are no more questions: thanks.