The first thing I'd like to speak about is the queue. A queue can be thought of as a pipe, or FIFO, between different tasks in an RTOS. What can you put in the queue? You can put a number, and you can put an object of a bigger size. A number is a 32-bit value, and it can be typecast as a signed or unsigned number; it can also be used as a pointer. Depending on the queue and its usage, it's up to you to interpret the number that you receive, because it's transmitted as a union of signed, unsigned and pointer types, so you choose how you decode and use the value. The default behavior of the queue is FIFO: first in, first out. So if you put the numbers 1, 2 and 3 into the queue, they will by default be read back as 1, 2 and 3. The queue, however, allows you to change the order of the elements, because you can also send data not to the end of the queue but to the beginning. So if you have a priority message that needs to be processed first, you can place it at the beginning of the queue so that the receiving task will take it first. That's a very important feature, and it's very nicely implemented. By the way, this LIFO-style behavior can be achieved through the native FreeRTOS API using the function xQueueSendToFront. It's not exposed in the CMSIS-RTOS API, though, so if you need this functionality you can typecast the CMSIS queue ID and use the native function. There is, however, one restriction: all data sent through a given queue must be of the same type and size. If you put just a number in the queue, we can define it as a standard queue. However, if you want to put in a bigger object (a structure, an array, and so on), it has to be defined as a mail. So if you, for example, send strings through the queue, you can put them in an array and send the whole array, but the queue has to be defined with a data size of the element size multiplied by the number of elements.
The length of the queue is declared when you define the queue, so you define how many elements fit inside. The actual storage for the queue is then allocated on the heap at runtime, by malloc. So you define how many elements fit in the queue, and now, how do you operate with such a queue? There are a couple of functions, and we can see how the tasks behave when the queue is in use. Let's imagine we have one sender task and one receiver task. The sender task sends messages into the queue using the osMessagePut function. After it sends the first message with this function, the message is put into the first place in the queue. The receiver task at that moment waits for a message with the function osMessageGet. If the sender task in the meantime sends another message, it is put into the queue as well, and when the receiver task asks for a message it gets it through osMessageGet. So it extracts the first message from the queue, and if it then waits for another message, it extracts the second one. You can see the flow perfectly: first message placed, first message received, second message placed, second message received. The queue is created with osMessageCreate, given the message queue definition, so again, the definition has to be made prior to calling osMessageCreate. It can also be assigned to a specific thread, but normally there is NULL here. You put data into the queue with osMessagePut, giving the queue ID, the number to send and a timeout, and it returns an osStatus telling you whether it successfully placed the message into the queue or whether there was a timeout. If the queue is full, that is, completely full of messages, the sending function has no place to put a new message, so it can time out if no receiver takes a message out of the queue.
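Putting this together, a minimal sketch of queue creation and sending with the CMSIS-RTOS v1 API might look like the following. The names queue1 and sender_step are assumptions for illustration; in a CubeMX project the creation code is generated for you.

```c
#include "cmsis_os.h"   /* CMSIS-RTOS v1 wrapper over FreeRTOS */

/* Define a queue of 16 32-bit items; the definition must exist
 * before osMessageCreate is called. */
osMessageQDef(queue1, 16, uint32_t);
osMessageQId queue1Handle;

void app_init(void)
{
    /* Second argument is an optional owning thread; normally NULL. */
    queue1Handle = osMessageCreate(osMessageQ(queue1), NULL);
}

void sender_step(uint32_t value)
{
    /* Blocks for up to 100 ms if the queue is full; always check the
     * returned status to see whether the message was actually queued. */
    if (osMessagePut(queue1Handle, value, 100) != osOK) {
        /* queue stayed full for the whole timeout: handle the error */
    }
}
```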
On the other side, the message is received and returned through the osEvent structure. This structure contains a couple of elements, one of which is the data, which can be typecast as a pointer or a value, and there is also a status telling you whether a message was received or whether there was a timeout. So again, osMessageGet takes a timeout for reception, and if the queue stays empty longer than the timeout, that is, no sender puts data inside, the receiver times out and there was no communication. Typically, if you expect a periodic message and you time out on reception, you know the communication was cut: there may be, for example, a cut wire or some damage to the PCB. This is how simple data can be exchanged. The queue can also be deleted, with osMessageDelete. There is also a possibility to look inside the queue without removing the message. This is especially useful if you need to decide what to do with the data, or if you need to process the data later or preprocess something. You can use the function osMessagePeek, which looks at the first element in the queue and returns it, but does not remove it from the queue. That means you can check: hey, is there some data? Is this data for me or not? If not, do something else, and some other task can take it from the same queue. I didn't mention it yet, but two or more different tasks can wait for data from the same queue. Interestingly, if the tasks have the same priority, the first one that is ready to run gets the data; if the tasks have different priorities, the one with the higher priority gets it. This method can be used if, for example, you have several tasks handling web server requests. In such a case the web server can spawn several different threads, and the tasks can operate as the threads serving the data.
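The peek-then-get pattern could be sketched as follows. Note that osMessagePeek is an ST extension found in the CubeMX cmsis_os.c wrapper rather than part of the standard CMSIS-RTOS v1 API, and MY_MARKER and process() are hypothetical names for this illustration.

```c
#include "cmsis_os.h"

extern osMessageQId queue1Handle;   /* created elsewhere */

void receiver_step(void)
{
    /* Look at the head of the queue without removing it
     * (osMessagePeek: ST extension, not standard CMSIS-RTOS v1). */
    osEvent ev = osMessagePeek(queue1Handle, 0);
    if (ev.status == osEventMessage && ev.value.v == MY_MARKER) {
        /* It is for us: now actually take it out of the queue. */
        ev = osMessageGet(queue1Handle, osWaitForever);
        process(ev.value.v);        /* hypothetical handler */
    }
    /* otherwise leave it for another task waiting on the same queue */
}
```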
So if you get a new connection, you put the connection ID into the queue, and whichever task is available and waiting for it can take it. This way you can share the load between different tasks, and each task can process different things: a web page, a dynamic script, an image, JSON and so on. Very simply, if all of them wait on the same queue and you give them the new port number to reply on, they can do it. You can also check how many messages are waiting in the queue, and you can check the available space. These functions are exact opposites: one counts how many messages are inside, the other counts how many still fit in. When you get a message, it is received through the osEvent structure. You can see the structure with the status inside and a union giving you the value of the data: either a signed value, a pointer or a set of signals. There is also a definition of the source of the data, so you can even identify which queue the message came from. And we can try to launch the queue example if you are interested, so please come to CubeMX and we will expand our project. Back in our CubeMX project, we first go to Tasks and Queues and rename the two tasks to sender1 and receiver. So the first task can be called sender1, and the second task receiver. Then, in the same dialog box, we create a queue and call it queue1. The item size will be uint8_t, so one unsigned character, and we can set the queue size to 256 elements. This way we are able to hold up to 256 bytes in the queue. When you press OK it instantiates the queue, and when you generate the code later it creates the queue at startup, before launching FreeRTOS. Now, when you look into the generated code, click Generate Code and come back to Atollic. You should see that the osMessageQId type was used, and we have a new item, Q1Handle.
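The two counting functions mentioned above are, in the native FreeRTOS API, uxQueueMessagesWaiting and uxQueueSpacesAvailable. The received message itself arrives in the osEvent structure, which in cmsis_os.h (CMSIS-RTOS v1) looks roughly like this; field names may differ slightly between wrapper versions.

```c
/* Return type of osMessageGet / osMailGet, as declared in cmsis_os.h
 * (CMSIS-RTOS v1). */
typedef struct {
  osStatus status;            /* osEventMessage, osEventTimeout, ...  */
  union {
    uint32_t v;               /* message as a 32-bit value            */
    void    *p;               /* message as a pointer                 */
    int32_t  signals;         /* signal flags                         */
  } value;
  union {
    osMailQId    mail_id;     /* mail queue the event came from       */
    osMessageQId message_id;  /* message queue the event came from    */
  } def;
} osEvent;
```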
In the main code you can see that the queue is defined and created, and Q1Handle is instantiated and filled in. Now we will modify the sender task: we can printf "task one" on the screen, and in the meantime put a number into our queue with a timeout. This way the sender1 task sends the number 1 into the queue, waits one second, and repeats, so sender1 periodically sends the number 1 into the queue. When we look at the receiver task, it gets the return value from the queue, and it will wait, let's say, one or two seconds to get the data. OK, so I hope it is now a little more obvious what we should do in the receiver task. You can see that we are waiting for the message from this queue. Because the sender task sends data with a period of one second, we should wait a little longer, so here we wait for four seconds, and in the next step we print the value on the screen from the returned structure, using the value field (as a number or as a pointer). This way I am able to extract what was sent to the queue and print it to the screen. Queue operations are typically blocking ones, especially if you set a timeout or wait forever when you put data in or take data out of the queue. So if you call osMessagePut and the queue is full, that is, there is no space, osMessagePut causes the calling task to become blocked for the timeout you specify in the call. If some receiver reads from the queue and frees a slot, the sending task is woken before the timeout expires and can put its message in. Otherwise, if it times out, it returns a timeout status and the function has failed. So you have to test the returned result to see whether you were able to put the message into the queue or not.
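The two task bodies from this lab might be sketched as follows, assuming the CubeMX-generated Q1Handle and a printf retargeted to a serial console; the function names follow the usual CubeMX Start... convention.

```c
#include <stdio.h>
#include "cmsis_os.h"

extern osMessageQId Q1Handle;   /* created by CubeMX-generated code */

void StartSender1(void const *argument)
{
    for (;;) {
        printf("task one\n");
        osMessagePut(Q1Handle, 1, 100);  /* send the number 1       */
        osDelay(1000);                   /* repeat once per second  */
    }
}

void StartReceiver(void const *argument)
{
    for (;;) {
        /* Wait longer than the sender's period so we normally succeed. */
        osEvent ev = osMessageGet(Q1Handle, 4000);
        if (ev.status == osEventMessage)
            printf("received %lu\n", (unsigned long)ev.value.v);
        else
            printf("timeout: communication lost?\n");
    }
}
```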
That's for the safety of your application, to see whether the data passed through to the opposite side. The receiver's osMessageGet is blocking as well: if you need to wait for data and the queue is empty, osMessageGet blocks your task, either until a message arrives or until the timeout expires. Of course, if you pass a zero timeout, the function doesn't wait at all and just returns an event or a failure. Right now we can continue with the lab with two senders, so please come back to CubeMX and create a second sender task. It should have the same priority as the other two, and for sender2 you can effectively create a very similar thing. Right now we can change the behavior of the senders, and you can see the bodies of the two functions here. So please create a second sender task, sender2, and adapt the bodies of the two functions this way. Having two senders means that two different items get into the queue, and the receiver task has to receive them both. OK, in the next lab we can increase the priority of the receiver task. Please come back to CubeMX, double-click on the receiver, change the priority from normal to above normal, and then regenerate the code. If you increase the priority of the receiver, do you still get the expected behavior of your application? Yes. Now the receiver gets unblocked every time new data arrives, so every time one of the senders sends something, the receiver is woken immediately afterwards. Additionally, we can create a queue with a different data size. This is again a very interesting example, because with this you can send objects of a bigger size. So if we come back to CubeMX and look into the queue definition, I'd ask you to create a second queue, queue2. This one will have a queue size of 16, and the item size will be a type called data.
This will generate code that uses data as the item size, so we need a typedef for data, and this we have to write manually. So when you generate the code again with the new queue, please define, in the private defines section of the main file, the data structure with this definition: a typedef struct containing a 16-bit value and an 8-bit source, and the structure will be called data. We can also define two differently initialized variables with these contents. OK, once we have our data structure defined, you can see that when sending through the queue handle we can put in a pointer to the data that we defined, and by typecasting it to the appropriate type we can put this into the message. So technically we send a pointer to the global variable. Now, when we receive such data, instead of the data itself we receive a pointer. So in the receiver we can test whether we received the appropriate value, and for the sources we can typecast the pointer to the data pointer type, dereference it and take the value from the structure. So this is one way of passing bigger data structures, but it has a drawback, because only a pointer travels through the queue. If you want to pass bigger data with their content, passing just the pointer may not be safe, because if the original sending task starts to modify the data, it can overwrite, in the meantime, the content that you are still processing in the receiver task. So if you want to put the complete content into the queue, that is, send a snapshot and receive a snapshot while the sender starts operating on the original buffer or variable again, we can use a mail. A mail is a little different because it doesn't give you only space for a pointer; it defines a list of items in the format of a chunk of memory multiplied by a number of elements.
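The pointer-passing approach described above could be sketched like this. The data typedef matches the one defined in the lab, while the names tx, Q2Handle, send_struct and receive_struct are assumptions for illustration; the cast of a pointer to uint32_t is valid on 32-bit Cortex-M targets.

```c
#include "cmsis_os.h"

typedef struct {
    uint16_t value;
    uint8_t  source;
} data;

extern osMessageQId Q2Handle;   /* queue of 32-bit items (pointers) */

static data tx = { 1000, 1 };   /* global: must stay valid until received */

void send_struct(void)
{
    /* Only the pointer travels through the queue, not the contents. */
    osMessagePut(Q2Handle, (uint32_t)&tx, 100);
}

void receive_struct(void)
{
    osEvent ev = osMessageGet(Q2Handle, osWaitForever);
    if (ev.status == osEventMessage) {
        data *rx = (data *)ev.value.p;  /* typecast back to the struct */
        /* use rx->value, rx->source ... caution: the sender must not
         * touch tx until we are done reading it here. */
    }
}
```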
So if you, for example, want to send strings of 100 characters, you can define a mail for 10 strings of 100 characters each, or 101 with the terminator. This way you can queue up to 10 strings, but you will need more than one kilobyte of your memory. So queues and mails both have benefits. Mails take a snapshot of the content inside the mail, but you need to allocate a lot of RAM. The queues take only a pointer, but you need to keep the original content intact until the receiver has processed it. So it's your choice: either you are able to hold back new data until the receiver has processed the old data, so that you can release it, or you need a mail, where you send the full content of the message but the transmitter can use the same buffer to prepare a new message in the meantime. Speaking about mails, they have their own API, and they are likewise defined in CubeMX, in a separate dialog box or tab. You can create a mail queue with the mail queue definition. You can put a mail inside, and here there is a pointer to the mail message; because the mail queue knows how long a mail item is, it takes the content and copies it inside the mail structure. You can also free the mail, removing it completely from the mail memory pool. You can create and allocate space for a mail using osMailAlloc, giving a mail queue handle and a wait time in milliseconds, and you can also have the newly created mail cleared to zero (osMailCAlloc). The mail API is listed here, with the corresponding native RTOS API on the right side.
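The mail flow can be sketched with the CMSIS-RTOS v1 API as follows; the names mailQ, msg_t and the helper functions are assumptions for this illustration.

```c
#include <string.h>
#include "cmsis_os.h"

typedef struct {
    char text[101];        /* 100 characters plus terminator */
} msg_t;

/* Room for 10 messages; the full content lives in the mail pool. */
osMailQDef(mailQ, 10, msg_t);
osMailQId mailQHandle;

void mail_init(void)
{
    mailQHandle = osMailCreate(osMailQ(mailQ), NULL);
}

void mail_send(const char *s)
{
    /* Allocate a slot in the pool (osMailCAlloc would zero it first). */
    msg_t *m = osMailAlloc(mailQHandle, osWaitForever);
    if (m == NULL)
        return;                          /* no free slot in time */
    strncpy(m->text, s, sizeof m->text - 1);
    m->text[sizeof m->text - 1] = '\0';
    osMailPut(mailQHandle, m);           /* queue the filled slot */
    /* the sender's own buffer s is free for reuse immediately:
     * the content was copied into the pool slot */
}

void mail_receive(void)
{
    osEvent ev = osMailGet(mailQHandle, osWaitForever);
    if (ev.status == osEventMail) {
        msg_t *m = (msg_t *)ev.value.p;
        /* ... use m->text ... */
        osMailFree(mailQHandle, m);      /* return the slot to the pool */
    }
}
```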