Okay. Thank you. Thank you guys for joining us here. My name is Rodrigo, and I'm going to present zbus, a new Zephyr bus. I asked before, and I think some of you are already using it, but today I'm going to describe it in more detail. The presentation is really dense, but we are going to have another session on Thursday: if you need to go deeper into the details, I'm going to give a tutorial about zbus there. So this talk is about the general idea and the things we can do with it. At the beginning, when we start to work with RTOS applications, we need to make threads talk to each other. That's usual, and we have several topologies of communication between threads. One-to-one is the first one we start with, and we have several kinds of kernel objects to help us: FIFOs, LIFOs, stacks, message queues, and so forth. But when we need one thread to talk to several threads, we don't have a ready-made object; in that scenario we already lack this kind of thing. A mailbox may help you with that, but it's not so easy, because it cannot deliver one-to-many simultaneously. You have to do it manually, sending to each thread you need, and you have to do a lot of work yourself. And when we think about many-to-many, it's even harder, because we have so many threads and we have to connect each of them; sometimes we need full communication from A to B and from B to A. If you try to do that with message queues, for example, you need a bunch of them, so it's hard to maintain and hard to scale. So the idea at that time was to develop a bus to help developers make threads talk to each other. We didn't have any tool for doing that, so we needed to create the bus.
And the bus could solve the one-to-one, one-to-many, and many-to-many communication topologies. Is that good? Now, we have seen that on Linux, Windows, and macOS, for example, there are applications and the applications can talk to each other. But here we are talking about embedded systems, and we have different kinds of challenges. When I say embedded systems, I mean really constrained devices: Zephyr can run on devices with, I don't know, 16 kilobytes of memory, where we cannot run Linux. We have processing limitations as well; we cannot just poll really fast to guarantee that everything runs, because we have a bunch of battery-powered devices. So we need to think about that kind of scenario as well: energy is part of it, and we need to focus on it too. I started zbus as part of my PhD. I work at a university, I am a professor, but I am also part of an innovation center, and I have a company where we work a lot with embedded systems. We did this internally in the past, and we came out with a bus that could make threads talk to each other a bit like MQTT, where you have publishers and you have subscribers which receive notifications. But to make it like MQTT, we would need topics, and topics usually rely on strings, which is not easy or efficient for embedded systems. So here we have a kind of pub/sub using channels as addresses; it's more efficient for our cases. And we have two kinds of observers. First, the subscribers: we usually see those in MQTT and other communication protocols. In zbus, subscribers are asynchronous entities that receive notifications from the bus when a channel changes. When someone publishes to a channel, the channel notifies the subscriber asynchronously.
And we also have synchronous observers, because we have two kinds of observers: subscribers and listeners. Listeners are synchronous, and they are basically callbacks: when a channel changes, the bus calls the callbacks. You just make the listener observe a channel and that's it; we are going to talk about it in more detail. A good aspect is that subscribers are decoupled in terms of code: in time, space, and synchronization. Roughly speaking, decoupled in time means that when a publishing action occurs, the subscribing thread doesn't need to execute at the same time, because subscribers are asynchronous. Decoupled in space means they don't need to know each other: the subscriber and the publisher don't need to know about each other. And decoupled in synchronization means the subscriber can run other work besides the communication; it doesn't need to block waiting for it, it can do other things and then come back and check whether there is something there. On the other hand, the listeners are coupled in time and synchronization: the callbacks run immediately, as soon as possible, when some thread publishes to a channel. So when you publish to a channel, the callbacks are executed; they are coupled in time and synchronization. Why am I talking about coupling and decoupling? Because we are talking about software engineering practices, and these kinds of decoupling give us a lot of benefits; we'll come back to that from a software perspective. And we have the available actions on zbus. There is the possibility of reading a channel: any thread can read a channel, not only subscribers or listeners. Any thread of the system, even a sensor or system thread, can read a channel. And a thread can publish to a channel.
The publishing action causes the execution of the listeners and a put on each subscriber's notification queue. So when you publish to a channel, the publication executes the listeners and enqueues the notifications for the subscribers; we'll go into detail on that. The notify action is almost the same as publish, but it doesn't require changing the data on the channel. So when you publish to a channel and you don't need to change the data, for example when you are sending an event, not information, just telling that some event occurred, you can notify instead of publish. It does the same thing, but the message doesn't change. The message is the content of the channel, which you use to transfer data between producers and consumers. And here is a kind of illustration. It's good to know how zbus works underneath, because sometimes we can improve our code just by knowing that. We have some steps here. First, for example, thread one wants to publish to channel B. The first step is to lock the channel; we use a mutex for that, and for good reasons we use a mutex instead of a semaphore. After that, thread one replaces the message. In zbus we have shared memory: we don't send the message from the producer to the consumer, we just set it in shared memory and the consumer goes there and consumes the data. Both are looking at the same part of memory; we are not replicating it. This is the approach, and it's the fastest one we could reach. So first, lock the channel; second, replace the message; third, execute the listeners; and then enqueue the notification to thread two. In the example I drew a queue there: the notification queue is in fact a message queue. So when you are running inside a thread and you are a subscriber, you just receive a notification.
The listeners are executed, but the subscriber just receives the notification, and with the notification it can go there and take the information. So in zbus we don't have an event dispatcher, in fact; we don't have a central entity doing this kind of thing. It's really different: all the work we discussed here is executed by the publisher. This is why we call it the Virtual Distributed Event Dispatcher: the event dispatcher logic runs distributed among the publishers, and there is in fact no event dispatcher thread. This is why we call it virtual. I call it the VDED; really weird name, but the VDED brings us a lot of advantages. First, talking about the distribution of an event, or a message in fact: take this scenario, where thread one is publishing to channel A, and all the other threads and listeners are observing that channel. When thread one publishes to channel A, the message needs to reach all of them. But the subscribers receive just a notification that the channel has changed, not the message itself, while the listeners receive the message. I don't know if you have already seen the documentation; I put this diagram in the last version of Zephyr. First, we can see that the publication happens in thread one's context. At point A we have some action that thread one is running. After that, at point B, we lock channel A. After that, we replace the message and execute the listeners, listener one and listener two. After that, we notify thread two, thread three, and thread four. Someone could ask: why doesn't thread two preempt the execution and read the message, since thread two has higher priority? If you take a look at the arrows, the highest-priority thread is thread four, and T1 is the lowest one.
But when you notify the threads, since we use a mutex to lock channels, Zephyr's priority-inheritance feature kicks in: thread one temporarily inherits the priority of thread two, then thread three, then thread four, as they block on the channel. This is why we try to finish the publishing process as soon as possible, but without interfering with higher-priority contexts. We are going to talk about that. Next, when the publishing action finishes, the subscribers can act: they received the notification, so they can do something. In this scenario, thread four goes there and reads the channel: it locks the channel, copies the message, and unlocks the channel, then does its work; then thread three, then thread two, and so on. So after this sequence, the message that started at T1 is now on L1, L2, T4, T3, and T2: we shared the message with all the interested observers. Is that okay? A big advantage of the VDED is that different contexts can run without interfering with each other. For example, if thread three and thread four have lower priority than thread one and thread two, and they are communicating via channel B, they can be preempted by thread one to make a higher-priority communication happen. That is possible because the event dispatcher is distributed, not central. If we had a single event-dispatcher thread, we could get a kind of priority inversion, because low-priority thread communication could prevent high-priority thread communication from happening. So the distributed event dispatcher is good for that as well, and it's really interesting. Another available action is more for advanced use; don't use it if you are not sure what you are doing. Claim and finish are a pair: with claim, you lock the channel.
And after that, you can access the metadata of the channel. We have a lot of metadata there: the observers, the message itself, and a user-data pointer, so you can attach your own metadata to the channel. With this you can do a lot of things with the channel: claim, change it, and finish. And I guess that's the last action. Now, here is a really common application, mainly for IoT, where we have, for example, a trash-can level monitoring system, or water level, or room temperature: an IoT application. We can structure it like this; you could do it differently, but this is one kind of solution. Here we have a timer: from time to time it triggers and publishes to the Start Trigger channel. Since the sensor thread is an observer, it receives the notification and starts to fetch the sensors. It fetches the sensor data and then publishes to the Sensor Data channel; obviously, the name is great. After that, the core thread, which is an observer of Sensor Data, receives the notification, gets the information, and does some math over it: aggregates the data, makes a box plot, I don't know, and then publishes the result to the Payload channel. The LoRa thread, as an observer of Payload, gets the data and sends it to the internet; when the transaction is done, it publishes to the Transmission Done channel. And observing both channels, Start Trigger and Transmission Done, we have the blink callback. So when the timer triggers, everything works as a kind of chain of actions. But is it good? Why? I have four channels and three threads. A good thing here is that we can make changes without changing the existing code. For example, say I would like to add a button to the system to initiate the sequence, instead of only the timer: I just need to add the button, and everything else keeps the same code.
Say I would like to store all the payloads on the system; in the future, I don't know, the government needs that for some reason. I just need to add another subscriber to the Payload channel; I don't need to change anything, just add it. I would like to expose the stored data via Bluetooth? I could do that, just by adding another observer. I would like to do some debugging, for example to see whether Transmission Done is actually happening? That's possible without changing anything. I would like to insert a mock to test my system better, to check whether the core thread is working properly, because when it receives, I don't know, some particular set of data it doesn't respond properly and I want to see what's happening? I just need to add the mock. So it gives us a lot of flexibility and a really interesting power to change things in our code. Is that okay? Okay. And, for instance, suppose I don't have LoRaWAN coverage and I would like to use NB-IoT: just remove that module and add another one. If you do it through this kind of interface, you don't need to change the LoRa module itself; you can reuse it in other code, in other products you have. You just add a new module, and you can reuse it in other products, because that module works against the Payload and Transmission Done channels. And suppose in my product I would like to remove the timer: just remove the timer, the button will do the work. Okay, now some usage considerations. It's not the end of the presentation; I have a lot of slides, but I think it's good to finish one part before talking about the next. zbus promotes event-driven architecture, which is really good for embedded system software because of battery-powered devices: we can go into a really deep sleep mode and then react to things using interrupts and the like.
In this way, using zbus we have a unified way to make threads talk: for mainly all the communications, you can use zbus. There are some cases where I would not; I will talk about that. We have code decoupling, which we talked about, and this is really amazing because we can reuse the code; it promotes reuse. In a bare-metal system, for example, if you don't design it really carefully, you end up with coupled code, because all the parts talk to each other directly, and it's really hard to avoid. With an RTOS you can do better; things are mostly separated. But when you have a message queue going from one thread to another thread, and they need to know each other, you are creating a kind of coupling: if you want to change one part, you need to change the other part as well. That's not good; when you are trying to maintain the code in the future, or add some functionality, it's not easy with coupled code. So zbus promotes reuse: when you have something well tested and you would like to reuse it in different kinds of products, it's really straightforward. And since you are using Zephyr, which already gives you an awesome layer of abstraction over drivers and devices, it gets even better. zbus also increases the testability of the system by increasing controllability and observability. What am I trying to tell you? In this case, you can add a mock: in the example, the sensor thread sends data to Sensor Data, but if you would like to check the core thread, you could replace the sensor with a mock and send the data from there. You can inject data, replacing threads, and observe what the module outputs. That's really interesting when you are trying to test the system, because you can replace parts.
I don't have the sensor yet? No problem: you can add a mock faking the sensor, and the system keeps going, so you can continue the implementation until the hardware arrives, or something like that. So you can observe and control the system much better. And it's really extensible: if you use claim and finish together with user data, you can build a lot of things on top of zbus. Think of it almost like sockets and TCP: TCP uses sockets to build something really interesting. We can do the same using zbus; zbus is the foundation. It's really simple and small, although sometimes, when you are using it, you may think some parts are a little hard. I tried my best to make it easy to use, but we have a lot of constraints, and those constraints make us do things in a different way, okay? Some cons: we have too many possibilities. We have listeners and subscribers; we can subscribe to a channel statically or dynamically; there are a lot of details I didn't show here, because we are going to show them in the next presentation on Thursday. But I submitted a lot of samples and documentation; I don't know if you have read it. It's really complete, with a lot of details, and I tried to show the community this kind of thing. Also, it's not for streaming. If you are doing streaming over zbus, maybe you are increasing the latency of the communication: if you have a sensor that produces a reading every few milliseconds and publishes it to a channel, maybe you are going to overload the bus. If you do it with the right priorities and everything else, maybe it will work for you, but in most cases use pipes, message queues, or something like that for intensive byte streams. And there is no guarantee that subscribers receive the message.
Sorry, they will receive the message, but sometimes you can lose a message. How? It's like this: if you have a producer producing really fast and a consumer consuming slowly, the notification gets to the consumer, but when the consumer goes there, the message has already changed. So maybe you get duplications, or losses. I didn't see any losses, in fact; we have a benchmark, and we don't see losses, but duplications are possible if you do things in the wrong way, okay? Now, the zbus feature backlog. Guys, I'm going to talk almost all the time, with only a little Q&A time, because we have several days and we are going to have another session to discuss zbus. We have a kind of feature backlog, and I would like to invite you to talk about it in the zbus Discord channel. But I am going to go through it, because we need to know where to go: we have zbus, but what does the community need? First, zbus async APIs are on our radar, because you cannot call publish, or any zbus API, inside an ISR. So I am planning to implement a kind of async set of APIs to make that possible. With that, we would avoid workarounds for ISRs, and we could have a dedicated zbus thread for it: a kind of executor that receives the attempt to do a publish or a read, for example, queues it, and then executes it, and we could control that with whatever priority and stack size we wish. This is a really interesting thing we need to think about. Next, the omni-subscriber, to help us extend zbus: an omni-subscriber is just a subscriber that observes all the channels. It can help us extend zbus's features, because sometimes we need a subscriber that sees all the events, all the notifications, and today we would need to subscribe to every channel.
It's a sugar feature, right? I would also like to do some zbus integrations with other subsystems. The input subsystem is one of them; I guess it's a really good match with zbus: when you press a key, you could receive an event on a zbus channel. That's a really good thing to do, along with adding some samples for Bluetooth, sensors, FSM, and other parts of the system that can work well with zbus. And now a really tricky one: zbus for multicore. In this case, I would like to have a clone of zbus on different cores: we have a zbus here and a clone on a different core, and both are synchronized by an IPC service, for example. With that, you could see the application I showed you working across different cores. That would be interesting. I don't know if you have already worked with multicore, but it's not so easy. Nowadays we have OpenAMP helping us, and we have a lot of initiatives, but it's not so straightforward, and we need to do a lot of work, okay? And multi-target: after we solve multicore, we can go to multi-target. You could have different SoCs talking to each other with zbus clones, and the transmission interface could be Bluetooth, or serial, or the internet, or whatever. That would be really good. And the last one is a zbus desktop version. Why? It's interesting because you could develop a model to work with zbus: maybe you have an AI model that generates something you want to test on your embedded device. You could implement it in Python and make it work with the core thread that's already implemented on the device. Or use Rust, or MATLAB, or something you can connect to the bus on the computer, so the computer and the embedded zbus work together. It would be really interesting to see that working. If the community likes it, we can start to talk about it.
So I'd like to discuss this more; please, if you have any question about the roadmap, or something you would or wouldn't like to see in zbus, go to the Zephyr zbus Discord channel and talk to us. And if you have any doubt about zbus as well, go to the zbus channel; I'm answering a lot there. Please read the documentation first. We have a lot more to talk about, but I'm going to save the tips and tricks for the next talk, okay? Now I'll open it to questions and answers; we have three minutes and twenty seconds. Any questions? Okay. The question is who is responsible for unlocking the channel and for allocating the data. Everything is done when you define a channel: the allocation happens at channel definition time. And the lock/unlock happens during the publish: you just call zbus_chan_pub and everything is done inside it; you don't need to worry about it. The data is not freed; in fact, the data is persistent. In communication we have transient and persistent styles: transient is like when I talk to you; after that, the message is gone. Persistent is when I publish a message and the message is still there: any part of the system can go and read it. So the message, the shared memory, is static, in fact. I have a fairly long question from an online attendee that was asked a while ago. Okay, go ahead, I'll relay it now. "Good to see zbus; I am a firm believer in decoupling threads and avoiding callbacks where possible. I've implemented something similar on one of our FreeRTOS-based platforms using queues and callbacks. However, I found that to allow interrupt handlers to use the bus, I needed a dispatcher queue, a thread, and a set of APIs that can be called from ISR contexts to post messages. Does zbus support posting from an ISR, whereby the listener callbacks run in thread context?"
I'm going to have to read that; it was so long I got lost, so could you ask it again in a moment? Actually, I'll talk to Sam on the Discord channel. Sam, please go there and we can talk later. Is there another question here? Go ahead. Okay, it readdresses the scenario I mentioned: the question is about the producer producing more often than the consumer is consuming; what to do? Yes, you can solve that using listeners. In fact, you can use a listener in conjunction with a message queue, for example. Yes, that is exactly the solution; I was thinking I would solve it in a different way, but it's hard to solve all the community's needs, and this way you can solve it yourself. You don't need it for all the channels; only some channels need it. So: a listener together with a message queue. Done. Okay, another question? Yes. The question is about the difference between the Nordic Event Manager and the zbus implementation. They use message passing for the message transmission, and they allocate memory dynamically, so the memory is gone afterwards. Oh, we are out of time. We can discuss more; guys, if you want to talk, I'll be here until half past. I mean, I can talk more. Thank you.