Okay, so hi everyone, my name is Aleksandar Rutović and I'm going to talk about my experience with developing an open-source RTLS, which I built with Zephyr. So I'll start off with a few words about me. I'm a recent graduate of computer science at AGH UST in Kraków and a software engineer at AVSystem for nearly two years. AVSystem specializes in a couple of fields, including IoT device management, where we provide solutions based on the Lightweight Machine-to-Machine protocol, and I myself work on Anjay, a platform-agnostic implementation of a LwM2M client, and also on its ports to various SDKs and RTOSes, including Zephyr. As for my experience with Zephyr, it also started at AVSystem. I didn't even know that Zephyr exists before joining the company as an intern, and at the same time it was also the very first thing I worked on there. My first experience was really good; from day one I really enjoyed using Zephyr, and from that moment I use it in pretty much all of my embedded or IoT projects after hours, including the one I will be talking about today. So what's the plan for the talk? I will give you a short introduction to real-time locating systems and what they're used for, then I'll talk about what ultra-wideband is and how you can use UWB to build highly accurate RTLSs. After that I will finally introduce the actual project I have worked on, and the motivation for it, and then I will talk about the implementation and my experience with using Zephyr for this project. More specifically, I would like to talk about how Zephyr helped me develop the solution quickly in a way that is quite generic and hardware independent. I will also tell you about some things that didn't work so well, and I will share many tips and observations from my point of view, that is, from a fairly new user. So let's start with a general introduction to real-time locating systems and ultra-wideband.
So generally speaking, RTLSs are used to track locations of some assets, like vehicles, tools or workforce, in real time, and it's usually assumed that the location data needs to be sent to some central server. Also, when we talk about real-time locating systems we usually mean indoor settings like warehouses or manufacturing plants. So, for example, GPS trackers are immediately out of the discussion, because they just don't work there. To make things clear, I'll start with a real-life example. In McDonald's in Poland, and probably in many other countries, there's optional table service, and one way to order food to your table is to go to a kiosk, where you pick up one of these tent-shaped devices, enter its number, and bring the device to your table. Now, these things have a secret: each is also a locator, which uses Bluetooth to find its location. Thanks to that, the staff doesn't have to look for you all across the venue. So how do they work? There are many options, but usually it's like this: there's a couple of static devices called anchors, and the tent-like device, which I will call a tag, measures the strength of the signal coming from these anchors, or vice versa. Now, because we know where exactly the anchors are set up, and we know the signal strength measured by the tag to each of them, we can calculate the approximate location. The same principle of using signal strength to locate things can be used with any other wireless technology, like Wi-Fi for example. In general, these systems are pretty cheap and easy to build, and they also work in indoor settings, as opposed to GPS trackers. The accuracy, at least with Bluetooth, is up to one meter, which is pretty good for some use cases, but there are also a few problems. First, the measured signal strength is affected not only by the distance between the devices but also by possible signal reflections, attenuation, etc.
And these systems seem to behave quite poorly in non-line-of-sight situations, where there's some kind of obstruction between the two devices. This obviously negatively affects the accuracy of the system. Secondly, there are also many use cases where that maximum one-meter accuracy just isn't enough, so what else can we use? Say hello to ultra-wideband. It's a wireless communication technology that has been around for a few years, and recently it has started to become more and more popular, also in consumer electronics. For instance, all iPhones since the 11 have a UWB chip. It's also a part of 802.15.4 now; it's one of many PHYs defined in the spec. What sets UWB apart from other radio technologies are its distinctive physical properties that enable precise distance measurements, even in situations where there is no direct line of sight, and this makes UWB ideal for constructing highly accurate real-time locating systems. This distance measurement capability is already being used in some widely available products. One example is the Apple AirTag. It's a tracker which you can attach to your personal belongings, and ultra-wideband there is used to find the exact distance between your phone and the lost item. Another example is BMW Digital Key Plus, which is an app that turns your smartphone into a keyless entry system, and the best thing about it is that it's immune to relay attacks, because it measures the actual distance between the car and your phone, which makes it much safer. So how does this work? Well, generally speaking, UWB works on very high frequencies, between 3 and 10 gigahertz, and it communicates with short pulses occupying at least 500 megahertz of bandwidth, which is a lot compared to other radios. Because these pulses are so short, 2 nanoseconds or less actually, we can measure the reception time with high accuracy, and we are also able to detect reflected signals much more easily.
From that we can calculate the time of flight of a message, which multiplied by the speed of light gives us the distance between the devices, and from several distance measurements to anchors we can calculate the position. So the simplest distance measurement method, called single-sided two-way ranging, is the following. The initiator sends a message to the responder, recording the transmission time. The responder receives the message and, after a particular delay, sends the response. Then the initiator receives the response and records the reception time. The difference between TX and RX times on the initiator is the round-trip time. From that we subtract the responder's delay, divide it by 2, and we get the time of flight. And finally, that time of flight gets multiplied by the speed of light, giving us the distance. In practice, with this method and some additional magic that happens underneath, like clock drift correction and so on, we get about 20 centimeters of accuracy, but with additional processing and antenna calibration it can be improved even to 2 centimeters. So how do we get from these distance measurements to a position? The process is called true-range multilateration. There are actually many algorithms, but I will explain just the common principle. I'm also showing the 2D case here to make things simpler. So by measuring the distance to an anchor we know that the tag is located somewhere on the circle around the anchor, where the circle's radius is equal to the distance. And by adding another measurement we can see that the circles intersect, giving us two possible solutions. So we add one more measurement, and now we know where the tag is located. For the 3-dimensional case it's a little bit more complicated, because you are intersecting spheres instead of circles, and as you can see, three measurements do not give you a unique solution yet. So you need to have at least four anchors.
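The SS-TWR arithmetic just described can be sketched in a few lines of C. This is a minimal illustration, not the actual HyperRTLS code: the function name is made up, and the device-time-unit constant follows the DW1000 convention of roughly 15.65 picoseconds per tick as an assumption:

```c
#include <stdint.h>

/* DW1000-style device time unit: 1 / (128 * 499.2 MHz) ~= 15.65 ps */
#define DWT_TIME_UNIT_S (1.0 / 63897600000.0)
/* Approximate speed of light in air, m/s */
#define SPEED_OF_LIGHT_M_S 299702547.0

/* tx_ts: initiator's transmission timestamp, rx_ts: initiator's
 * reception timestamp, resp_delay: the responder's known turnaround
 * delay; all in device time units. Implements
 *   ToF = (RTT - delay) / 2, distance = ToF * c. */
double ss_twr_distance_m(uint64_t tx_ts, uint64_t rx_ts,
                         uint64_t resp_delay)
{
    double round_trip_ticks = (double)(rx_ts - tx_ts);
    double tof_ticks = (round_trip_ticks - (double)resp_delay) / 2.0;
    return tof_ticks * DWT_TIME_UNIT_S * SPEED_OF_LIGHT_M_S;
}
```

For a feel of the scale: one device tick of flight corresponds to under half a centimeter, and about 2132 ticks to roughly ten meters, which is why such short pulses make centimeter-level accuracy plausible.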
Since UWB-based RTLSs have such great accuracy, they not only improve the usual use cases of the RTL systems I discussed earlier, but also enable some new ones. For instance, such a system could be used for the prevention of accidents. Imagine a system where the workforce and the heavy machines are monitored, and if anyone comes close to a heavy machine while it is operating, it triggers some kind of alarm. It also enables various cases of asset tracking in factories or warehouses. Thanks to that, it's possible to find these assets much quicker when they are needed, and also, by tracking the historical usage and movement patterns of these assets, you could optimize the way they are used and placed around the site. And there's one I also came up with: think of smart shopping carts with a navigation system, which could scan your shopping list and navigate you along the shortest path around the store to grab all the things you need. At first I thought about it more as a joke, but then I realized that this could actually be useful. Just please don't patent it, because maybe I'll try to make one someday. Okay, so we're done with the introduction to the topic, so I think I can finally show what I've been working on for some time. The project is called HyperRTLS. It's an open-source UWB-based RTLS that I co-developed with my friend Sebastian Czerpanski for our engineering thesis at AGH UST. At this point I would also like to thank Sebastian for his collaboration and the idea for the system, and Professor Tomasz Shidwo for the supervision. The whole project is hosted on GitHub, and in case you are interested, you will not only find the application sources there, but also the full text of our thesis. So the solution consists of two major parts. The first is a set of Zephyr-based apps which I wrote for tags and anchors. And these apps by default target the Decawave MDEK1001 DevKit,
I think the most popular UWB DevKit on the market, and that DevKit runs on an nRF52 SoC with a DW1000 UWB module. Thanks to Zephyr, these apps actually have no direct dependencies on this exact board, so they are easily portable to any other target that has BLE and a DW1000 chip connected. Additionally, there's also a gateway app requiring a BLE radio and some IP stack, but I will tell you more about that later. The other part is the backend software developed by Sebastian, which includes a Node.js app with a PostgreSQL database and a Mosquitto MQTT broker. This app serves a REST API that is used for managing individual RTLS deployments and for retrieving the location data. That REST API is supposed to be used by external applications that run the business logic of some potential end product. Additionally, there's also an example app using that API, which I will show you shortly. So this is the general data flow slash architecture: the location data is transported from the tags over BLE to the gateway, which communicates with the broker over MQTT, which then talks to the app that exposes the REST API for potential external services. So what was our motivation for the project? Well, frankly speaking, we needed something for our engineering thesis, right? That's reason number one. But we chose this problem because apparently there are pretty much no open-source systems like this, which is definitely a gap in the market, since there's plenty of companies which sell such systems for a lot of money. What's worse, they aren't cheap, which makes the technology inaccessible for makers and hobbyists, while the hardware itself for these kinds of systems is rather inexpensive. We are talking about 25 bucks for a device that's ready to go. What's more, the dev kits come with a library that can be used to implement an RTLS, but apparently it's distributed as a blob, which is a huge shame, because it makes it pretty much useless for learning purposes.
So before we get into the next part, I'd like to present a quick recording of the example app. On the video we will see the view from the camera and the app. The app has a 3D model of the room where I set up the system. On the view there are also dots representing the tag and anchors: the tag is red and the anchors are blue. I know that I look quite funny walking with the thing on top of my head, but anyway, you can see that the system is able to catch even the slightest movement in all three dimensions, and if you compare that to what you see on the camera, you can say that it works pretty nicely. Now, let's finally talk about the implementation and the neat features of Zephyr that allowed me to get this project working quite fast. So let's talk first about the gateway. One of the problems we had to solve was getting the location data from the tags to the MQTT broker, and since there is no IP stack on the board we used, we needed to find a different way. So we figured that since every pair of neighboring anchors is always in proximity, due to the way you need to place them to make UWB ranging work, it makes perfect sense to use some kind of mesh networking, where the anchors would be used as the backbone made of relays. OpenThread would probably be the best option, since we would be able to talk to the MQTT broker directly, but the SoC we used didn't support it, so we went with Bluetooth Mesh. We obviously can't communicate with the broker directly using Bluetooth Mesh, right? So hence the need for the gateway that would translate messages from Bluetooth to MQTT and vice versa. So how did we make the gateway? At first I didn't even think about using Zephyr for it; the idea was to use some kind of Linux single-board computer, like a Raspberry Pi, which has both Wi-Fi and BLE, and write some app that would communicate with Bluetooth Mesh using D-Bus calls to BlueZ.
It was a neat idea, because we would be able to run this script also on our development PCs, but frankly speaking, either I couldn't find good materials, or both the D-Bus and BlueZ APIs are just very complex, especially for a newcomer, and what's worse, the wrappers around them, for example for Python, were also poorly documented. So I tried to find some options around Zephyr instead. The big plus is that we would use the same Bluetooth API as on the tags and anchors, but let's be honest: if we need to bring additional hardware instead of just running the app on a development PC, the development will be much more complicated compared to just hacking up some Python script. Anyway, I didn't want to give up on not having to bring any additional hardware, at least for development, so I explored the emulators in Zephyr, and it turned out that we can still get away with no additional devices. So Zephyr, as you probably know, has many virtual targets, including QEMU, which gives the closest experience to running code on a real board, or native POSIX, which compiles your Zephyr app into a Linux executable. Native POSIX is certainly much lighter to run compared to a full-blown virtual machine, but it has some strange limitations, like you can't use busy loops, so you need to watch out. The docs are pretty good on that topic. At first, emulators don't sound useful, because emulators are supposed to emulate things, right? We use them for testing and so on. But it turns out that you can proxy real peripherals to them and make the code interact with the outside world. So we went with QEMU. So how do we add internet connectivity to Zephyr on QEMU? Pretty much all options are about first setting up a TUN or TAP interface, so basically a virtual network interface either on layer two or layer three, and then somehow forwarding that interface into the emulator.
So for instance, you can use the Serial Line Internet Protocol (SLIP), which is forwarded to QEMU by opening a Unix socket on the host side, which becomes a serial device in the guest, or we can make QEMU virtualize an Intel gigabit adapter over a TAP interface, which is the preferred way for this target. In practice this is quite easy: all you need to do is set up a couple of options in Kconfig to enable the Intel gigabit driver and configure the network. Then you need to run the net-setup.sh script, which is provided in the net-tools repository of Zephyr. There's actually a bunch of scripts there for IP forwarding, for a zillion configurations, and after that you also need to configure NAT, which is a couple of additional commands; there's a script in our repo for this too. As you can see in this example, we are routing the traffic to the Wi-Fi adapter of the host. Unfortunately, at first I had some issues with getting everything to work properly: for some reason, only on QEMU, for the first couple of seconds after starting the app, communication just didn't work, so the MQTT client wasn't able to perform the DNS query. Thankfully, with Kconfig you can detect which board you are building for, so you can work around issues with specific targets like this. Well, that's one way of dealing with problems; I'm not proud of it, but at the moment it works, so I just didn't care. Setting up a separate interface also has a cool side effect: you can use Wireshark and capture the packets from the whole interface, without any other communication on your PC interfering, which is a big win in terms of debugging. It's also quite convenient for writing networking code: you can first write and test your code on an emulator like QEMU, and only then, when you're sure it works fine, port it to the target hardware. So we've dealt with one part, the internet connection, but we need to get BLE to work in QEMU too.
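For reference, the Kconfig side of this e1000 setup looks roughly like the fragment below. This is a sketch based on the Zephyr QEMU networking samples; the exact option names and the 192.0.2.x addresses expected by net-setup.sh may differ between Zephyr versions, so treat them as assumptions to verify against your tree:

```kconfig
CONFIG_NETWORKING=y
CONFIG_NET_IPV4=y
# Use the emulated Intel gigabit adapter instead of the default SLIP
CONFIG_NET_QEMU_ETHERNET=y
CONFIG_ETH_E1000=y
# Static addressing matching the host-side TAP interface set up by
# net-setup.sh
CONFIG_NET_CONFIG_SETTINGS=y
CONFIG_NET_CONFIG_MY_IPV4_ADDR="192.0.2.1"
CONFIG_NET_CONFIG_MY_IPV4_GW="192.0.2.2"
```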
So as you know, Bluetooth Low Energy is split into two layers, host and controller, and eventually there are some layers on top of that, like Bluetooth Mesh. The host and mesh implementations are just pure software, and we include them in the application. The controller, though, is completely separate, and it's linked to the host using the HCI protocol, or Host Controller Interface. That controller can be located either on the same chip as the application, which is the case for most BLE SoCs like the nRF52, or it can be moved to another chip: for instance, on the nRF9160 DK the host lives on the nRF91, while the controller is on an external chip, and these are connected using UART. Now, for our needs we also wanted a similar configuration, just a little bit more complicated. We are using the controller that's located in the host PC that we are running the gateway on, so that's probably the Intel Bluetooth adapter or whatever your PC is using, and then we use a tool called btproxy, which forwards a Linux Bluetooth socket into a Unix socket, which is then forwarded to QEMU as a serial device. In practice, well, there are no Kconfig options to show you, because Zephyr is apparently smart enough to enable all the necessary Bluetooth options when you choose QEMU. As for the commands: first, just to make sure, let's list all of the HCIs in the system with hciconfig, though you are probably going to see only one, hci0. Then you need to shut it down, because BlueZ is running a host layer which uses the controller so that you can connect Bluetooth peripherals to your computer. By the way, make sure that you don't have anything connected.
I was dumb enough to run that command while I had my headphones connected to the computer, so you need to watch out. Then you run btproxy, which forwards the controller to a Unix socket, which is then converted to a serial device on the side of the guest, and the hardest part here is probably compiling btproxy, because it's not distributed as a package, at least on Ubuntu. And that's it. Thanks to the, I would say, insane flexibility of Zephyr, you can build a functional app with Bluetooth and a network connection that runs directly on the PC. So in the case of our system, the gateway was the very same laptop I'm presenting on today. Is it lightweight? No, it runs on the damn QEMU, right? The app takes about 15 megabytes of RAM, but at least it was pretty much issue-free, besides that one small problem. As a side note, before we jump into other parts of the system: I had difficulty finding proper sample code for making a custom Bluetooth Mesh model, but thankfully I found one in the nRF Connect SDK. So NCS is a fork of Zephyr for Nordic Semiconductor products, and most APIs are compatible, which means that most code samples and documentation also apply to upstream Zephyr, which we used. It wasn't entirely straightforward, because there were some macros used there which were not present in upstream Zephyr, but I just inlined them and everything seemed to work. So the takeaway is: if you are looking for documentation or samples, especially around Bluetooth in Zephyr, the NCS docs are worth checking out, or if you are using Nordic products, then you can probably just switch completely to NCS. Okay, so we are done with the gateway. Now let's talk about getting the ultra-wideband module to work. It's actually a quite embarrassing story. So Zephyr has an 802.15.4 subsystem, and there's also a driver for the DW1000, but I didn't use it. Why? Well, at first it seemed to me that you can't use that API for retrieving the timing data, which is crucial for implementing the ranging algorithm.
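The command sequence for handing the host's controller to QEMU, roughly as described above, is sketched below. This is from memory and the Zephyr Bluetooth-on-QEMU docs rather than the project's scripts, and the socket path and BlueZ checkout layout are assumptions:

```shell
# Detach BlueZ's own host stack from the first controller
sudo hciconfig hci0 down

# From a BlueZ source checkout: expose hci0 on a Unix socket
sudo ./tools/btproxy -u -i 0

# QEMU then attaches that socket as a serial device, e.g. with
#   -serial unix:/tmp/bt-server-bredr
# which the Zephyr build system wires up as the guest's HCI transport
```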
So instead, I partially ported the original driver for STM32. Later, after doing all of that porting work and experimenting with the samples, when I finally learned which registers are used for the TX and RX timestamps, I finally understood the Zephyr driver and realized that it's possible to do everything I needed. The net_pkt interface is quite flexible and allows for configuration of many different parameters, but the functions for that are optional and you have to enable them in Kconfig. So the takeaway is: if you provide a complex interface with many layers of abstraction, it's quite hard to use without any concrete examples. Anyway, it was too late for me to change the implementation, so let's talk about my custom driver port. The porting process actually went quite smoothly, considering that I don't have much experience with writing code that interfaces directly with the hardware. I think Decawave has done quite a good job of separating out the platform-specific stuff, although the interfaces could probably be named better; names like writetospi or deca_sleep are probably not good candidates for globally exported symbols. As you can see, the primary difference between the original STM32 code and the Zephyr implementation is that you make only a single call to the SPI API: the API accepts a set of RX or TX buffers, and it also automatically handles the chip-select pin that you configure in the devicetree, which is quite convenient. One warning, though: on some platforms like STM32 you can configure automatic hardware control of the chip-select pin, instead of dedicating a GPIO that is controlled by the software. So before you try to configure any SPI peripherals in the devicetree, make sure that the chip-select control is consistent with what you configure in Kconfig. Once, when I was writing a driver for an NFC module, I wasted a lot of time hunting down this exact problem. Speaking of devicetree, I think we've all been there.
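As an illustration of the devicetree side of this, a DW1000 hooked up over SPI might look like the overlay below. This is a sketch: the bus label, GPIO numbers, and frequency are placeholders, though `decawave,dw1000` is the compatible used by the in-tree driver. The `cs-gpios` property is what makes Zephyr drive the chip select in software:

```dts
&spi1 {
    status = "okay";
    /* Software-controlled chip select; must not clash with any
     * hardware CS handling the SPI peripheral itself might do */
    cs-gpios = <&gpio0 17 GPIO_ACTIVE_LOW>;

    dw1000@0 {
        compatible = "decawave,dw1000";
        reg = <0>;
        spi-max-frequency = <8000000>;
    };
};
```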
You try to change one small thing there, and you end up reading the docs for the fifth hour to understand how it works. And because in Zephyr the devicetree configuration is translated into a million C macros that you later use in the code, it makes debugging it even harder. Obviously, that's the price we pay, right, for a seamless app experience. Take, for example, the sensor API, where you just say that you need some kind of value from a sensor at a specific node, and it works like magic once you finally get the configuration right. So I have two tips that can make your experience, well, not good, because it will never be good, but at least less painful. The first one, if you are using VS Code, is to use the nRF Connect for VS Code extension pack. What's little known is that this extension pack works not just with NCS, but with upstream Zephyr and other Zephyr-based SDKs just fine. The devicetree language support there can make your experience much, much better; you won't have to grep through Zephyr's sources anymore to understand where the definitions come from. Secondly, I'd like to recommend Martí Bolívar's talk from last year's Zephyr Developer Summit. He explains the inner workings of the devicetree there, you will learn how to decode the cryptic errors that compilers generate when something is not right, and you will also learn the so-called macrobatics that are used there. Okay, so we're done with setting up the connectivity and peripherals. Now I would like to tell you something about doing math on these devices. So as I said earlier, the tags measure the distances to anchors, and then from that we calculate the location. There are two ways you can go here. The first one is to send all of these measurements to the server, calculate the location there, and then, if needed, send it back to the tag. Or you can calculate the location on the tag and then optionally send it to the server.
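To make the on-tag computation concrete, here is a deliberately simplified 2D version of true-range multilateration in plain C: three anchors, three measured distances, the circle equations linearized by subtracting the first from the others, and the resulting 2x2 system solved with Cramer's rule. The real system solves the overdetermined 3D problem with a pseudoinverse; this sketch, with made-up names, only illustrates the principle:

```c
#include <stdbool.h>

struct point { double x, y; };

/* a: positions of three anchors, d: measured distances to each.
 * Subtracting the first circle equation
 *   (x - a0.x)^2 + (y - a0.y)^2 = d0^2
 * from the other two cancels the quadratic terms, leaving a linear
 * system A * [x y]^T = b. Returns false if the anchors are collinear
 * (no unique fix), which mirrors why anchor placement matters. */
bool trilaterate_2d(const struct point a[3], const double d[3],
                    struct point *out)
{
    double A11 = 2.0 * (a[1].x - a[0].x), A12 = 2.0 * (a[1].y - a[0].y);
    double A21 = 2.0 * (a[2].x - a[0].x), A22 = 2.0 * (a[2].y - a[0].y);
    double b1 = d[0] * d[0] - d[1] * d[1]
              + a[1].x * a[1].x - a[0].x * a[0].x
              + a[1].y * a[1].y - a[0].y * a[0].y;
    double b2 = d[0] * d[0] - d[2] * d[2]
              + a[2].x * a[2].x - a[0].x * a[0].x
              + a[2].y * a[2].y - a[0].y * a[0].y;

    double det = A11 * A22 - A12 * A21;
    if (det == 0.0) {
        return false;
    }
    out->x = (b1 * A22 - b2 * A12) / det;
    out->y = (A11 * b2 - A21 * b1) / det;
    return true;
}
```

With noisy measurements and more than the minimum number of anchors the system becomes overdetermined, which is where the least-squares / pseudoinverse machinery mentioned below comes in.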
So we went first with computing the location on the tags, because it has some huge advantages. First, you can have a very high refresh rate locally, for instance to be able to implement a very responsive navigation system which would only periodically send some results to the central server. And secondly, it's quite a lot of traffic to send all the individual measurements and retrieve the results back, so basically that decision helps with scalability, right? The problem is that we needed to implement the multilateration algorithm to run on these tags. Obviously, implementing it all from the ground up was not an option. I'm not showing the algorithm here; you will find it in the full thesis. But the key thing is that it involves the computation of a matrix pseudoinverse, which is a complex algorithm. On full-size computers there's plenty of options: you can use NumPy for Python or the GNU Scientific Library for C. And at first I thought that maybe I would be able to compile GSL for Zephyr, but the library is just so huge that it surely wouldn't fit in one megabyte of flash. Thankfully, Zephyr has a dedicated linear algebra library called Zephyr Scientific Library; for some reason, though, it's not called zGSL, it's zscilib. What's nice is that the API is actually quite similar to GSL, so I think if you have any experience with that, you will be able to start using it quickly. What's even better is that it's very easy to add it to a project, even though it's not bundled with Zephyr itself: you add an entry to the west manifest, call west update, enable a couple of options in Kconfig, and it just works. There are some things I'd like to warn you about, though. First of all, the library uses, or I would actually say abuses, VLAs, so it's not easy to track the memory usage, especially if you are calling some functions that do many, many recursive calls underneath, and calculating a pseudoinverse is one example of that.
Secondly, it's easy to forget about enabling FPU sharing, which saves the floating point registers when doing a context switch. Otherwise, if you try to do floating point math on two different threads, everything can get mixed up, so please be aware of that. Okay, so that's the end of the technical matters for today. To wrap up: Zephyr obviously is not a silver bullet for all of the burdens of making embedded devices, but it certainly removes a lot of them, and it makes writing applications really enjoyable, even if you don't have much experience. If you ask me about my project, I think I have learned a lot, and also we graduated thanks to it, so I can definitely call it a success. And I hope that our efforts are not completely lost, and that everyone who came here has also learned a bunch about Zephyr, locating systems and ultra-wideband. So, thanks for joining, and I hope you enjoyed the presentation. So, do you have any questions? Okay. So maybe you, do you have a mic here? Okay. Johann Fischer, I'm the author of the driver for the Decawave transceiver in Zephyr OS. So my question would be: why did you start your own work rather than use the implementation in Zephyr, which is actually generic? The missing part is more work on abstraction in the 802.15.4 subsystem, because there's no real support for the ultra-wideband part of the 802.15.4 specification. So for us, from the project side, the right starting point for working on a more generic solution would be to improve the 802.15.4 support for ultra-wideband, because the driver is already there, and it works fine with native platforms. Thanks. Thanks. So, yeah, as I mentioned, well, I know the driver is there, right, but there are no samples, though. It was, you know, pretty hard to get something working with it. So that's why I had to. There are samples for echo server and echo client, and for socket client and socket server, with an 802.15.4 overlay.
So it can transmit the data between two nodes. Like with usual 802.15.4 transceivers, if you power up two boards with the samples, they will start to send and receive the data between them, like a network packet, yeah. And the confusing part is maybe about net packets and the timestamps inside the net packet, yeah. That was a hard part for me, right? How to gather the data. Yeah, the thing is to use this, like, example from the time-sensitive networking support in Zephyr, I guess, that's what I'd use, because the 802.15.4 support in Zephyr wasn't designed to be its own part, yeah. Initially it was, like, part of the IP stack, yeah. And it was made more abstract later. I'd say it needs more polish and more work, yeah, to be more abstract, maybe, yeah, some MAC support and the ultra-wideband parts, and then, yeah. Okay, well, thanks for telling me that. Well, maybe when I have enough time, I will actually try porting the application to that API. That's something that will probably be on my list in the future. Okay, so next question. How did you localize the anchors? So basically you enter the positions of the anchors into the system up front. And actually, when I was doing the demo, first we modeled the room with very high accuracy, then we placed the anchors in the model, and from that we were able to get the X, Y, Z coordinates. So we kind of did it in reverse, I would say. So you did it manually, like, not... okay. Yeah, yeah, I mean, as far as I recall the examples from Decawave, like, they have samples which allow you to auto-position the anchors, but in our case we had to enter the coordinates manually. Okay, sorry. And did you open-source your... like, how much pain was it to simulate your firmware in QEMU? So, I mean, the only simulated part here is the gateway of the system. The rest of the software is running on the actual devices.
And, like, only the gateway is running on the emulator, because we just didn't want to bring an additional device for the development phase. Yeah, yeah, and that just worked out of the box? Yeah, yeah, I mean, the only problem, as I said in the presentation, was that I had some weird problem with the IP connectivity on startup. I didn't investigate the problem; instead I just added a sleep statement, and it kind of fixed it. And all of the rest was pretty much working nicely, and I believe that part of the reason why it works so well is the separation of the host and controller layers in Bluetooth. These are, like, super interoperable, and I had absolutely no problems using the controller running on the laptop with the Zephyr host, which totally blew my mind when I first turned it on. Yeah, so thank you. Thanks. Great project. Thank you. Next question. Impressive project, yeah. So it's a little unrelated to Zephyr itself, but have you done any kind of analysis of the power requirements, and how many devices, how many tags can be supported at the same time? Yeah, so as for the power measurements, I didn't do any analysis of that, but as far as I know, ultra-wideband is actually quite power hungry, so that depends on how many measurements you make. And as for the scalability, well, TWR itself is a method which isn't actually too scalable, I would say. It was probably the easiest to implement, but you have a problem when you have many devices at the same time trying to infer their locations, so you'd probably have to implement some collision avoidance algorithm.
But as far as I understand, and as I recall from the marketing materials of the companies which actually build these kinds of systems, they usually go with a time difference of arrival method, which requires you to configure a common time base on all of the anchors; then you measure the difference in arrival times of the tag's message at those anchors. That's a method which is much more scalable. TWR obviously takes a couple of milliseconds to do the ranging, and you multiply that by four because you need at least four measurements, so that would probably be around 30 milliseconds, and you can guess that you would be able to run about 30 tags easily without any optimizations. If we had optimized this deeper, I believe we could get up to hundreds or maybe more. Thank you. But we didn't really optimize that, at least for that phase of the project. Thank you. Thanks.

So yeah, I had a question, if you could just clarify; I'm new to ultra-wideband. I got the impression you were using Bluetooth Mesh as the communication channel, and then you were doing the ultra-wideband for the distance? Yes. Can you clarify, could you use ultra-wideband for the communication as well? Yes, I mean, yes, that's totally doable, but then you do not have any, you know, ready-to-go API to implement that kind of system. Ultra-wideband can obviously be used to build mesh systems, and I recall that even Decawave, I believe, has some examples in its application notes of how such a system could work, but there is no implementation that you can use out of the box. That's why we went with Bluetooth Mesh, just for convenience, I would say. Thank you. Okay, thanks.

So is that all the questions then? Okay, I see one more. Could you please pass the mic? Hi, thanks for that.
Was it written in a sort of generic way, where the tag would first listen or discover what devices it could actually communicate with, sort of to get a sweep? Well, so the idea, which we haven't implemented yet, is to actually get that data to the tag using the Bluetooth Mesh: when the device initially connects to the system, it will download the list of all of the anchors and their addresses. From that, the device should know which anchors it should measure the distance against, right? Because it's in the neighborhood table, or... Sorry? Because it's in the neighborhood table, or will it try to actually localize against every device that's in the mesh? No, I mean it is doing individual measurements, one-on-one. There's no broadcasting of the messages; it measures the distance against one anchor, and then another. That said, I believe there are some schemes of TWR which actually send out one broadcast message, and then the anchors which are supposed to answer that message reply back to the device in a specific order, with different delays. That's one way of optimizing the system, because you have N plus one messages sent instead of two N messages, but that's not what we have worked on. We haven't optimized that yet.

I'm just curious, did you experiment with range? How far can you still measure distances, and is Bluetooth giving up first, or is ultra-wideband? So I didn't test the range of the whole system, but to check how far UWB works: at least with my configuration, in the open air it was about 30 meters, so at that distance it worked just fine.
And then obviously the range of the system highly depends on which UWB channel you use, because the base frequency ranges from 3 to 10 gigahertz, which has a huge impact on how far the system works. So you have a few channel options, and each one of them affects the effective range of the system, right? But I would say that it's about 30 meters at least; you can get more. Let me just check... I don't see anything online, and I think we're just about done now. Nope, we don't have any questions online, so I think it's a good time to stop. So thanks everyone for coming. Thank you.