So this is going to be a talk about riding the upstream wave of Zephyr: developing an out-of-tree application and keeping it in sync with upstream. First, a quick introduction. Linaro is the company we all work for. It's a collaborative engineering organization focused on ARM solutions. The group the presenters work in is Linaro Technologies, a small team inside Linaro. We focus on taking open-source software, making it better, and applying it to real-world solutions. So that's what we're all about. I'm Tyler Baker, and this is Ricardo and Michael. This is a co-presentation, because we've each got different parts and different perspectives. If you're interested in getting in touch with us afterwards about anything, we've got links here. These slides have been uploaded, so you can grab them off the web and follow us or send us messages.

So what are we building with Zephyr? It's pretty simple: an application that delivers firmware over the air to real hardware, to multiple MCUs. We want to use the latest from the Zephyr project, and it's been successful, but it's also been difficult, and that's what we're going to talk about today.

What hardware did we work on? These three platforms are the ones we've mainly focused on. They're all ARM CPUs, because that's Linaro's focus. Two of them are 96Boards, part of our 96Boards program; the other is an NXP chip. All of these are supported to some extent upstream in Zephyr. We had a hand in some of that, and the K64F was pretty well supported when we started.

So how does this firmware-over-the-air app work? What we're going to talk about is really just one part of the equation; there are a lot more pieces than the Zephyr part. Our Zephyr application talks 6LoWPAN to the gateway. Our gateway joins devices using a service that we wrote, so it can search out beacons or MAC addresses and add them, and then we can communicate over IPv6 to the gateway, which translates from IPv6 to IPv4 to talk with the web services. As we went along with this demo (we've been working on it for about six months now), we realized none of the cloud services natively support IPv6. The IoT device endpoints can't talk directly to those services, so we have to do a little bit of IPv6-to-IPv4 proxying to get things to work. So that's the architecture of what we built, and we're really just going to focus on the far-left piece today. If you have any questions during this presentation, we have a small enough group, so just throw your hand up and we can talk about it. The question was whether the implementation we're going to talk about is open or closed source: it's all open at the moment, and I intend it to stay that way for the foreseeable future.

So let's get started with our project goals. We want to support delivering firmware over the air. What does that mean? Well, we have to have an A/B partitioning scheme. There have been lots of presentations at ELC this year about A/B partitioning; this really isn't much different, except it's done with Zephyr.
We basically download firmware to a scratch partition, and then we have a bootloader that will cryptographically validate the images: if they pass, we boot into them, and if they don't, it rolls back. We have all of that working on hardware, and we wanted to support all the MCUs we listed before. Technical debt: Zephyr is moving very quickly, so we want to keep our patch set small and rebase on upstream master as much as we can. That means upstreaming all our platform code and keeping application changes in sync with upstream, which sounds easy to write on a slide and is very difficult to do in practice. And then quality: we want a testable design, because we want this end-to-end solution to continue to work. It's something we can use to help fix any issues we find in Zephyr, so that we have something open source that works, and a real-world use case we can test with. We also want to automate all the things: leverage as much automation as possible to make the manual labor of keeping everything in sync easier. So I'm going to have Ricardo come up and talk about the things that were missing when we started.

All right, good morning, everyone. I'm going to talk quickly about the challenges and the work we did when we started this. Tyler showed the three boards we wanted to support at that time. But we were designing the boards as well, especially the 96Boards ones, in parallel with deciding what we were going to do on the software side and in the project itself. The first thing we had in mind was to get the hardware supported by Zephyr. We were targeting Zephyr, so we had to go back and see whether the MCUs were supported or not, get that in place, and see what else was missing, so that we could focus a little more on the application side. One of the things we found was the usual lack of hardware support, because the project was quite new and recent at that time. With ARM in particular, there were not many MCUs supported by the project. On the Carbon especially, we were using two different MCUs: the main application MCU, an STM32, and an nRF51 that simply provides the Bluetooth part, because the main MCU doesn't provide Bluetooth by itself. So we had to figure out how we were going to get that supported and working with Zephyr at the time. There were several complications, and we had to contribute a lot to get all of this going. For example, there was no bootloader supporting Zephyr at that time, beyond the ROM bootloaders available on some of the hardware. And as Tyler showed, we wanted a bootloader that could validate the images, swap the images and so on, and also be generic across multiple pieces of hardware. We didn't have anything like that in Zephyr, so we looked around at other RTOSes and found a project that already had that in place, the one the Runtime folks were talking about yesterday, and we have now sorted that out as a common bootloader.
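For illustration, the validate-swap-or-roll-back flow described here looks roughly like the following. This is a minimal sketch with hypothetical helper names, not MCUboot's actual API:

```c
/* Minimal sketch of an A/B boot decision. All names here are hypothetical
 * placeholders for illustration; this is NOT MCUboot's real API. */
#include <stdbool.h>

enum slot { SLOT_ACTIVE, SLOT_SCRATCH };

/* Hypothetical: check the cryptographic signature of the image in a slot. */
static bool slot_validate(enum slot s) { (void)s; return true; /* stub */ }

/* Hypothetical: copy the staged image from scratch over the active slot,
 * preserving the old image so it can be restored on failure. */
static void swap_slots(void) { /* stub */ }

/* Hypothetical: never returns; jumps into the image in the active slot. */
static void boot_jump(void) { /* stub */ }

int main(void)
{
    /* The running app has downloaded new firmware into the scratch
     * partition; promote it only if its signature checks out. */
    if (slot_validate(SLOT_SCRATCH)) {
        swap_slots();
    }

    if (!slot_validate(SLOT_ACTIVE)) {
        /* Validation failed: roll back to the previous known-good image. */
        swap_slots();
    }

    boot_jump();    /* boot whatever is now active and valid */
    return 0;
}
```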
Another thing we found at the time: even though Nitrogen, for example, which is an nRF52-based board, was already supported upstream, there were several things missing in it. For example, the flash driver wasn't in place, and even GPIO wasn't well supported. So I had to make several fixes across the boards we wanted to use, beyond simply adding the board support itself. Then, after two or three months of work, we reached our first milestone at Linaro Connect last year, in September, where we tried to go through those project goals and demonstrate the over-the-air update with the boards we wanted. We focused on just Carbon and Nitrogen at that point, because we were also demonstrating the gateway and carrying the conversation over Bluetooth 6LoWPAN. What we were able to demo there was based on Zephyr 1.5; 1.6 was released some time after the date of the demo. We demonstrated the entire Bluetooth path over 6LoWPAN, with the device talking through the gateway to an open-source cloud-based service, a server based on hawkBit, using one of the 96Boards Consumer Edition boards as the gateway. We were using a DragonBoard then; we're using a HiKey now. Because of the bootloader problem, we decided there was no time to get it properly done in Zephyr, so for that demo we used the bootloader from a different RTOS to actually boot Zephyr. That was one of the things we wanted to fix moving forward.

One of the other interesting things: on the Carbon in particular, where a separate MCU provides the Bluetooth support, the only firmware available at the time for that MCU was a proprietary firmware called the SoftDevice, which also speaks a proprietary protocol. We wanted to avoid adding a proprietary protocol to Zephyr, and see if we could have something more generic, a pure HCI protocol for example. One of the nice things is that while we were looking at that problem (there, the slide went off; right, there we go), Nordic showed up and contributed their whole controller stack, which was great for us. We just had to get it to work on the nRF51. One of the challenges there was that Zephyr had no Cortex-M0 support at the time, and the nRF51 is a Cortex-M0. So we first had to work with Zephyr to get M0 support in place, then get support for those MCUs, then support for the boards, and then we could actually get the Carbon online and using the Bluetooth connectivity and so on. By the time we had the demo, we still had several pieces of technical debt and several things we wanted to fix. The first, of course: we didn't want to use two RTOSes to demonstrate our project, we wanted to focus on Zephyr. So the first thing was sorting out the bootloader: how can we get a Zephyr-compatible bootloader booting another Zephyr application? The other thing we had in mind is that the project is moving really fast, especially over the last couple of months, with a lot more contributors joining the project.
And at that point we wanted to continue working on the application and the demo, while making sure that everything we did stayed in sync with upstream, continuing to contribute and test, so that we didn't simply fork the project. We were still carrying several changes on our branch, in particular some of the board and driver support I mentioned; there have been discussions about this over the past few days, and we hope to get it sorted out and merged upstream soon as well. (There you go, the slide went off again, all right.) We also wanted to be prepared for the core changes coming in Zephyr 1.6 and 1.7. One of the things we were scared about at that time was the IP stack, because the plan was to replace the entire IP stack, and at the point we gave that demo we had everything working with the old stack. Several things were going to be replaced over the following releases, so we wanted to stay aligned and keep everything working as we continued on the project. I'll hand it over now to Mike, who is going to talk about the challenges of keeping in sync with upstream and the issues we had after that presentation, up until now.

Yeah, I'm going to talk a little bit about the fun we had back then; it's better now. Good morning. So obviously, at the end of Connect we were in a fairly good place. The demo was mostly working, it had its complexity, and we knew what we wanted to solve. But then we had upgrades to deal with. I think there was an initial "hey, we've got to get everything working, Zephyr is just go, go, go, let's get the code in, let's move faster." Then, with the change to 1.6, Zephyr started establishing itself, and we started feeling like, wow, we're really going places. Everybody started joining in, thinking maybe it was a little more stable than it actually was; it's still moving super fast. I think we're entering the gold rush phase of Zephyr right now, where everybody's saying, let's get aboard, let's go. And here's where we had problems.

Obviously the biggest change Zephyr introduced in 1.6 was the unified kernel. Prior to 1.6 there were two different models: you had the nanokernel and you had the microkernel. We had code that was out of tree; we were using fibers and all of the previous APIs. So immediately we had to start adjusting our code, which wasn't in the source tree, and it took quite a while to bring that back up.

And then came the IP stack. What could possibly go wrong? It's a new IP stack, and we have an IP-based app. When we jumped in, I think we had a higher expectation of where the stack would be, and I don't know whether we jumped in just ahead of the maintainers and where they were at; I'm sure they had use cases they were testing. But to be honest, when we jumped in, it just wasn't working. It wouldn't connect, the states were wrong. Quite a bit of debugging went into this. I think we spent three to four weeks literally looking at TCP dumps, figuring out why things weren't connecting, and adding our own tooling to figure out where the debugging was going.
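To give a flavor of the kernel-unification churn mentioned a moment ago: out-of-tree code written against the pre-1.6 nanokernel fiber API had to be rewritten for the unified `k_` APIs. Roughly, from memory of the 1.5-to-1.7-era APIs (exact signatures varied between releases):

```c
/* Before 1.6 (nanokernel era), out-of-tree code spawned fibers roughly
 * like this (signature from memory):
 *
 *     static char worker_stack[1024];
 *     void worker(int arg1, int arg2);
 *     task_fiber_start(worker_stack, sizeof(worker_stack),
 *                      worker, 0, 0, 7, 0);
 *
 * With the unified kernel (1.6+), the same worker becomes a k_thread: */
#include <kernel.h>

static void worker(void *p1, void *p2, void *p3)
{
    ARG_UNUSED(p1); ARG_UNUSED(p2); ARG_UNUSED(p3);
    for (;;) {
        /* ... application work ... */
        k_sleep(1000);    /* 1.7-era k_sleep took milliseconds */
    }
}

/* Statically define and start the thread: priority 7, no start delay. */
K_THREAD_DEFINE(worker_tid, 1024, worker, NULL, NULL, NULL, 7, 0, K_NO_WAIT);
```

Fibers were cooperative, so the migration also meant rethinking priorities and preemption, which is part of why it wasn't a mechanical rename.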
And if there's one thing I'm pretty proud of, it's that we made some really good contributions to the IP stack. As open-source citizens, that's the responsibility: if you're going to develop an app and stay upstream, we can all make good contributions and get things working together, and everybody benefits, right? At the end of this (I think we're on Zephyr 1.7 now) it is working, TCP is functional, and it's actually in pretty good shape. It's a lot better than it was; 1.6 was completely broken when we jumped to it. Now, 1.5 was working okay. It definitely had issues, and I can see why we wanted to migrate away from it to a more stable stack where we had a little more control. The new stack was definitely built from the ground up. The code was there, and they had done a good job of getting the base structures in place. I just think, like I said, that we jumped in a little early. They were testing UDP at the time, and I think we were early into the TCP phase, but it worked out. And to be honest, the maintainers were great as we were submitting patches; they were very responsive.

Yeah, that's a good point: we were rebasing on master literally daily or every other day, to bring in changes and make sure everything was going to work. It was a big switch for us. This was really our biggest challenge, and it was timing: we happened to have a purely TCP-based app that needed core functionality that just wasn't ready yet. But I think it worked out really well. I'm going to keep talking about problems, but I do want to focus on this: our commitment is that when you pick up an RTOS, or pick up anything that has problems, and you fix them, you need to get those fixes upstream so that everybody benefits. So I don't want to be totally negative about the problems; it all worked out in the end.

Some of the other issues: the documentation is changing, and things are changing so fast in the system that sometimes it's not clear, say, if you're implementing a flash driver, whether an erase needs the write protection cleared first, things like that. These things cause little delays along the way. So as we moved along, little by little, the whole project got rebuilt on Zephyr 1.6 and then 1.7.

And then we thought we had everything working (yay, the IP stack sort of working), and then there are a ton of knobs. There are so many settings in Zephyr to control memory usage for the stack: you have data buffers, transmit buffers, receive buffers, Bluetooth buffers, end to end. I'd say the number of knobs roughly doubled between Zephyr 1.5 and 1.6. So we spent quite some time debugging what the right settings were for our app versus the defaults; a lot of the defaults are still the old settings. Luckily these are getting better. These are areas where, as we bring up issues and use cases, you can make real changes so that maybe the next person doesn't have to configure their app from scratch: if you have a TCP app, the defaults are better for things like where the TX and RX buffers are.
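A related practical point that comes up next: most subsystem errors in Zephyr are silent unless the matching logging config is enabled. In the 1.7-era tree, a source file opted into the syslog macros roughly like this (a sketch from memory; the Kconfig symbols and include paths moved around between releases):

```c
/* Opting a source file into Zephyr's 1.7-era syslog (sketch from memory):
 * the domain and level must be defined BEFORE including sys_log.h, and
 * nothing prints unless the matching Kconfig options are also enabled
 * (e.g. CONFIG_SYS_LOG, plus CONFIG_NET_LOG and the per-module
 * CONFIG_NET_DEBUG_* switches for the IP stack). */
#define SYS_LOG_DOMAIN "fota"
#define SYS_LOG_LEVEL SYS_LOG_LEVEL_DEBUG
#include <logging/sys_log.h>

static void report_connect_result(int err)
{
    if (err) {
        SYS_LOG_ERR("connect failed (%d)", err);  /* silent without config */
    } else {
        SYS_LOG_DBG("connected");    /* compiled out below DEBUG level */
    }
}
```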
But these definitely caused problems; it was one thing after the other. And a lot of these issues... being new to it (I jumped on board right after the switch), there's a fundamental understanding you need of what to turn on to really debug your app. A lot of the errors don't print without turning on a config. It's maybe not natural thinking, but you have to enable the debugging. You would think some of the errors would print automatically, but they don't. And some of the debugging is so heavy that you can only turn it on for a little while, because it actually causes race conditions and other things to creep up in your code. So debugging is a lesson all by itself.

And then we hit other issues. Our app depends on 6LoWPAN. 6LoWPAN has been around a while in the Linux kernel, but we were actually hitting Linux kernel problems as well. So there again, we have a couple of patches we're looking at now; we have one already that I think is going to get submitted. The interface is debugfs currently, and it's almost still a work-in-progress protocol: if you hammer that debugfs a little too quickly, you can cause the kernel to crash. And there were other issues, like a refcounting issue that got figured out. These all added to the complexity right around that time period, going from something that seemed to work to our newer app, which is now a lot more stable.

And so here we are today. We're based on Zephyr 1.7 RC1, although I believe RC2 just came out. We have a unified bootloader, which is maintained outside of both the Zephyr and the Mynewt code bases but works with both, and there's a really good community of contributions there. It's validating images; we're using real technology, real security; we've got IP working. There are still some fixes in flight; I know we're still dealing with a 6LoWPAN issue where the headers are getting modified, but that's coming. So once again, we have more Zephyr changes coming, and I think the focus is that we need to stay on master and keep as close as possible to upstream, so that we still get the benefits of where Zephyr is going, because it is moving so fast. Down at the bottom here is a summary of our branch versus master, the things we're going to try to get upstream; we have a couple of SPI drivers there. And I'm going to hand it back to Tyler to tell you how we're doing it.

Thanks, Michael. So now I'm going to talk about continuous integration and automation. We just saw where we started from, the current state of the demo, and the problems we hit along the way. Now I want to talk about how we're trying to keep this app functional while still rebasing on master. The first thing we have to do is keep track of the sources. All of our code is on GitHub. Funnily enough, we decided to integrate with GitHub because that's how our workflow works, and I hear there's talk about moving the Zephyr project to GitHub, away from Gerrit. That works well with our workflow, so we endorse it. For the Zephyr tree itself, there are three branches we monitor: upstream master; our master-upstream-dev branch, which is a mirror of master with our patches on top; and 1.7-dev, which is our stable-ish branch. That's fun.
Is it the projector doing that? Okay, oops. Okay, so then we have MCUboot, with two branches there. MCUboot is a Zephyr application, right? So we have to validate it against all the Zephyr trees up there: a growing permutation list to validate against. And then our FOTA application also builds against the Zephyr tree, so we have to build it against all these branches too. So yeah, it's a complex matrix of just build-testing this stuff. We wanted to integrate with GitHub and have our CI there, because I don't want to force people to go to some other website to look at CI results.

So let's take a look at what this looks like. If you go to our GitHub page for the FOTA app, you'll see this. It's updated live; these badges come straight off our CI server. For example, our 96Boards Carbon on Zephyr master is failing to build: there's a flash driver patch in progress that Kumar is going to graciously merge very soon so we can fix our build issues. Already merged? Good to hear; this is a snapshot, an image, not live. So this is how our app is building for the different platforms we care about, across our branches. When something breaks in master and our app fails to build, we know about it instantly, and when a build fails we get an email as developers, which is really nice. We can also see when there's something coming upstream that we need to investigate: either it's a regression, or something has changed that we need to know about up front, so we can get patches together on our master-upstream branch and figure out whether to upstream them or carry them.

So let's look at the next one. This app has lots of dependencies; that's on the first page. If you click the dependencies link, you'll see we have to track all of the MCUboot changes too: two branches of MCUboot against the three branches of Zephyr, so there's this build matrix as well. Down here is the actual sanitycheck run across all three branches; that's just checking Zephyr. So we have all of this at a glance for developers, since they're going to be on GitHub anyway looking at pull requests. That seemed to work out okay.

So what strategies do we use? We need to answer some questions. How do we stay close to upstream? How do we reduce our technical debt and keep our patch set small? And how do we do all of that and still make something that works? Because that's always the challenge. Basically, our solution is to build and use automation as much as we can to do the heavy lifting. We build everything, we run the unit tests for applications on supported hardware, and we run functional tests on our application: we actually flash it onto devices, check that it comes up, and then test the end-to-end story, which I'll show you in a second.

So how does this keep us sane? Well, things are changing all the time, so it's better to detect problems early, because they're going to be there, and then you can deal with them and it's not a surprise. For example, we were looking to rebase on RC2 and did some testing, and there's a regression: five times out of six our devices update fine, but then, when it jumps to the next slot where the application loads, it just hangs.
So there's an issue that popped up between RC1 and RC2 that we now need to look at, but we were able to identify it through our CI and automation practices.

So what do we do for pre-merge testing? It's all hooked up to GitHub, so if anybody opens a pull request, they get these five checks. What are the five checks? We build the FOTA application, that is, the tip of master plus the pull request, with all the patches applied. Then the bootloader: we sanity-check that we can still build it (we probably don't even need the bootloader here, but we do it anyway). Then we run checkpatch, just like you do on upstream Zephyr patches in Gerrit. And then we actually deploy to devices: we wait for those builds to finish, deploy a bootloader and the application the pull request represents to a device, and check that it actually comes up. We use the test case utility library within Zephyr to create parsable output, which I'll talk about in a second. We can basically tell: okay, Bluetooth came up, we're advertising the right profile, it's all ready to go. It's not necessarily hooked up to a gateway and talking IPv6, but the app comes up and the Bluetooth radio comes up, and that's good enough for us to say, yeah, that's probably okay.

So this is what it looks like. It's a little verbose. You can turn this stuff off, stop it commenting on the statuses, and just use that little information box, which I think we'll probably switch to, because you get emails every time this happens. I know that's a problem in Gerrit too, but at least we can turn it off with GitHub. So that's how that looks.

So how do we do the hardware testing? I used to be a maintainer of LAVA, so we naturally decided to try to use it, right? I developed and upstreamed a bare-metal testing interface for it; it's a monitoring and testing interface. It basically grabs the console, or flashes a device, and there's a way to detect the start and end of test cases and parse the console output with a regular expression. There's also a new feature we added that lets you send commands and then parse the output of each command. So if you have a shell (we have a shell built into our application), we can poke at different things and do functional testing that way, rather than relying on the app to do all of its own testing. We added some firmware tool support: pyOCD is supported, dfu-util, and mass storage. I'm not really sure what to call that one; it's basically where a mass storage device shows up and you drag and drop an application onto it, so we just mount it, copy the application over, and unmount. These are all the devices we support now: a fair range of x86 CPUs and QEMU, and then a whole bunch of ARM platforms we can test on as well.

So what does the job definition look like? You probably can't read it at all, but it's broken down into three things. First, deploy: which images we want to deploy. These image args, which you probably can't read either, are basically wrappers around those firmware tools; if you have to do something special with a firmware tool, like add different flags, you can define that there. And then that bootloader placeholder matches this name here.
So basically, when the flashing call is made on the command line, it substitutes in the path to that binary. We have the ability to flash multiple partitions with this schema, so we can put down a bootloader first, doing a full chip erase, and then lay the application down. And the last part is: what does the start of my test look like, what does the end look like, and what's the regular expression to parse everything in between? So that's how the hardware testing looks.

What does the rollout testing look like? Our CI builds are pushed into hawkBit and then rolled out. hawkBit is a deployment server, more or less, that manages devices. Who makes it? It's part of the Eclipse Foundation, I believe, and I know Bosch is using it. We decided it's okay: it does its job, nothing fancy. And it uses HTTP, so we didn't have to worry about other protocols like MQTT or Lightweight M2M, which made it a little easier to get going. You can see here: on the left side are our devices, the middle column is all the builds, and on this side is all the activity. You can see when firmware updates fail: the device rolls back to the old image, comes back online, and is ready to accept another image. We can actually show you, if there's time left over (I'm not sure where we are on time), a firmware rollout to six devices in my house in Seattle, from here, remotely. That's basically the progress we've made: our demo in Las Vegas last year was two devices on stage, and we got one update. Now we're at the point where we can roll out to a lot of devices and have pretty good confidence about how things are going to work.

So when that comes back up, we'll talk about developer testing. There we go. Another aspect of this is that when you're developing something like this, you don't want to just open pull requests without really knowing whether they'll pass the CI tests, because that can be embarrassing, and I'd like to be more diligent than that. So we have a way for you to say: here's the git URL of my Zephyr tree, here's the branch I want you to check out, and it will run sanitycheck. So before our merges, when we go from RC1 to RC2, we push it into this thing and see how sanitycheck turns out. If there are issues, we figure out what we need to patch, so we always have a good Zephyr branch that builds. We can also do this with our FOTA application: since it depends on a Zephyr tree, we can say here's our FOTA app and its branch, which could be any developer's branch from their tree, against any Zephyr branch in any Zephyr tree. So we can take RC2, build our FOTA application with it, and actually test it before we run any CI tests on it.

So what are our future plans? We want developers to be able to trigger hardware tests out of band, without having to deal with the CI system, maybe through an easy command-line way of triggering them. And automated rollouts to a common pool of devices distributed around the world. My idea is that we get to a place where this is working well.
We can really test canary devices as well: all of the developers working on this project have a gateway and nine or ten devices sitting at their desks, and a subset of those can just be automatically updated as new builds come out of the CI machine, with everything continuing to work. So that's the goal: a centralized hawkBit server pushing updates continuously to devices. I think that's where you want to be for a product at some point, to show that you can continually deliver software that's stable and never really has much downtime. The other thing we're not solving right now, which we've kind of neglected, is that we're not delivering the bootloader over the air. That's something we want to do eventually. It's the pain point right now: if we change something the bootloader depends on, we have to go reflash all the devices, whether or not they're in hawkBit, because hawkBit can't deliver the bootloader.

So, are there any questions about this? I know it's a lot of information. If there aren't, we can do some demos, which might be kind of fun.

The question was how we decide when to rebase. That's a manual process now. We look and say, oh, the net changes were merged, or the Bluetooth changes were merged: okay, let's pick it up and take a look. Typically we don't like to go more than about two weeks, because then the delta is massive, and the quicker the better. But it does catch things: we found this regression, so now we know we have to do some work to figure out what's going on. Go ahead.

One thing that we're not doing yet, and part of why we do this manually, is that most of the core changes reach us as one big batch, for example the net-branch changes or the Bluetooth changes. We're basically just tracking master now. We still need to track those individual branches, because they change quite often, and the merge, when it lands, is actually huge. We can pinpoint and track what's going on in master, but it's still a bit complicated to go back and track things down, because those merges are pretty big at this point. So that's something we're not yet covering.

Okay, let's see how well our system works. I guess we'll do a live demo, because I think that's fun. So again, what do we have here? We've got nine boards online right now: three are Nitrogens, six are Carbons. We'll just update the six Carbons to keep it simple today. So we're going to roll out six updates right now to devices in the field. These are our CI build numbers, so we'll just go to the bottom here and grab the latest. What's interesting is that since we track which Zephyr each build was built against for our platforms, and it's all pushed into hawkBit, we have all of these builds available. Eventually we want to be able to just roll out our app built against master Zephyr. That's the idea: zero delta at all, everything upstream, and we can just take that and it's very usable. So here's our 1.7-dev branch. It's lovely. (I wonder why it's doing that.) I'm just going to drag them here. There's a proper rollout mechanism, there's an API, so you never have to do this manually, but this makes it look cool. Okay, let's check what it's actually going to assign. Okay, so here are six different Carbons, same build.
We're going to roll them out. Before we do that, let's look at the timestamps of when they last talked and what build number they're running, just to see it's not vaporware. So, 9:34: that thing talked to hawkBit; you can see down here it talked to hawkBit. I'm just going to show what build number is on here. So we've got build 171 on this device, and on all the other ones, right? And we're going to update them to 176, so we're going up a few versions. We'll assign them, and now they go into the yellow state. Unfortunately you can't really see the console logs... actually, we can probably look at one as it's going. Hold on... we shall see... there it goes. Okay, so this is the console output on one of the devices we just asked to update. It looks like it's going to start downloading and flashing now. All six are doing this; they're just cranking away, getting their update from the CI server. This is a UART on the Carbon.

That's a good question: how big is the FOTA app? I think it's actually quite a bit smaller than that. On the Ethernet point: initially, when we were having problems after the demo in Las Vegas, we switched to just using pure Ethernet, because it was one less thing to deal with, right? But all of this traffic is going over IPv6 over Bluetooth right now, through a gateway. So we have an ARM64 gateway talking to the ARM devices and to the cloud, and the image is being fetched and proxied through tinyproxy. It's exercising quite a lot of software to get one update. It doesn't seem like a lot, but it really is; there are multiple players involved here. And I think this is kind of exciting, because we're at the point now where I almost don't reflash my devices. I'd rather just roll an update out, and it's that stable. We want to keep it that way. We also want to provide this as a reference to the community: if you want to do over-the-air updates for your product, we've got a good framework here, so go ahead and use it and help us build on it.

We're also thinking about some longer-term issues. Right now this thing is an app that does updates, but that doesn't really make sense, right? You want to develop your app with its own functionality and then just call a library to update. A service, exactly. Like a daemon, like DHCP, right? It just sits there watching for updates, and when it decides there's one, maybe it does a callback to the app saying: are you ready? Save your state. The app says: yeah, I'm ready. And then the service pulls the update, flashes it, and locks everything else out, right?

So, what just happened there: our MCUboot validated that image, because all of these are signed with our key. If we put down an image that was unsigned, it would have just reverted to the other slot and come back up. All of this output is what we parse, right? "PROJECT EXECUTION SUCCESSFUL": we blink the light, we can advertise the profile, and all of that uses the test case library inside Zephyr. So you can see we connected to Bluetooth again. The app will delay a little and then start trying to talk to hawkBit and say, hey, I'm all done with my update. So now it's talking to hawkBit. Let's see if any of the other ones are done... whoops... and we're complete.
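For illustration, emitting that parsable output with the test case utility mentioned here looks roughly like the following (a sketch; macro details are from memory of the 1.7-era tc_util.h, which ships with Zephyr's test infrastructure, and the exact include path has moved between releases):

```c
/* Emitting parsable results with Zephyr's test case utility (sketch). */
#include <stdbool.h>
#include <tc_util.h>

void report_boot_status(bool bt_up, bool advertising)
{
    int result;

    TC_START("fota boot check");
    TC_PRINT("bluetooth: %s\n", bt_up ? "up" : "down");
    TC_PRINT("profile:   %s\n", advertising ? "advertising" : "absent");

    result = (bt_up && advertising) ? TC_PASS : TC_FAIL;
    TC_END_RESULT(result);

    /* Prints the "PROJECT EXECUTION SUCCESSFUL" / "FAILED" banner that
     * the LAVA monitor's end-of-test regular expression matches on. */
    TC_END_REPORT(result);
}
```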
So you see here, we've got three that have updated. The other three are chugging along, probably doing something similar; they'll be along shortly. So we'll go back. This is all talking to the server, right? It's a RESTful API, and we poll every 30 seconds; the server actually tells you how often it wants you to poll, and we have it set to 30 seconds just to keep the devices active. It's passed through the gateway to the service, so the devices are making HTTP calls to the service directly, just proxied from IPv6 to IPv4 via the gateway. Polling: yeah, so this right here is the device sitting there asking, do I have anything to do? Do I have anything to do? And eventually, when I drag those things over and assign them, on the next poll it's told: now I want you to download this image. And if you go up a little here, you can see the JSON response, or part of it: go get these artifacts, here are the MD5 sums for all the bits, pull them over HTTP. Our app takes them, chunks them, and writes them to flash as it goes. Yeah, this is on a Carbon.

So we're at five of the six devices; the other one is still going. That's the other problem we've got: it seems that when we roll out to a lot of devices, some devices connect to the gateway fast and download, and some go a little slower. We still have to look at those things, but we're able to roll out new software to a large set of devices. And really, we're running up against gateway constraints: the gateway's Wi-Fi/Bluetooth chip can't have more than nine endpoints connected over Bluetooth at a time. So to really scale this out, we just need to start adding more gateways and more devices. That's what we're finding too; we'll have to try that to get more devices online. The other problem we have is that not many Bluetooth chips actually support this many connections. This one from TI says it supports ten, but reliably we can only connect eight; if you add a ninth, it feels like one goes down as another comes up. And you mentioned the debugfs API, and they promised to fix it. So are they going to make an API through BlueZ, to call maybe a real joiner daemon that does the connecting? That would be way better. Echoing stuff into debugfs is fun, but it's not really a production thing; if anybody saw that code today, you know. And just to see: the devices that have finished are now reporting the new version, 176. That's the software that's running.

Yeah, it would be interesting to have more conversations about how we could take this app further. The problem we're facing is that if we now want to use a different protocol, with a different device management service other than hawkBit, we'd have to fork our own app and make those changes. We'd rather have something more modularized that deals with the update side of things, so that your application can talk to different device management back ends.
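For illustration, the "DHCP-like daemon" idea floated above might look something like this as an API. These are purely hypothetical names, not code that exists in the app or in Zephyr:

```c
/* Hypothetical sketch of FOTA-as-a-service: the update daemon owns
 * polling, download, and flashing, and the application only supplies
 * callbacks. None of these names exist anywhere; this is the design
 * idea from the talk, not an implementation. */
#include <stdbool.h>
#include <stddef.h>

struct fota_callbacks {
    /* Asked before an update starts: return true once state is saved. */
    bool (*ready_to_update)(void);
    /* Optional progress notification as chunks are written to flash. */
    void (*progress)(size_t written, size_t total);
};

/* Start the daemon: it polls the device-management backend (hawkBit or
 * anything else behind the same interface) and drives the A/B update. */
int fota_service_start(const char *backend_url,
                       const struct fota_callbacks *cb);
```

The point of the indirection is exactly the complaint above: swapping hawkBit for another device-management backend should mean plugging a new transport in behind the service, not forking the application.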
The question was whether this would come up on other hardware. Yeah, you'd have to add flash drivers and a partition layout, and yeah, it should work. "Should work," trademark. So you can see the bootloader here. And yeah, one of the regressions we had is the one where it hangs when chain-loading the new image, so that's something we need to look at.

So how big is the bootloader? I can look here. Nowadays, depending on the board you have, a build already automatically pulls in a bunch of things. Especially for the bootloader, you really have to strip a lot out: the serial console, a lot of the heavy debug output, a lot of that data. And then, when you want to update the bootloader itself, you have to have another partition, a scratch partition for that bootloader, and things get complicated.

All right, any other questions? I think we can wrap this up. Thanks for getting up early, guys. I appreciate it.