Hey, everyone, I'm Sterling. I'm the CTO and co-founder of a company named Runtime, and we are one of the largest contributors to an open source project called Apache Mynewt. This talk is focused specifically on the build and package management system that's intrinsic to Apache Mynewt. I'll give you a quick overview of the Mynewt project itself just for some context, but really the focus here is going to be on a tool we call newt, which is our build and package management system. And the target of newt is a build and package management system for the development environment on Cortex-M style MCUs. It's a land that's littered with RTOSes and tiny little processors that have 16K of RAM and 128K of flash. But first, just a little bit about Mynewt, which is the Apache project that encompasses the newt build and package management tool. Apache Mynewt is an open source real-time operating system targeting these Cortex-M style devices. Mynewt itself encompasses a lot of individual components that are all stitched together using the newt build and package management tool. At the base level of Mynewt, we have a secure bootloader and flash file system that's been designed for very small embedded systems. There's actually a talk just after this one that I'm giving on the Mynewt bootloader, which has been broken out. There's a real-time operating system. There's hardware abstraction layers. There's drivers. There's a whole bunch of statistics and logging and debugging. There's networking stacks — there was a talk earlier on NimBLE, which is an open source Bluetooth networking stack. There's security. But the real focus of this talk is on build and package management. And my background, just for people to know, is that I'm a self-taught computer programmer. I made websites when I was 13, and through that I got involved with PHP and the Apache web server.
I did a lot of work on PHP 3 and the internals of PHP and PHP 4, and a lot of high-performance websites. And then the entire world was moving to Java, and I was repulsed. So I went and did deeply embedded systems at a company called Silver Spring Networks, which did wireless connected power meters and street lights and distribution automation controllers. At Silver Spring, I started as the newbie who didn't know anything, working on 8-bit microcontrollers — the ATmega128 — and then moved on as we went from ARM7 Thumb processors into Cortex series processors. And really, the genesis behind Mynewt was that when we went out there and looked, we saw everyone was really making their own operating system. Not an operating system in the sense of an operating system kernel — we used Micrium's µC/OS. But when you look at it more broadly, if you think of an operating system as file systems and networking stacks and bootloaders and all of these things, then we bought in our own proprietary networking stack, which we had to maintain and add features to and fought to upstream with a commercial vendor. None of the work we did there will ever see the light of day in an IP networking stack that's useful for other people. And so really, the idea behind Mynewt was this broader concept of an operating system. And in there, we focused a lot on doing our own build and package management system for Mynewt, which was kind of an interesting decision if you're going out and building an operating system. At Silver Spring — and I'm sure for all of you — a lot of the systems that we used were make-based. You just create a makefile. It's not that pretty, but over time it grows. You've seen a lot of projects use autoconf, and I had used autoconf and Automake extensively prior to Silver Spring. At Silver Spring, we just used straight-up makefiles.
But when we started with Mynewt, we really thought it would be a good idea to have a build and package management system for embedded. There were a couple of reasons, but the main one was that it enables collaboration. We built our own preemptive real-time multitasking RTOS. We did that because, frankly, it's very, very easy, and it was the core of the system we wanted to have. But the real development work we were doing in embedded was going to be around networking stacks and bootloaders and USB stacks and things like that. And if you look at the operating system space today, there's a whole lot of FreeRTOS. There's a lot of Micrium. There's a lot of ThreadX. There's Zephyr. There's Mynewt. There's just a whole bunch of operating systems out there. And what we thought was: well, why can't a Bluetooth stack be cross-operating-system? Why can't a USB stack be cross-operating-system? Fundamentally, the concepts of semaphores and mutexes are not that different across the real-time kernels themselves. The problem was that you didn't have a good build and package management system to stitch those together. Whereas in the web world, or in other areas of software development — whether it's Go or Ruby or any of these other languages — you actually have the ability to take multiple components that people share effort on and stitch them together for your environment. We also just saw this as an aspect of a real-time operating system that had to work within constrained environments but had to support a lot of different things. Of the people developing a Bluetooth device, 90% don't care about CAN support and probably don't want industrial Ethernet drivers in their OS distribution. But they do want a Bluetooth stack. They want a flash file system. A lot of them are going to want a core kernel.
So the idea was really to put in the effort to build a build and package management system that allowed us to easily stitch together these components, and there were a couple of additional design goals behind the system we developed. Just in terms of an agenda for the talk: I'm going to give you a little bit of an overview of the motivations behind the build and package management system, an overview of the core concepts behind it, and then probably half the talk is going to be spent at a command line to actually show you how newt works within our system and discuss some of the features of the Mynewt package manager. So one of the goals is: how do you manage large code bases? How do you break your system into testable components? This was certainly a problem we'd seen at other companies and had at Silver Spring. We developed network interface cards, and the same fundamental firmware and software that went into an access point or a gateway was the same software that went into a network interface card that went into a meter or a light, and there was a core platform that we had to maintain. We maintained our own bootloader, but we also maintained an IP stack across all of those. There was a Silver Spring network operating system that we maintained, and then we had individual source code projects for each of the types of devices that we built and controlled. And you'll see this similarly with Nest, right? They probably run the same operating system on the thermostat as they do on the smoke alarm. So what ends up happening when you develop these projects is that you have shared pieces of code that you then stitch together for the projects you're actually engineering. And that's especially true in a system where you have to fit into very low RAM and flash footprints.
If you have to compile your code into 128K or 256K of flash, you're probably going to break it into a set of components that you share — a networking stack or configuration management system or bootloader — and then select which components go into individual projects. So it really helps to have individual components that are tested and released that you can then use in the products your company is making. That's going to be fundamental to anybody's products. Look at industrial suppliers like Honeywell, who probably have 40 or 50 different SKUs of their product. They're going to want platform code bases that they maintain separately from the individual projects themselves. Then, if you're building your own build and package management system, it's important to have tools built into it to help you understand what is actually going on. So generating map files and list files, and being able to automatically show the size of RAM variables and items in your stack, are very important. Another goal is controlling and debugging the production lifecycle. There's a lot of commonality in how you manage a connected product, and if you have a build and package management tool, you can actually have it control your product development lifecycle from cradle to grave. When you start out, you should have debugger support built into your board support packages, and you should be able to load your code onto a device using that debugger support. You should be able to take that same image and build it optimized as an upgradeable software image. You should be able to take that same source code base and generate a manufacturing image that can get burned into your flash. And you should have information about your build tracked throughout that entire process in a coherent, unified way. Having our own build and package management system allows us to do that consistently across all of the projects that use Mynewt.
And then finally, this enables collaboration. So now when it comes to having a USB stack or a Bluetooth stack — we're taking NimBLE and making it work across RIOT and other operating systems, or MCUboot, which is the next talk, which is a bootloader — it's very easy for a user of our system to simply rely on that component and install it. That component gets released, and you have defined APIs. So we don't have to have a monolithic source code base; we can actually break our system into smaller components. That's really the reason we did it. In terms of the basics of our system — and we spent hours on these terms, and we probably got them wrong, but this is what we decided — the base term in newt is a project. A project is a collection of packages. When you start your first newt project, you type newt new and the name of your project, which might be newt new lightbulb, as an example. That creates a basic project skeleton, and it has a few basic packages in it. Packages you can think of like individual libraries. They have a fixed format: they have a source directory, they have an include directory, and they have a pkg.yml that identifies their name and their dependencies. A project itself is just a collection of those packages — that's all a project is defined to be. It has a project.yml file, which identifies its project-level dependencies. A versioned project — a project that has a repository.yml file mapping version numbers to git branches or tags — is known as a repository. Every project can become a repository, and then other projects can depend on it and rely upon it. So to recap: projects are collections of packages. A versioned collection of packages — a versioned project — is a repository. You can have as many projects as you want, and you can rely on them.
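To make that fixed package format concrete, here's a sketch of what a minimal library package might look like on disk. The package name, description, and dependency here are hypothetical, illustrative values — not files from the demo — but the layout follows the convention described above:

```shell
# Hypothetical library package "my_pkg": a src directory, an include
# directory aliased with the package name, and a pkg.yml naming the
# package and its dependencies.
mkdir -p my_pkg/src my_pkg/include/my_pkg
cat > my_pkg/pkg.yml <<'EOF'
pkg.name: my_pkg
pkg.description: Example library package
pkg.deps:
    - kernel/os
EOF
```

In practice newt generates this skeleton for you; the point is that every package carries the same shape, so the tool can resolve dependencies uniformly.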
The reason we decided to come up with this hierarchy, instead of just having package dependencies — which is where we started — is that if you look at Node Package Manager and JavaScript, you see these ten-line packages that people redistribute, and we didn't think that was a particularly good model for embedded. You really want a collection of packages and libraries that you test and version together and rely upon. So we made the redistributable unit slightly larger. Newt has the built-in ability to install a remote project, to pin a remote project to a specific version or a specific git branch, and then to let you rely upon that in your project. You can upgrade, downgrade, and do all of the things you would expect from a package manager. In terms of build: everything within a project, which is where your build happens, ends up being a package. There's a specific type of package called a target. Targets define your build environment. A target is a combination of an application and a board support package — app and BSP. Every target in your build system will have an app, which is where you define your main task in the Mynewt kernel. Basically, the way the Mynewt kernel comes up is that the board support package initializes the hardware and starts the operating system; you define the main function, which is the main application task within the Mynewt kernel, and that runs in the application itself. You don't have to use Mynewt this way — that's just how we've defined it. So there are two heads, or roots, of the build tree: the app and the BSP. Below those, they have dependencies on everything else they use within the Mynewt system. We'll go through all the files and show you this at the command line; this is just a quick overview of the layout.
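A target, in other words, boils down to a small package that pairs an app with a BSP. As a sketch — the newt target commands manage this file for you, and the values below are illustrative rather than copied from the demo — the definition looks roughly like this:

```shell
# Hypothetical target: the two roots of the build tree, app and BSP.
# The "@apache-mynewt-core/..." prefix points into an installed repository.
mkdir -p targets/my_blinky_sim
cat > targets/my_blinky_sim/target.yml <<'EOF'
target.app: "apps/blinky"
target.bsp: "@apache-mynewt-core/hw/bsp/native"
target.build_profile: "debug"
EOF
```

Everything below those two roots — kernel, HAL, drivers — gets pulled in through the dependency graph.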
We have system configuration built into the system as well. If folks are familiar with Kconfig, this is very similar — it's kind of an equivalent to Kconfig. Packages can create system configuration settings — which timers to use, the CPU frequency — and those are defined within the package itself. Each setting is defined with a default value and a description, which means you can introspect them, and then you can override those settings in other packages. Packages can also make different decisions as to which build flags to export, or what settings to set in the system, based upon the values of these configuration settings. And source code files can conditionally compile things in based upon those settings. One of the other decisions we made is that we bundled in debugger support. In Mynewt, what we've basically said is that anybody who builds a board support package is responsible for implementing the debugger support. Whether it's OpenOCD or J-Link or something like that, it's your job when you create a BSP to make sure the debugger support actually works. Newt is cognizant of that, and when you type newt debug or newt run, it calls out to the board-specific OpenOCD or J-Link scripts and is able to load code onto the devices and run them. Essentially, we have a set of scripts in newt that help you manage J-Link and OpenOCD and the various debugger toolchains; within the BSP itself, you inherit from those scripts and define a debug and a download script, and then newt knows to call out to those debug and download scripts. Finally, there's a set of functions to create and generate downloadable software images within newt itself. So there's newt create-image — and there's a talk on our bootloader just after this one — which supports generating a downloadable firmware image.
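Here's a sketch of the define-then-override flow just described, following Mynewt's syscfg.yml convention; the setting name and values are made up for illustration. One package defines a setting with a default and a description, and another package (or a target) overrides it:

```shell
# A package defines a setting with a description and default value,
# which makes it introspectable (hypothetical setting name).
mkdir -p my_pkg my_target
cat > my_pkg/syscfg.yml <<'EOF'
syscfg.defs:
    MY_PKG_TIMER_NUM:
        description: 'Hardware timer used by my_pkg'
        value: 0
EOF
# Another package or a target overrides the default.
cat > my_target/syscfg.yml <<'EOF'
syscfg.vals:
    MY_PKG_TIMER_NUM: 1
EOF
```

Source files can then conditionally compile against the resolved value of a setting, which is how one code base supports many configurations.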
Essentially, what you pass to newt create-image is the name of the target and the version of the image, along with an optional key that you want to sign the image with. It will take the result of a built image, put an image header on it as well as an image trailer, and sign the image for you. You can then load that image directly onto devices, or you can create yet another package that sits above it — your manufacturing image — and generate an entire flash image for a specific device. That's what the newt mfg commands do. So there are two ways you can generate images for a device. There are downloadable software images that can be uploaded over the serial port or remotely using Bluetooth; or, what a lot of people do when they actually manufacture products at scale, they create a whole flash image that typically gets burned in by their flash provider — an Arrow or an Avnet. newt mfg create allows you to create that entire flash image for a device. So that's a quick overview. I'm going to jump directly to the command line here. Please feel free to interrupt me if I have anything wrong. Well, that needs to get bigger. Is that big enough, or bigger? It's fine, all right. The first thing I'm going to do — I created a sample project, but I'm going to do a new one. There's a little bit of latency due to the internet, but this is the workflow. If you're going to create your own sample project, what you type is newt. So there's the command newt, and it has a couple of subcommands, but the one that's interesting here is newt new, which allows you to create a new project. So we're going to create a new project here: newt new open-iot.
What that does is go and download a project skeleton from incubator-mynewt-blinky, which is the basic skeleton of a project, and install it in the open-iot directory. So now you have a base project definition, which contains typical ASF boilerplate; a sample application that we call blinky, which blinks the LED; a project.yml file, which I'll show you in a second; and one target, which is my_blinky_sim. my_blinky_sim has an application, which is the local app, apps/blinky, as well as a board support package, which relies on Apache Mynewt Core's hw/bsp/native — our native BSP within Mynewt. The way our system works is that we have a simulated kernel that runs on x86 Unix systems; it essentially saves and restores the stack using setjmp and longjmp, and uses signals as a timer, so you can run things simulated. What we're going to do now is compile blinky locally. If you look, there's a project.yml file, and it specifies the project name and the repositories it relies upon. The repository it relies upon is apache-mynewt-core. You can see here that its type is github. vers is 0-latest, so this is prior to 1.0. The user is apache and the repo is incubator-mynewt-core. We support straight-up git repositories, and we also support private GitHub repositories as well. So what we're going to do now is newt install -v, and depending on the speed, we'll wait for it to install. What it's doing now is going out and downloading the repository description for apache-mynewt-core, which I'll show you in a second. It's then saying: OK, you asked for 0-latest; 0-latest corresponds to the 1.0.0-beta2 tag, which is our latest release of Mynewt. And now it's going to clone that into the repository, which we'll jump into in a second. So now when I type tree, it's going to be a monster.
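The project.yml being walked through looks roughly like this. The project name is illustrative, but the repository descriptor fields — type, vers, user, repo — are exactly the ones just described:

```shell
cat > project.yml <<'EOF'
project.name: "open-iot"
project.repositories:
    - apache-mynewt-core

# Repository descriptor: where to fetch the repo and which version to track.
# "0-latest" is a moving tag that resolves to the latest pre-1.0 release.
repository.apache-mynewt-core:
    type: github
    vers: 0-latest
    user: apache
    repo: incubator-mynewt-core
EOF
```

newt install reads this descriptor, resolves 0-latest to a concrete release tag, and clones the repository under repos/.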
What you can see is that this has now brought Apache Mynewt Core into the system and put it in the repos directory. In addition to that, it's created a project state file. If you look at the project.state file — 1.0.0-beta2 is essentially 0.9.99 in our release numbering scheme, which I'll show you in a second — it locks the project to that specific version of the download. You saw that in project.yml we specified 0-latest, and 0-latest is a tag that is constantly moving in space. We've now downloaded, and are building our software with, 1.0.0-beta2, and we want to lock to that so that somebody else can replicate this repository, and so that we don't just constantly upgrade past it. So project.state locks the version. If you then go into repos, there's apache-mynewt-core, and if you do git status, you can see it's now detached at the 1.0.0-beta2 tag. If you view the branches, you can see there's master and there's the 1.0.0-beta2 tag that's been downloaded. So now I can do newt target show, and I can run my_blinky_sim, which is going to go and compile. One of the nice things about newt is that it automatically detects the number of CPUs you have and does threaded builds based on that — and there are quite a few CPUs on here. Now I'm going to run this simulated. You can see — this is not very exciting, but our simulated version is running. I can stop it and set breakpoints. What I used there was a command called newt run. On physical hardware, newt run compiles all of the source code in the target, creates an image out of it, loads that image onto the device, and then starts GDB — well, it starts OpenOCD, sets up a GDB remote, and then connects GDB to the target. And we have a number of commands built into newt to do that.
So if you look here, what we're going to do now — and I'll get a little more into the artifacts that we've actually built — but first, let's build it for a real target, because that's a lot more interesting than our simulated environment. Right here, I have an nRF52840, which is Nordic's new shiny board with Bluetooth 5 support and a whole bunch of other things. I'm going to run apps/blinky, but now on this PDK board. A couple of things will help me do that. One is that I need to find a BSP. There's a command called newt vals, which shows me what types of elements are available in the local directory. If I do newt vals app, for example, it shows me all of the possible application-type packages I could run. Or if I do newt vals bsp, I can see all of the available BSPs. These are the BSPs available within the Apache Mynewt Core 1.0.0-beta2 release. So what I'm going to do is copy the target I have for my_blinky_sim and create my_blinky_52840. Now, if you look at newt target show, I have two targets. I'm then going to set the BSP to the nRF52840 PDK and do newt target show again. And now what I'm going to do on this board is actually build it and run it. So there's newt build my_blinky_52840. That's going to go through and compile all of the Apache Mynewt Core packages, and it's going to generate an artifact in the targets directory. If you look now, there's a bin directory, and there are two results in there: one for my_blinky_sim, the other for my_blinky_52840. If you go in there, there's an app directory, and then it's in apps/blinky. What ends up getting generated for every device is a .a file, which is the local archive file of the package itself.
There's blinky.elf. There's a bin version of that. There's the command used to generate it. There's a list file, a map file, and then a manifest associated with the build. If you look at the manifest, it contains the name of the target, the build time, the packages that were part of the build, and any build version information that we have — the full target definition, the repos that were used, and the commit ID and URL of those repos. So you have full information about what was used to generate this build in the newt tool itself, and you can use commands to introspect that build. So here's newt target show — there's my_blinky_52840. These are package descriptors. If you look at apps/blinky, this is the definition of a package: there's a pkg.yml file in it. That pkg.yml file defines the name of the package, the type of the package, the author, basic information, keywords to help with search, and a few dependencies within blinky: it defines kernel/os, hw/hal, and sys/console/full. If you look at the source code of blinky, it's src/main.c, and you can see there's a simple function here where we include os/os.h, bsp/bsp.h, and hal/hal_gpio.h. These includes have a very specific, defined structure. Go back to pkg.yml and you see we include kernel/os. The kernel/os package is in repos/apache-mynewt-core/kernel/os, and if you tree it, you're going to see a few things. There's the include directory. For every package, we require the include directory to contain a subdirectory aliased with the package name. This is because there's no way to actually create an include alias with GCC, so you need to physically alias the directory itself. Within the os directory, the board support package actually defines the concept of an architecture that we're building for.
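The include-alias rule just described can be sketched like this: because GCC offers no way to alias an include path, each package physically repeats its name as a subdirectory of include/, so a consumer's #include "os/os.h" resolves naturally once that package's include directory is on the search path. The file layout below is a minimal illustration, not a copy of the real tree:

```shell
# kernel/os ships include/os/os.h, so dependents write: #include "os/os.h"
mkdir -p kernel/os/include/os kernel/os/src
touch kernel/os/include/os/os.h
```

The same pattern explains bsp/bsp.h and hal/hal_gpio.h in the blinky source: each prefix is the aliased include directory of the package that provides the header.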
So there's cortex_m0, there's cortex_m4, there's MIPS, there's sim — the simulator architecture — and then a whole bunch of architecture-independent OS header files that can be included there. There's the pkg.yml for os, and then there's the source. If you look at the source, we have arch/cortex_m0 — a specific implementation of our OS for the M0, for the M4, for MIPS, for sim — and then a whole bunch of common OS files. In addition to that, let's actually look at the package itself. There are a couple of things in here. The OS itself is kernel/os. It has dependencies on sys/sysinit — that's our OS system initialization — as well as some basic memory management functions that we include. These dependencies don't reference apache-mynewt-core, because they are in the apache-mynewt-core project; you only have to add a repository specifier when you're referencing an external repository with newt. It also requires a set of APIs. This is something we've added to the build system, which is to say — and these APIs can be versioned — I don't need a specific console package or a specific implementation of a console; I need what I define as a console API. And if you remember, in the apps/blinky package, we actually added sys/console/full as a dependency. That's because you want to let the application decide whether you're running a full console, a console over a UART, or just a stubbed console. All the OS cares about is that there is a package included in the build that satisfies the console API, and when there is, it automatically gets added as a dependency of the operating system. There's an additional set of conditional dependencies as well. OS_CLI is a system configuration setting, and what we say here is: only if the CLI is enabled on the operating system do we include the shell as a dependency. And then there's an initialization function as well.
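A sketch of the API mechanism and the conditional dependency just described, using the provide/require fields in pkg.yml — the file contents below are illustrative, not copied from the repository:

```shell
mkdir -p sketch/console_full sketch/kernel_os
# A console implementation *provides* the console API...
cat > sketch/console_full/pkg.yml <<'EOF'
pkg.name: sys/console/full
pkg.apis:
    - console
EOF
# ...while the kernel *requires* the API without naming an implementation,
# and pulls in the shell only when the OS_CLI setting is enabled.
cat > sketch/kernel_os/pkg.yml <<'EOF'
pkg.name: kernel/os
pkg.req_apis:
    - console
pkg.deps.OS_CLI:
    - sys/shell
EOF
```

The application then picks whichever package satisfies the console API — full, UART-only, or stubbed — and the build system wires it in as a dependency of the OS.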
So if you look here, what a system configuration definition looks like in Mynewt is essentially this: there's the definition of the setting itself. We have the priority of the main operating system task, the stack size — all of these are overridable settings, but they're set initially within the operating system. So, going back to newt target show: we've built my_blinky_52840, and we can run commands like newt size -f. This will show us — well, this is not showing great because of the size of my terminal — the code size of all of the functions that are actually compiled into the system, and what percentage of code size they're using. If you want to do that with RAM, I believe it's -r — this could just fail... No? Cool. And it'll show you that our idle task stack is 256 bytes, and it is 2.71% of total RAM usage in the system. So there are helpful functions that help you understand the results of your builds. In addition to that, we can now create a downloadable software image. We do newt create-image, which gives us the ability to create an image from the target itself. The options here are the version of the downloadable image, along with, optionally, the private key that you want to sign the image with. So if we do newt create-image with my_blinky_52840, it will actually generate an image for that device. What we're going to do now, though, is newt run with my_blinky_52840, which is connected over J-Link — the Nordic 52840 boards have a J-Link chip built into them, so you can load code directly onto them. And I'm going to specify an image version of 0.0.0, which is just my local version. So what this is doing — if you look up here, you'll see it actually generates an app image in blinky.img.
It then executes the debug script for the 52840 PDK, runs that script, and starts GDB. And because you've typed run, it actually goes and calls mon reset. You can then type continue, and you'll see it's running. And if we go here, you can see it's in the idle tick itself. So we've built in debugging support for GDB. You don't have to start GDB, though — you can also call newt run and have it set up a GDB remote and then connect to it with Eclipse. There are actually instructions for calling out to newt from Eclipse, and all of this can run from Eclipse. One of the nice things about that is if you have a mixed environment, where some people run Eclipse and others, like me, run vi, you don't have to worry about generated Eclipse project files and all of that stuff. It's all calling out to newt as a back end, so you have the same development environment for both. In addition to that, we have a couple of other helper functions for debug. You can just type newt debug and the name of the target again, and that just connects to the debugger where it is, without resetting the target — that uses the OpenOCD connect. And I think there's just one other thing I wanted to show everyone before I take questions. There's newt info, which allows you to introspect things: you can see the repositories in the project and information about those repositories. And then the final thing that newt allows you to do — and I'm going to do this from core, because that's what I do — is run our unit test suite. Our operating system has a unit test suite intrinsically built into it, and those unit test suites can also be executed from newt. So what we do here is type newt test all, and that will go and compile the tests for every package and run the entire test suite locally. You're seeing this actually happen with threaded builds across the entire system as well.
And it will execute all of those tests. If you look at the — well, let's just stick here; believe me that this runs, it just takes some time. The way you do tests is: there's a test directory, and in the test directory you can define source and your actual tests to run. If you define that directory, newt will go through every package in your system and automatically run your unit test framework as part of your system. So we have tests built in. So that's a little bit about newt. [Audience question] How do you what? How do I get it into git? Oh, you just commit it. Well, you can do a couple of things — I was actually just going to go to GitHub, and I'll address your question with an example. I think it helps to show an example of a project that relies on mynewt-core and has some development in it. What I'm going over here is something called mynewt_arduino_zero, which is Arduino Zero support for Mynewt. If you look at the Atmel header files, they're not Apache-license compatible. They have what a lot of chip vendors put in, which basically says these header files cannot be used on any system except Atmel products. That's incompatible with the Apache license, so we can't release them in the base Mynewt distribution. At Runtime, we just release them separately. In there, what you see is that there's a project.yml file, and that project.yml file contains a dependency on incubator-mynewt-core. And this project has a repository.yml file, so it's actually released as well, and it's a project that other people can depend upon. If you're developing an application for the Arduino Zero, you can say in your project.yml: I want to rely on incubator-mynewt-core, and I also want to rely on mynewt_arduino_zero. These are both dependencies of your project.
And then we have versions, which basically map to tags. So what we have is a tracking tag here, 0-latest. We say 0-latest maps to a real version, 0.9.99, and that maps to a specific Git tag. In our version here, we have not actually checked in the full Git repository for a local project. So if you want to use this, what you'd do is basically say, I want to git clone mynewt-arduino-zero. Now I have mynewt-arduino-zero. If I do a newt target show: no targets, so I'm going to create one, with newt target create and newt target set. I have a couple of sample apps for Arduino; right here I'm just using the Arduino test app. I have custom BSPs for the Arduino Zero, so I'll use the Arduino Zero BSP as an example. So now I have my target. If I type newt build, this is going to fail. So: newt build my_blinky... unknown package, right? The Apache Mynewt core is not actually installed. If you look here, the way we've done this is we just have a project.yaml file. So we do newt install -v, and that's going to go out and do what we showed you before, which is download the Apache Mynewt core. Here it goes. But say I wanted to have my own fork of the Apache Mynewt core, which I think is what you were asking. You just create your own Git repo, and it can rely on it. You can either check in the source code for the Mynewt core, or you can create a fork of it. For example, the way we work is we submit feature branches, and we do everything by pull request. I think my GitHub might be sterling-use. So I go like that, and I can now do newt install -v, and it actually pulls it from my local branch of that as well. So if you want to maintain your own version of the Mynewt core, and you want to just be downstream from us, that's great. You can do that.
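The clone-create-build-install flow just described might look roughly like this. The repo URL, target name, and app/BSP paths are illustrative assumptions, not exact values from the demo:

```shell
# Clone the project that depends on the Mynewt core (URL assumed)
git clone https://github.com/runtimeco/mynewt_arduino_zero.git
cd mynewt_arduino_zero

newt target show                               # no targets yet
newt target create my_blinky
newt target set my_blinky app=apps/arduino_blinky     # sample app (name assumed)
newt target set my_blinky bsp=hw/bsp/arduino_zero     # custom Arduino Zero BSP

newt build my_blinky      # fails: apache-mynewt-core is an unknown package
newt install -v           # fetch the dependencies declared in project.yml
newt build my_blinky      # now succeeds
```

To track your own fork instead, you would edit the repository entry in the project file to point `user`/`repo` at your fork, then rerun `newt install -v`.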
And you can define your own repository.yaml, and you can just sync in from our system as well. So I think that's pretty much what I wanted to cover. I'd love to take any questions that folks have. I think we're right about on time. Sure, yeah. And sorry, I have a little note here that I have to repeat the question. The question was, can I talk a little bit about what newt is written in and what it was inspired by? It's written in Go, and Go has a package management system intrinsic to the language. What was inspired by Go, I think, in a lot of ways, was the idea of tying the language to the package management system. C obviously doesn't directly fit into that paradigm and wasn't designed for package management, but things like enforcing an include directory hierarchy, or enforcing a source directory hierarchy, just make people work in that way. Because if you work in that way, things will be consistent. So it was inspired by that in Go. And then what we added, which was not really in Go but was in other things like Composer or the Ruby on Rails ecosystem, was the concept of versioning and how you manage versions: using a project.state file to lock your repository to a specific version, and how you manage, install, and upgrade, and how you do that process. That was very much modeled after Ruby and PHP. So those were the inspirations. Any other questions? Yes. Yeah, exactly. And you do have the ability to, with them, newt gives you a lot of information. So we spend a lot of time on that. Obviously, we make lots of mistakes with the YAML files, too. So if you, for example, have two packages with the same name, there's a conflict in the system. A lot of times, earlier in our system, we didn't have a newt pkg copy command. So what people would do to create a new package is they'd just copy the nearest one.
And of course, the immediate thing they'd forget to do is change the name of the package, and then the build system would just fall over, and it'd be awful. But now we actually detect multiple package names and conflicts; we've built a lot of conflict detection into the system. And it's all text-based and all YAML. Yeah, so there are no other operating systems currently using this besides Mynewt. But we are starting to use this with Mynewt components that we're breaking out and making cross-operating-system. So the next talk, in Galleria South, is about MCUboot, which is the secure bootloader for Apache Mynewt. That is using this, and it's a shared project with Zephyr. We're looking to bring others on. Similarly, we're looking to break out the Apache NimBLE stack, which is our Bluetooth stack, and make that work with RIOT and other operating systems, and that will use this as well. Obviously, those projects also use whatever Zephyr uses to upstream things. So we don't dictate this to other people, but it helps us manage some cohesion for our shared projects. And then the second question was continuous integration. Yeah, so Runtime as a company is actually developing a continuous integration system based upon Apache Mynewt. So on every check-in or every pull request, we take the pull request, download it, run these builds, and upload them to our CI system. The manifest gets uploaded, and so we can track all of the information based upon the results of these builds. Any other questions? [Audience question about the Apache Software Foundation, partly inaudible.] That's right. Sure, sure. So the Apache Software Foundation actually doesn't do a lot of embedded, right? It's typically Hadoop and big data. So we're one of their first embedded projects.
The way it works is it's a roughly 600-person volunteer member organization that has essentially voted on a set of rules for running meritocratic open-source projects. And what's nice about the ASF is they do contributor license agreements and patent assignments, they manage your infrastructure, and they do all of that completely free, so long as you abide by those rules, which are essentially that individuals control the direction of the project. Committers elect other committers. Those committers elect a project management committee. And merit doesn't go with the company; it goes with the individual. Those are kind of the founding principles of the ASF. So when new projects come to the ASF, and the ASF has over 600 projects, there's a whole bunch of rules on how you release and what you have to do. For example, every release, we have to do license checks: how do you ensure that there's no GPL or other software that's incompatible with the ASF within your release? And so what you do is you go through the Incubator. The results of going through the Incubator are that you learn how to release, you build a community, and you show that that community can operate within the Apache way and understands it. We're probably going to exit the Incubator in March, knock on wood. We've been in there about a year, which is typical for incubation projects, and so we're becoming a full-fledged Apache project. Oh, thanks. Yeah. Yep. [Audience question, partly inaudible, about running Nordic's SoftDevice as an alternative stack alongside a Mynewt project.] Sure. We are compatible with the SoftDevice, but we do not have API compatibility. So you're either using the SoftDevice API or you're using our API.
But you can certainly link the SoftDevice in and then add Mynewt to that system. Exactly. But you can't run Nimble at the same time as the SoftDevice, obviously; it's one or the other. We were thinking of doing an API mapping between the two, but it just ended up being too much to take on. Yeah. [Audience question, partly inaudible, about whether Nimble is attractive enough for users to choose it over the SoftDevice.] So we have now officially passed all of the PTS 4.2 tests. So on the Bluetooth stack, we pass full 4.2 certification now. So should we? Yeah, I think so. But you're welcome to use the SoftDevice as well. Any other questions? Well, thank you very much.