All right, shall we get started? Good morning. So I'm Carlos from Nordic, and I'll be presenting today about how we at Nordic develop and maintain a Zephyr-based microcontroller SDK. I'll start by talking a little bit about me. I'm a former demoscene coder. I don't know if you're familiar with that, but you can look it up later; I don't really have time to go into the details, but it's quite a fun amateur scene for programming. I'm an embedded engineer now with a background in Bluetooth. I started in Bluetooth back in 2000, I believe, so a long time ago. I've been employed by Nordic since 2010, so I'm celebrating my 13th anniversary this year. I'm based in Barcelona, and I like cycling a lot in the hills. I also co-authored a book about Bluetooth Low Energy back when I was working on the SoftDevice. The SoftDevice was, or is, Nordic's Bluetooth Low Energy stack; I was part of that team and I designed the API. I was also the main person in the beginning driving the push to adopt Zephyr at Nordic, so I'm the one to blame if you're a customer who's not happy. Some are, some are not, so there's a little bit of everything. And right now I lead a team called Vestavind, which means west wind in Norwegian; Zephyr is the west wind, as you probably know. I'll be talking a little bit about the team later. So let's talk a little bit about Nordic. Nordic is a fabless semiconductor company, so like many others, we make chips. We are specialists in low-power wireless, so we generally don't do generic MCUs like other companies do; all of our MCUs have some sort of wireless connectivity built into them. We are the market leader, especially in short range, so Bluetooth Low Energy in particular, but also Thread and other technologies I'll be talking about.
Then some years ago we also introduced LTE-M and Narrowband IoT, so beyond short range, also longer range, and now we're expanding into the Wi-Fi market: recently we introduced our first Wi-Fi chips. We have offices a little bit all over the world. We've been growing a lot since I joined, so the company has been doing well, but the company is actually pretty old. It started in 1983 doing custom ASIC designs, so we're celebrating the 40th anniversary this year, and we have some really nice celebrations lined up, particularly for employees. We're 1,300 right now and growing, of which 76% is R&D, so it's a very technology-oriented company. And there's the usual financial-type information; I'm not very familiar with that, but there you go. So I want to talk about our chips, because really the purpose of all the work that I, my team, and many other teams do at Nordic is to support these chips. We sell chips; the software is free, right? So when our customers buy our products, they're buying our chips, and the software is there to convince them to buy them and to let them develop the software that runs on the chips. We have chips for short range: you can see here the nRF52, nRF53, and the upcoming nRF54 Series; they support Bluetooth, Thread, and Matter, the three tiny logos there. This year we introduced the nRF70 Series, which is Wi-Fi. Right now it's only a companion chip, so to speak, a chip which sits side by side with another MCU.
Then there's the nRF91 Series, which I talked about a minute ago; they support LTE-M and Narrowband IoT, and in the future there's a new standard called DECT NR+ that we are working towards supporting. We're actually part of the specification working groups, just like we are with Bluetooth; we're big contributors to the Bluetooth spec, by the way. And then relatively recently we've introduced other product ranges, PMICs and range extenders. Of all these chips, almost all are supported in upstream Zephyr today; not all of them, but almost, and if they're not, we'll probably work towards getting them in. Where do Nordic ICs live in actual products? When we started selling generic wireless chips rather than custom chips, our bread and butter was the HID market, so mice, keyboards, and so on, but there's been a bit of everything: medical, gaming, tags (smart tags), lighting. Bluetooth Mesh, for example, was a big boost for lighting systems: being able to connect multiple light bulbs in a mesh and turn them on by connecting to just one of them and relaying that information, and all sorts of things. And with LTE-M and Narrowband IoT we're getting into the asset tracking business and other applications that we couldn't reach with Bluetooth. So we're now a company that's present in multiple segments. Now I want to talk a little bit about how our SoCs have evolved over time, because that's one of the major motivations that drove us to adopt Zephyr. I actually had to look this up, because although I've been at Nordic for a long time, I didn't remember the numbers, and it's quite astounding actually. In 2004, before my time, we introduced a chip called the nRF24LE1.
It's been enormously successful as a non-Bluetooth, proprietary wireless connectivity solution, and it had 16K of flash and 1K of RAM on an 8051 running at 16 megahertz. In 2012, some years later, we introduced our first modern Cortex-M-based series, the nRF51: a Cortex-M0, not an M0+, at 16 megahertz. All of the numbers here are "up to" figures; within a series there are different variants, but these are the highest numbers of each series: 256K of flash, 32K of RAM. That's the chip that introduced the SoftDevice architecture. Then there was the nRF52 in 2015, and now we stepped up from an M0 to an M4 at 64 megahertz, one megabyte of flash this time and 256K of RAM. So we were scaling up in terms of supported technologies: all of those support our proprietary technologies that used to be popular with mice and keyboards with a little dongle that you connect to the PC, but also Bluetooth Low Energy starting with the nRF51, and then Thread starting with the nRF52. Then the nRF53 in 2020: we're already dual-core now, at 128 megahertz, so no longer single core, which complicates things. We start getting into IPC primitives, we start needing to build two images, and so on, so the complexity rises. We're at one megabyte plus 256K of flash, that's per core, and 512K plus 64K of RAM, and now we support Matter on top of the technologies we already supported. And finally, the upcoming family has been pre-announced but doesn't have a release date yet. I cannot unveil more than what has already been said in the press releases, but basically: multiple Arm Cortex-M33 cores at 320 megahertz, multiple RISC-V cores, two megabytes of flash, one megabyte of RAM. So you can see, and I have a little diagram here to show this, the amount of flash, RAM, and the clock speed; don't pay attention to the Y axis, this is just for reference, but the point is that the complexity of the chips has been rising nonstop.
Since we introduced our first commercially available off-the-shelf chip until now, where we're developing the nRF54 that will come out at some point in the near future, we just have more flash, more RAM, and more megahertz. And it's not just that we introduce variants with more flash and RAM; it's that even the most basic chips in each new family have more flash and RAM. Everything is pushing forward and up, and there's no stopping this. So that's a fact. Now let's talk about the software that goes with these chips. Like I said, we offer a software development kit free of charge, like almost every other silicon vendor: you download the SDK, you start developing your application, and the SDK gives you everything you need to write an application: drivers, storage, a kernel if necessary, everything that's required to develop an IoT slash embedded application. The thing is, the architecture of the SDK itself had not evolved as quickly. This happens often, because the hardware tends to trend upward with the market: there are new process nodes, new non-volatile storage technologies, new libraries from the foundries, et cetera. That naturally pushes the chips forward, but it doesn't necessarily happen with the software. The software can get stuck in the past, and that's what happened to us. And it's not just that the software had to evolve and improve; it's that the amount of software that we had to write for an MCU SDK basically skyrocketed. The complexity just grew so fast. So at some point in 2016, we realized that the SDK offering that we had was really not ready for the future. It was not scalable; we were about to introduce our first long-range wireless chip, and the software wasn't there.
So what were the problems with this software? Well, first, it wasn't scalable. We had an SDK per technology: if you wanted to do Bluetooth, you would download one zip file; if you wanted another technology, another one; and it was really difficult to combine them. On top of that, not all our chips were supported by all SDKs, for internal development reasons, so for the customer, for the user, that was a nightmare. We also had a very inefficient development model at the time. There was no common code base: every team developing an SDK was living in their own silo, and there was some cooperation, but it was ad hoc at best and not made for the future. We knew that would not scale. On top of that, we didn't have enough software engineers to work on all the software we realized we had to develop. This became very obvious when we started needing a TCP/IP stack for Narrowband IoT and LTE-M, and LwM2M; it became obvious that it was too much software for a company that didn't have enough engineers to write it. Then there was the problem of updates. As a customer, you start developing your application, and some months or years after, you want to update to a more recent version of the SDK, and updating was really complicated. This was in part due to the fact that there were multiple SDKs and that the architecture of the SDK was not really designed for that. Another problem was that we were kind of stuck in the past.
We were now at the Cortex-M33 level with the nRF91: it had a powerful microcontroller core, lots of RAM, lots of flash, but we could only offer bare metal to our customers. And RTOSes, as you probably know, have in the last few years been slowly but surely becoming the standard; the world of embedded software is trending towards using an RTOS more and more, but we were absolutely isolated from that, and we only had a bare-metal SDK. On top of that, we could not offer advanced scalability mechanisms like configuration and hardware description; we were reliant on the different IDE mechanisms, which were ad hoc and barely holding together at best. It was just not a good system. On top of that, there was the distribution model. I think that's still the case for some vendors, but our SDK was offered as a zip file: you just click download, you get a zip file, you unzip it, and then you as a customer had to either create a Git repo with it, or a Subversion repo, and put it under version control yourself, so there was no version control by default. That has several downsides, but for me the biggest downside is that users had no idea why we had made the changes between one version and another. There was no description; you just got a huge dump of files, and that's it, with no idea how we got from one to the other. You could make a diff, for sure, but there was no justification, no explanation as to why things had changed. For example, I remember back then we rewrote a couple of systems, one of them being the bond manager, I think, and we had very good internal documentation about the shortcomings and why we rewrote it, but that never reached our customers; they never knew why we did it, so that was a shame. So we started shopping around and thinking about what we could do to make the situation better.
The thing is, coincidentally, or maybe not, maybe the whole world was trending towards this, Zephyr had been introduced shortly before, I think in early 2016, so when we started looking around in mid-2016, it was very early Zephyr, the very beginnings of Zephyr. A small group of people, initially me, but then with help from other contributors and other Nordic employees, started a small pre-study to evaluate the feasibility of taking Zephyr and using it as a framework for our future SDKs. And not only Zephyr, but other RTOSes too. I remember vividly, we started by looking at everything that was there: proprietary RTOSes, commercial ones that you had to pay for, then open source ones. But very quickly, a few of us decided, sort of arbitrarily, based on what we were seeing, that open source was the future of embedded software development, just like it was already the present of mobile phone Cortex-A software for Android and so on, and also of server software. We thought that the open source revolution was going to happen in the embedded world, just like it had before in the other segments. So we went out and studied the open source RTOSes in depth, because we were now focusing on those: RIOT, Contiki, Mynewt, Mbed, NuttX, FreeRTOS, and Zephyr, of course. Not all of them are comparable, but we still went through them all, tested them out, built some apps, and in the end we settled on Zephyr, for many reasons. I've talked about this many times with people; now that Zephyr is a bit bigger and has settled as a very important project in the industry, people ask it a little bit less, but back then: why did you choose Zephyr?
Well, for starters, it was open governance; that was not the case with all the other RTOSes. This means that no single company could impose their view: it was vote-based, essentially a cooperative decision-making process. Cross-architecture: again, not all of the RTOSes on my list were cross-architecture, and we knew sooner or later we would need to go beyond Arm, and that's now a fact with our latest offering, which has reached five cores. Focus on small footprint: the problem with some of them was that small footprint was theoretically supported, but it wasn't their main focus, so you tried to build for a small MCU and things would blow up. Very good code quality, strict code reviews, a clean commit history; that is surprisingly important. And also it was batteries included, more than just a kernel, and we wanted that; we wanted specifically something that went beyond the kernel. So we assessed the risks, and it wasn't easy to push this through internally. This was a very disruptive break; there was a lot of uncertainty in 2016 about where Zephyr would be in five years. There were also concerns with software IP inside Nordic, like: what happens to our IP if we start contributing to open source? But we ended up moving forward with it. There was a sentence someone said, I think it was me, but I don't even remember: instead of waiting to see if Zephyr and open source end up happening, we can actually make them happen. So what we decided to do was go all in: buy into Zephyr, base our SDKs on Zephyr, and contribute to Zephyr. So we decided to create the nRF Connect SDK, also known as NCS. We went from bare metal to an RTOS, Zephyr. We went from multiple SDKs to a single one, a single unified code base. We went from a zip file to GitHub for distribution.
And finally, we went from a development model where each project would push directly, sometimes without even code review, to a system based on pull requests, where everybody has their code reviewed before it goes into the main branch. So that was the whole setup, the whole idea. And behind that, there was an internal development model, and lots of hours convincing people and setting up everything we needed to make this happen. So let's start with the first point, bare metal to RTOS. Zephyr is obviously the core of the nRF Connect SDK. In fact, all of our samples and applications are Zephyr applications; we don't have a special build system layer, they are Zephyr applications. Almost all of our SoCs and boards are upstreamed, to simplify our lives mostly, but also our customers'. And NCS as an SDK makes super heavy use of everything in Zephyr: the kernel, the device and driver model, the OS services, the connectivity. So we didn't pick Zephyr just for the kernel; we actually use many of the components present in Zephyr. Single code base: another of the four tenets we adopted to make this happen. I spoke a few minutes ago about using the proprietary IDEs back then, Keil and IAR mostly, which would provide the build and configuration system. So the actual build system, the makefile so to speak, was actually the IDE. And for configuration, we used a system that Keil had, but it was really difficult to maintain, because we had to support multiple tools, they were not really consistent, and the whole thing was not unified; that was our problem. From there, we went to what we call industry-standard tooling. I mean, CMake is pretty much a standard today, so we use CMake for building. In fact, the transition from make to CMake in Zephyr was contributed by us.
We also now use Kconfig, just like Zephyr, to configure, and we have one way of configuring the code and only one; you don't need an IDE for it, you can do it from the command line. And we use devicetree for describing the hardware itself. Then there was the distribution model. Like I mentioned before, distributing the SDK with zip files meant no version control, no visibility as to why changes had happened, and no intermediate fixes either. That's very important, because between two releases, say three months apart, you're basically completely clueless about what's happening. If a fix has happened in the tree, you as a customer, as a user, have no way of finding out unless you have a support engineer sending it to you. With GitHub, it's obviously much more convenient. First of all, you're using version control already; Git is a standard. It's easier to update using Git and West; we'll talk about West in a minute. The Git history is all there for everyone to see, in plain sight, so the reason why a change was made is visible. All the fixes and improvements that we Nordic developers push are immediately available to everyone. So, a lot of advantages to this model. And the development model, when it comes to contributions, was again completely reworked from scratch. Instead of having these silos, these teams with their own project leads that would push or accept pushes, we said: OK, we will take inspiration from the open source development model that we have upstream. By that time, we were pretty familiar with it, because some of us were already maintainers. So we said: anyone can contribute, inside or outside Nordic. And this actually brought some real tangible benefits almost immediately.
Nordic employees like FAEs, field application engineers, and support engineers could now contribute to the code base. Before, they had to open a ticket and hope for the best; now they can actually send the patch, and they do. So that's very useful and practical, and of course users and customers can contribute as well. Obviously, this required a lot of internal adjustments. We had to transition from internal Git servers to GitHub, set up a hierarchy of maintainers, basically mapping the open source model onto an internal development model. It took some time, but we did manage to get there, and now we have a model that we're satisfied with. So, very briefly, what is NCS? From Zephyr we take a subset, but a very substantial subset: kernel, libraries, build system, devicetree, Kconfig, Zephyr modules, and West; I'll talk about those in a minute. And then Twister, the test tool; we use that too. On top of that, we add proprietary features and technology, applications and reference designs, testing and qualification, technical support, and VS Code integration. Basically, what this means is that Nordic engineers are free to work on things that add actual value. Does a kernel add value? Not really; there are dozens of kernels out there. Does another TCP/IP stack add value? Not really; there are plenty of those. What really adds value, at least from our perspective, is more and better-written applications that you can use as a starting point for your product, proprietary features that give you an edge, qualification, technical support; these sorts of things make or break a chip. So that's where we're focusing now. Very briefly, the components in NCS: the blue boxes are added by us, by Nordic; the violet boxes are Zephyr. So the middleware, the RTOS itself, and the board configs are all from upstream.
And then we add applications, some connectivity protocols, and some low-level wireless stacks. We only do the low-level part that touches the hardware itself, because that's where we have a competitive advantage. This is just a zoom in: for example, in the low-level part, you can see that we have the LTE layers and the multi-protocol coexistence layer, something that we only support in the SDK, being able to run multiple protocols at the same time, and a proprietary 802.15.4 driver with extra features. So we basically focus our efforts at the bottom, with advanced features that touch the hardware, especially the wireless radios, and at the top: applications, TF-M integration, samples, DFU, et cetera. So now we're going to talk a little bit about repositories. I'll start with some terminology; I want to talk about how to maintain and distribute from the perspective of repository management. A repo is a Git repository; you're hopefully familiar with those. A fork is a modified copy of a repository that you keep regularly updated: you have a repo, you have a fork, and you update it. Upstream is the repository you fork from, and downstream is the repository you fork into, the copy you make. So you have upstream, which in the case of Zephyr would be zephyrproject-rtos/zephyr on GitHub; then you fork it by making a copy, you maintain your copy, and you update it regularly with the changes that have been committed upstream. Now, upstream as a verb, to upstream something, means to send a change upstream, be it a change that was already in your downstream or a change that you've written specifically for upstream; that depends. And finally, synchronize, or merge, means to update a downstream with the latest upstream changes, yeah? And there's also the concept of out-of-tree. Out-of-tree just means something that you do not keep in a fork.
So instead of keeping everything in the fork, and we'll talk about that, we tend to minimize the amount of things we want to keep in the fork. What we do is try to keep as much out of tree as possible; we want to keep the fork, our copy of Zephyr, clean. How do we do that? We send everything upstream as much as we can, we use multiple repos so that we don't have to put everything in one single repo, polluting the fork, and we use as much out-of-tree as possible. That's our approach, and it's an approach that we're happy with and that has served us well over the years. A little bit about West. West is the main tool that we, Nordic, introduced in 2018; it's maintained by us as well. It does two main things: one is repository management, and the other is a standard command-line interface for Zephyr, for building, flashing, debugging, and so on. It has built-in and extension commands, extension commands being a way of extending the functionality of West via the manifest repo; we'll talk about that in a minute. So how does the repository management work? You have a manifest repository, and inside the manifest repository you have the manifest file. Zephyr is a manifest repository; it has a manifest file in it. In the manifest file, you have a list of projects. Those projects are other Git repos, each of them at a particular revision. In the case of upstream Zephyr, some of those repos are forks of other open source projects that we keep a copy of, with some changes. Some others are not forks; they are original to Zephyr. That's a small subset. An example is hal_nordic, our own HAL; technically it is a fork, but anyway. The point is that you have one central repo that points to multiple other repos, each at a particular revision, so that each revision of the manifest repo determines the revisions of all the others.
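A minimal west manifest illustrating that idea might look like the following. This is a sketch of the general shape, not copied from any real manifest; the revisions and paths are placeholders for illustration only.

```yaml
# west.yml -- lives at the root of the manifest repository.
# Each project is a separate Git repository pinned to a revision,
# so one revision of this file pins the entire workspace.
manifest:
  remotes:
    - name: zephyrproject
      url-base: https://github.com/zephyrproject-rtos
  projects:
    - name: hal_nordic            # a HAL pulled in as a Zephyr module
      remote: zephyrproject
      revision: v3.4.0            # placeholder tag, illustration only
      path: modules/hal/nordic
    - name: littlefs
      remote: zephyrproject
      revision: ca583fd2          # placeholder SHA, illustration only
      path: modules/fs/littlefs
```

Running `west update` then checks every listed project out at exactly the pinned revision, which is what makes updating reproducible compared to a zip file.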
Then importing: that's a key feature of West that we introduced in order to enable NCS. Importing means that if I have my example application, I can not only point to Zephyr, which I could, but I can point to Zephyr and say: now, from Zephyr, take all of its projects and bring them over into my manifest. And that's exactly how NCS and NCS-based applications work: NCS imports Zephyr, and an application written for NCS would typically import NCS, which in turn imports Zephyr. So indirectly, you get all of the projects that Zephyr has in your own workspace, so to speak. Modules: that's another key contribution that has been fundamental for us. Modules are a way of extending Zephyr without having to modify the Zephyr tree, the Zephyr Git repo, at all. This is the main out-of-tree tool. Basically, you have some metadata in the form of a zephyr/module.yml file that sits in the module repo; the Zephyr build system reads that and then pulls Kconfig, devicetree, source code, and CMake, as needed, into the Zephyr build system. Which means I can plug additional repositories into Zephyr, and into NCS by extension, without having to touch a single line of the original repo. That's the whole point of out-of-tree. We use this extensively in NCS, because NCS is actually a Zephyr module as well. And typically, if you write your own application and you have your own board, you probably want your application repository to be a Zephyr module as well, because then your board will automatically be recognized by West; even if you have an SoC or a driver, it will be picked up by the build system, and so on. Our repo structure: this is a very important slide. We have our manifest repo, which has our list of projects. This manifest repo is called sdk-nrf, an arbitrary choice of name.
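The module metadata mentioned a moment ago is just a small YAML file that tells the Zephyr build system where the out-of-tree pieces live. A minimal sketch, using the conventional directory layout rather than any specific repository:

```yaml
# zephyr/module.yml -- read by the Zephyr build system when this
# repository is listed as a module. It plugs CMake, Kconfig, boards,
# devicetree bindings, and SoC definitions into the build without
# touching the Zephyr tree itself.
build:
  cmake: zephyr            # directory with the module's CMakeLists.txt
  kconfig: zephyr/Kconfig  # Kconfig entry point sourced into the tree
  settings:
    board_root: .          # boards under ./boards are found by west
    dts_root: .            # devicetree bindings under ./dts
    soc_root: .            # SoC definitions under ./soc
```

With this file in place, an application repository's boards and drivers are discovered automatically once the repository appears in a west manifest.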
We have a library repo where we put everything that we distribute as a binary blob, so to speak, instead of as source code: some proprietary features that we don't want to, or cannot, distribute as source code. We have a few private repositories. Some repos have to be private because of the licensing restrictions of certain vendors, or because there are things that we cannot have in the open, mainly for legal reasons. Those are private, but they're still pointed to by the main manifest; it's just that by default you won't get them unless you enable them, because you have to have access to them. Then we import Zephyr, as I was saying; you can see the double arrow here, which means that we're not only pointing to Zephyr but actually pulling it into our manifest, and with it, we're pulling in the projects from Zephyr. Now, we pull them in in two forms. One is vanilla, meaning we take the exact same module that Zephyr has; we don't override it. That's the case, for example, with hal_nordic, LVGL, littlefs: we just pick whatever revision Zephyr has and pull it in. And some of them we fork ourselves, because we have NCS-specific changes on top that we cannot have in the Zephyr forks, so we actually make a fork of the Zephyr fork; there are two levels of forking here. Take MCUboot: there's an upstream MCUboot, Zephyr forks MCUboot, and NCS forks the Zephyr fork, right? And all of this is picked up by the manifest as well. And finally, we have other forks. For example, what's now called Matter, which used to be called Connected Home over IP, is not supported by Zephyr, so we fork it in our own GitHub organization and then point to it from our manifest. That's not the only example; we have a few.
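The two-level structure just described can be sketched as a manifest that both pins a Zephyr fork and imports its project list. This is a simplified illustration of the mechanism, not the actual sdk-nrf manifest; the revisions here are placeholders, and a real manifest pins exact SHAs.

```yaml
# west.yml in an sdk-nrf-style manifest repository (sketch).
manifest:
  projects:
    # Our fork of Zephyr, at a pinned revision. "import: true" pulls
    # Zephyr's own project list into this manifest, so the vanilla
    # modules (hal_nordic, LVGL, littlefs, ...) come along for free.
    - name: zephyr
      url: https://github.com/nrfconnect/sdk-zephyr
      revision: main              # placeholder; real manifests pin a SHA
      import: true
    # A fork of a fork: upstream MCUboot -> Zephyr's fork -> ours.
    # Defining the project here takes precedence over the entry
    # imported from Zephyr's manifest, which is how the override works.
    - name: mcuboot
      url: https://github.com/nrfconnect/sdk-mcuboot
      revision: main              # placeholder
      path: bootloader/mcuboot
```

The key design point is the precedence rule: a project defined in the importing manifest wins over the same-named project from an imported manifest, so only the forked modules need to be listed explicitly.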
Obviously, like everything else in this presentation, everything is available in the nRF Connect GitHub organization: you can look at our manifest, you can look at all of our forks; except for the private repositories, everything is out there for you to inspect. Synchronization: from time to time, maybe every couple of months, we have to synchronize. That means we bring the changes not only from Zephyr, but from MCUboot, from Trusted Firmware-M, and from the other open source projects that we use, down to our forks. That is a complicated process, first because, even though we've gone to great lengths to have as few patches as possible in our forks, we still have some. So we still need to deal with those, and sometimes there are conflicts. So we use tooling to help us: mainly a set of Python scripts, which are also open source and out there; you can reuse them for your own projects if you want. And we use a tag system where our patches, the commits that sit on top of the open source projects, have a special tag to help us distinguish them. "fromlist" means this is something that we've posted as a pull request to the upstream project, but it hasn't been merged yet; sometimes you need the change immediately in your downstream and you can't wait until they merge it in Zephyr, so you post it as a pull request and then immediately cherry-pick it into the fork. We have "fromtree" patches: the patch has been merged upstream, but you can't wait until the next synchronization, the next upmerge, so you just cherry-pick that one. And finally, "noup", which means patches that are not applicable upstream, so they will remain forever in our downstream fork. And to avoid evil merges, merges with logical changes in these tagged commits, before the merge we typically revert them, either all of them or the ones causing conflicts.
And finally, once per release, we rebase. These upmerges are done via Git merge operations, but the easiest way for a customer to understand what we're doing is to see Zephyr, and then our changes on top. That's the best way, right? Because it's the simplest for everybody. So that's what we do: every time we release, just before the release, we rebase all of our open source trees so that you get the pure vanilla open source project, with the exact same SHAs that are in the respective upstreams, and then our patches on top. As for what we've done in Zephyr to enable all of this, I wanted to brag a little bit: it's been what, six years now, with so many contributions and so many hours that we've transformed the tooling almost entirely since we started. We did the move from make to CMake, including the Zephyr CMake package. We moved from C-based Kconfig to the Python kconfiglib so that we could enable Windows builds. We completely reworked the devicetree tooling; there was something there, but it was not very good, so we rewrote it from scratch. We reworked the SDK toolchains so that they would work on Windows and macOS; it was a Linux-only affair in the beginning, and we've ensured, even today, that all of the operating systems are supported. We introduced West, in cooperation with Foundries.io at the time. We introduced Zephyr modules, as well as many modules upstream. We made almost everything in Zephyr usable out of tree. We introduced sysbuild. We introduced the logging and shell subsystems and a Bluetooth controller. We overhauled the USB stack. We introduced many parts of the networking subsystem, and we maintain them. We also reworked the documentation system, which we reuse, by the way. And we've made countless other contributions, fixes, and improvements. And we're not done: we want to work on a new board and SoC model.
We want AMP, asymmetric multiprocessing, improvements: you saw that our next chip will have multiple Arm cores and multiple RISC-V cores, so we need more advanced functionality for handling multi-core and multi-image builds. And then we also want to improve the device driver model. Vestavind, just my last slide. Our team is about 12 engineers. We contribute upstream, we maintain subsystems, we do the upmerges, the synchronization. We also maintain the downstream repositories. We do the release management: we rebase, we tag, we do all of that. We're all maintainers upstream, in Bluetooth, networking, build systems, storage, USB, drivers, not only in Zephyr, but also in TF-M and in MCUboot. And we also help out with any Zephyr-related issues or questions coming from customers. And that's about it. Before we switch to questions, I just want to remind everybody that today, at 3:50, there's a maintainers' BoF, Birds of a Feather, at South Hall 3C, and we're really looking forward to having as many people as possible contribute to the discussion of how to get more maintainers into Zephyr and how to scale up the maintainer count. And that's it. Any questions? I don't know how the microphone thing works. Oh, I'll repeat the question. Okay, go ahead, please. [Audience question, partly inaudible.] So, the question is: what happens if we post a pull request upstream, we cherry-pick it into NCS, we ship it, and then upstream does not accept it? So we've shipped something that will need to change. The answer is that we're very cautious, so it doesn't tend to happen often, because for big changes like new APIs we typically wait for the next upmerge; we rarely cherry-pick big new APIs. So for these kinds of changes, where a new API is involved, we wait.
We typically wait until the change has been merged. Also, we try to get involved upstream earlier. Sometimes it has happened, and then we basically have to document it, revert the change, and reapply the modified version from upstream. But it's rare; we don't have that problem very often. So the real answer is that we're just careful. Any other questions? Yes, please? [Audience question.] Yes, correct. That's correct. Sure. So, the question is: back when we had the bare-metal SDK, our APIs were modeled on the hardware, so if the hardware had a feature, we had an API for it. Now, because Zephyr is a sort of common denominator for all hardware vendors, and not all UARTs have the same features, not all SPI buses have the same features, we have to compromise. So how do we deal with that? We deal with it in two different ways. Way number one: some of our drivers in Zephyr have extensions, proprietary extensions, for example the clock control driver. Those are specific to Nordic, but they're still upstream, part of the Zephyr code base. So for some cases, we do that. For all other cases, really, our bare-metal layer, nrfx, is actually part of the SDK, so you can access it directly, because the two cooperate. So if you really, desperately need a feature, and the Zephyr API does not cover it, and it's fundamentally critical to you, you can still, in most cases, use the nrfx layer directly. Not always, because if, for example, there's a sensor on top and you want to reuse the sensor code, then you have to use the Zephyr driver, of course. But in many cases it can be done. And then, internally, our drivers also use the special hardware features, in many cases, in order to make better drivers. So that's how we deal with it. It's not perfect, but it's what we've found works. Any more questions? I think we have, well, okay, maybe time for one last question. Yeah, please? [Audience question.]
Yeah, the problem is not so much... no, sorry. The question is: how much time does it take us to do this synchronization? The answer is that the actual Git operations are quite fast; we have tooling that automatically detects which commits have to be reverted, and so on. The problem is the testing, because every new upmerge sometimes brings API changes and Kconfig changes, and that's really what takes a long time: the regressions introduced by the changes that come from upstream. I think we don't have time for any more questions, right, David? We're out of time. I'm really sorry. But please, I'm on Discord, and also on the mailing list and everywhere, so just ping me there if you have questions about the slides or about anything at all. I'd be happy to share any additional information about our development model. Thank you very much. Thank you.