Hi, everyone. I'm going to talk about Azure Sphere, a new Linux-based OS that Microsoft has been working on for secure IoT devices. But before I dive in, I want to give a brief overview of who I am and why I'm here. I've been with Microsoft for about 10 years, working on security: at the beginning of my career inside the Windows OS, with a lot of authentication work, remote attestation, things like that. Then in 2016, I got the opportunity to start working on a Linux-based operating system in secret inside the company, and I jumped at it. We went public with what we've been building in April, and I'll walk through the problem space, the product, why we chose Linux, and go from there.

So I'm going to talk about what Azure Sphere is, since it's a relatively recent product and most people here probably don't know about it. Then I'll cover some kernel work we've done, some user-mode work, and at the end, some takeaways and future work we hope to do.

I actually think it's very fitting that this talk was scheduled so close to this morning's talk about Zephyr, because we're looking at a lot of similar problem spaces. Nine billion microcontrollers ship a year today, and that number is only increasing. Right now, about 1% of those 9 billion are connected to the internet in some form; that's going to increase tenfold within a few years. When we started looking at this in 2015, we realized that security was basically non-existent in the space. So we wanted to figure out how to improve the security strength of that space and take advantage of the new hardware and power coming to microcontrollers as technology improves. Right now, a microcontroller might have 256K or 512K of RAM and a few-hundred-megahertz ARM CPU. What we're finding is that as the technology gets more and more powerful, you're going to start to see more modern CPUs end up in microcontrollers: things with MMUs, things with integrated 802.11 Wi-Fi or cellular connectivity, and megabytes of RAM. So we started on this effort to figure out how to take advantage of that hardware and bring modern operating system productivity and security to the space.

Azure Sphere is something we've been working on, like I said, since 2015. It's a three-part solution for microcontroller devices. What I'm primarily going to talk about today is the Azure Sphere operating system; this is the Linux Security Summit, so that's most relevant to everyone's interests here. But it also includes hardware technology inside the microcontrollers themselves, and a security service for over-the-air update, for device authentication and attestation to cloud services, and for error-reporting-type capabilities. What I'm really going to focus on today is the operating system.

Before I dig into how we use Linux, let's talk a bit about what an Azure Sphere microcontroller looks like and the kind of compute power we're seeing. It's a multi-part chip. These systems-on-chip are multi-core, complex designs, because the price points are getting to where you can afford to include different layers of technology in the stack. The first part is your real-time processing: the things driving your motors, your sensors.
These are traditional ARM, often Cortex-M, microcontrollers. Then we layer the rest of the technology stack on top of that. We bring in a Cortex-A that has a memory management unit and can run Linux, can run something close to a real operating system. We bring in integrated network connectivity; it'll be 802.11 in the first chip, but cellular is another one that comes up a lot. We have on-chip flash of greater than four megabytes. The first chip that'll hit the market, the MediaTek MT3620, has 16 megabytes of storage and four megabytes of RAM, to give you a sense of scale. And then there's some specialty crypto hardware and security IP that we brought in.

When we started thinking about this, when I first took the job, we were trying to figure out how to make anything modern fit in four megabytes of RAM. At first we thought, okay, can we make Windows fit in four megabytes of RAM? Can we make Linux fit in four megabytes of RAM? Both are quite hard, as it turns out. Both operating systems have run in that kind of envelope at various points in their lifetimes, but it's been a long time since four megabytes of RAM was the standard. I remember upgrading my computer to four megabytes of RAM many, many years ago, but that was a very long time ago, and security today is not the security of the 90s, when that was the standard kind of compute power. So we realized we needed to do some specialty work and some customization to make it fit, but still keep the heart of what makes a modern operating system functional.

So let's dive into the operating system and talk about the architecture. At the base of the operating system, we have specific microcontroller hardware. This includes hardware support for things like secure boot and device attestation, crypto acceleration: things you would expect from a modern SoC and things that are really required to build a nice secure root of trust. Layered on top of that, we have what we call the security monitor. This is a secure enclave that's responsible for the secure boot functionality and certain crypto operations: accessing sensitive keys, that kind of thing. In the first chip, it leverages ARM TrustZone as part of its execution environment and has a dedicated CPU core for the really security-sensitive operations. Next, we have what we call, to the public, the high-level OS kernel. When we're talking to device manufacturers looking at chips of this scale, they're not used to high-level operating systems; they're used to RTOSes, real-time operating systems. So we say: we have a high-level OS kernel, we have the Linux kernel. We're running a customized Linux kernel, and I'll get into some details on what customization we did, but it is very much Linux at heart. We layer some system services on top of that for update, authentication, and connectivity, and we add application containers on top of that, where you run your code.

So there are two programming models in the Azure Sphere world. There's what we call containers for POSIX. These run on the Cortex-A, the application CPU, so this is where you might do your compute-heavy load: an ML model you want to run, wake-word detection, or driving an LCD where you're displaying a UX. And then we have real-time containers for I/O that run on the Cortex-Ms.
And this is where you do your time-sensitive operations where you need real time: your motor control, that kind of thing.

So let's dive into the kernel customizations, because I think that is in a lot of ways the most relevant topic for this audience. We decided very early on that we wanted to bet on the Linux kernel. There were two deciding factors. First, Linux has done some great work towards targeting a very diverse range of hardware. We knew it was going to take some work to make it fit in four megabytes of RAM, but we also knew it wasn't impossibly far from that. Second, and this really resonated when we talked to potential customers, people in the RTOS world, in the microcontroller world, are used to open source. We wanted something open source where we could publish what we're doing, get contributions back, and really be a part of that community. So Linux was the obvious choice in many ways.

The kernel we ship with the OS is based on the upstream kernel.org sources. We're not deriving from a specific distro; we're deriving from mainline. When I started two years ago, we were on 4.1. We've moved to 4.9, and we're going to keep moving with LTS branches as they come out, probably about once a year, though we're still figuring out how to do that in a non-disruptive manner. The goal is not to snap and fork; the goal is to stay up to date over time and keep moving with mainline as servicing branches are declared. We merge in upstream patches about monthly as part of our OS build. I think we're at 4.9.116 at the moment, and we're actually due for another merge this week. We don't want to get too far behind, both to be able to pull in the security fixes and functionality fixes when they come out, and to reduce the cost of the actual merges. We made the mistake early on of waiting six months before merging, and that was an unpleasant week for a lot of us. To give you a sense of scale of how much work we've done: we have 227 commits in our private Git repo on top of the upstream sources. Some of those are very, very tiny and some are quite substantial, but that gives you a sense of how much delta we have on top of the mainline kernel.

Our first and perhaps biggest problem was making it fit. When we first took the kernel, picked a sensible config, and ran the build for ARM, it spat out a kernel that was four megabytes in size. So we'd have enough space to load it and then not actually boot it, which wasn't super helpful. One of the first things we realized very early on is that with four megabytes of RAM, we were going to have to figure out how to avoid putting text into memory at all. A lot of these microcontrollers have an execute-in-place (XIP) feature integrated in their flash controller, where you can take a flash region and map it into the address space as read-only, but in a mode where you can actually execute it as code. That way you don't have to pay the cost of copying it to RAM. It also gives you some nice security properties: most of these implementations are immutable, so writes aren't supported and are basically dropped in that address region.
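To make the execute-in-place idea concrete, here's a minimal user-space sketch. This is not Azure Sphere code, and the file path is hypothetical; the point is that on an XIP-capable file system, a mapping like this can be backed directly by the flash region, so the read-only text costs no RAM.

```c
/* Minimal XIP illustration (hypothetical path, not Azure Sphere code).
 * On an XIP-capable file system the kernel can back this mapping with
 * the flash region itself, so no RAM is spent on the read-only text.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/app/code.bin", O_RDONLY);  /* hypothetical */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Read-only and executable; there is deliberately no PROT_WRITE,
     * matching the immutable flash mapping described above. */
    void *text = mmap(NULL, st.st_size, PROT_READ | PROT_EXEC,
                      MAP_PRIVATE, fd, 0);
    if (text == MAP_FAILED) { perror("mmap"); return 1; }

    printf("text mapped at %p (served from flash if the fs is XIP)\n", text);
    munmap(text, st.st_size);
    close(fd);
    return 0;
}
```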
So we moved to execute in place, which meant we could have a four-megabyte kernel. But we still needed about another four megabytes of RAM to successfully boot, by the time it got through all the slab allocations and kmallocs and everything it takes to bring up a modern kernel. So the first four to six months of this project was just tuning: figuring out what config options we could add to make things more modular. In some cases there were things where we didn't need the functionality, but it was bundled as part of a larger config item, and we wanted to cut that into sub-items so we could turn certain features off. And a lot of it was just tuning cache sizes. The modern Linux kernel is optimized for a million processes in some of its lookup tables: it's got hash tables for PID lookup and translation to task structures that are sized for a million processes. We have, I think, 25. That was a lot of wasted RAM. So we did a lot of small patches tweaking those default sizes, squeezing RAM out of the system.

We turned some things off. For example, sysfs was the one I held out on the longest before fully removing the feature. Turning it off saved almost a megabyte, just in the cost of inodes and dentry structures and those kinds of things. We turned off a lot of the memory-tracking options simply because they took more memory overhead, and things like kallsyms that are very, very helpful but add space to your kernel. As of our public preview build, which will be released in about a month, we're at about 2.4 megabytes of code and data size in the Linux kernel, and about 2,100 kilobytes of RAM usage after init has come up. So we've come a very long way. I have a breakdown here showing the size by top-level folder, and it's not surprising, at least to me, that most of it is in the network stack and in hardware drivers; those are obviously quite sophisticated pieces of code with a lot to them.

So let's talk about the security model. Our first internal preview release was December of 2016, and we started with the state of the art for IoT at the time: we had an SSH server with a fixed root password that you would just connect into, copy your app over, and run as root, and everything was great. Obviously that was not going to cut it long-term. So we took a step back and experimented with a lot of different security models. We started by baking things into the file system: leveraging file system capabilities, leveraging setuid and setgid bits to make predictable environments. We had a build process that stamped these onto all of our binaries. And to make it even easier to reason about, and less attack surface, we even experimented with a kernel patch to force effective UID equals real UID in all cases, just taking transformation of identity off the table. We ran into a problem pretty early on: what happens when two processes want to IPC with each other or share data? We solved that with supplemental groups, also baked into the file system, so these files would say, I'm in these three groups. The end result of this experiment was that we realized it had some nice properties. It was very easy to reason about what was actually going to be allowed; you could write tools that could audit this very quickly. But it put all the burden at build time to make sure everything was set up correctly.
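To give a flavor of what that build-time auditing could look like, here's a small hypothetical sketch, not our actual tooling: walk a candidate root file system and flag anything carrying setuid/setgid bits or a file-capability xattr, so nothing privileged slips through unnoticed.

```c
/* Hypothetical build-time auditor for the baked-into-the-filesystem
 * model: flag every file that carries setuid/setgid bits or file caps.
 */
#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/xattr.h>

static int check(const char *path, const struct stat *sb,
                 int type, struct FTW *ftw)
{
    (void)ftw;
    if (type != FTW_F)
        return 0;

    if (sb->st_mode & (S_ISUID | S_ISGID))
        printf("setuid/setgid: %s\n", path);

    /* A file capability set shows up as a security.capability xattr. */
    if (getxattr(path, "security.capability", NULL, 0) > 0)
        printf("file caps:     %s\n", path);

    return 0;
}

int main(int argc, char **argv)
{
    const char *root = argc > 1 ? argv[1] : "./rootfs"; /* hypothetical */
    return nftw(root, check, 16, FTW_PHYS) == 0 ? 0 : 1;
}
```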
Any mistake in that build-time setup would just propagate through the system and leave you vulnerable. So we backed away from that, but it really informed our thinking on how to get more granular in our permissions and how to build a model where applications access resources with the principle of least privilege, using some of the existing capabilities and existing code in Linux.

For our second attempt, we decided to build a really lightweight LSM. We went down that route to solve a few problems. First, we wanted to reduce attack surface by taking certain features completely off the table. User management isn't really relevant for an IoT device, at least not in the traditional desktop or server sense, and certain features for sophisticated job and process management just aren't relevant either. Second, we wanted to focus on new access control scenarios that more closely model how people author applications for this platform. The LSM is really what I would consider pretty minor. It basically does two things. First, it statically fails a number of calls that we just don't need to support; very simple, just return an error like -EPERM or the equivalent. Second, it adds app identity to every task. We have three fields we put in there: an app identity, which represents the application package; a network identity, which is used for remote authentication, relevant when you want to go talk off-box to a cloud service; and an extended set of new capabilities. To be perfectly honest, I think we will continue to evolve how this is modeled. I don't like the idea of maintaining a separate set of capabilities from the base ones in Linux, but it enabled us to prototype really quickly and add new capabilities to the system, or make things more granular than the current model. Apps and kernel modules can use these fields for access control. The fields are immutable once set and inherited by default; a pretty standard setup there.

The other thing on the kernel side I want to talk about is file systems; we experimented a lot there. One of the things we realized is that most file systems, especially in mainline at this point, are designed for desktop and server scenarios. They're designed for setups where you've got a NAND device that is gigabytes in size. The first chip we're using has 16 megabytes of flash, 512 kilobytes of which are writable. It's extreme in the other direction. So we started pushing that execute-in-place approach everywhere, so we could run applications execute-in-place as well and save some RAM there. We started with some public CramFS XIP patches that had been written many years ago and never made it upstream. We forked that a bit, with some modifications to reduce overhead, and we patched in copy-on-write support for debugging. That's one we didn't see coming ahead of time: someone starts GDB server on the device, they want to do their debugging, they set a breakpoint, and GDB wants to patch out an instruction. In an execute-in-place model, that write just fails; it gets dropped on the floor, and then GDB gets really, really confused. So we did some small tweaks there to be able to temporarily put certain pages in RAM for debugging scenarios and then throw them away later.
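Before moving on: to make the lightweight LSM described above a bit more concrete, here is a rough sketch of its shape. This is illustrative only, not the shipping Azure Sphere code; the interfaces shown are 4.9-era and differ across kernel versions, the per-task field names are invented, and the denied hook is just an example of the "statically fail" idea.

```c
/* Illustrative sketch of a lightweight LSM of the shape described --
 * NOT the shipping Azure Sphere code. 4.9-era interfaces; field names
 * and the chosen hook are invented examples.
 */
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/lsm_hooks.h>

/* The three per-task fields the talk describes, attached to each task. */
struct sphere_task_ctx {
	u32 app_id;    /* identity of the application package */
	u32 net_id;    /* identity used for remote authentication */
	u64 ext_caps;  /* extended, more granular capability bits */
};

/* "Statically fail" a feature the product never needs, e.g. keyrings. */
static int sphere_key_alloc(struct key *key, const struct cred *cred,
			    unsigned long flags)
{
	return -EPERM;
}

static struct security_hook_list sphere_hooks[] = {
	LSM_HOOK_INIT(key_alloc, sphere_key_alloc),
};

static int __init sphere_lsm_init(void)
{
	security_add_hooks(sphere_hooks, ARRAY_SIZE(sphere_hooks));
	return 0;
}
security_initcall(sphere_lsm_init);
```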
We also tried a lot of writable file systems. We tried ext2, we tried JFFS and YAFFS, which are popular in the flash world. They all took hundreds of kilobytes of RAM to initialize, and that's nothing for most computing environments, but when you're talking about one-sixteenth of the total system memory available, you very much feel it. So one of the things we did is port ARM's littlefs, which they built as part of their Mbed effort, to Linux as a VFS module. This is a file system designed for really, really tiny setups, and specifically for flash chips like the NOR flash chips you often see in microcontrollers. So we ported that into Linux. That driver and code will be available shortly, and we've been thinking towards a strategy that would let us upstream it.

The last bucket of kernel customizations we did was really about access control to existing features. The major one that resonates is general-purpose I/O. You've got your Raspberry Pi or whatever your preferred IoT device of today is, and you write a program that opens /dev/gpiochip0; you've got ioctl calls to read and write pins. Very simple. But it treats the entire GPIO infrastructure as one resource for access control: you can open that dev node or you can't. The problem is that in the real world, not everything connected to your chip has the same sensitivity. So for example, on an appliance, I might have one GPIO that toggles an LED saying I'm connected to the network. Not super sensitive if that got compromised. I might have another GPIO that opens the solenoid on my furnace and starts the gas flow. If an attacker got control of that, that's a little bit more worrisome, especially if they also get control of the pilot light. So we wanted to make sure we had granular access control in the OS, to be able to take some of these things that today are considered core hardware features, where only system-level software needs access to them, and figure out how to make the access control more granular. So I can say: app number one can toggle the network status LED, because it's the network manager, but it can't touch the gas control; whereas app number two, which is the core logic and isn't even connected to the network, can be the one that decides when to turn the furnace on and off.

We also added file system quota support for MTD devices; it turns out that wasn't there, or didn't work right. And we did some work to leverage our new capabilities instead of checking for root. I spent a lot of time trying to figure out how not to grant apps CAP_SYS_ADMIN, because that's the bucket full of everything in the kernel. So we took a few things and split them out into new capabilities, to enable scenarios where you could give access to something that is a little bit sensitive, but just that one thing.
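To make that granularity concrete with the GPIO example: the upstream GPIO character-device interface does hint at how per-line access could be brokered, since a privileged process can request a handle for a single line and hand only that file descriptor to an app. A rough sketch, with the chip path and line offset made up:

```c
/* Rough sketch: obtain a handle for ONE GPIO line (say, the network
 * LED) so only that line -- not the whole chip -- can be granted to an
 * app. Chip path and line offset are hypothetical.
 */
#include <fcntl.h>
#include <linux/gpio.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int chip = open("/dev/gpiochip0", O_RDONLY);
    if (chip < 0) { perror("open gpiochip"); return 1; }

    struct gpiohandle_request req;
    memset(&req, 0, sizeof(req));
    req.lineoffsets[0] = 5;                    /* hypothetical LED line */
    req.lines          = 1;
    req.flags          = GPIOHANDLE_REQUEST_OUTPUT;
    strncpy(req.consumer_label, "net-led", sizeof(req.consumer_label) - 1);

    if (ioctl(chip, GPIO_GET_LINEHANDLE_IOCTL, &req) < 0) {
        perror("linehandle");
        return 1;
    }
    close(chip);  /* req.fd now controls just this one line... */

    struct gpiohandle_data data = { .values = { 1 } };  /* LED on */
    if (ioctl(req.fd, GPIOHANDLE_SET_LINE_VALUES_IOCTL, &data) < 0)
        perror("set value");

    /* ...and a broker could pass req.fd to an unprivileged app over a
     * Unix socket with SCM_RIGHTS, granting exactly one line. */
    close(req.fd);
    return 0;
}
```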
In terms of our app model: on the kernel side, we are fundamentally the Linux kernel with modifications on top. In user mode, we looked a lot at what we could start from, and we honestly fumbled a bit trying to find something that actually fit in the space we have. The traditional Linux model of running systemd or a similar init process is very expensive; those just weren't designed for resource-constrained environments. So we decided to build a custom init. We call it the application manager; apparently that's a popular name. It's basically the only traditional process that runs on our system. Everything else, all other processes, are part of an application, including system services. Its sole job is to load applications, configure the security environment those applications should run under, and launch them. Apps are self-describing through manifests. They're independently updateable. They are actually their own isolated file systems, and they run isolated from each other. By default, an app cannot access any resources of another application; they all run as a unique UID, that kind of thing. We have four out-of-box system apps, which I've listed on the right here. They're very classic services: network management, update, command and control for development scenarios (when you're cabled in over USB, for example), and hardware crypto and RNG acceleration. Optionally, you can also get GDB server on there for debugging. And then an OEM brings one or more apps to the table that actually contain their business logic.

One of the major things we wanted to make sure was there from a security perspective, especially in our V1: everything is updateable over the air. Everything is renewable. We know there are going to be zero-days in the version of the Linux kernel we run between now and the end of the lifetime of these devices. We know there are going to be zero-days in the code that I've written. So it's really critical that we have the ability to update over the air, and update quickly. To give an example: about a year ago, when the KRACK Wi-Fi vulnerability started circulating, we pushed those patches out to our test devices in 24 hours and had everything patched up and ready to go, from the public disclosure to the code executing on the device. That's the level of turnaround we're aiming for on critical security vulnerabilities. And from a product perspective, Microsoft manages all the OS updates. So when a customer buys this product and wants to build their device on it, we don't want them to have to reason about what version of the Linux kernel to run, how to get the patches in, or whether to worry about this or that CVE disclosure. Part of the product is that we keep the OS functional. The OEM controls their app update story, so they decide when to push out updates to their code. It's a shared model.

From the app model perspective, we looked at a few options for self-contained apps and containers. We started early on with LXC, trying to get proper containers working, and we spent a couple of months on various prototypes. We just couldn't get it to fit. Containers have some great properties, but they also have some serious overhead when you're talking about megabytes of RAM; the cost of remounting your file systems is really non-trivial on a device of this size. We also started playing around with building something lighter with namespaces, and then realized that a lot of the peripherals people expect in IoT devices don't play right with namespaces at the moment. GPIO is a good example. Some of the other peripherals these apps really need access to, we just kept running into problems where they weren't namespace-aware.
That's something I would like to revisit in the future and see if we can take more advantage of it, but it's just not there today. So we pivoted off of containers and focused on isolating these apps aggressively, making sure our permission model is sane and easy to reason about. An escalation, a buffer overflow in an application, really only gives you what that application could already do. We build each app as its own file system; they're mounted and unmounted as part of install and uninstall, with no copying of files around for installation. This actually made it much easier to reason about how to be space-efficient and do self-contained update. Apps have metadata in their file system that describes, basically: here's how to run the app, and here are the types of hardware I need access to. By default, all you get is compute and RAM. You can run your program, you have access to RAM, and you don't even have access to the network by default. Everything must be declared as part of the manifest, and that helps us reason about the security state and helps us support developers in really doing least privilege for their applications. We then validate that policy and enforce it with a bunch of Linux technologies that I'll go into.

So here's an example manifest. It's very simple: it describes the app and its entry points, what to run, what policy to enforce, and what peripherals you have access to. For example, in this case the application has access to one serial port, one UART called ISU0. It has access to change the Wi-Fi configuration, so it can put the device on and off networks as part of, for example, an out-of-box experience. And it has access to a fixed set of endpoints on the internet. So it says, okay, I need to be able to go to login.microsoftonline.com or graph.microsoft.com, but if someone compromises my app and tries to reach out to their botnet command-and-control endpoint, I don't want them to get there. We aggressively firewall that traffic and use this opt-in model for access.

When this is loaded by the application manager, it parses the manifest, and we use a bunch of different technologies for enforcement. First, we use cgroups quite a bit for resource limiting and quotas. That also has the nice side effect of predictable error states: when you run out of RAM, and on a device of this size your first few attempts at writing code will run out of RAM, there's a predictable failure mode. You don't get the default behavior where the OOM killer runs and, oops, you accidentally killed init, or you accidentally killed another process you needed to be able to continue. The application manager assigns every app a unique user ID and group ID, just to get default isolation. It updates access control on dev entries: when you say I need access to that UART, we're going to go chmod the UART's dev node so that this app's process can use it. And it programs the firewall; we leverage netfilter for firewall management. The other thing I want to call out is that basically every process other than init is an application. We built this for our own system services as well as for customer applications, which had the nice side effect of forcing us to figure out what we really had to build, what access we had to let through, and what features we could defer to the future.
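Pulling those enforcement steps together, here's a stripped-down sketch of what a launcher along these lines might do per app. The UIDs, paths, device node, and cgroup layout are all invented, and the netfilter programming is elided; this is a flavor of the sequence, not the actual application manager.

```c
/* Stripped-down per-app launcher sketch for the model described above.
 * UIDs, paths, and cgroup names are invented; firewall setup elided.
 */
#include <grp.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

static void write_file(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");
	if (f) { fputs(val, f); fclose(f); }
}

int main(void)
{
	uid_t app_uid = 1005;  /* unique per-app UID (hypothetical) */
	gid_t app_gid = 1005;

	/* Manifest granted one UART -> hand over just that device node. */
	if (chown("/dev/ttyISU0", app_uid, app_gid) < 0)  /* hypothetical */
		perror("chown uart");
	if (chmod("/dev/ttyISU0", 0600) < 0)
		perror("chmod uart");

	/* A cgroup memory cap gives the predictable failure mode the talk
	 * describes (cgroup directory assumed to exist already). */
	write_file("/sys/fs/cgroup/memory/app1/memory.limit_in_bytes",
		   "262144\n");

	pid_t pid = fork();
	if (pid == 0) {
		char me[16];
		snprintf(me, sizeof(me), "%d", getpid());
		write_file("/sys/fs/cgroup/memory/app1/tasks", me);

		/* Drop to the app's identity before exec; order matters:
		 * supplementary groups, then gid, then uid. */
		if (setgroups(0, NULL) < 0 || setgid(app_gid) < 0 ||
		    setuid(app_uid) < 0)
			_exit(1);
		execl("/mnt/apps/app1/bin/app", "app", (char *)NULL);
		_exit(1);
	}
	return pid > 0 ? 0 : 1;
}
```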
The last thing I'll say about the app model is that we really focused on reducing attack surface by removing features. There are a lot of things in base Linux that are powerful but not necessary for an IoT device. At first it was about RAM: less usage, I can actually get this thing out the door. Then it turned into, well, that's one less thing I have to reason about from a security perspective, one less thing I have to figure out how to secure. And I find that is often just as valuable as actually reasoning about how to lock something down. So we have no shell and no user account management. Putting a shell on a device is a classic IoT mistake, I think, that a lot of products have made. We have no kernel module support, because we don't need it and it's one less surface to worry about. We have aggressively stripped our system libraries: we're at nine shared objects on the entire file system, which is quite a contrast from most Linux distributions, and many of those SOs are actually symbolic links to another SO, so it's even smaller than it sounds. And we really focus on limiting resource usage. We configure cgroups to make sure that when your app has an infinite malloc loop, it just kills that app. We also use cgroups for certain resource contention, making sure you can't steal 100% of the CPU cycles and block update, things like that. And as I mentioned, you can't access anything you don't opt into.

What I want to talk about now is opportunities for the future. We're coming up to our public preview in the next couple of months. That'll be the first time people can order development kits from our partner Seeed Studio and actually get their hands on this and play with it, and then we're looking towards the first real products after that. We also know there's still a lot of work we can do here, and want to do here. I think our original vision is still many years out, and while we're very proud of the V1 and we know it's going to up-level security in these products, there are a lot of opportunities for future improvement.

The first one I want to call out: now that we're public, we've really been thinking about how to make our changes available and figure out which of them we can get into mainline. There's no reason for us to hold onto things that are applicable to the rest of the community. I talked about the file system; I actually think there's a lot of potential value in that file system being in mainline, because it's very resource-efficient even when you're talking megabytes or gigabytes of storage. Even devices that are a bit more powerful, like a Raspberry Pi or a router, might be able to take advantage of it. We're in the early stages of reasoning through what we upstream. Not everything we do is going to be upstreamed; some of the hardware work, for example, is very specific to the individual SoC, but we hope we can get some things there. We also want to use namespaces more. Some namespaces are easier to fit in this memory model than others: putting things in a PID namespace is generally pretty lightweight, while putting things in a user namespace is very expensive at the moment. There may be an opportunity for us to make some improvements there, to pull some of those back and put them on the table again.
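For a sense of the cheap end of that spectrum, here's a tiny sketch: entering a new PID namespace is roughly one syscall and very little kernel state, which is what makes it attractive at this scale. It requires privilege; a user namespace would avoid that, but that's the expensive one.

```c
/* Tiny illustration of the "cheap" end of namespaces: a new PID
 * namespace costs little memory, unlike a user namespace.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Needs privilege (CAP_SYS_ADMIN); children land in the new ns. */
    if (unshare(CLONE_NEWPID) < 0) { perror("unshare"); return 1; }

    pid_t pid = fork();  /* first child becomes PID 1 inside */
    if (pid == 0) {
        printf("inside new pid namespace, my pid = %d\n", getpid());
        return 0;
    }
    waitpid(pid, NULL, 0);
    return 0;
}
```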
The next one: I really want to figure out how we can get some things out of CAP_SYS_ADMIN and make it a bit more granular. There was a period of about a year where it felt like every other day someone on my team was coming to me saying, I have to give this process CAP_SYS_ADMIN, and I'd say, go away, figure out another way to do it. But it has very much become the dumping ground, and the capability model is awesome until you hit that point where you need it and you're trying to figure out how not to give a process everything.

I talked about GPIO; it, I2C, SPI, and these kinds of low-level peripheral interfaces are all built around the assumption that the only things using them are low-level system processes, and that's just not true in this space. They're part of the scenario: they're what's driving the other microcontrollers, devices, and ASICs that are part of the product. Figuring out how to bring granular access control to those resources would be a major win.

The other thing I'll say is that we built our own LSM out of desperation more than anything else. I really want to revisit something like SELinux or AppArmor and figure out if we can make it work and take advantage of the efforts going on there. We did some early prototypes, but shelved them for the moment to focus on functionality. I'd like the opportunity to go back, revisit some of those decisions, and figure out how to take advantage of all that great work and really bring its power into the platform. And of course, I could fill this slide up 12 times over with crazy ideas I have; after a couple of years working on this, we've got no shortage of things we want to improve.

The last thing I'll highlight is some takeaways. One that speaks to me a lot, after having done this for a couple of years now, is that security and resource usage are at odds in a lot of cases. You design something for strong security, everyone here does it, and you don't necessarily think about space considerations; or if you do, it's a secondary factor. You build your tracking mechanisms for real-time analytics and analysis and security, and you don't think about the fact that you just malloc'd a megabyte worth of tracking structures, which actually prevents certain classes of devices from even using the feature. And security features are often all or nothing: you can use the feature, or you can shut the feature off. There's not always a way to say, okay, I want some benefit out of this. I know I can't be as secure as a server with a gigabyte of RAM, but I don't want to accept zero; I want some value pulled in, to take advantage of some of these features and improve the strength of what's there. Certain security frameworks have done better at this than others, but in general a lot of it is seen as all or nothing, because as a security expert, that's how I like to think about it, right? Why would you ever not want all of the security?

The other thing I'll say is that a lot of security features and other frameworks depend on things like sysfs, and sysfs is the best example off the top of my head of an all-or-nothing feature. You say CONFIG_SYSFS=y, and every module out there starts putting entries in sysfs, when really you just want a subset. There's no good way right now to say, I want this set of features in sysfs but not that one. And that's part of why we had to turn it off.
That being said, I will highlight that a lot of this stuff just worked. From the moment we decided, hey, we should use Linux, to having our first prototype up and booting on a prototype piece of silicon was about six weeks. And that's six weeks for a Microsoft team where most of us hadn't touched Linux in ten years. I think that's really a credit to a lot of the effort going on in the community, especially in the embedded space, and in the kernel in general, showing just how flexible it is. And most of the changes we had to make were pretty small: we added some new config flags, we put some #ifdefs around things, we tweaked some constant hash-table sizes. Those are all very small tweaks. We have a few larger modifications, but they were rare, and I think that's really a credit to the quality of the work going on here.

The last takeaway: there's a lot of improvement going on in the desktop, server, and traditional computing space, especially when it comes to security, and a lot of it can benefit embedded and IoT. The problems here are not unique; they're scaled differently, they're resource-constrained. But the problem of worrying about what happens when someone exploits a buffer overrun, escalates to root, and takes over my device and my network is going to be there on every device out there. And if we're talking about security, that's a lot of surface to try to protect. So we really want to figure out how to take advantage of the innovation going on, pull it into the IoT world, and at times drag these devices kicking and screaming into the modern security world. And with that, I'll open up for questions. I've got a couple here. You're up next.

Yes. Sure, absolutely. There are two problems to unpack here. First of all, when we started out, we were trying to convince the company this was a viable product; there's a little bit of politics there, just to be frank. We're past that now, which is great. We were able to open up and announce this year, I think it was April, at the RSA Conference, and we're continuing that. I will say we are publishing the source of what we've done as part of our public preview, which is currently scheduled for September, so the source will be available. We're really working to become more open as we move forward and think about things like upstreaming. We have been giving that some early thought even internally, within code reviews: there have been a few times where I've had to stop members of my team and say, hey, go look at how this is done in 4.19, and let's just make sure we're not building in the opposite direction. But I do expect there's going to be some pain as we merge back towards the public tree, and that's part of the cost I think we're going to pay over the next few months to get into a good state.

One more question. So if I understood correctly, you're running Linux on the Cortex-A? Correct. What are you running, or can you say what you're running, on the Cortex-Ms? Sure. The core boot processor, one of the Cortex-Ms on this first chip, is the core boot CPU. It's responsible for secure boot. That's a proprietary piece of firmware; it's really lightweight, and its job is to boot, verify signatures, and get out of the way. There are two Cortex-Ms for real-time I/O, for customer code. We don't prescribe what they run there.
So they might take an existing RTOS, for example, compile it, and run it there for their real-time code. I think we'll probably provide a reference lightweight RTOS/library, but we're really looking to let customers self-enable on those cores.

And what's the interface between the Linux instance and those? Is it just shared memory? There is a mailbox-based communication protocol that we document and provide reference libraries for.

So we're out of time. Thank you, Ryan, for this. Thanks.