I'm Stephen Smalley, and James Carter is sitting down here and will be coming up a little bit later to take over part of this talk. We work for the U.S. National Security Agency, and today we'll be talking about security in the Zephyr and Fuchsia operating systems. So James and I work for the Information Assurance Research organization of the NSA. That's an organization specifically focused on research and development in support of the agency's information assurance, or defensive, mission: protecting information and information systems that have been designated as relevant to national security. As an organization we've existed for a very long time; we were originally carved out of what was known as the National Computer Security Center. Within the Information Assurance Research organization we work for a team called Trust Mechanisms, which is focused on research and development in both hardware and software security architectures and mechanisms that enable us to obtain trust in computing platforms. While our organization is much older, I myself have been involved in this body of research and development for the past 25 years, going back to the Distributed Trusted Mach system (DTMach) and a series of successor research systems that we developed jointly with various university and other external research partners. Our team was the first within NSA, and probably the U.S. Intelligence Community, to create and release open source software, in the form of SELinux, or Security-Enhanced Linux, back in 2000. Since that time we've gone on to a long history of open source collaboration and contribution impacting the security of many different systems. So today we're going to be talking about two emerging open source operating systems, both of which were open sourced for the first time in 2016. These two open source operating systems are targeting very different use cases from one another, as we'll discuss.
And they have very different operating system architectures, both from each other and from Linux. Today we'll be looking at the operating system architectures and security mechanisms of these two systems. We'll look at some prior and ongoing work to advance their security, and we'll touch on how they compare with Linux-based systems, although a full review of that would exceed the time for this talk. So Zephyr is an open source project that seeks to provide a cross-architecture, vendor-independent real-time operating system specifically supporting the needs of Internet of Things devices. It's actually sponsored by the Linux Foundation, but it originates from a long history of real-time operating systems that came out of Wind River Systems and was subsequently open sourced. Zephyr specifically targets devices where Linux is not considered viable, either due to resource constraints or due to real-time or other requirements. So in particular, Zephyr is currently supported on 32-bit microcontrollers, ranging from as little as 8 kilobytes of RAM up to a few megabytes. And in that ecosystem, Zephyr is really trying to become the new Linux for little devices: a common platform where both hardware and software vendors can collaborate and have that common ecosystem. From its beginning, Zephyr had security as a stated goal and focus, although as we'll see in a bit, what they meant by that might differ a little from what we normally think of as security. So when it was first released in 2016, Zephyr had a model in which one created a single-executable, single-address-space operating system. The kernel was simply a library linked into the application, and there was only one application at a time. And all of the threads within it would run fully privileged in supervisor mode. There was no form of hardware-based memory protection, and no virtual memory.
And this kind of model is actually quite common in many real-time operating systems targeting this kind of low-end hardware, because they're generally running a single application, which often has full access to the hardware it's driving, and they're very much focused on minimizing their footprint and any associated overheads due to those extremely tight resource constraints. And so while Zephyr had security as a stated goal and focus from the beginning, its notion of security was primarily focused on its development process and code-auditing practices, the use of common static analysis tools, providing some form of secure update, and providing common libraries and infrastructure for cryptography to provide secure communications and the like, not on what we would typically think of in terms of operating system protection mechanisms. However, even in the context of these highly resource-constrained, single-application devices, there are a number of reasons why we might want operating system protection mechanisms. These range from increasing the difficulty of exploiting software flaws in the real-time operating system, to containing the damage from any given flaw, to sandboxing components of the system that may be handling untrusted data, whether that's a network protocol implementation or an interpreter for some sort of higher-level support. We may also wish to protect the integrity of certain portions of what's going on on the device, protecting critical components or ensuring that certain functionality can't be bypassed. And in many cases, in order to bootstrap trust in the device, we need to protect some form of long-term key on the device against leakage.
Even aside from security considerations, we generally benefit from these operating system protection mechanisms simply in improving the robustness of the system, both when it's deployed and in helping application developers catch errors at an early phase of the development cycle. While we have been participating in the Zephyr security work, much of the Zephyr protection work has been done by the core Zephyr developers, particularly from Intel for the x86 architecture, from Linaro for ARM, and from Synopsys for the ARC architecture. Along the course of this presentation, we'll identify some of our own specific contributions. Zephyr is limited by the hardware it's targeting, these microcontrollers: most of them lack a memory management unit, and thus provide no support for virtual memory. Some of them, however, have what is known as a memory protection unit, or MPU, a more constrained component that supports a small number of discretely protected physical memory regions, often as few as eight distinct regions, although within those it is possible to carve out subregions. These MPUs are also very limited in terms of flexibility. The classical ARM MPUs, prior to the latest generation of microcontrollers, required that each region have a power-of-two size and be aligned to the size of the region, greatly limiting the ability to apply them with any granularity. Other MPU implementations, such as NXP's, offer a wider range of flexibility. Zephyr also had a number of constraints imposed on the design of its operating system protection mechanisms. Since Zephyr targets very low-end hardware and microcontroller-based devices, the developers wanted to ensure that as they designed protection mechanisms for the operating system, those mechanisms would be supportable on typical microcontroller boards.
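The power-of-two alignment rule mentioned above can be made concrete with a small self-contained sketch. This is illustrative C, not Zephyr source; the 32-byte minimum region size is the classic ARMv7-M value:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Classic ARMv7-M MPU constraint: each region's size must be a
 * power of two (at least 32 bytes), and the region's base address
 * must be aligned to that size. */
static bool is_power_of_two(uint32_t x)
{
    return x != 0 && (x & (x - 1)) == 0;
}

static bool mpu_region_valid(uint32_t base, uint32_t size)
{
    if (size < 32 || !is_power_of_two(size)) {
        return false;
    }
    return (base & (size - 1)) == 0;   /* base aligned to size */
}
```

This is why, for example, a 6 KB buffer can't be covered exactly by one classic ARM MPU region; it must be rounded up to 8 KB and placed on an 8 KB boundary.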
And so while the Zephyr OS protection mechanisms can make use of an MMU if one is available, they also need to function on MPU-only boards. Zephyr had also already made a number of releases and had been around for a while before this OS protection work began, so there was a key compatibility constraint for the Zephyr developers: to avoid breakage for both application developers and drivers, they sought to minimize changes to the kernel's interfaces, and couldn't do a wholesale rewrite of the kernel APIs to provide the typical encapsulation of kernel objects behind handles or file descriptors that you might find in a system like Linux. Also, due to the focus on real time, there was a key need to minimize and bound both memory and runtime overheads throughout the system. And so a key design philosophy that you'll see in the Zephyr protection mechanisms is that they do as much as possible at build time, and then as much as possible of what remains at boot time, minimizing runtime overheads and ensuring bounded latency. And then lastly, the designers of the Zephyr protection mechanisms needed to ensure that they would have no impact on the ability to continue to support very low-end hardware; thus, these features need to remain fully configurable and impose no overheads when disabled in the configuration. So beginning in the 1.8 release of Zephyr, and officially supported in the 1.9 release, Zephyr began introducing basic hardware-enforced memory protections. These depend on the microcontroller having either a memory protection unit or an MMU; either one will suffice. And they enforce your conventional read-only and no-execute restrictions, protecting the kernel's text and read-only data against tampering and ensuring that data, the stack, and the like aren't executable.
They also provide runtime guarantees to catch stack depth overflows through a conventional guard region mechanism. Most of the work here is done at build and boot time only. So for example, on x86-based architectures, the actual page tables are generated during the build, and at boot time the kernel simply updates the corresponding register state without any kind of dynamic manipulation. And then there's the corresponding runtime support for the actual stack depth overflow guard regions. For this work, our primary contribution, aside from doing some review of the basic memory protections, was to develop and contribute a set of kernel memory protection tests. These were modeled after the LKDTM tests in Linux from the Kernel Self Protection Project, in order to validate that the guarantees being claimed were in fact being provided. These were helpful in the development of the Zephyr MPU drivers, catching some bugs in the original implementations, and they've subsequently also caught some regressions as the system has continued to evolve. And they're now part of the standard regression testing that Zephyr performs on all future changes. After creating this basic framework for hardware-enforced memory protections, Zephyr then moved on to introduce support for user space, since previously everything had been running in supervisor mode. This was introduced for Intel architectures in the 1.10 release, and then ARM and ARC added it in the 1.11 release. And this further builds upon the memory protection support and likewise requires either a hardware memory protection unit on the microcontroller or an MMU. And it provides basic support for user mode threads with isolated memory, but this is not a full process abstraction, as we'll see later.
In the context of this work, again, in addition to providing some basic code review and feedback on the implementation, we developed a set of user space tests that sought to validate that the security properties being claimed for the user mode threads were in fact being enforced. This was initially used to confirm the correctness of the Intel implementation, and then as the ARM and ARC implementations were under development, they used this test suite to validate themselves and to catch various issues as they progressed. And once again, this is now part of the standard regression testing used in Zephyr. So in Zephyr, the operating system continues to have a single executable and a single, still purely physical, address space. There's no virtual memory on these devices; even on systems with MMUs, you have an identity mapping, and so there are no discrete virtual address spaces. And so we support user mode threads, but not full processes. Again, this is conventional in this ecosystem of real-time operating systems on low-end boards. So the application developer can launch specific threads as user mode threads and have them run deprivileged, at which point they can continue to execute from the text section and read from the read-only data sections, but they're constrained in that they only have write access to their own per-thread stack. Then there's a mechanism Zephyr provides called memory domains, an abstraction on top of the MPU or MMU functionality that allows the programmer to define shared memory regions between the user mode threads. And then there's also a convenience feature they created, an application memory feature that would, if enabled in the kernel configuration, allow all of the user mode threads to access all of the globals declared in the application section.
In order to support user space, Zephyr had to introduce a collection of abstractions and mechanisms. So first they needed some way to refer to kernel objects in the user mode threads. As I mentioned earlier, they didn't feel they had the freedom to completely rewrite the kernel APIs, and so they continued to pass kernel addresses as handles in those interfaces, obviously exposing kernel addresses in that model. In order to validate the addresses, at build time they generate a perfect hash over all of the static kernel objects, and thus the kernel can, with fixed latency, validate that an address points to an actual, legitimate kernel object. Those kernel objects live in the kernel's memory, which is protected from direct user mode access by the memory protection mechanisms. And then subsequently they've also introduced support for dynamic kernel objects and a mechanism for efficient validation of those. They also added an object permissions model in which user mode threads must first be granted permission to a given kernel object in order to access it. That initial granting has to occur from a kernel mode thread: a kernel mode thread can grant access to a user mode thread or to itself, and then an inheritance mechanism allows those permissions to be propagated down the chain of user mode threads. This object permissions model in Zephyr is purely all or nothing: you either have the ability to use a given kernel object through the system calls exposed for it, or you don't. There's no notion of per-operation or read/write distinctions currently in Zephyr, and it's not clear that that level of granularity is really necessary at that abstraction layer. Zephyr also introduced a system call mechanism, of course, in order for user mode threads to invoke kernel services.
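As a rough self-contained model of the scheme just described (this is illustrative, not the actual Zephyr implementation: the real kernel uses a build-time generated perfect hash rather than this linear lookup, and the granting API is k_object_access_grant()):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* User threads name kernel objects by address; the kernel validates
 * the address against a table of known objects and then checks an
 * all-or-nothing per-thread permission bit. */
#define MAX_OBJS    4
#define MAX_THREADS 4

struct kobj {
    uintptr_t addr;            /* the "handle" user space passes */
    bool perm[MAX_THREADS];    /* granted to this thread? all or nothing */
};

static struct kobj objs[MAX_OBJS];
static size_t num_objs;

static void kobj_register(uintptr_t addr)
{
    objs[num_objs++].addr = addr;      /* build-time table, in effect */
}

static struct kobj *kobj_find(uintptr_t addr)
{
    for (size_t i = 0; i < num_objs; i++) {
        if (objs[i].addr == addr)
            return &objs[i];
    }
    return NULL;                       /* not a legitimate kernel object */
}

/* Analogous to k_object_access_grant(): performed from kernel mode. */
static void kobj_grant(uintptr_t addr, int thread)
{
    struct kobj *o = kobj_find(addr);
    if (o)
        o->perm[thread] = true;
}

/* Check performed on every system call that references the object. */
static bool kobj_access_ok(uintptr_t addr, int thread)
{
    struct kobj *o = kobj_find(addr);
    return o != NULL && o->perm[thread];
}
```

The key property is that both a bogus address and an ungranted but legitimate object are rejected by the same check.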
And again, focused on their design constraints of minimizing overheads and continuing to support low-end boards, they've introduced machinery that allows these API calls to be transparently redirected to either direct kernel function calls or system calls, depending on whether the caller is in kernel mode or user mode. So you can build an application that's purely kernel mode, and everything continues to be direct function calls, or you can construct an application with some kernel mode threads and some user mode threads, and things will be handled correctly depending on the caller's state. And then what they've been doing is progressively building out the full scope of the kernel system call interface, validating the kernel APIs for their trust assumptions, and then increasing the breadth of what's exposed to user mode. So originally, Zephyr provided only the very coarse-grained application memory feature to let all user mode threads access all application globals, an all-or-nothing model. Without that feature, the only mechanism you had to share memory between two user mode threads was the memory domain mechanism, and that placed a pretty high burden on the application developer, who would have to manually organize the application's global layout to meet the MPU-specific restrictions and then manually set up the memory partitions and domains. So that made it fairly difficult to define any notion of multiple logically isolated applications running on Zephyr.
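The transparent redirection can be sketched with a toy model. The names here (impl_sem_give, syscall_sem_give, sem_give) are hypothetical stand-ins for Zephyr's generated implementation/wrapper pairs, and the boolean flag stands in for the CPU privilege mode:

```c
#include <assert.h>
#include <stdbool.h>

/* The kernel-side implementation of some service. */
static int impl_sem_give(int sem_id)
{
    (void)sem_id;
    return 0;                          /* the real kernel-side work */
}

static bool caller_is_user;            /* stand-in for the CPU mode */
static int trap_count;                 /* how many traps were taken */

/* The trap path: enter the kernel, then validate arguments and
 * object access before doing the work. */
static int syscall_sem_give(int sem_id)
{
    trap_count++;
    return impl_sem_give(sem_id);
}

/* The single public API an application calls: resolves to a direct
 * call from kernel mode (zero overhead) or a trap from user mode. */
static int sem_give(int sem_id)
{
    return caller_is_user ? syscall_sem_give(sem_id)
                          : impl_sem_give(sem_id);
}
```

In real Zephyr this dispatch is done by generated inline wrappers rather than a runtime flag, so the kernel-only configuration pays no cost at all.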
So to help with that problem, we developed a new feature, coming out in the 1.13 release of Zephyr, that supports a slightly more developer-friendly way of grouping the application globals based on how one wishes to grant access to them, and then automatically generates behind the scenes the corresponding layouts and markings and the necessary memory partition definitions and domain structures. And then it provides a set of helpers to ease the application developer's work. It's certainly no panacea for application development; there's still an awful lot of knowledge required. But it is a small step forward toward the goal of being able to support multiple logically isolated applications. So this is a simple example that was actually coded up to demonstrate this new application memory sharing facility. In this example, we have multiple user mode threads forming a very simple pipeline from plaintext to ciphertext. The goal is to ensure that each of these user mode threads has its own private memory, with a shared memory buffer between each adjacent pair in the pipeline used to communicate the data across, providing a conventional assured pipeline that, in an SELinux world, you might construct through type enforcement. So these are some areas of interest to us in terms of future work on Zephyr protection. Currently, as I mentioned, the MPUs are very constrained; many of them only support eight discrete regions. Supporting a layer of MPU virtualization that would allow a larger number of regions to be swapped in and out of the physical MPU on demand would allow for greater flexibility and granularity as we move forward, and some other real-time OSes have actually explored this as well. Currently, as I mentioned, we still have a single-executable, single-address-space OS.
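The sharing pattern in that pipeline example can be modeled as a small access matrix. This is an illustrative sketch of the intended isolation, not the demo's actual code; in real Zephyr each row would be a memory domain built from memory partitions (K_MEM_PARTITION_DEFINE, k_mem_domain_add_partition):

```c
#include <assert.h>
#include <stdbool.h>

/* Three pipeline threads: each has a private partition, plus one
 * shared partition between each adjacent pair. */
enum part { PRIV0, PRIV1, PRIV2, SHARED01, SHARED12, NPART };

static const bool domain[3][NPART] = {
    /* thread 0 (plaintext source) */ { [PRIV0] = true, [SHARED01] = true },
    /* thread 1 (cipher stage)     */ { [PRIV1] = true, [SHARED01] = true,
                                        [SHARED12] = true },
    /* thread 2 (ciphertext sink)  */ { [PRIV2] = true, [SHARED12] = true },
};

static bool can_access(int thread, enum part p)
{
    return domain[thread][p];
}
```

The point of the assured-pipeline arrangement is visible in the matrix: no thread can reach around a stage, and no thread can touch another's private memory.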
And so actually splitting up the text and read-only data sections and supporting multiple instances of them for multiple applications is an area of interest, to more fully support the notion of logically isolated applications. The key area of concern for Zephyr, of course, is kernel software protection. We'd like to see greater incorporation in Zephyr of some of the defensive measures present in KSPP, but these will have to be tailored to Zephyr's particular resource constraints and needs, moving as much of the work as possible to build time and boot time and minimizing runtime overheads. There are a number of emerging features in microcontroller hardware that we believe will enable us to construct more trustworthy architectures on these low-end devices. In particular, the MPU configuration support is becoming much more flexible in the generation that's just coming out now. And ARM has also released TrustZone support for microcontrollers, which specifically tailors their TrustZone capability to microcontroller-based devices; being able to use that to construct a core root of trust in the environment is an area of interest for us. And then lastly, as the work to support multiple logically separated applications continues, ultimately we have an interest in exploring some form of mandatory access control, but one suited to real-time operating systems. This would look very different from something like SELinux; it would be more oriented toward build-time application partitioning and pipelining based on a static configuration for the system that an application developer could create. All right, so this little summary slide is not intended to be comprehensive or complete in any way, but it gives a little snapshot of some of the differences between Zephyr and Linux.
And you have to understand, you would never choose between Zephyr and Linux based on security, right? You're going to choose between Zephyr and Linux based on your target hardware and what your particular goals are. So Zephyr has, over the past year or so, gained support for some of the common-case protections that have been present in Linux for some time. There's still no address space layout randomization support in Zephyr; again, it's just a physical address space, and actually supporting that kind of construct in a real-time OS on these low-end boards is somewhat challenging. It would most likely amount to build-time randomization and maybe a small boot-time relocation. But certain things on these boards, like the MPU itself, have fixed physical addresses used to program them, and it's not clear that one can, in fact, cleanly relocate those. Zephyr also today very much has a model where the kernel code is trusted, so again, we'd like to see greater growth there in mitigations for kernel vulnerabilities. And today it only has a user mode thread model, not a full process model. The Zephyr developers have plans to introduce more of a full process abstraction, but I believe that's only going to be on hardware that actually has MMUs, not on the very low-end boards that only have MPUs. So, typically in the Zephyr world, you're usually dealing with a single application, as opposed to a multi-application or multi-user environment as in Linux. And in Zephyr, the exact security guarantees you get will tend to be very dependent on the particular SoC you're using, the particular kernel configuration, and in large part on the application developer, whereas in Linux you have a number of core OS security features that are common across all of those. So, there are a number of other resources on Zephyr security that you can find.
There's an excellent talk by Andrew Boie of Intel at Embedded Linux Conference earlier this year about the implementation of many of these memory protection schemes, and there's some user mode documentation. And with that, I'll hand off to James Carter to speak about Fuchsia. Thank you for that applause; I haven't even started yet and you're already applauding me. I'm glad that I have this view and not you, because if you had the view that I do, you would get nothing from this talk whatsoever. And if I zone out, just throw something at me; I'm probably watching a sea plane out there, which is far more interesting than what I'm going to talk about, at least to me. All right, so I'm talking about Fuchsia. Fuchsia is a microkernel-based operating system. It's primarily developed by Google, but it's open source, so it's very much like the Android model of development. Fuchsia targets modern 64-bit machines with plenty of memory. It has an object-capability-based security mechanism, and it's very much a work in progress, so everything I say today could change tomorrow. So at the heart of Fuchsia is the Zircon microkernel. This was initially derived from Little Kernel (LK), an embedded kernel/RTOS used in the Android bootloader. Zircon extends LK to make it a microkernel: it adds 64-bit support, a user mode, and things like processes, object capabilities, and IPC. And Zircon is the only part of Fuchsia that actually runs in supervisor mode; it's a microkernel, so drivers, file systems, networking, everything runs in user mode. And so Fuchsia has many different security mechanisms. The primary one is handles, both regular and resource handles; these are the object capabilities of Zircon. It also has job policy and vDSO enforcement, and then, in user space, namespaces and sandboxes. And we're going to talk about each of these. So first, regular handles.
Again, these are object capabilities, and they are the only way that user space can access kernel objects. So Fuchsia is different in that it uses a push model: the client creates the handles and then pushes them to a server, which is very different from most systems. Handles are per-process and unforgeable. They identify both an object and a set of access rights on that object. And with the proper access right, you can do things like duplicate them with equal or lesser rights, pass them across IPC, and obtain handles to child objects, with equal or lesser rights, using object_get_child. So there's a lot that's nice about handles. They separate rights for propagation versus use; this was a problem with the first generation of capability systems. They also separate rights for different operations, which is good, and you have the ability to reduce rights through handle duplication. All of these are good features. We do have some concerns, though. One is object_get_child: if you have a handle to a job, then you can get a handle to anything in that job or its child jobs using object_get_child. This could be a problem. It definitely means that a leak of the root job handle is fatal to the security of the system, and currently anything that has access to /dev/misc/sysinfo can get the root job handle. Again, this is a work-in-progress system, so not everything is as it will be down the road. Also, not much work has been done yet to make everything least privilege; a lot of work needs to be done there. We'd also like to see more assurance of control over handle propagation, and the addition of revocation for handles. And again, because it's a work in progress, not all operations check access rights, and some of the rights are currently unimplemented. So resource handles are a variant of handles for platform resources: things like memory-mapped I/O, I/O ports, and IRQs.
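The equal-or-lesser-rights rule for duplication reduces to a simple bitmask check. This is an illustrative model, not Zircon code; the rights constants here are hypothetical stand-ins for the real ZX_RIGHT_* values used by zx_handle_duplicate:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define RIGHT_DUPLICATE (1u << 0)
#define RIGHT_READ      (1u << 1)
#define RIGHT_WRITE     (1u << 2)

/* Duplication needs the duplicate right on the source handle, and
 * the new handle may never carry a right the source lacks. */
static bool dup_allowed(uint32_t have, uint32_t want)
{
    if (!(have & RIGHT_DUPLICATE))
        return false;                  /* no right to duplicate at all */
    return (want & ~have) == 0;        /* no right amplification */
}
```

This "no amplification" property is what makes handing a reduced-rights duplicate to another process safe.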
So these allow you to specify a resource kind and, optionally, a range for the resource. And the root resource handle allows access to all resources. You can use a resource handle to derive a more restrictive resource handle, for example one with a smaller range. So that's good: it supports fine-grained, hierarchical resource restrictions. Of concern, the root resource check is very coarse right now. A lot of things just check whether you have a handle to the root resource or not, and so the root resource essentially becomes something like CAP_SYS_ADMIN the way things currently are. So a lot of work needs to be done to make things least privilege. Again, if the root resource handle leaks, that can be fatal to the security of the system, and again, if you have access to /dev/misc/sysinfo, you get the root resource handle. And again, we have some concerns about assurance of control over handle propagation, and support for revocation of resource handles would be nice as well. So in Fuchsia, everything is part of a job. Processes don't have child processes; jobs have child jobs, and jobs can contain both jobs and processes. The root job contains all other jobs and processes. A job policy is then applied to all processes within a job: when you create a job, you assign a policy to it, and then you add processes to that job. Policies are inherited from the parent and can only be made more restrictive. And there are policies over things like error handling behavior (what to do on an invalid handle), various types of object creation, and the mapping of write-execute memory. The fine-grained object creation policies are very good; we like those. The support for hierarchical job policies can be good. There are some concerns: the write-execute policy is not implemented yet.
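The inherit-and-restrict behavior of job policy can be sketched as a bitwise intersection. This is an illustrative model, not Zircon's zx_job_set_policy implementation, and the policy bits are hypothetical "allowed action" flags:

```c
#include <assert.h>
#include <stdint.h>

#define ALLOW_NEW_PROCESS (1u << 0)
#define ALLOW_NEW_CHANNEL (1u << 1)
#define ALLOW_WX_MAPPING  (1u << 2)

/* A child job's effective policy is its parent's intersected with
 * what the child requests: anything the parent forbids stays
 * forbidden, so policy can only become more restrictive. */
static uint32_t child_policy(uint32_t parent_allowed, uint32_t requested)
{
    return parent_allowed & requested;
}
```

The monotonic-restriction property is also exactly what makes the write-execute case awkward: an allowance anywhere deep in the tree forces every ancestor to carry it too.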
And even when it is, it's going to cause problems with the strict hierarchical nature, because if a child needs to map something write-execute, all of its ancestors would need to be able to map write-execute as well. So the strict hierarchy is probably not going to work down the road. And again, like everything else, a lot of work needs to be done to make things least privilege. Right now there are only two things that actually have a job policy assigned to them: device drivers specify what to do with an invalid handle, and the fuchsia job is not allowed to create processes by itself. The last mechanism in the microkernel itself for security is the vDSO enforcement. So the goal here is that the vDSO is the only means for invoking system calls. It is fully read-only, and the kernel constrains its mapping: it can only be mapped once per process, that mapping must cover the entire vDSO, and it can't be modified, removed, or overwritten. Additionally, the kernel restricts where a system call can come from: it needs to come from the expected location in the vDSO. And you can also have vDSO variants, which allow you to expose subsets of the system call interface. So again, a lot of good things here: it limits the kernel attack surface, enforces the use of the public ABI, and supports per-process system call restrictions. All of these are very good. In addition, the vDSO code is not trusted by the kernel, so the kernel still fully validates system call arguments. Of concern, though, is the potential for tampering with or bypassing the vDSO. For example, right now process_write_memory allows you to overwrite the vDSO. This is known; again, it's just part of it being a work in progress. And this vDSO enforcement doesn't have as much flexibility as something like seccomp. In user space, there are namespaces and sandboxes. A namespace is just a collection of objects that you can enumerate and access.
It's the composite hierarchy of services, files, and devices. A namespace is per-component, not global, and namespaces are constructed by the environment that instantiates the component. If a component wants to offer a service, it can extend the namespace and offer that service. And then a sandbox is for apps: the app manager creates an app based on a manifest, and there's a sandbox section in the manifest that specifies what the namespace should be for that app. So, having no global namespace is good. The fact that object reachability is determined by the initial namespace is also very good. We would like to see sandboxes applied not just to application packages but to things like system services as well. We would also like to see more granularity for namespaces and sandboxes: currently you give access to, say, a directory, and then the process has access to anything under that directory; you can't specify an individual file only. And additionally, in the sandbox section of the manifest, there aren't many keywords you can use right now for setting up the namespace. Also, there's no independent validation of the sandbox configuration. When the app manager creates the namespace, it doesn't ask whether this app should actually have this namespace; whatever's in the manifest is what it gets as a namespace. There's no actual enforcement of what it should have. And everything right now uses global data and global tmp. This is, again, part of the in-progress nature of the system; the docs mention per-package data and temp, it's just not implemented yet. So let me give you an example of bootstrap and process creation. Userboot creates the device manager and exits; unlike init, it doesn't stick around. The device manager creates the Zircon drivers and services, two separate jobs with multiple processes in them. The device manager also creates the service host.
The device manager creates the fuchsia job and then the app manager process in that job. And the service host then acts as the thing that creates processes in Fuchsia. But the caller supplies all the handles for the processes, so the service host doesn't actually create any handles; you pass all the handles in and it creates the process with those handles. The app manager provides a component creation facility, so it creates handles, but it's not allowed to create the processes. So the way things work: the caller identifies a component, the app manager constructs a namespace based on the sandbox, and then it uses the service host to create the actual Zircon process. All right, let me talk about mandatory access control now. We think that a MAC framework could address some gaps left by Fuchsia's existing mechanisms. It could help control propagation, support revocation, and apply least privilege. It could support finer-grained checks, generalize the job policy, and validate the namespace and sandbox configuration. In addition, it could provide a unified framework for defining, enforcing, and validating security goals for Fuchsia, just like it has for Android. So, our early work; mostly Steve's, even though I've been in the office for 16 years, I'm still the new guy. Our early work was in the context of capability-based microkernel operating systems: DTMach and DTOS for the Mach microkernel, Flask for the Fluke microkernel. In general, we think that capability systems and MAC work really well together, and we've revisited MAC and capabilities repeatedly: SELinux and Unix file descriptors, SEDarwin and Mach ports, Android and Binder. So we have three different options we can take as we go forward. One, we can build the MAC framework entirely in user space with no microkernel support; in this case, it would just be built on top of the existing capability system.
The second option would be mostly in user space with some microkernel support, where we just extend the capability-based system. And the last option is the security policy logic in user space with full microkernel enforcement for its objects; this would be similar to our previous work. An example of our previous work is the Flask security architecture. This was the work done on Fluke, and it's what SELinux is based on; SELinux is an application of the Flask security architecture to the monolithic Linux kernel. In Flask, a user space security server provides labeling and access decisions, and then the microkernel and user space object managers bind labels to their objects and enforce the security server's decisions on their objects. The microkernel provides peer labeling and fine-grained control over capability transfer and use. There are lots of benefits to this approach. It helps with an assurable implementation, provides direct support for labeling and access control, helps mitigate capability leaks in user space, and reduces the assurance burden on user space components; user space object managers don't have to trust each other very much. Also, it gives us a centralized security policy, which is very good for analysis, audit, and management, and supports flexible, fine-grained access control. All right, so currently we're really in the beginning stages. We've been looking at the creation and flow of handles among Fuchsia processes, tracing the reachability of security-critical handles and objects in the system, assessing the effectiveness of the existing mechanisms, and exploring our options. So, to give an example, I've been labeling handles so I can track how things flow through the system. So here's some examples.
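The Flask division of labor just described, a central security server computing decisions and object managers binding labels and enforcing them, can be sketched in a few lines of Python. This is a minimal illustration, not SELinux or Fuchsia code; the labels, permission names, and policy entries below are invented for the example.

```python
# Sketch of the Flask architecture: a security server computes access
# decisions from (subject label, object label, permission); object
# managers bind labels to their objects and enforce those decisions.
# All labels and policy entries here are hypothetical.

POLICY = {
    ("appmgr_t", "svchost_t"): {"connect"},
    ("appmgr_t", "vmo_t"): {"read", "map"},
}

class SecurityServer:
    """Centralized policy logic: the only place decisions are computed."""
    def access_allowed(self, subj_label, obj_label, perm):
        return perm in POLICY.get((subj_label, obj_label), set())

class ObjectManager:
    """Binds labels to its objects and enforces the server's decisions."""
    def __init__(self, security_server):
        self.ss = security_server
        self.labels = {}  # object -> label binding

    def create(self, obj, label):
        self.labels[obj] = label

    def request(self, subj_label, obj, perm):
        # Enforcement point: consult the security server before granting.
        return self.ss.access_allowed(subj_label, self.labels[obj], perm)

ss = SecurityServer()
om = ObjectManager(ss)
om.create("vdso", "vmo_t")
print(om.request("appmgr_t", "vdso", "map"))    # True: policy allows map
print(om.request("appmgr_t", "vdso", "write"))  # False: no write permission
```

The point of the split is that the object manager never interprets policy itself; changing the security goals means changing only the security server's policy, which is what makes centralized analysis and management possible.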
The way I label here: a vertical bar represents an addition, when a handle is added to a process; a vertical bar with an asterisk signals a handle being removed, and that is always the case when it's passed across a channel. What happens is, a handle can never be in two processes at once. So if you're going to pass a handle to another process, it is removed from the one process, it goes into the channel, and then the other process gets it. So you always see a removal and then an addition when you pass a handle across a channel. A vertical bar with a plus sign indicates a duplication of the handle. So in the first case, we have a virtual memory object. In this case, we're tracing out the VDSO, the full VDSO, not a variant, and how it goes from the kernel to a shell. The VDSO is created in the kernel and added to the userboot process, which then passes the handle over a channel to the device manager. The device manager duplicates the handle and the duplicate is added to the device manager. Then that duplicate is passed across a channel to the service host, which duplicates the handle again; that duplicate is added to the service host, which then passes it across a channel to the shell. So we see how it goes from the kernel to the shell. The second example is a resource. The root resource, again, is created in the kernel and added to userboot. Userboot passes the handle to the device manager. The device manager duplicates the handle, the duplicate is added to the device manager, which then passes it across a channel to the device driver, dev host sys. And then for a channel: a channel consists of two objects, it's a pair of objects. So we label the pair with the kernel IDs so that we can match them up later. In this case, we have the kernel IDs 2407 and 2408 representing the channel. And this channel was created in the device manager.
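The move and duplicate semantics these traces rely on, a transfer removes the handle from the sender while a duplicate leaves a copy behind, can be simulated in a short Python sketch. The process names and handle strings are illustrative stand-ins, not the actual Zircon API.

```python
# Sketch of the handle semantics described above: writing a handle into
# a channel removes it from the sender (a handle is never in two
# processes at once), while duplicating it adds a second handle that
# stays behind. Names here are hypothetical, modeled on the VDSO trace.

class Process:
    def __init__(self, name):
        self.name = name
        self.handles = set()

class Channel:
    def __init__(self):
        self.queue = []

def channel_write(sender, channel, handle):
    # Transfer: the handle leaves the sender before it can be received.
    sender.handles.remove(handle)
    channel.queue.append(handle)

def channel_read(receiver, channel):
    receiver.handles.add(channel.queue.pop(0))

def duplicate(process, handle):
    dup = handle + "'"  # stand-in for a duplicated handle
    process.handles.add(dup)
    return dup

devmgr, svchost = Process("devmgr"), Process("svchost")
ch = Channel()
devmgr.handles.add("vdso_vmo")
dup = duplicate(devmgr, "vdso_vmo")  # |+ : duplicate added to devmgr
channel_write(devmgr, ch, dup)       # |* : duplicate removed on send
channel_read(svchost, ch)            # |  : duplicate added to svchost
print("vdso_vmo" in devmgr.handles)  # True: the original stays behind
print(dup in svchost.handles)        # True: the duplicate was delivered
```

This is why every hop in the traces shows a duplicate-then-transfer pattern: without the duplication, the sender would lose its own access to the object when it passed the handle along.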
Then one of the handles was passed over a channel to dev host PCI number three, and the other handle was passed to the service host. So now the service host and dev host PCI number three have a channel between them. All right, like Steve showed for Zephyr, here's a slide showing Fuchsia versus Linux OS security. Again, this is not comprehensive. Fuchsia is a work in progress; not all of these things are complete or as fully functional as in Linux. As an example, for ASLR, all the plumbing is there; it just so happens that, for debugging purposes, everything is loaded at the exact same address every time. So this is just for development. We also haven't really examined the self-protection mechanisms in Fuchsia yet. In general, the big differences are that Fuchsia, being a microkernel, has a small, decomposed trusted computing base versus Linux's large monolithic one, and Fuchsia uses object capabilities where Linux has DAC and MAC. So, Zephyr and Fuchsia are each seeking to advance the state of OS security for their respective domains. A lot of work remains to be done on the security of both of them, and I encourage you all to get involved. So, are there any questions? Which I'll happily direct to Steve, if we have any time left. Any questions? I do not see any. All right, I have a question. Yes. Well, for Fuchsia, again, it's very much a work in progress, so there's not a lot of testing. Right now it's more of a just-trying-to-get-everything-to-work sort of stage. We've tried to evaluate things, and we've noticed some things, but most of the things we've noted are things that are already known; it just reflects the state. Steve would have to answer the question about Zephyr. Zephyr clearly has a broad-based community coming from many different companies right now. Fuchsia seems to be mostly Google right now. There are some external contributors; it is an open source project, but it's more in the Android mold right now.
And also, because it's still very much in active, heavy core development, it's not as broadly based yet as some other systems. Yes. Well, I mean, it does limit how you get to a particular system call. It's not the primary security mechanism, but it does limit the ways you can access the system call, so it does enforce at least the entry point. I would say, more, it would help us later on when we add security; I think it would make it easier for us to layer MAC on top of that and to enforce things with it. Because it's always easier if you know that in order to do this, you must do this first; it's easier to set that up than it is if you can come from lots and lots of different ways of doing the same thing. And maybe Steve has something else to say about that. So first of all, today I wouldn't put any confidence in the mechanism, because we already found one way that we could clobber the VDSO, and not just our own, but every other process's. So this is, again, very much a system in development. It appears that their goal is to make the VDSO an unbypassable gate for entering the kernel, and so they're trying to set up the preconditions to ensure that it's fully read-only in the processes it's mapped into and that it can't be tampered with after that setup. So it's almost like they're trying to use the VDSO as a reduced-functionality seccomp feature, as the gate into the kernel, and then the variants would essentially whitelist which system calls are exposed. But it's a little unclear to us from the outside whether they're thinking of it more as a security feature, or more as just a way to limit the public ABI of their system and not let third parties use random system calls at will.
Because that's also something that's generally of interest to them, right, as system developers. So it's hard to assess right now how much of it is for security and how much of it is to limit how much of an ABI they need to support going forward. Okay, thanks for that and any follow-up questions?