Thank you, James. Yes, as James mentioned, I've been working with the NSA for a little over 26 years now. And in that time, I've been involved in designing and implementing flexible mandatory access control, or MAC, architectures, transferring those architectures to mainstream operating systems, and applying them to solve various problems, in addition to other research and development. So today, I'm going to be providing some of the history and lessons learned from that body of research, as well as looking at some areas of future exploration and some residual challenges. I'll also be providing a little bit of the background and motivation for mandatory access control, including some work that predates my own involvement, which started in 1993. So many of you are probably aware that if you have an Android-based device released any time recently, it's actually using a security framework that was originally developed by NSA, and actually running code, some of which I wrote. And so are many of the Linux systems in use today. What's not as widely known, although it's no secret, is that all iOS devices are also leveraging a security framework that originated out of the same body of research and development, whose development was sponsored by NSA's research organization. And that same framework can also be found in macOS and FreeBSD systems today. So today we'll be talking a bit about how we got to the point where these security frameworks and architectures became so widely deployed. So the organization I work for within the NSA, the Laboratory for Advanced Cybersecurity Research, is tasked with performing research and development in support of NSA's mission to protect what's been deemed national security information and national security information systems. That is, information and information systems critical to the defense of the nation. And that includes both classified data and other matters that are relevant to national security. 
Our organization was originally carved out of what was known as the National Computer Security Center, the NCSC, that originally published the Trusted Computer System Evaluation Criteria, or the Orange Book, back in the 1980s. And so we'll be celebrating our 30th anniversary as a separate organization within NSA next year. Our particular team within this research organization was the first at NSA, and probably within the intelligence community at large, to create and release open source software, in the form of SELinux back in 2000. But even before SELinux, we had been engaged in unclassified research and development with universities and other external research partners for many years. Since the release of SELinux, we've gone on to a long history of open source collaboration and contribution, touching many different systems, some of which we'll discuss today. So originally, the motivation underlying our work was the inadequacy of the protection mechanisms of mainstream operating systems, and in particular, their inability to support higher level security goals due to the weak mechanisms they provided. And of particular note among these protection mechanisms, mainstream operating systems of the day lacked what's known as mandatory access control mechanisms. They only provided what are known as discretionary access control mechanisms, or DAC. And MAC is fundamentally necessary in order to address what's known as the confinement problem, a fundamental problem that was identified by Butler Lampson back in 1973 and subsequently revisited. That is, ensuring that a program cannot leak a user's data in violation of the user's intent. While MAC was originally focused on the data leakage problem, over time, it's been generalized to encompass bounding the damage that can be caused by flawed or malicious applications, to include both confidentiality and integrity concerns. 
MAC is a fundamental mechanism for being able to enforce system-wide security properties across everything running on a system, including not only information flow goals, as originally conceived, but also things such as enforcing code and data separation, or enforcing the desired architecture and interrelationships among software components. A key motivating aspect for MAC is enabling one to gain confidence, or assurance, that a set of security goals is being met by a computing system. And in particular, MAC was early on required for the higher levels of evaluation criteria, although its usefulness spans far beyond just those limited use cases. One of the benefits that MAC brings to a system is simply providing visibility into and control over the complex interactions of modern software systems, which are often implicit and ill understood even by their developers. MAC has been variously defined and implemented over the years, but there are three key properties that we believe are crucial to any MAC system. Not all systems claiming to provide MAC meet all three properties. In fact, many only provide the first. The first property is that a MAC system enforces an administratively-defined security policy and is not subject to manipulation by users or their applications. This is key to ensuring that the security policy is not susceptible to flawed or malicious applications or to careless or malicious users. In some systems, not even the administrator can change the MAC policy. It may be fixed by the system builder, as in Android, for example. The second key property of a MAC system is that it provides control over all of the subjects, objects, and operations of the system. By subject, we mean an active entity, such as a running process. And this is crucial to ensuring that the system provides complete mediation and that the policy cannot be bypassed or violated in any way. 
In our view, even the privileges of the system should be encompassed by the MAC mechanism so that the full protection state of the system is represented in the MAC policy, including even the protection of the privileged subjects, which is often crucial to providing the system with its security guarantees. And the last property is that a MAC system enforces its security decisions based on security labels, or attributes, associated with the subjects and objects involved in the operation. The security labels embody all of the security-relevant properties of the subject or object in question that are important to the policy. And they have to be bound to the subject and object in a trustworthy way. This property is crucial to ensuring that security decisions are based upon accurate and complete information. And the use of these labels enables the policy to be organized into security equivalence classes, which facilitates scalability and analysis that security goals are being met. Historically, traditional MAC implementations were limited to a set of fixed security policy models derived from government rules for handling classified documents. Originally, there was the Bell-LaPadula model, developed in the 1970s. It was a representation of the government's multi-level security model for protecting these classified documents and ensuring that they couldn't be leaked. Subsequently, Biba introduced an integrity model that was the dual of this, to protect against tainting of data from low-integrity sources. Just as these traditional MAC schemes had fixed security policy models, they also had security labels that were specific to these fixed models and directly encoded certain aspects of the policy. Traditional MAC implementations were historically limited to a separate set of trusted operating systems. 
There were a small number of high-assurance operating systems with very limited functionality and application support, and then there were a variety of trusted variants of the mainstream UNIX products that incorporated mandatory access control and other trusted operating system features. These, too, tended to lag behind their mainstream cousins in terms of updates, features, and support. Traditional MAC suffered from a number of gaps that limited both its flexibility and its security. First, in any real-world system, there was a need to violate these fixed security policy models for specific processing. This then required so-called trusted subjects that could violate the security model. Further, these MAC systems lacked any means of effectively confining these trusted subjects to only the minimum access required for their legitimate function, and of protecting them from influence by untrusted subjects in the system. This created a situation much like the superuser problem in UNIX, enabling the system to be violated through any flaw in any of these trusted subjects. This binary notion of trust was ill-suited to constructing secure systems. Further, since traditional MAC had a fixed security policy model focused originally on the multi-level security goals, it couldn't express many real-world security goals of interest to the commercial sector. Lastly, because these traditional MAC schemes were focused on information flow, both for confidentiality and integrity purposes, they generally ignored the particular program or code that was being executed. They were only really interested in ensuring that a process couldn't leak data in violation of a security goal, or that it couldn't be tainted by lower integrity data, irrespective of what code was being executed. While this was sufficient for the information flow goals that were originally conceived, it was inadequate for many integrity or least-privilege goals of interest. 
Type enforcement was a MAC model that was originally introduced to address the failings of the Biba integrity model and other hierarchical integrity schemes. In particular, type enforcement was designed to support enforcing desired software architectural goals through the use of assured pipelines. In an assured pipeline, type enforcement is employed to ensure that one or more security-relevant subsystems, such as an encryption transform or a data sanitization transform, is unbypassable and tamper-proof and is bound to specific code approved for that particular function. As a model, type enforcement supports more than just integrity. It's generalizable to enforce many other kinds of security goals as well. Type enforcement was first implemented in a system named the Logical Coprocessing Kernel, or LOCK system. And in LOCK, type enforcement was originally employed to decompose the kernel into a set of lower-privileged kernel extensions that ran alongside the core reference monitor, each limited only to its particular function and purpose, as well as to layer the trusted computing base, with each layer constrained only to what was needed for its particular function. Originally, the LOCK design called for a hardware-based design with a security coprocessor and a tagged memory model, something which seems to be coming back into vogue, but it eventually transitioned to a software-only implementation in order to accommodate commodity hardware at the time. Type enforcement has a number of key properties that enable it to overcome the limitations of traditional MAC. First, whereas traditional MAC ignored the particular program being executed, type enforcement explicitly takes it into consideration. This allows binding trust and specific permissions to the program being executed, enabling the trust model to be tailored to the particular trustworthiness and function of the program in question. 
This necessitated distinguishing execute from read access, enabling it to represent goals such as code and data separation. Second, whereas traditional MAC had fixed labeling schemes that directly encoded aspects of the policy, type enforcement cleanly decoupled the labeling from the policy. The labels simply became tags, and the policy was separately defined through a sparse access matrix. This enabled type enforcement to represent many other security goals besides the limited models of Biba and Bell-LaPadula. Whereas traditional MAC was primarily focused on information flow control, and thus sought to map every conceivable operation of a system to read and/or write flows, type enforcement provides policy-driven granularity, in which distinct permissions are defined for each operation and the policy author is able to group them as desired to meet his or her particular goals, or evolve them over time as understanding of the security implications of a given operation changes. Lastly, whereas traditional MAC required the notion of trusted subjects that could violate the security model and became potential attack vectors into the system, under type enforcement, there are no all-powerful trusted subjects. Nothing is exempted from the security policy, and even the notions of trust are defined within the context of the policy at very fine granularity. While type enforcement provided a number of key advantages over traditional MAC, we recognized that even type enforcement would not suffice for all needs or for all time. We also recognized that flexibility would be key to commercial adoption of mandatory access control in mainstream systems. Consequently, we worked with our research partners to create a generalization of type enforcement and to create a flexible MAC architecture in which type enforcement could merely be one of many different security models supported behind a general security interface. 
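To make the idea of labels as opaque tags plus a separately defined sparse access matrix concrete, here is a toy sketch in Python. It is purely illustrative (the type names and rules are hypothetical, not taken from LOCK or any real policy), modeling an assured pipeline in which only a sanitizer may read raw input and only an exporter may release sanitized output:

```python
# Toy model of type enforcement: labels are opaque type tags, and the
# policy is a separate, default-deny sparse access matrix of
# (source type, target type, object class, permission) entries.
# Hypothetical assured pipeline: importer -> sanitizer -> exporter.
ALLOW = {
    ("importer_t", "raw_data_t", "file", "write"),
    ("sanitizer_t", "raw_data_t", "file", "read"),
    ("sanitizer_t", "clean_data_t", "file", "write"),
    ("exporter_t", "clean_data_t", "file", "read"),
}

def allowed(source_type, target_type, obj_class, perm):
    """Default-deny lookup in the sparse access matrix."""
    return (source_type, target_type, obj_class, perm) in ALLOW

# The exporter cannot bypass the sanitizer by reading raw data directly:
print(allowed("exporter_t", "raw_data_t", "file", "read"))   # False
print(allowed("sanitizer_t", "raw_data_t", "file", "read"))  # True
```

Note that nothing about the tags themselves encodes the policy; changing the ALLOW set expresses an entirely different security goal over the same labels, which is the decoupling the talk describes.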
This architecture grew out of earlier work, both in LOCK and in another system known as Trusted Mach. The flexible MAC architecture was prototyped in Carnegie Mellon's Mach 3.0 kernel through the Distributed Trusted Mach project, or DTMach, and its successor, the Distributed Trusted Operating System project, or DTOS. These two projects were both part of a broader NSA research program known as Synergy, which was exploring how to develop a distributed microkernel-based operating system security architecture, looking not only at flexible MAC, but also at flexible support for other kinds of security needs, including audit, cryptography, and authentication. The use of this architecture was motivated both by recognition that microkernel-based architectures offered significant potential security and assurance benefits for operating systems, and by interest in Mach among industry players at the time. Mach at the time had what's known as an object capability model, but Mach's capability model had a number of deficiencies that made it fundamentally unsuited to enforcing security policies. First, Mach provided no means of distinguishing the ability to use a capability from the ability to transfer that capability further, providing no means of gating the propagation of access rights in the system, which is key to being able to support security policies. Secondly, a capability in Mach gave the ability to invoke any service the object offered, providing no means of supporting least-privilege goals in the system. And lastly, Mach's capability model offered no means of revocation, thereby limiting its ability to support dynamic security policies and changes in attributes. The DTMach and DTOS projects addressed these limitations of Mach's model, introducing MAC policy controls over the propagation and use of capabilities. The controls over use also incorporated fine-grained control over service invocation, allowing least-privilege policies to be enforced. 
A number of technical reports and papers were published about the DTMach and DTOS work, and those are publicly available. The flexible MAC architecture that came out of the DTMach, DTOS, and later Flask work encapsulated the security policy logic within a separate subsystem known as the security server. In the microkernel-based architectures, the security server was a user-space task, given the microkernel-based architecture and the precedent of the user-space pager model of Mach. This also fit with our goals for policy flexibility. The interface to the security server was designed to support many different security policy models by taking a wide range of known security policy models and then generalizing those to create a generic state machine model of any kind of security policy goal. This interface could thus support many different policies, both existing known security models as well as potential future ones. The architecture also defined the requirements on what were known as the object managers of the system. That is, the components of the system that implemented the objects and operations, including both the kernel and user-space components that provided file systems, network protocol stacks, and the like. Along with these architectural requirements, a caching component was defined to address the concerns about needing to call the security server frequently. This access vector cache provided efficient caching of security decisions, and a protocol was defined between the cache and the security server to support dynamic policies and evolving requirements. The flexible MAC architecture embodied in DTOS and its successors provided a number of key benefits. First, the architecture lent itself to an assurable implementation. The direct representation of security labeling information and access controls in the kernel state facilitated analysis and validation that security goals are being met, enabling one to gain assurance. 
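The split between the security server and the access vector cache can be sketched in a few lines of Python. This is a hypothetical toy, not the DTOS or SELinux code; the class and label names are invented for illustration. The object manager consults only the cache, and the cache calls the security server only on a miss:

```python
# Toy sketch of the Flask-style split: a security server encapsulates
# the policy logic behind a generic interface, and an access vector
# cache (AVC) in each object manager avoids repeated policy queries.

class SecurityServer:
    """Holds the policy logic; object managers never see the rules."""
    def __init__(self, allow_rules):
        self.allow = set(allow_rules)

    def compute_av(self, ssid, tsid, tclass):
        # Return the full access vector (set of granted permissions)
        # for this (subject label, object label, class) triple.
        return {p for (s, t, c, p) in self.allow
                if (s, t, c) == (ssid, tsid, tclass)}

class AVC:
    """Caches whole access vectors keyed by (ssid, tsid, tclass)."""
    def __init__(self, server):
        self.server = server
        self.cache = {}
        self.misses = 0

    def has_perm(self, ssid, tsid, tclass, perm):
        key = (ssid, tsid, tclass)
        if key not in self.cache:
            self.misses += 1
            self.cache[key] = self.server.compute_av(*key)
        return perm in self.cache[key]

server = SecurityServer({("editor_t", "doc_t", "file", "read"),
                         ("editor_t", "doc_t", "file", "write")})
avc = AVC(server)
avc.has_perm("editor_t", "doc_t", "file", "read")
avc.has_perm("editor_t", "doc_t", "file", "write")  # served from cache
print(avc.misses)  # → 1
```

Because the cache stores the whole access vector per label pair and class, subsequent checks for different permissions on the same pair also avoid a round trip to the server, which is what makes the label-based equivalence classes pay off for performance.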
Capability leaks by any flawed or malicious user-space component could be mitigated by the kernel checks, providing defense in depth. This then allowed the assurance burden on user-space components to be scaled and bounded to only what was needed for their particular functionality and security requirements. The trusted computing base could be decomposed and layered, with the user-space components only being trusted for their requisite functions and bounded in scope. In addition to providing a more assurable implementation, the architecture provided for centralized security policy, thereby facilitating analysis of the policy for its particular goals, auditing that certain security goals are being met, and management of the system as a whole. Lastly, the architecture's support for security policies offered the greatest flexibility of the many different alternative designs that were considered and captured in the technical reports and papers that were published. The DTOS prototype was distributed to a number of universities and other research partners. It was used for further research into security policies, secure windowing systems, secure databases, and other matters. However, with the winding down of the Mach project by Carnegie Mellon, DARPA invited us to partner with the University of Utah's Flux project, which had taken over maintenance of Mach and was beginning to explore a new microkernel-based system known as Fluke. In the context of Fluke, the primary innovation that was introduced was to further explore the limits on support for dynamic security policies, beyond even what had been supported in DTOS. That enhanced architecture became known as Flask, and its prototype in Fluke was also known as Flask. Also during this time, we noted increased activity in industry and in academia attempting to provide security simply at the application or middleware layer, giving up largely on the protection mechanisms of mainstream operating systems. 
As a consequence, we issued a paper calling for increased exploration and renewed work in operating system security, noting the inability to provide adequate security purely at the application layer. This paper called not only for flexible MAC, but for other key features and goals at the operating system level, and brought significantly wider awareness of the need for flexible MAC and operating system security beyond the academic community. While DTOS provided limited support for dynamic security policies by providing an interface between the access vector cache and the security server and a protocol between them for negotiating changes, it did not fully address the revocation problem, particularly revocation of permissions that had migrated beyond the cache into the state of the object manager, for example, in the context of memory protections or in established IPC connections. Further, it did not address the problem of in-progress operations that had already passed the permission check but not yet completed the operation in question. The Flask architecture defined the requirements on the object managers and a protocol for ensuring effective atomicity of policy changes, so that one could ensure that a given policy change had completed and been fully reflected in the state of the object managers before other controlled operations would be allowed. The Fluke microkernel interface particularly facilitated the support for revocation as a result of its support for checkpointing and transparent process migration: it provided support for full thread state exportability, and it cleanly divided all kernel operations into cleanly restartable atomic stages. This enabled us to fully implement this revocation support in the microkernel, although that full-scale revocation support could not be fully extended to all of the user-space object managers in the Flask prototype. 
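The effective-atomicity requirement described above can be sketched as a toy protocol in Python. This is a deliberately simplified, hypothetical model (the names and state are invented; the real Flask protocol also handles in-progress operations and sequence numbering): a policy change is considered complete only after every object manager has both flushed its cached decisions and revoked permissions that migrated into its own state, such as memory mappings:

```python
# Hypothetical sketch of Flask-style revocation: the policy change
# completes only once every object manager acknowledges that both its
# cached decisions and its migrated state (e.g. established mappings
# or IPC connections) no longer grant the revoked access.

class ObjectManager:
    def __init__(self, name):
        self.name = name
        # Cached access decisions and access that has migrated into
        # manager state (e.g. a page mapped writable).
        self.cached = {("app_t", "secret_t", "file", "read")}
        self.migrated = {("app_t", "secret_t", "memory", "read")}

    def revoke(self, ssid, tsid):
        # Drop both cached decisions and migrated state for the pair,
        # then acknowledge completion back to the security server.
        self.cached = {e for e in self.cached if e[:2] != (ssid, tsid)}
        self.migrated = {e for e in self.migrated if e[:2] != (ssid, tsid)}
        return True

def change_policy(object_managers, ssid, tsid):
    """Block until every object manager acknowledges the revocation."""
    acks = [om.revoke(ssid, tsid) for om in object_managers]
    return all(acks)

oms = [ObjectManager("vm"), ObjectManager("ipc")]
done = change_policy(oms, "app_t", "secret_t")
print(done, all(not om.migrated for om in oms))  # → True True
```

The key design point is the acknowledgment: without it, a revoked permission could silently live on inside an object manager's state long after the policy nominally changed.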
As the University of Utah completed their research goals for the Flux project and the Fluke microkernel, they wound down their efforts in that space, leaving us again looking for a platform to carry forward our own research. During the DTOS and Flask work, we had approached multiple operating system vendors and engaged with them to seek to gain adoption of flexible mandatory access controls into their respective operating systems. Unfortunately, we could not convince them of the viability and value of flexible mandatory access control when we could only demonstrate it in the context of these research systems. In order to provide us with a means of demonstrating that viability and value, we looked to open source operating systems as an opportunity to demonstrate the architecture in a real system that could be leveraged. And at the time, the National Security Council had actually called on the government to begin looking at greater leveraging of these open source systems. Our goals in doing this were to not only show viability and value, but also to provide an open source reference implementation from which any operating system developer could learn, and to provide ourselves with a long-term research platform for further research and development that could outlast any given academic research project. At the time, Linux's large developer community, growing adoption, and open development model made it an attractive foundation for our work, even though we ourselves had greater familiarity with BSD UNIX and its derivatives at the time. We created the reference implementation of the Flask architecture in Linux, which became known as Security-Enhanced Linux, or SELinux, first released to the public in 2000, and accompanied by detailed technical reports describing its design, implementation, and motivation, which were followed by a number of published papers seeking to gain mind share and insight into those ideas. 
In bringing the Flask architecture to Linux, we had to first adapt it from a microkernel-based architecture to a monolithic kernel architecture. The security server was moved from a user-space task into the kernel itself as a kernel subsystem. This was viewed as a necessary accommodation to performance and complexity, and to fit better with Linux's natural architecture. However, in doing so, we preserved the flexibility of the architecture. The security policy logic remained encapsulated within the security server behind its generic security interface, and the support for the caching infrastructure and the other elements remained intact. SELinux was presented to the Linux kernel developers at the March 2001 Linux Kernel Summit. At the time, there were a number of extended access control implementations, all seeking to gain support or adoption into Linux. Our goal was slightly different. For us, SELinux was simply a reference implementation in order to convince the community of the importance of flexible mandatory access control and demonstrate its viability and value. We were less concerned with the direct adoption of SELinux itself, and more concerned with showing that flexible MAC was something that was necessary and valuable, and that the Flask architecture provided a sound way of doing so. Unfortunately, Linus interpreted our presentation in much the same way as the other extended access control implementations, as simply seeking direct adoption, and instead called for the creation of a general security framework for the kernel that would support any of the different security projects as a loadable kernel module. This led to the creation of the Linux Security Modules (LSM) project by Crispin Cowan. At the time, we had a number of concerns with LSM as an approach to operating system security. 
Nonetheless, we chose to work with the community to develop LSM as a framework, and then to adapt SELinux to the LSM framework, moving SELinux behind the module interface, including the Flask architecture components. SELinux helped to drive many of the requirements for LSM due to the comprehensiveness of its controls and its support for least privilege. None of the other extended access control implementations provided the same range or granularity of control, something that remains true to this day. We then worked to get the LSM hooks incorporated into the mainline kernel, and ultimately SELinux itself, which was merged in 2003. One of our key observations from the earliest of our work was the importance of application security. We knew that operating system security alone would be inadequate to meet the higher level security goals of users, and so from the beginning, the Flask architecture supported extensibility beyond the operating system to support application-layer access controls over application abstractions and operations, so that the same guarantees could be provided at that layer. In Linux, we preserved this architecture by exporting the security server APIs to user space and by providing similar components in the user-space libraries for user-space object managers. We also explored user-space security servers to support alternative access control models not suitable to the kernel in a number of research programs. The Flask architectural support for policy enforcement was implemented in a number of different user-space components in Linux. D-Bus was an early adopter of this support, in order to support mandatory controls over inter-process communication through D-Bus. For Xorg, we developed a general access control framework similar to LSM, and then an SELinux module to support labeling controls over the windowing system abstractions and operations, which has been leveraged in various multi-level desktop solutions. 
Linux community members developed MAC support for PostgreSQL, applying these same kinds of guarantees to database records and operations, and that support has subsequently been enhanced and productized. These extensions in user space provide a means of enforcing a uniform access control policy over the entire system as data is processed at varying layers of abstraction, in order to enable end-to-end security goals to be met. While we had some success in gaining adoption for this user-space access control, the lack of a coherent security story for Linux, and particularly for the Linux desktop, limited the adoption of the user-space controls in third-party applications. While SELinux provided a successful reference implementation of the Flask architecture in a real operating system, showing its viability and value, we wanted to demonstrate the applicability of the architecture to other operating systems as well, and to foster wider adoption of flexible MAC beyond Linux. Toward that end, we partnered with DARPA to sponsor mandatory access control frameworks, first for FreeBSD, and then for Darwin. The MAC framework for FreeBSD provided more extensive semantics specific to MAC than LSM had provided for Linux, and a more consistent set of APIs and infrastructure specific to mandatory access control. In the case of Darwin, its hybrid BSD/Mach kernel provided an opportunity to revisit our work from DTOS. We provided the developers of the Darwin work with our DTOS reference implementation so that they could apply similar controls over the Mach abstractions and operations. The FreeBSD and Darwin work was led by Robert Watson, who had earlier started the TrustedBSD project, which had explored trusted OS features for FreeBSD, initially traditional MAC in that context, but Robert also had a similar interest in flexible access control that aligned with ours. 
The MAC framework was adopted into FreeBSD, first as an experimental feature in FreeBSD 5.0, and later as a default-enabled feature in FreeBSD 8.0. In FreeBSD, it's been leveraged by various FreeBSD derivatives to enforce custom security modules, including a modern re-implementation of the first type enforcement firewall product, the Sidewinder firewall, which predated SELinux and was later re-implemented in the context of FreeBSD. In macOS and iOS, the Darwin work was adopted and incorporated, where it's being leveraged for a number of different security modules that form the application sandboxing model in those operating systems. The DTOS controls over Mach are key to providing complete and coherent controls in a hybrid Mach/BSD kernel of this sort. The absence of such Mach controls in earlier versions of some of these systems has proven to be a source of subtle vulnerabilities, and the ability to provide consistent controls between the MAC framework on the BSD side and the corresponding controls on the Mach side is key to being able to provide that consistency. Growing demand for the use of mobile devices in secured government environments spurred the NSA to begin exploring architectures and mechanisms to enable the use of such devices in government spaces. This coincided with growing interest by the government in Android as an open platform that enabled significant functional and security customization to be performed, beyond that of other mobile platforms. Recognizing the need for improved security in mobile operating systems, we initiated the Security Enhanced Android project, later re-branded Security Enhancements for Android to comply with the Android brand guidelines. In the context of this project, we explored mandatory access controls at two different layers. 
First, we explored how to adapt SELinux to the unique needs and usage models of mobile operating systems, seeking specifically to address some of the limitations we had encountered with SELinux in Linux distributions. We also performed research into new forms of mandatory access controls suitable for Android's middleware model and its higher level security semantics. We created a reference implementation that was initially released in 2012 demonstrating the use of SELinux, and then presented a series of case studies showing how this would have blocked exploitation of many public Android vulnerabilities. We then submitted this through the Android Open Source Project, where it was incorporated into mainline Android and became part of that system. Today, Android has become an exemplar of how to apply SELinux, and MAC in general. In contrast to Linux distributions, SELinux in the context of Android confines every process, from the init process to third-party applications, providing both protection of privileged processes, commonly viewed as trusted by many systems, against untrustworthy inputs, as well as confinement of completely unprivileged code. A feature of SELinux known as neverallow rules, or policy assertions, has been extensively employed in Android to define and enforce security goals for the Android platform by the Android security team. And these goals are checked both at platform build time and through the automated testing of devices required for Android compatibility branding. This allows the Android security team to enforce certain security semantics even over the OEMs. 
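For readers unfamiliar with neverallow rules, here is a short illustration in SELinux policy syntax. These examples are hypothetical, written for this transcript rather than quoted from the actual AOSP policy, but the mechanism is as described: a neverallow is an assertion checked at policy build time (and by compatibility testing), and if any allow rule in the assembled policy would match it, the build fails:

```
# Illustrative neverallow assertions (hypothetical examples, not
# quoted from AOSP policy).

# No domain other than init may load a new SELinux policy:
neverallow { domain -init } kernel:security load_policy;

# Untrusted apps may never execute code from their own writable
# data files:
neverallow untrusted_app app_data_file:file execute_no_trans;
```

Because OEM policy is compiled together with the platform policy and run through the same assertions, an OEM cannot ship a device that quietly grants access the platform has ruled out, which is the enforcement-over-OEMs point made above.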
In the context of Android, SELinux has been applied to a range of different security goals: from protecting the trusted computing base, including both general protection of the various services against various forms of attack as well as more specific access controls extended even up into user space for things such as the keystore, to significantly cutting the attack surface exposed to untrusted components, ranging from limiting access to various network protocol families, to cutting off access to various kernel pseudo-file interfaces, to even limiting access to specific device driver ioctl commands. The support for ioctl whitelisting was actually developed by the Android security team and contributed back to SELinux as a general feature, going beyond the existing granularity of control. SELinux has also been employed in Android to support decomposition goals, such as the decomposition of the media server that was spurred by the Stagefright vulnerabilities, which was initiated in Android 7 and has been carried forward further in subsequent releases. The Android security team has also found MAC very useful in enforcing their separation goals between the core Android platform and the platform- or OEM-specific components of the system, enabling them to rigorously force the OEM components to interact through well-defined interfaces. In the Android 9 release, mainline Android finally incorporated the last residual feature from our original 2012 SE Android reference implementation, that is, per-app security contexts for application sandboxing. In addition to being an exemplar of how to apply mandatory access controls, Android is also, to our knowledge, the largest install base of any MAC system ever. It's currently fully enforcing on around 90% of the over 2 billion active Android devices, and it will eventually reach all Android devices once all devices are running Android 5.0 or later, the point at which SELinux became fully enforcing and mandatory for Android.
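The ioctl whitelisting mentioned above is expressed in policy through extended-permission rules. A minimal, hypothetical sketch follows; the domain name, device type, and command numbers are illustrative only:

```
# First grant the base ioctl permission on the device node...
allow untrusted_app example_device:chr_file ioctl;
# ...then restrict it to an explicit whitelist of ioctl command
# numbers; any other command issued against this device is denied.
allowxperm untrusted_app example_device:chr_file ioctl { 0x5401 0x5413 };
```

This lets policy authors cut attack surface at the granularity of individual driver commands rather than all-or-nothing ioctl access.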
It's also through this effort that SELinux MAC has been brought to Chrome OS. Originally, this was in the context of supporting the Android container on Chrome OS: since SELinux was a mandatory part of Android, it was necessary to support it in the context of that container. But there's work in progress, visible in the open source Chromium OS project, that's extending the same SELinux guarantees to all the Chrome OS processes, providing similar protection guarantees throughout that platform. In parallel with our work on SELinux, another research project at the NSA began exploring emerging support for virtualization on commodity computers in combination with MAC to support new functional and security needs. This project created an architecture and a prototype known as NetTop. The NetTop architecture combined VMware and SELinux in order to support connections to networks at differing security levels from a single unified desktop environment. NetTop was an early, pre-release user of SELinux. The same approach was later commercially realized in Red Hat's Secure Virtualization, or sVirt, model and in its approach to using SELinux to enforce container separation and controls over the interactions between containers and the host OS. As NetTop transitioned to a product, a research group initiated a follow-on program known as the Secure Virtual Platform. The Secure Virtual Platform, or SVP, sought to go beyond NetTop's more limited usage of virtualization, as well as beginning to leverage trusted computing technologies, such as the Trusted Platform Module and later Intel's Trusted Execution Technology, in order to construct a secure system architecture in which we could obtain a higher assurance system even with lower assurance components. This research program fed into a number of other efforts, influencing a later NSA program known as the High Assurance Platform, or HAP, as well as an Air Force Research Laboratory program known as SecureView.
It also influenced Citrix's XenClient XT commercial product, which was later open sourced as OpenXT, and these same ideas were also fed into what became the first secure wireless laptop approved for use in sensitive government environments, as well as forming the basis for a variety of research programs that we engaged in with external partners to explore similar ideas in the context of smartphones and tablets. The transition to hypervisors was recognized as an opportunity to revisit microkernel-like architectures for security; some have said that hypervisors are microkernels done right, at the right level of abstraction. In particular, we were interested in being able to leverage hypervisors to isolate untrusted components, such as drivers, so that a single flaw in, for example, a wireless driver could no longer take down the entire system, as well as to isolate security-critical components, like a virtual private network or an inline disk encryptor, so that they could be strongly protected from the rest of the system and couldn't be bypassed, enabling us to provide the same notion of assured pipelines that we had from type enforcement in the context of these systems. For this research, we chose Xen as a research platform; like Linux in the context of the SELinux work, Xen was a viable open source hypervisor with growing community adoption at the time. While Xen provided us with a good basis for performing our research, we also recognized the need to make Xen suitable for security. This led us to extend Xen to support hypervisor-level mandatory access controls through the Xen Security Modules effort and the Flask security module, bringing the same underlying MAC model to Xen that we had in Linux, which then allowed us to compose those policies and analyze them together. From our earlier microkernel-based work, we recognized the importance of having a secure, lightweight inter-process communication mechanism that could be used.
And in the context of a hypervisor, this necessitated a secure inter-virtual-machine communication mechanism, one that wouldn't require, for example, a full network stack to be present or exposed by all the VMs in question. This led to research and experimentation with a variety of inter-VM communication mechanisms, from some early internal prototypes that we had, to leveraging and generalizing the Qubes OS vchan mechanism, to the v4v mechanism embodied in Bromium and XenClient XT, later OpenXT. Most recently, upstream Xen has accepted a new mechanism known as Argo, which provides many of the features and properties that we would desire in such a secure inter-VM communication mechanism. We also recognized the importance, in being able to move towards higher assurance with Xen, of decomposing domain 0, which conventionally hosts a full operating system that provides the management stack and the device drivers of the system in Xen-based platforms. We explored the ability to disaggregate that domain 0 through a number of research programs, including the Xoar system, and we contributed upstream support for splitting the hardware domain from the control domain in Xen to facilitate that kind of decomposition. This also fed into work to support lightweight VMs, so that we wouldn't have to have an entire OS stack in all the VMs of the system, both from a scalability point of view and from a trusted computing base minimization perspective. SVP embodies much more than just mandatory access control, but in the context of SVP, MAC enables us to define, enforce, and validate the key security goals that are being met in the system. In SVP, MAC is enforced at multiple differing layers of abstraction, including the hypervisor, the operating system within specific VMs, and user-space object managers running potentially on those operating systems or in lightweight VMs.
Each of these components enforces MAC at the right abstraction layer for the goals it has, and then leverages the MAC facilities and the secure IPC mechanisms in order to satisfy its underlying dependencies. The union of these MAC policies embodies the overall system security goals and provides a means of validating that end-to-end goals are being met through the system, something that's been explored in a variety of research efforts we've sponsored. While our work on mandatory access control long predates SELinux, going back to the early 90s, it's nonetheless interesting to compare the state of mandatory access control in mainstream systems at the time we released SELinux versus today. At the time we released SELinux in 2000, to our knowledge, there were no mainstream operating systems with MAC features. At best, there were the various trusted variants of the Unix products, which, as I mentioned before, tended to lag behind their mainstream cousins in terms of features, updates, and application support. Today, flexible mandatory access control can be found in a number of different operating systems. Even better, it's not only supported by those operating systems, it's being actively leveraged to achieve real security goals of interest to users. Interestingly, traditional MAC mechanisms were introduced into some operating systems after the release of SELinux. In the case of Solaris, the mandatory access control features previously limited to the separate Trusted Solaris product were folded into mainstream Solaris through the Trusted Extensions effort, beginning in 2006. In the context of Windows, the next major release of Windows after the release of SELinux incorporated something that they branded Mandatory Integrity Control, which was an implementation of part of the Biba integrity model.
While these were traditional MAC implementations rather than flexible MAC, nonetheless we can see that MAC has transitioned from being a niche feature of separate trusted OS products to being a mainstream feature of many operating systems today. Over the past few years, we've also done some research and development exploring MAC and how it may pertain to emerging operating system architectures that are fundamentally different from many of the mainstream systems of the day, which we had a talk about at last year's LSS. Zephyr is a real-time OS targeting embedded devices that are too resource-constrained to run Linux, and so it raises interesting questions about how to adapt mandatory access controls for these highly resource-constrained devices while preserving real-time guarantees even in a MAC-based scheme. Our early Zephyr work has focused more on basic enabling of critical features necessary as precursors to MAC, but this remains an area of interest to us going forward. Fuchsia is another emerging operating system, seemingly targeting more capable devices in the Internet of Things space. It's of interest because it blends some of the historic work on capability-based microkernels with hypervisor functionality, and so it is potentially a place to revisit both the interaction of MAC and capabilities as well as the integration of the Secure Virtual Platform concepts. There are a number of different areas that remain as challenges and potential future research and development for MAC, some of which we've explored in our prior research but which have not yet been fully embraced or deployed into production systems that can be easily used. Usability remains a key challenge for mandatory access control implementations in general, not only SELinux but others as well. In particular, being able to advance the usability of these systems without sacrificing security remains a key open issue that would benefit from further examination.
Even in the early days of Flask, we looked at how to compose multiple MAC models, and SELinux itself embodies multiple MAC models today. But this problem will become even more pressing as systems support stacking of LSMs, for example, and there's wider use of multiple models together. Simplistic approaches that simply require all of the MAC schemes to agree will gradually become less and less satisfying, and supporting more complex ways of composing them, in a way that still preserves security, will become more crucial. As MAC becomes more widespread, being able to enforce and manage the policies of a distributed MAC system, whether we're talking about a collection of different networked systems or a virtualized system with MAC at multiple layers, will become more pressing. And lastly, most existing MAC schemes are primarily oriented towards supporting a single party's interests, whereas most real-world platforms have multiple parties, potentially with competing interests, at stake in them. Being able to support these multi-party situations and to reconcile and enforce the security goals of all of them will become an increasing need. While we've come a long way with regard to MAC, and it's today well established for operating system hardening and some core concepts such as isolation of applications, containers, or virtual machines, much of our earlier vision of extending MAC up through user space to provide a uniform access control model that can enforce end-to-end access control as data flows through all the different layers is not fully realized today. Instead, there's a tendency to reinvent access control models at each different layer, using different abstractions and fundamental primitives, in a manner that makes analysis and validation that a coherent set of goals is being met very challenging. It's also not being truly leveraged in emerging technology spaces, which again are commonly reinventing the wheel.
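As a toy illustration of the composition problem discussed above, the simplest way to combine two MAC schemes is to require that every scheme agree before allowing an access. The sketch below is purely illustrative, with hypothetical policy functions rather than any real LSM stacking interface, and shows why consensus composition is safe but inflexible: neither scheme can delegate a class of decisions to the other.

```python
# Toy sketch of consensus composition of two MAC decision functions.
# All names and policies here are hypothetical, for illustration only.

def type_enforcement(subject, obj, perm):
    # Hypothetical type-enforcement policy: an allow-list of
    # (domain, type, permission) triples.
    allowed = {
        ("editor_t", "user_doc_t", "read"),
        ("editor_t", "user_doc_t", "write"),
    }
    return (subject, obj, perm) in allowed

def mls(subject_level, object_level, perm):
    # Hypothetical multilevel security policy: no read up, no write down.
    if perm == "read":
        return subject_level >= object_level
    if perm == "write":
        return subject_level <= object_level
    return False

def compose_consensus(*decisions):
    # Consensus composition: every scheme must allow the access.
    return all(decisions)

# An editor at level 2 reading a level-1 document: both schemes agree.
print(compose_consensus(type_enforcement("editor_t", "user_doc_t", "read"),
                        mls(2, 1, "read")))   # True

# A level-1 subject reading a level-2 document is denied by the MLS
# scheme alone, so the composition denies it.
print(compose_consensus(type_enforcement("editor_t", "user_doc_t", "read"),
                        mls(1, 2, "read")))   # False
```

Anything richer than this, such as letting one scheme scope or override another, requires a composition framework that can still be analyzed to show that each scheme's security goals are preserved.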
So just as in 1998 we published a paper noting the inevitability of failure with respect to security if mechanisms such as MAC weren't incorporated into operating systems, we'd have to say much the same is true today: the failure to incorporate these mechanisms, as well as to leverage them for meeting the fundamental dependencies of higher-level services and emerging technologies, will fundamentally doom the security scheme to failure. So my call to you is: as you engage in security developments, please, where it is appropriate, leverage the underlying MAC schemes to meet the assumptions and dependencies your schemes have, and then seek to extend that so that coherent and consistent access control can be applied throughout the whole platform. Thank you. So we have time for questions. Thank you for the interesting talk. I want to ask about your views on the quality-of-implementation aspects, especially in the context of unikernels written in memory-safe languages, unlike, for example, Linux. So let's say there is an unprivileged system call that allows tampering with kernel memory, and they flip the SELinux enabled flag, and then suddenly all of this is disabled. You mentioned hypervisors and microkernels several times. Are those required to deliver these guarantees or not? Okay. So we view architectural improvements such as hypervisor-based technologies as important. There's also been a big trend, discussed yesterday, around trusted execution environments and a number of different hardware-based capabilities that are emerging. So I guess let me tackle two different things there. One is that, while we're certainly not opposed to the use of safer languages where appropriate, we don't think they magically solve all security problems, and they often have runtimes that themselves have dependencies that raise concerns. So to the extent possible, we want to be able to leverage hardware primitives to provide us with some safety guarantees.
And so using a hypervisor-based architecture or doing a microkernel-based decomposition of the OS provides us with more of those hardware-backed guarantees rather than just the language-based guarantees. That said, where appropriate, use the safe languages to build specific components. An architectural model like this allows us to focus attention where it's most critical in the system to get the higher assurance, and then use the architectural guarantees to provide us with the overall system goal, because it's impractical for us to re-implement the whole system, say, in your safe language or whatever. With regard to trusted execution environments and that whole approach to things, the caveat we have with those is that a lot of times they are just moving the trusted OS problem from one place to another, right? So we might move the trusted OS problem from the normal world to the secure world in a TrustZone-based architecture, for example. And so we have all the same issues still; we still need to address those issues, and then real-world users still have security concerns on the so-called non-secure, untrusted side that matter to them. And so that itself doesn't obviate or eliminate the need for those mechanisms. So thank you so much for your work. This is all fantastic stuff. I was with you right up until you got to the end, when you were talking about multi-party and usability, and both of those are concerns that I think make a lot of these systems very difficult to imagine using in a much broader way. I think the places you've pushed it are really fantastic. The other thing I was wondering if you could touch on a little bit is how a flexible MAC architecture deals with the assumption of bugs and the assumption of vulnerabilities in components within the system. And I absolutely like the benefit of sandboxing and of isolating the vulnerability to a relatively small area. But it seems like there are a few assumptions of things that cannot be tampered with.
So, for example, offline vulnerabilities that tamper with labels, or the presence of a bug that allows you to influence one of those fundamental primitives that the security system depends on: being able to modify labels, being able to modify the way that types are assigned to objects in the system. And the same question then with regard to offline tampering, where the system is not running and we have an external actor able to modify those things from an offline environment, where all of the analysis doesn't help us because we can't model that offline attacker. Right, so with regard to the first piece, the concern about residual vulnerabilities in core components that have to be trusted: our goal is, architecturally, to limit trust in every component of the system to only what it has to be trusted to do. And then we can focus our assurance efforts very specifically, right, which is pretty critical if you're going to get assurance, because we will never get high assurance for a whole complex software system — well, I shouldn't say never, but it's going to be a long time. And so that's a key architectural element. And then, what I didn't talk about in this talk because it's not a MAC topic, is that we have a whole other body of research and development in what's known as the measurement and attestation space. So in addition to creating these architectural guarantees for prevention, we also have a whole body of work that's looking at how to determine whether the different software components of the system are both loaded and still in the expected state that we expect them to be in. We have developed prototypes, for example, to be able to measure a running Linux kernel and determine that it's still in the expected state, not just when it's first loaded but at any time during its execution, and then to do, like, inter-VM introspection, or to run introspection from System Management Mode running on an STM, for example.
And we've done the same kind of thing for hypervisors within the context of Xen. So building out an architecture where we can drive our assurance efforts in a way that's scalable, limit residual vulnerabilities in the other components, detect vulnerabilities, and find the key places to stand on the platform from which we can uphold those guarantees has been a big part of this body of work. And it's also been challenging, because every time we find a place to stand on the platform from which we can do that kind of analysis, it seems like the hardware manufacturers introduce a new place to stand that we don't have visibility into. So we would like that to stop. So I have one more question, because we're running out of time a little. One of the things that always jumps out at me about descriptions of SELinux and similar systems is that they tend to focus on policies coming from the administrator, in contrast to systems like seccomp BPF or Apple's Seatbelt, which tend to place the developer of the software at the center of policy creation. Do you think these are just vocabulary things, just different ways of talking about the same thing, or do you think these descriptions ultimately dictate different architectures? So MAC can be a somewhat relative term — mandatory with respect to whom, right? And so you can scale that all the way from an administrator or system builder who dictates the whole policy, to one component dictating policy for other components, right? And there's even a sort of notion of that in the context of our user-space object manager model, because the user-space object managers are enforcing policy with respect to their objects, right? So there's obviously a delegated trust model there. The problem I think we often get into, the further you take it up toward the discretionary end, right?
Is that the more susceptible you are to — or the greater the assurance burden is on — the user-space components to be free of vulnerabilities and bugs, right? And to get their respective policies right, which, particularly as you move into the seccomp world, is a more programmatic type of model; analysis of a seccomp policy, I think, is much more challenging than analysis of a MAC policy, right? And so it can be hard to gain assurance there. So in general, what we would like to see is, as you build a higher-level mechanism, if you can root it in some underlying guarantees, like those the MAC mechanism provides you, that are needed for its basic safety, then we can get better granularity and more dynamic controls that are good to have, but still have some baseline hard bounds on the possible range of actions in the system. Okay, so we'll finish up with the questions there and continue discussions during the conference as needed. So thank you once again, Stephen. Thank you.