So now, it's my pleasure to introduce Ken Zhang. He's going to present his paper, Zero Trust Security Approach for MOSA Systems.

Good morning, my name is Ken Zhang. I work for L3Harris, and I'm a solution architect for the cyber division here at L3Harris. I've mostly been involved in architecting Type 1 crypto devices for tactical and space systems, as well as developing security and key management infrastructure for systems that handle sensitive data processing. So my topic today is the Zero Trust Security Approach for MOSA Systems.

MOSA initiatives, like SOSA, have made a significant impact on industry, improving affordability, upgradability, and the other -ilities of better design. They certainly have benefited, and will continue to benefit, the DoD by enabling more rapid and cost-effective acquisitions and improving lifecycle supportability. They also help industry steer its product roadmaps toward standard, interoperable solutions that optimize development cost, risk, and time.

Using SOSA as an example, the consortium drives and enforces alignment with the SOSA principles through the quality attributes listed here, and I would like to use them to explain how their positive features may have negative implications for security. In this slide, the SOSA attributes are organized to show how some may inversely affect others, shown on the left- and right-hand sides of the slide. For example, the interoperability and portability attributes drive greater sourcing opportunity from more COTS vendors, but because it becomes easy to swap parts in and out of the system, they may increase supply chain risk, since there are now more suppliers providing products into a SOSA system. Similarly, given that the system makes it easier to insert and change hardware and software components, it may also lower the barrier for cyber attacks without adequate security controls. Lastly, modularity and scalability drive toward optimized resource utilization.
At the same time, this also implies that SOSA needs to securely host applications and process data of different sensitivities. I describe this as a perimeter-less environment. What I mean is that, for example, when we used to have a single physical module dedicated to top-secret processing only, you could just protect that module, because there was a defined perimeter around it. For people familiar with the term, this is more like an MSL environment. However, when a physical module or software and hardware resources are shared, or become general-purpose processing nodes for better scalability, you can no longer draw a perimeter around those processes, so we are facing more of an MLS environment.

Now I want to look at these security implications in more detail and see how we can mitigate them traditionally, and how a zero trust security approach may provide better alternatives. For the increased supply chain risk, traditionally we would just expand the vetting process and increase its rigor, but this can become very expensive. More importantly, if the increased rigor makes alignment more difficult and costly for vendors, it may reduce their incentive to align, which limits the benefit of having wider offerings. On the other hand, we can keep our standard vetting process but instead establish trust in the COTS hardware and software as they are introduced, at use. Such components are then periodically checked for integrity and de-privileged if a potential compromise is detected. The frequency can be tailored to how sensitive or critical a system is.

For the lowered barrier to intrusion and exploitation, we could harden every interface, but based on the third implication, having a perimeter-less environment, this may not even be possible. And even if it were possible, hardening some but not other interfaces would negatively impact overall interoperability.
A better alternative may be to enforce a least privilege policy at the system level, managed by a root-of-trust component in a zero trust architecture. And lastly, such a system-wide policy enforcement mechanism should be designed to enable network segmentation across the system, to securely isolate applications and traffic of different sensitivities and protect the perimeter-less MLS environment.

So this is the security approach that mitigates the security concerns in MOSA systems, and it is based on the zero trust security model. Zero trust security originated in the commercial world and is designed to minimize exposure to cybersecurity risks in a perimeter-less environment, like a cloud environment. It has been around for a while now, mostly used by users with stringent regulatory or risk-averse considerations. The underlying principle is that every application and input is considered malicious and should be guarded against. The main tenets of zero trust are least privilege, secure access enforcement and more resilient access control, continuous monitoring and inspection, and the logging mechanisms that come with it.

This slide shows some of the zero trust applications in the commercial world. For example, Cisco is thinking about how going from a trusted perimeter network environment to a zero trust, perimeter-less one will impact the trust boundary, as well as the features required to secure it. Similarly, Microsoft is looking at endpoint conditional access and dynamic access control for managing resource requests for its cloud services.

Now we can transition to discuss how a zero trust approach can be applied to MOSA-aligned systems. For this analysis, I will use SOSA as the example. The diagram shows the SOSA architecture and its associated modules. The focus here is aligning the security-related modules to the approach, namely the security services, the crypto subsystem, and the guard/cross-domain service.
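To make the least privilege tenet concrete, here is a minimal Python sketch of a default-deny policy table of the kind a root-of-trust component might manage. All names (`PolicyEngine`, `sensor_module`, the resource strings) are illustrative assumptions, not part of any SOSA or zero trust standard; the point is only that nothing is allowed unless explicitly granted, and a compromised subject can lose all grants at once.

```python
from dataclasses import dataclass, field

@dataclass
class PolicyEngine:
    # (subject, resource) pairs explicitly granted; everything else is denied
    rules: set = field(default_factory=set)

    def grant(self, subject: str, resource: str) -> None:
        self.rules.add((subject, resource))

    def revoke_subject(self, subject: str) -> None:
        # Revoking a compromised subject removes all of its grants at once
        self.rules = {(s, r) for (s, r) in self.rules if s != subject}

    def is_allowed(self, subject: str, resource: str) -> bool:
        # Default deny: only an explicit rule permits access
        return (subject, resource) in self.rules

engine = PolicyEngine()
engine.grant("sensor_module", "network/mission_bus")
assert engine.is_allowed("sensor_module", "network/mission_bus")
assert not engine.is_allowed("sensor_module", "storage/ts_data")  # never granted
engine.revoke_subject("sensor_module")
assert not engine.is_allowed("sensor_module", "network/mission_bus")
```

A real enforcement point would sit in trusted hardware and be driven by signed policy, but the default-deny shape of the decision is the same.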
The network subsystem is relevant as well, to support the network segmentation that we'll discuss in later slides.

So first, the core of the zero trust security approach is the instantiation of the root of trust. The root of trust is the most secure part of the system. There will be a single root of trust per system, but delegates can be made to improve resiliency. Depending on the criticality of the system, the root of trust should be implemented in highly secure and trusted hardware, like a high-assurance or Type 1 platform. Security-critical data, such as keys and policies, should be securely provisioned out of band.

There isn't yet an official module allocation in the SOSA standard for the root of trust, so I'm going to call it the security manager, which is the term I've been using in related SOSA interactions so far. The security manager is responsible for policy-driven, autonomous security functions. This means the security manager can make decisions on its own, based on policy, to maintain the security posture of the system, which may interrupt ongoing operations. This includes security attestation, which is the process of validating trust in hardware and software components being introduced to the system at use; audit log management and monitoring functions to react to attacks and intrusions; and privilege granting and, more importantly, privilege revocation when a compromise is detected. This is as opposed to the current security-related SOSA modules listed on the right-hand side, which are more request-response service functions that essentially help the security manager execute some of its own functions.

So here is a call-out describing how security attestation works. The security attestation function helps mitigate the supply chain risks mentioned at the beginning of the slides. Security attestation verifies the identity of an entity, as well as validating the integrity of that entity.
As vendors develop hardware and software entities, they also develop attestation evidence associated with their solutions. For hardware, this can be something like the hardware boot-up sequence, and for software, it can be a hash of the image itself. The evidence is then provided securely to the mission security authority and loaded into the security manager prior to operations. When hardware or software is installed in the system, it starts off with no privilege except to communicate with the security manager, thus enforcing the least privilege principle. The newly installed component then provides its attestation evidence to the security manager to validate. Only upon successful validation will the security manager grant privileges to the new component, based on pre-placed policy. Access can be controlled by the keys and certificates a module needs to reach other resources in the system.

All right, so once past security attestation, the security manager needs to continuously monitor the system for intrusions and exploitations. This can be done by enforcing the collection of security-related metrics and logs from the operating modules and running analysis on them. If a compromise is detected, the security manager needs mechanisms to quickly revoke privileges on the potentially compromised module to contain the incident. If the mechanism is not well thought out, say the system uses a single key to encrypt all inter-module communications, then revoking one module will impact the entire system, which is not resilient at all. So instead, privilege management needs a flexible network segmentation approach.

That brings us to the next topic, network segmentation. Remember we're dealing with a perimeter-less environment, so we'll try to create software-defined security perimeters among the resources of different sensitivities.
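The attestation flow above, pre-loaded evidence, zero initial privilege, grant only on successful validation, can be sketched in a few lines of Python. This is a toy model under stated assumptions: the class name `SecurityManager`, the component IDs, and the use of a SHA-256 image hash as the sole evidence are illustrative, not a SOSA interface.

```python
import hashlib

class SecurityManager:
    """Toy root-of-trust: validates vendor evidence before granting privilege."""

    def __init__(self):
        self.expected = {}     # component_id -> expected SHA-256 digest
        self.privileged = set()  # components that passed attestation

    def preload_evidence(self, component_id: str, image: bytes) -> None:
        # Vendor-supplied attestation evidence, loaded out of band pre-mission
        self.expected[component_id] = hashlib.sha256(image).hexdigest()

    def attest(self, component_id: str, presented_image: bytes) -> bool:
        # A new component has no privilege until its evidence validates
        digest = hashlib.sha256(presented_image).hexdigest()
        if self.expected.get(component_id) == digest:
            self.privileged.add(component_id)  # grant per pre-placed policy
            return True
        return False

mgr = SecurityManager()
golden = b"radio-firmware-v1"          # stand-in for a vendor software image
mgr.preload_evidence("radio_module", golden)

assert mgr.attest("radio_module", golden)            # matches: privilege granted
assert "radio_module" in mgr.privileged
assert not mgr.attest("rogue_module", b"tampered")   # unknown/altered: denied
assert "rogue_module" not in mgr.privileged
```

In a real system the validation would involve signed measurements from trusted hardware rather than a bare hash comparison, but the grant-only-after-validation sequencing is the same.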
For example, every connection between modules can be isolated by encryption with symmetric session keys. Tying this to the last slide, if a module is compromised, revoking it will only affect the modules that had been communicating with the compromised module, not the entire system. In this approach, every module has a function at its network edge, shown as NE on the diagram, that encrypts the traffic appropriately. Also, logically speaking, the modules do not talk to each other directly. Instead, there is an enforcement function that controls and monitors traffic and ensures all the network-edge security functions are properly keyed to support the inter-module communications. Though not shown here, a logical place to put this traffic-policing function would be on the switches in the system.

Also remember that one motivation behind network segmentation is to allow the security manager to enforce least privilege more effectively. Tying this back to security attestation: when a module is introduced to the system, its network-edge function only has a pre-placed key to talk to the security manager. Once the module is validated, the security manager updates the inter-module access policy settings in the traffic enforcement function, which in turn generates the associated session keys to enable the newly authorized inter-module connections. In a compromise case, the security manager can quickly revoke the access associated with the compromised module through the same access policy configurations; the traffic enforcement function will then mediate the key exchanges to support that change.

As mentioned before, to maximize resiliency, there can be delegated, localized security managers that tie back to the single root of trust through attestation. This allows the system to have security enclaves that improve the efficiency of security functions and improve the modularity of the design.
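The resiliency argument, per-connection session keys mean revoking one module only tears down its own links, can be illustrated with a small Python sketch. The `TrafficEnforcer` name and module labels are hypothetical; the sketch just models one symmetric key per authorized module pair.

```python
import secrets

class TrafficEnforcer:
    """Toy traffic enforcement function: one symmetric session key per link."""

    def __init__(self):
        self.session_keys = {}  # frozenset({a, b}) -> 256-bit symmetric key

    def authorize_link(self, a: str, b: str) -> None:
        # Policy update from the security manager triggers keying of this link
        self.session_keys[frozenset((a, b))] = secrets.token_bytes(32)

    def revoke_module(self, module: str) -> None:
        # Only links touching the compromised module lose their keys;
        # the rest of the system keeps operating
        self.session_keys = {pair: key for pair, key in self.session_keys.items()
                             if module not in pair}

    def can_talk(self, a: str, b: str) -> bool:
        return frozenset((a, b)) in self.session_keys

enf = TrafficEnforcer()
enf.authorize_link("sensor", "processor")
enf.authorize_link("processor", "radio")
enf.authorize_link("sensor", "radio")

enf.revoke_module("processor")                 # processor is compromised
assert not enf.can_talk("sensor", "processor")  # its links are torn down
assert not enf.can_talk("processor", "radio")
assert enf.can_talk("sensor", "radio")          # unrelated link unaffected
```

Contrast this with a single system-wide key: revoking one module there would force rekeying everything, which is exactly the non-resilient case described above.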
Lastly, the zero trust security approach is introduced as a possible solution for MOSA systems to mitigate attack vectors associated with the supply chain and a complex MLS environment. SOSA is used as an example of how such an approach can be implemented and the security capabilities it requires. As part of the Security Technical Working Group, we've been engaging with several other working groups to explore the possibility of adopting this approach in SOSA systems, but it's really relevant to all MOSA-aligned architectures. So while all the design and procurement benefits are being realized through MOSA initiatives, a zero trust approach can help ensure the security and resiliency of systems are not compromised. And that's the conclusion of my slides.