I'm very excited to be here, even in this virtual world; it presents interesting challenges, of course. What we'll be talking about is our paper on three perspectives, dealing primarily with buyers, suppliers, and integrators. So first, an overview. Today we all understand we're in an evolving industry, and the solution space is becoming very complex. You have disparate services, technologies, and methodologies, often brought together through a unification of legacy systems and new technologies. The whole focus here is maintaining operational and competitive benefits in this changing world, so we need to rapidly develop and adapt our systems, as well as create and bring new ones online. Our paper discusses the concerns, risks, and challenges, and the benefits of utilizing the FACE Reference Architecture to meet those challenges, and in particular of utilizing the FACE Data Architecture to benefit the whole process. Here we look at the three roles and their interactions. We present a very simple case of buyers, suppliers, and integrators, and you might note on supplier three that some of these are off-the-shelf components. So the idea is how to pull all this together from your specification, how to integrate those pieces, and of course what the suppliers need to do. Now, as Captain Wilson mentioned a little earlier, one of the things with our modular open systems approach, MOSA, is that it really begs for model-based systems engineering, and that's where FACE enters the fray. It helps manage some of that complexity, and when we look at this model-based MOSA, it helps us through unification of those legacy systems as well as the new ones. So that's a real focus.
One thing I'll mention is that I'm going to talk about the buyer perspective, then Bill Tanner will talk about the supplier perspective, and then Leanne will follow up with the integrator perspective.

Okay, so the buyer perspective. The real focus when we look at the MOSA approach is that it's a divide-and-conquer approach. This diagram, by the way, isn't marked, but it was provided by Southwest Research Institute out of San Antonio. So it's divide and conquer: it looks at the flow and the life cycle of your system and breaks that up into different pieces. From a buyer's perspective, the real focus is developing the specification so they can go out for procurement to the integrators and, potentially, to your suppliers as well. Some of the components that are purchased off the shelf are then provided to the integrator as part of complete subsystems, and that adds a lot of complexity. Now there's a question of how they do that. How do you create these specifications? There's a lot of work across different technologies. Of course, when we talk MBSE, everybody says SysML, and that's one key technology, but it's not the only one. There are others out there, such as AADL and FACE, and the real focus is how to integrate all these different model-based systems engineering tools to create a solid specification that's not ambiguous. Ambiguity is the enemy of any systems development effort: if something is ambiguous, it takes someone's interpretation, or they have to seek clarification, and that becomes a real problem for long-term systems development. And usually when people make assumptions about the specifications, which is horrible, we end up with problems arising or being identified much later in the cycle, and that gets very expensive over the life cycle.
So, the FACE Data Architecture. We do talk about this a little in the paper, but there's a limited amount of space in a paper, and in a 20-minute presentation; some of these topics could take days to discuss. The question is really how the FACE Data Architecture supports that specification. One of the key areas is that FACE was designed for unambiguous identification of your data: it defines the data dictionary within your system so you can share it with your suppliers and integrators. One of the key components is really understanding what we're talking about. And it doesn't do that just in terms of the semantic definition, what the thing is; it also does it in terms of the measurement specifics. We are dealing with aerospace, after all, so the key is making sure we all understand exactly what a thing is. But it goes beyond the data itself: it also gets into performance, and into understanding the different Units of Portability as they are defined in the system. There's functional and non-functional specification, and another key is interface specification: making sure we really understand what those interfaces are and specify them sufficiently, but not over-specify them, so we retain the flexibility needed in any development or integration effort.

So next, Bill Tanner will talk a little bit about the supplier perspective. Thanks, Sean. Okay, from a supplier perspective, what I focused on was my experience building data models for the Aviation System Integration Facility work, basically in FACE 2.1. What I've tried to present are some practical, pragmatic approaches to how I managed things. I bought in really early to the idea that I could create a conceptual model from which I could generate multiple logical and platform instances, and that worked well for the varying projects we had.
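The unambiguous data definition discussed above, semantic meaning plus measurement specifics such as units and frames of reference, might be sketched like this. This is illustrative Python only; none of these names come from the FACE standard or its tooling.

```python
from dataclasses import dataclass

# Sketch: a data dictionary entry carries both what an element MEANS and
# how it is MEASURED, so no supplier or integrator has to guess.
@dataclass(frozen=True)
class DataElement:
    name: str             # what the element is called
    semantics: str        # what the element means
    unit: str             # how it is measured
    reference_frame: str  # relative to what

altitude_msl = DataElement(
    name="altitude",
    semantics="Vertical distance of the aircraft above mean sea level",
    unit="feet",
    reference_frame="mean sea level (MSL)",
)

altitude_agl = DataElement(
    name="altitude",
    semantics="Vertical distance of the aircraft above ground level",
    unit="meters",
    reference_frame="above ground level (AGL)",
)

# Same name, different meaning: without the measurement specifics these two
# are ambiguous; with them, a mismatch can be caught mechanically.
assert altitude_msl != altitude_agl
```

The point of the sketch is only that equality (and therefore compatibility checking) covers the full definition, not just the name.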
So if you look at this common V diagram, there are really three different types of efforts I would encounter. One was pretty much a green field from the very beginning, where I could define the concepts and the semantics and pretty much have control over that. Others were a bit of a mix, where some requirements came in from project management, or from engineers or systems people who didn't have that much modeling experience but did have SME knowledge of the subject matter, and that was a very good thing. Other times I was basically handed a set of header files and told, hey, go make a FACE data model for this that's conformant, and we don't really have time to tell you what these things mean. What I found was that one of the real benefits of FACE was that I could model each of those successfully and actually get a FACE file that was conformant and met the need. Go ahead and advance the slide if you wish, Sean. That brings up some issues, right? You can actually have a FACE file that passes conformance but doesn't truly reflect the semantics or the context that was intended. That flexibility lets you get something conformant accomplished when you've only got a few hours budgeted, but it defeats the purpose a little bit. So there's a concern there, but I definitely thought it was a benefit that we could move on and make changes in the future. Another really interesting point, point three, is this realization disconnect. What the realization disconnect allows you to do is disconnect the messages and the views from the semantic context, the definition of the meaning of the data that's in those messages or views.
And that's very powerful, because if you make a mistake in the semantics, or your understanding of the semantics progresses over time, you can change the upper portion of the model, that is, the conceptual and perhaps the logical, without influencing what the platform views or platform messages look like. So there was some flexibility there. Some folks building these models are not too keen on having to harden the frames of reference and/or the units, but that's just a little thing we deal with, and some of that's been mitigated a bit in FACE 3.0, although I won't talk about it much. And then code generation is tied pretty much to the platform, so there are some issues where you want your naming conventions lined up, especially for the platform types. Go ahead and advance to the next slide. Here, again trying to be a little pragmatic, is what I did: one conceptual, logical, platform, and UoP model packaging that might be of use to some people. I know we're constrained on time, so I'm not going to go through this too much, but the goal was to keep the common, generic, more abstract conceptual semantics and context within the conceptual model, independent of any particular project. Now, I broke that a few times when I didn't have the time to do those abstractions, but that would certainly be an opportunity to go back and rework them. This generally worked for me, so it's an example you can think about. And with that, the following two slides go into a couple of different ways of managing how the changes occur from conceptual to logical.
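One such conceptual-to-platform realization might be sketched as follows: a single conceptual quantity held in one canonical form, realized per project with different units and constraints. This is an illustrative Python analogy only; the class, property names, and project suffixes are invented, not FACE constructs.

```python
from dataclasses import dataclass

@dataclass
class Rotorcraft:
    # Conceptual level: one canonical value, units not yet committed
    # to any project (held here in SI meters per second).
    ground_speed_mps: float

    @property
    def ground_speed_p1_knots(self) -> float:
        # "Project 1" realization: knots, with a project-specific constraint.
        knots = self.ground_speed_mps * 1.943844
        if not 0.0 <= knots <= 200.0:
            raise ValueError("P1 constraint violated: expected 0-200 kt")
        return knots

    @property
    def ground_speed_p2_kmh(self) -> float:
        # "Project 2" realization: km/h, no extra constraint.
        return self.ground_speed_mps * 3.6

r = Rotorcraft(ground_speed_mps=50.0)
```

Each realization is derived from the same conceptual value, so fixing the semantics upstream does not force a rewrite of every downstream platform type.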
There's this notion of a single entity with multiple characteristics, where you've got something like a rotorcraft and you need to realize the speeds differently using different frames of reference, different units, and perhaps even different constraints. That's one way to do it. You can see that I've used suffixes on the names of the characteristic compositions there, P1 and P2, perhaps for Project 1 and Project 2, and you can see something similar at the platform level with a Type 1 and Type 2; that gives you flexibility without creating a lot of redundancy. The next slide shows realization into separate entities, which was just another way of doing things; sometimes I did a combination of each. I'll come clean about what I tried to do with this model: it had a two-fold purpose. It showed what we were providing and what needed to be done, and it also exploited as many of the FACE 2.1 constructs as I could. So I think that's it, and we're ready for Leanne now.

Hi, can everybody hear me? So when we're looking at the integrator perspective, we looked at the traditional challenges for an integrator. Some of those were legacy systems and custom solutions and the challenges they bring: legacy systems are generally brittle, patched over their life cycle, and custom solutions are usually narrow in purpose, so neither is well suited to integration into a larger system of systems. When you additionally consider multi-organizational solutions, the integrator faces challenges in understanding the domain of the interfaces defined, possibly the sheer number of interfaces to support, and also data rights. And number three was data interpretation: it was found that even within teams there was not necessarily a common understanding of the data.
As we build that out into a larger organization, a group with that breadth, the knowledge and understanding of the data becomes skewed. The last challenge was the skill set: as these systems of systems become broader, the integrator has to have a blend of skills, and those get difficult to find in the marketplace. Next slide. Looking at traditional integration, this graphic shows the challenges encountered with a multi-vendor solution. As I said, you have interfaces that are custom-defined or proprietary. Going left to right, vendor A has a certain set of interfaces; they're not standardized, so it gets difficult to integrate them with the other vendors, vendor B and vendor C, because the integration points may not align. In some cases those solutions are still monolithic, and that again presents a challenge because the alignment isn't there, and it causes more work for the integrator, either writing adapters or integrating through additional means. And toward the bottom, the graphic shows there is a bit of horizontal achievement for each of those vendors across that last component, but the point is that in a traditional environment those are typically opportunistic and not well coordinated. Next slide. Now here's a look at integrating those same types of vendor solutions in a MOSA environment, with an MBSE approach and what the FACE standard gives us there. The FACE standard defines the interfaces for us: it has the OS interfaces, the Transport Services interfaces, and the I/O interfaces. With well-defined interfaces that have convergence across the industry, we can better anticipate where those integration points will be, and we can coordinate better.
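The per-vendor adapter burden described for traditional integration can be sketched as follows: each proprietary interface needs its own adapter onto a common one, which is exactly the work a standardized interface set reduces. This is illustrative Python; the vendor APIs and interface names are invented, not from any real product or from the FACE standard.

```python
import math
from abc import ABC, abstractmethod

# The interface the integrator wants to code against.
class CommonNavInterface(ABC):
    @abstractmethod
    def position_deg(self) -> tuple:
        """Return (latitude, longitude) in decimal degrees."""

# Vendor A's proprietary API: one getter, radians.
class VendorA:
    def get_pos_rad(self):
        return (0.6109, -1.8326)

# Vendor B's proprietary API: separate getters, degrees.
class VendorB:
    def lat(self): return 35.0
    def lon(self): return -105.0

# Without a standardized interface, the integrator writes one adapter
# per vendor, converting units and reshaping calls.
class VendorAAdapter(CommonNavInterface):
    def __init__(self, dev): self.dev = dev
    def position_deg(self):
        lat, lon = self.dev.get_pos_rad()
        return (math.degrees(lat), math.degrees(lon))

class VendorBAdapter(CommonNavInterface):
    def __init__(self, dev): self.dev = dev
    def position_deg(self):
        return (self.dev.lat(), self.dev.lon())

# Integration code now sees only CommonNavInterface.
sources = [VendorAAdapter(VendorA()), VendorBAdapter(VendorB())]
```

Every new vendor means another adapter; when vendors converge on a standard interface, that whole layer shrinks or disappears.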
So we can be in a position to achieve horizontal modularity, and it gives us more opportunities to do so. Additionally, with the Transport Services interface and the data model, we can capture that data and express its semantic meaning in a very powerful way, so there is less ambiguity and everybody shares the same interpretation of the data. So again, FACE with MOSA gives us much more powerful tools for horizontal modularity, as well as vertical modularity, within a system-of-systems solution. Next slide. To conclude the effort here, with the three perspectives you've heard, we're very much saying that the FACE standard and MOSA are a powerful tool set for achieving integration and meeting the needs of buyers and suppliers, all in a solution set and a process that gives us a good path forward. We recommend that all of those roles participate in the FACE Data Architecture to develop a shared understanding of the data; it gives us best of breed from a component standpoint for moving forward in the industry. Thanks.

Okay, that concludes our presentation. I'm not sure if we have time for questions. We have one question come in, and a couple of minutes until our breaking point, so let me read it to you: have you considered the Open Group standard Open Data Element Framework for a common semantic starting point? I'll take a stab at this. When we originally developed the FACE Data Architecture, we did a pretty exhaustive analysis of everything that was out there; this was quite a few years ago. What we identified was that things like XML and UML, and every other ML, really didn't provide enough semantic definition in terms of measurement or frames of reference, or in terms of the semantic relationships, the interrelationships of the data. So the real challenge we had was that we needed to develop this data architecture.
Now, what we did in FACE was not tie it to a single definition of the different elements, but allow them as building blocks. We do have a Shared Data Model, which provides the foundational things we call observables: fundamental concepts like mass, weight, and so on. And then we added measurements, which define things like measurement systems, similar to topologies or metric spaces, to be able to build that model. So what we find is that most of the existing data architectures out there, like UDEF or the Open Data Element Framework, can be built on top of FACE, but it's a little hard to adapt them and take them wholesale.
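The layering just described, observables as foundational concepts with measurement systems and measurements built on top of them, might be sketched like this. The class names are illustrative only and are not the actual types of the FACE Shared Data Model.

```python
from dataclasses import dataclass

# Observable: the phenomenon itself, independent of any units.
@dataclass(frozen=True)
class Observable:
    name: str

# MeasurementSystem: fixes HOW an observable is quantified.
@dataclass(frozen=True)
class MeasurementSystem:
    name: str
    unit: str

# Measurement: binds an observable to a measurement system,
# forming an unambiguous building block for a project data model.
@dataclass(frozen=True)
class Measurement:
    observable: Observable
    system: MeasurementSystem

MASS = Observable("mass")
SI = MeasurementSystem("SI", "kilogram")
US_CUSTOMARY = MeasurementSystem("US customary", "pound")

# Two distinct, unambiguous measurements built on the same observable.
mass_kg = Measurement(MASS, SI)
mass_lb = Measurement(MASS, US_CUSTOMARY)
```

The observable is defined once and shared; projects then compose their own measurements from it rather than redefining the concept, which is the "building blocks" idea in the answer above.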