All right, good afternoon everyone. This is the TWG overview, which will cover not only how the TWG fits into the FACE Consortium structure, but a technical overview as well. Here is your current FACE officers list for the TWG: you have myself as chair and Ben Brasco with ADICOR as vice chair. The Technical Working Group operates under the Steering Committee. We have multiple subcommittees, each responsible for a different aspect of the FACE Technical Standard. General Enhancements is responsible for programming language mappings, IDL management, and things like the IOSS and PCS features. You have your Operating System Subcommittee; they're responsible for everything OSS — and if you don't know what the OSS is, we'll get into that later — basically the operating system aspects of the FACE Technical Standard. For security concerns, you have a Security Subcommittee; for airworthiness concerns, an Airworthiness Guidance Subcommittee; for graphics, a Graphics Subcommittee; and for everything having to do with the transport layer in the FACE Technical Standard, a Transport Subcommittee. The CVM Subcommittee develops and maintains the CVM, aka the Conformance Verification Matrix — you'll learn more about that in the conformance overview. Then we have a Standard Subcommittee, which is basically a fancy way of saying the subcommittee leadership along with the TWG leadership. We use it as a forum to make decisions regarding the TWG as a whole, and also to host our biweekly TWG change-control meetings to adjudicate PRs and CRs against technical products. And that's about it for this slide. What we're responsible for is basically all the technical aspects of the FACE Consortium.
We produce not only the FACE Technical Standard, but the FACE Reference Architecture, additional supporting technical documents, implementation guidance — basically any artifact that assists you in gaining FACE conformance. We're also responsible for the Conformance Verification Matrix as well as the Conformance Verification Matrix User's Guide. It does say here that we publish the Shared Data Model; however, that's a shared effort between us and the DIOG. Technically we publish it, but the DIOG is responsible for creating and maintaining it. So just a little clarification on that one. All right, now we'll go into the technical overview, having given a brief overview of what the TWG is responsible for and how it is structured. As you probably learned in the business overview, the overall FACE approach was to develop a technical standard for a software common operating environment designed to promote portability and to create software product lines across the military aviation domain. The aspects of the FACE approach are business processes and business drivers, technical practices to promote the development of reusable software components, and a software standard to promote the development of portable components. So that's why we're doing this. A little more about the current state of things in aviation software, particularly concerning barriers to portability. Right now we have a lot of software with a lot of tight coupling, and what I mean by that is there is no abstraction. When it comes to things like transports, graphics, and device drivers, there is very little use of open interfaces. So what winds up happening is that if you make a change to something — an example I like to use is, say you want to change the transport protocol in your system.
That change is likely going to trickle down to the software application level, especially if the applications are making transport calls and handling protocol decisions themselves. So those are our current barriers to portability and reusability. What FACE tries to do is tackle those barriers by abstracting and using defined interfaces for things like transport, graphics, calls to device drivers, the operating system, and so on, giving you a greater degree of portability and scalability. Bringing back the transport protocol example: if you use a defined interface for all data movement, then if you later decide to change the protocol implementing your transport layer, that change is unlikely to affect the individual software applications, because they're using a defined API for data movement. So you have a much better chance of achieving portability using those practices. That brings us to the FACE Technical Standard. What is in it? What does it provide? First and foremost, it is a software approach designed to tackle barriers to modularity, portability, and interoperability. It defines a reference architecture that uses standardized interfaces, and it provides the requirements for developing the components that reside within the architectural segments — I'll talk more about the segments in just a moment. It defines the FACE Data Architecture for describing all data that moves within the reference architecture and the semantics that define that data. There are IDL definitions for the FACE interfaces — there are quite a few, and I'll go through those in a bit — and there are also programming language mappings that take you from IDL to four programming languages: C, C++, Ada (both Ada 95 and Ada 2012), and Java.
And with C++, a few editions of the C++ standard are supported. Also, within the FACE Technical Standard there is a set of OS profiles defined based on levels of criticality. I'll go into more detail about these later, but they are basically divided into general purpose, safety, and security levels. I briefly mentioned the term segments on the previous slide. The software approach FACE took was to abstract capabilities and services into groupings called segments: depending on what a software application provides and the interfaces it needs, it can be grouped into one of these segments. As an example, at the top you have the Portable Components Segment, and as you can see, it only uses two interfaces — one to the operating system and one to the transport. So if you have an application that only needs to move data and provide a service, it will likely meet the requirements of the Portable Components Segment. The structure created by connecting all these segments and interfaces together is the foundation of the FACE Reference Architecture — there's an example of what it looks like, and we'll go more into the segments in a moment. Just a caveat to the FACE architecture, or more importantly the FACE computing environment, which is a realization of that architecture: it is not meant to supplant current software subsystems, but merely to provide an additional environment in which to integrate portable capabilities. So you don't have to go out and buy a whole new piece of hardware or a whole new operating system.
It's very likely that the operating system you have now provides the infrastructure you need for the FACE computing environment — no guarantee, but it's a likely scenario. Now let's go through the FACE architectural segments briefly before we get into more detail later. At the top, looking at this top-down, is the Portable Components Segment. Once again, these are the applications considered truly portable, because they provide a service and really only need to move data, in addition to using operating-system-level calls. The Transport Services Segment is where you abstract all of your transport logic. You can do this in a variety of ways, using one UoC or many, but all of your capabilities concerning how the FACE architecture moves data around are going to be within this segment. The Platform-Specific Services Segment is kind of a hybrid. It extends the Portable Components Segment in that it provides a service or capability, but it can also move data in and out of the IOSS. The IOSS, or I/O Services Segment, is meant to provide services that supply data to, or retrieve data from, an external means — and by external means I mean hardware, a device driver, or an external bus. Then you have your Operating System Segment. This is where you have your programming language runtimes, your time and space partitioning, your device drivers, your network stack, things like that — it's how you gain access to your operating system's devices and capabilities. So, FACE interfaces. Here are the major ones defined within the FACE Technical Standard. First and foremost is the Transport Services interface. This is a data-type-specific interface meant to move data messages between applications in the PCS and the PSSS.
So basically, if you need communication between PCS and PCS, or between PCS and PSSS, or vice versa, you are going to use the Transport Services interface, and it is typed according to a specific modeled message, which we'll get into a little later as well. All the logic for moving that data is encapsulated within the UoCs — that is, the software applications — that make up the Transport Services Segment. Then you have the I/O Services interface, which provides data movement and external access to and from devices or external hardware; we touched on that on the last slide. Basically, if you need something from an external source or need to provide something to an external source — a message from a radio, or a command to a radio, to use two popular examples — you're going to use an I/O service, a software capability that provides a service, located within the I/O Services Segment. And then the OSS interface. This provides a standardized means for software to use the services within the operating system and other capabilities related to the OSS. If you're making POSIX calls, this is how you get to them. The OSS is typically going to provide ARINC 653 and/or POSIX interfaces for your software applications to use to execute their functionality. All right, some FACE terms for software. Any software component that resides in a FACE segment and satisfies the requirements for that particular segment is what we call a unit of conformance (UoC) — basically, a software application designed to the FACE Technical Standard. A piece of software may be referred to as a unit of conformance at any time during its life cycle, but it is not considered a conformant unit of conformance until it goes through the FACE conformance program. As a subset of units of conformance, you have UoCs that communicate with the TSS.
If a UoC communicates with the TSS, it must provide what's called a UoP-Supplied Model, and the rules for how to construct one are found in the FACE Data Architecture. The ability to host and integrate FACE software components depends on what's called a FACE computing environment, which is an implementation of the following segments — the FACE TSS, the FACE IOSS, and the FACE OSS — as well as the common services required for operation. All the units of conformance that provide those four things make up the FACE computing environment. Basically, "FACE computing environment" is just a fancy way of saying you have all the capabilities there to integrate PCSs and PSSSs, aka your capabilities and services. Diving a little deeper into the individual segments, starting with the Portable Components Segment: this is where, once again, your applications that are truly portable — or as portable as possible — reside. They provide a service and require only the operating system and the transport services, more specifically the TS API, in order to move data. If they have any other needs, like getting data from a radio, they are not going to be part of this segment. The Transport Services Segment is where all your data movement is abstracted in terms of capabilities. Continuing with the Transport Services Segment: all transport- and protocol-specific functionality is encapsulated within the software components contained within the TSS.
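To make that encapsulation concrete, here is a minimal sketch in C++. This is an illustration only — the names, template, and method signatures are my assumptions, not the FACE-defined TS Interface. The idea it demonstrates is the one just described: the portable component codes against an abstract, message-typed connection, and everything protocol-specific lives behind it in the TSS implementation.

```cpp
#include <cassert>
#include <optional>
#include <queue>

// Stand-in for a message type generated from a data model (hypothetical).
struct NavMessage {
    double lat_deg;
    double lon_deg;
};

// Abstract, typed data-movement interface the PCS application sees.
template <typename Message>
struct Connection {
    virtual ~Connection() = default;
    virtual void send(const Message& m) = 0;
    virtual std::optional<Message> receive() = 0;
};

// A TSS-side implementation. A real one might wrap UDP or TCP sockets;
// this in-process loopback queue just demonstrates the encapsulation.
template <typename Message>
class LoopbackConnection : public Connection<Message> {
public:
    void send(const Message& m) override { queue_.push(m); }
    std::optional<Message> receive() override {
        if (queue_.empty()) return std::nullopt;
        Message m = queue_.front();
        queue_.pop();
        return m;
    }
private:
    std::queue<Message> queue_;
};

// The portable component only ever sees Connection<NavMessage>; swapping
// the underlying protocol never touches this function.
void publish_position(Connection<NavMessage>& conn, double lat, double lon) {
    conn.send(NavMessage{lat, lon});
}
```

Swapping `LoopbackConnection` for a socket-backed implementation is an integration decision, not an application change — which is exactly the portability argument made above.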
Some examples of the capabilities that can be provided within the TSS are distribution, type abstraction, configuration, data transformation, and methods association — and there are quite a few more, especially in the 3.0 standard, where additional capabilities that a transport can implement are defined, including domain-to-domain capabilities. This graphic shows things from an integrator's point of view. Say you have three Portable Components Segment UoCs. Using the Transport Services UoCs and the technologies they allow, the integrator has some flexibility in what underlying mechanisms are used to move data. In this example, PCS 1 only needs to move data, so it uses the distribution capability and moves data over UDP sockets, via the OS API. In the PCS 2 example on the far right, it also requires just the distribution capability, but it's implemented over a TCP connection. And with PCS 3, a protocol translation happens — most likely between PCS 1 and PCS 2 — where it receives over UDP and translates to a TCP socket call. So that's an example of the flexibility the TSS allows you to provide. Moving along to the Platform-Specific Services Segment: this is where software components unique to the platform are housed — things that depend on an ICD, and things that provide a common service like logging, device protocol mediation, or centralized configuration. Graphics services — things that have to interface directly with a graphics driver — are also provided within this segment. And because of the array of different capabilities that can be encapsulated here, you have sub-segments within the PSSS.
You have your platform-specific device services — this is where the majority of your PSSS UoCs are going to reside — meaning the things that move data between a PCS and an external device or external bus via the I/O Services interface and the TS interface. Then you have your platform-specific common services: logging, device protocol mediation, centralized configuration, things like that. And then your graphics services. Now, graphics services can reside in your PCS as well; the difference is whether they need direct access to the graphics driver. If they require driver access, they're typically going to reside in the PSSS. The I/O Services Segment: this is the one where, if you need to communicate with an external piece of hardware, a device driver, or an external bus, your capabilities for doing so are encapsulated. Each unit of conformance within the IOSS provides a service either to retrieve data from, or provide data to, a bus or device. For example, if you wanted to implement a service that writes data out to a VICTORY bus, you would implement that software capability within the I/O Services Segment. The Operating System Segment: this is where the foundational system services and vendor-supplied software reside within your FACE computing environment. It supports the execution of all the FACE components within your FACE computing environment, and all other segments rely on it. It defines a set of APIs that provide a standardized means for FACE software to use the services within the operating system, and it defines the OSS interfaces based on a series of profiles — general purpose, safety, and security — for your POSIX and ARINC 653 APIs, general purpose being, for lack of a better term, the most liberal in the function calls you can use.
Safety is a little more stringent, and security is the most restrictive in terms of what you can use. An example of other capabilities limited by profile: time and space partitioning is required for the safety and security profiles, whereas in general purpose the only requirement is space partitioning — time partitioning is an optional capability within the general purpose profile. In the graphic below you have some examples of units of conformance that would be provided by the OSS — because, once again, this is a component-based standard, so everything is at the lowest software component level. UoCs in the OSS would be your health monitoring / fault management UoC (if you're using POSIX to do your health monitoring); programming language runtimes for C, C++, Ada, or Java; any component frameworks that need to be provided; and configuration services, another unit of conformance that also provides an API — one available to all other UoC types regardless of segment. So that's a little about the Operating System Segment and the capabilities it provides. Now we've gone down the list of what is within the FACE Technical Standard: your segments, your requirements for designing and implementing units of conformance within those segments, and best practices for how to do so. Here are some differences between the two major editions that we currently maintain. In FACE 2.1, the only optional interface was the Type Abstraction interface for the TSS, and all the APIs were defined as sets of procedural functions. The only expectations on integrators were to resolve linkage with FACE libraries, in addition to tailoring for extra client-specific UoC needs. FACE Technical Standard Edition 3.0 did a bit of redesigning.
One of the things they added is more optional interfaces, one being the life-cycle management interfaces — for initialization, configuration, connecting to a framework, things of that nature. In the TSS, two other capabilities were added, each providing an API: component state persistence, which allows you to store and retrieve data, and the Transport Protocol Module (TPM), which allows movement of data between TSSs residing on different domains or different pieces of hardware. The one exception among these optional interfaces is the TPM, because it is considered intra-segment, where the others are considered inter-segment. The main difference, as the prefix implies, is that an intra-segment API is only called within the segment. Configuration services were officially added to the OSS in Edition 3.0, and they provide an API as well. And the FACE interfaces — the programming language mappings specifically — took on a more object-oriented design: FACE interfaces are no longer procedural. They are defined as sets of abstract interface definitions, where the user or implementer must provide the implementations — the concrete instances of those classes, or functions in the case of C. Along with moving to an object-oriented design, dependency injection was introduced in FACE 3.0: if you are a user of an interface, you are expected to provide an injectable interface so that the integrator can supply you with a concrete instance of that interface to use. Ada 2012 support was also added in Edition 3.0, and the FACE Data Architecture changed things up a little: it introduced a new query language for use in creating models aligned to the FACE Technical Standard.
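To make the dependency-injection change concrete, here is a small sketch in C++. The interface and method names here are my assumptions for illustration — not the standard's actual API. The pattern is what matters: the UoC declares the abstract interface it needs and exposes an injection point, and the integrator wires in a concrete instance before the component is used.

```cpp
#include <cassert>
#include <string>

// Abstract interface the UoC depends on (hypothetical name).
struct LogInterface {
    virtual ~LogInterface() = default;
    virtual void write(const std::string& line) = 0;
};

// The UoC: it never constructs its dependency, it only receives it.
class ReporterUoC {
public:
    void inject_log(LogInterface* log) { log_ = log; }  // injection point
    bool report(const std::string& event) {
        if (log_ == nullptr) return false;  // not yet wired up by the integrator
        log_->write("event: " + event);
        return true;
    }
private:
    LogInterface* log_ = nullptr;
};

// Integrator-supplied concrete instance; counts writes for demonstration.
class CountingLog : public LogInterface {
public:
    void write(const std::string&) override { ++count; }
    int count = 0;
};
```

The design choice this reflects is the one described above: because the UoC holds only the abstract interface, the integrator — not the component author — decides which concrete implementation is used on a given platform.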
Let me talk a little about the FACE Data Architecture. This is only a small piece of what will be covered in the FACE Data Architecture overview, which I believe is tomorrow — so if you have any down-in-the-weeds questions, please save those for the experts who will go through all of this then. They will go into more detail on the hows and whys; this is just to provide more context on what all is in the FACE Technical Standard. The FACE Technical Standard also defines the FACE Data Architecture, which consists of the following elements: one, a data model language; two, a set of data model language bindings that map the data model language elements to each of the supported programming languages, meaning C, C++, Ada, and Java. The FACE Data Architecture also defines a Shared Data Model (SDM), which provides the building blocks of all UoP-Supplied Models as well as domain-specific data models. It also consists of the rules for construction of UoP-Supplied Models, aka USMs, and domain-specific data models, aka DSDMs. A little more about the Shared Data Model: once again, it is the foundation from which all other models aligned to the FACE Technical Standard are created. The USM is intended to extend the SDM, define FACE units of portability, and describe the data that the UoP sends and receives. A DSDM is a way of capturing domain-specific semantics without defining specific software elements (UoPs), but it does define messages as well as the semantics for describing those messages. As for the data architecture products: within the FACE Technical Standard you have the metamodel and OCL constraints, which basically govern how USMs are constructed. You also have the Shared Data Model governance plan, which is a separate document, and the Shared Data Model itself, which is also separately published.
The two examples of products built using everything I previously stated are the UoP-Supplied Model and — the one that is new to FACE 3.0 — the integration model, which is built by system integrators. Here's an in-depth view of a UoP-Supplied Model. The aspects provided by the Shared Data Model are the conceptual data model and the logical data model. What you are expected to provide within the USM, to describe your data and the software applications that will use it, are the platform data model and the unit of portability model. And you're expected to use the USM for things like configuration, code generation, and so on. Like I said, the FACE Data Architecture overview will explain these far better than I can. Here's an overview of the aspects defined at each of those model levels and what you're expected to add in your UoP-Supplied Model. I won't go through all of them, but take the conceptual model, for example: the Shared Data Model defines a list of observables, and in the conceptual model of your USM you're expected to define entities and entity associations whose elements realize those observables. Down at the platform model, the Shared Data Model provides the IDL primitive types, and in your USM you're expected to further expand on your entities and associations, define IDL types that map to the types in your entities and associations, and define views for describing your data messages. Then in the UoP model — and this is solely where the USM comes into play — you define platform-specific components and portable components and assign them ports, depending on whether messages are incoming or outgoing, and each port maps to a view, which is, in a sense, a data message.
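To give a rough feel for views and ports, here is a purely hypothetical sketch of what code generated from a USM might resemble. The names, types, and layout are my assumptions, not the output of any actual FACE tool: the point is only that a view becomes a concrete message type, and each UoP port is typed to exactly one view with a direction.

```cpp
#include <cassert>
#include <cstdint>

// A platform-level view realized as one data message (hypothetical).
struct FuelStateView {
    int32_t tank_id;
    double  quantity_kg;
};

enum class PortDirection { In, Out };

// A UoP port bound to a single view type (hypothetical shape).
template <typename View>
struct Port {
    PortDirection direction;
};

// Ports for a notional portable component, as a UoP model might list them.
struct FuelMonitorPorts {
    Port<FuelStateView> fuel_state_in{PortDirection::In};
    Port<FuelStateView> fuel_state_out{PortDirection::Out};
};
```

Because each port is typed to a view, mismatched message wiring becomes a compile-time error rather than an integration-time surprise — one practical payoff of typing the TS interface to modeled messages.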
So once again, the FACE Data Architecture overview will go into these in a lot more detail and provide better explanations. One popular question that gets tossed around is: what are you expected to do with a USM? Here's how it relates to the FACE Reference Architecture. Your USM generates this interface — the TS interface — because every TS interface is typed to a specific message. Once you're a little more familiar with the standard and get a chance to see some code and types generated off a USM, this will make much more sense; but this is how the USM fits within the FACE Reference Architecture. Here's a notional flow from products to more products: you build your model using your model editing tools, importing the FACE Shared Data Model and producing a UoP-Supplied Model, and you can use other tools to generate code and your TS interfaces, as well as other artifacts if the tool allows it. The FACE Conformance Test Suite is one option out there for generating types and interfaces, and there are third-party tools that do the same thing, plus a little more depending on your needs. So, to summarize everything I've gone over: the FACE Technical Standard defines a reference architecture whose overall goal is to eliminate barriers to portability and encourage software reuse. Within the FACE Technical Standard are the FACE segments, defined to abstract functionality in order to further promote portability, and the standard also defines a set of practices for creating FACE software components. At this time we'll move on to questions — I just saw one pop up in the chat. "Has the process of adding new messages incrementally to a pre-existing FACE environment been streamlined in 3.0?" No, not exactly, because the FACE Technical Standard does not define messages within the standard.
That is handled at the unit of conformance level, and the process for adding messages is relatively the same in 2.1 and 3.0. Does that sort of answer your question? I know there are technical standards out there that do define messages, and with each edition of such a standard more messages get added, but the purpose of the FACE Technical Standard — and more specifically the FACE Data Architecture — is to allow software suppliers and integrators to define their own messages. However, if you define a message, you have to define the semantics that go with it, which further drives the need for a FACE Data Architecture. If we defined all the messages you were allowed to use within FACE, I'm sure we would be adding to the list a lot, but there would also be no need for a FACE Data Architecture. Let's see — a question from Mr. Lenning: "While compilers are generally off-the-shelf items, is the other tooling for using FACE generally available, especially with respect to shared data models and writing code?" Good question. The FACE Shared Data Models are publicly available — there's one for each edition of the FACE Technical Standard, with subsequent releases as well, because the Shared Data Model committee is constantly adding new types and new rules, so it gets updated a lot. There are also third-party tools available. They're listed on the FACE landing page: there's a place for third-party tools that you can click on, and vendors that have elected to have links to their tools hosted on the FACE landing page have their links there. As far as writing code, there is an example of a FACE computing environment that is available to consortium members only right now, and that is the BALSA source distribution. It provides working examples of all the FACE segments with the exception of the OSS.
It doesn't provide a complete OSS, because it's meant to run on Linux and be open source, but it does provide the configuration services interface within the OSS as well as a health monitoring / fault management example. It's available to consortium members, and soon the public — it is currently going through distribution statement review — and it's there to help people gain a better understanding of what they are expected to provide, to assist them in getting started with FACE, and to give them an example environment to use for testing purposes, because all of the UoCs defined within BALSA do work. Did that answer your question, Mr. Lenning? "Yes, thank you." Also, with respect to writing code, you have the reference implementation guide that accompanies each edition of the technical standard; there's an integrator's guide coming; and there's a software supplier getting-started guide out there as well. Okay, well, thank you to everyone who joined — I hope this was informative. It's kind of drinking from a fire hose for those of you who are brand new, but I assure you, as you use the technical standard more, a lot of this will make sense and you will see the value of using the FACE Technical Standard. If you ever require BALSA access, please get in touch with me. Nick just asked if the slides are going to be posted — yes, I'm posting them today, so they will be available on the TWG page. Thank you again, everyone, for joining. I hope you have a wonderful day and that all the follow-up meetings over the next few days are productive.