Thanks to Sean and Ken and the TEF staff. Thank you very much; we appreciate the opportunity to present. This is Shama Holland, and my co-presenter is Ken Erickson; he'll jump in in the middle. The title of our paper is Advances in Applying a Model-Based Modular Open Systems Approach, or MAMOSA for short, to Hardware and Software Verification and Conformance.

Since these are short presentations, we'll talk very briefly about the promise of MOSA, some of the primary challenges of applying MAMOSA to verification and conformance, the methods to mitigate those challenges, and some of the past projects. Stephen Seamy actually talked a little bit about those in an earlier presentation, if you were on that.

So, what is the MOSA promise? It's really about utilizing best-of-breed technologies: taking a divide-and-conquer approach to building a system, pulling from different components, integrating those, and being able to deploy that system. What can it do? It helps us build a more complex, more functional, more capable system; the idea is that we can bring these best-of-breed technologies together and enhance our overall system capability. The other promise is reduced cost and, hopefully, reduced schedule. One of the interesting problems is the complexity that arises from those disparate components. When you specify every single piece of a component yourself, you can make sure the integration points are simple; but when you specify or purchase different existing components, you run into complications.

When we look at the technologies currently out there that we would apply to, or utilize for, MOSA, we see the Air Force's efforts on SOSA; the Navy's efforts on HOST, for which we did a Phase II SBIR doing conformance verification against the HOST hardware standard; CMOSS, which we also looked at as part of our requirements work, along with SOSA; and then FACE, which we have been a big part of for a long time as well. So, just so you know, we participate in all of these standards efforts.

One thing I found compelling is this graphic from GTRI (thanks to them). It looks at the different upgrade iterations for an aircraft, how it upgrades every so many years, and then applies that to enclosures and faceplates and what their upgrade cycles are, versus your processing or your SBC, where upgrades happen quite frequently, while graphics cards are upgraded much less often. So one of the real challenges you run into is that you have these different technology upgrade cycles, and when you look at software, as we see on the bottom of the graphic, we typically upgrade it even more often. You have to deal with these changes in your hardware and software across the different technology cycles and still verify that you have a solid foundation from which to build your overall system. That is a very large challenge when we look at those upgrade cycles while maintaining a solid standards foundation. With that, I'll turn it over to Ken Erickson. Hello.
So, some of the primary challenges we have found with MOSA verification and conformance. There is a large number of possible test configurations for hardware and software, which can lead to quite a bit of need for hardware and for different software environments to test all of those. There is a lack of comprehensive verification and conformance environments: looking at something like HOST, there are basically two vendor solutions, from the companies that did the Phase II SBIRs, and we are one of the companies that did a Phase II as well. Looking at something like SOSA, there aren't any yet, and I think the two companies that did the SBIRs are probably positioned well to take that on, but even so, SOSA is a much bigger task than HOST has been. The testing program's needs will often conflict with the technical standards and their requirements. There is a lack of, or limited, ease of access to the test data and the conformance results; those can live in many different programs, on different hardware, and so on, and all of it has to be brought together into one place. There is a large number of incompatible tools used by different organizations, and even within a single organization: something like National Instruments tooling might be used alongside in-house custom test tools, and all of those results have to be brought together. Many times there is a mismatch between the tools and the standards; the tooling may be written for general testing, while the standards have different requirements, so it has to be customized in many cases. And we have also run into cases where different editions and versions of the open standards conflict with each other, or are different enough that the components don't necessarily conform to one another.

We've categorized all of these challenges into five bullets. Ambiguous requirements can lead to a lot of problems; the goal there is to automate the processing of those requirements and understand them programmatically. Traceability is hugely important for determining your coverage for conformance: in something like the HOST standard there is a Tier 1, a Tier 2, and a Tier 3. Tier 3 is essentially the specification for the component, which has to trace back into Tier 2; Tier 2 traces back into Tier 1; and all three of those can trace up to dozens, 30 or 40, of other standards such as IEEE and VITA. There are the toolchain differences, which lead to many problems for verification of conformance, and then there are conflicting requirements among the different components. And when you start bringing in requirements from electrical, mechanical, power, cooling, software, and integration, from all of these different disciplines, they all have to be brought together into a single tool, essentially, to really get the conformance coverage you need.

So now we move on to the methods we've determined to mitigate some of these challenges for verification of conformance. We developed a tool, now over 10 years in development, called ASM, and that is our MAMOSA approach. It does things like clarify the ambiguous requirements: we look at the requirements in the standard and can automate their processing so that it requires less human interaction, and we can deconflict requirements among the different standards. We have full traceability; we bring all of the documents into the ASM tool, so we'll bring in the Tier 1 through Tier 2 and all of the Tier 3 standards.
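As a concrete illustration of the tiered traceability and coverage idea described above, here is a minimal Python sketch. It is purely hypothetical: the requirement ids, the data model, and the coverage rule are invented for illustration and are not taken from ASM or the HOST standard. It simply links Tier 3 requirements up through Tier 2 and Tier 1 and flags anything that lacks an upward trace or a verifying test.

```python
# Illustrative only: a hypothetical model of tiered requirement traceability
# (Tier 3 -> Tier 2 -> Tier 1) and a simple coverage check. Not the ASM data model.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    tier: int                                        # 1, 2, or 3
    traces_to: list = field(default_factory=list)    # ids of parent requirements
    tests: list = field(default_factory=list)        # ids of tests that verify it

def trace_and_test_coverage(reqs):
    """Report Tier 3 requirements that lack an upward trace to Tier 1 or a verifying test."""
    by_id = {r.req_id: r for r in reqs}

    def reaches_tier1(req, seen=None):
        seen = seen or set()
        if req.tier == 1:
            return True
        for parent_id in req.traces_to:
            parent = by_id.get(parent_id)
            if parent and parent.req_id not in seen:
                seen.add(parent.req_id)
                if reaches_tier1(parent, seen):
                    return True
        return False

    tier3 = [r for r in reqs if r.tier == 3]
    untraced = [r.req_id for r in tier3 if not reaches_tier1(r)]
    untested = [r.req_id for r in tier3 if not r.tests]
    covered = len(tier3) - len(set(untraced) | set(untested))
    return {"tier3_total": len(tier3), "covered": covered,
            "missing_trace": untraced, "missing_test": untested}

# Example with invented requirement ids:
reqs = [
    Requirement("T1-001", 1),
    Requirement("T2-010", 2, traces_to=["T1-001"]),
    Requirement("T3-100", 3, traces_to=["T2-010"], tests=["TC-42"]),
    Requirement("T3-101", 3),   # neither traced nor tested -> flagged
]
print(trace_and_test_coverage(reqs))
```

In a real effort the same check would run over thousands of imported requirements rather than a hand-built list, but the principle (every component-level requirement must trace upward and map to at least one test) is the same.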
We'll import the IEEE reference standards, the VITA standards, POSIX, and anything else, then trace all of those requirements up to them, develop tests against each of those requirements, and determine full coverage for conformance. The tooling also lets us centralize all of the information we need, whether it's requirements or even things like design and test, into one place. It handles multidisciplinary technical data very well, too.

Some of the projects where we have succeeded using ASM for process and toolchain include the HOST SBIR, where we leveraged the ASM MAMOSA approach to produce a conformance test suite called Harmony, which is essentially a subset of ASM's capabilities. It allows us to do start-to-finish management of conformance, with full traceability among the Tier 1, Tier 2, and Tier 3 standards and all of the other referenced standards. We're following the FACE conformance twofold approach, where there is a conformance verification matrix along with an automated test suite, and with the tooling we bring all of that information together and determine conformance. It handles, as I mentioned before, the document management of all the different standards and the references therein, and traceability to all of those standards. We also have both internal and external test capabilities: we can integrate with external tooling, and we have our own internal test capabilities in C, C++, Python, and Java, any of those types of tests we can integrate with. Just as an example, in the metrics for our HOST conformance SBIR, we were looking at almost 11,000 requirements, we brought in 57 documents, we run roughly 300 tests per device, and we tested 60 different cards.

Another project where we've used ASM, and it is actually still in progress, is the aviation architecture environment exploitation effort: a radio control management software application that will be deployed on the UH-60M. We're going through full qualification, to DO-178 DAL C. We're using ASM for all of the requirements development, the design, the development, the test, and all of the traceability. It will be five Units of Conformance for FACE verification and roughly 120,000 source lines of code.

Our big focus is making sure it's a holistic approach to conformance. The real key is being able to pull in all of your requirements and reporting, and being able to manage your hardware requirements across the different disciplines as well as the software. The focus is making sure it is all conformant, integrated, and a complete solution. One of the real challenges, as we said, is that having different toolchains causes a real issue in managing it all; but that's not to say we don't leverage those tools as well. There are different tools, and we are able to access them and pull the data in. We have integrations, of course, with DOORS and various requirements tools, and we like ReqIF, so we're able to pull all of the requirements in. We import the documents in their native format, Word or PDF, and extract all of the requirements, which really allows us to deconflict and manage all of that detail. That is one of the keys of ASM and its offshoot, Harmony, for the conformance testing approach: it makes sure we can manage the entire process. And it's also a commercially available toolchain that others can utilize for this effort.
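To make the "conformance verification matrix plus automated test suite" idea more concrete, here is a small, hedged Python sketch. It is not Harmony or the FACE Conformance Test Suite, and the requirement and test ids are invented; it only shows the shape of the approach: map each requirement to the tests meant to verify it, fold in test results (for example, imported from external tooling), and write one consolidated matrix.

```python
# A minimal sketch (not Harmony or the FACE CTS) of pairing a conformance
# verification matrix with automated test results. All ids are invented.
import csv

# Requirement id -> the test(s) planned to verify it (the verification matrix).
cvm = {
    "HOST-3-001": ["TST-PWR-01"],
    "HOST-3-002": ["TST-THERM-01", "TST-THERM-02"],
    "HOST-3-003": [],                     # no test planned yet
}

# Results collected from internal or external test tooling (e.g. imported logs).
results = {"TST-PWR-01": "PASS", "TST-THERM-01": "PASS", "TST-THERM-02": "FAIL"}

def requirement_verdict(test_ids):
    """A requirement passes only if every mapped test passed."""
    if not test_ids:
        return "NOT COVERED"
    verdicts = [results.get(t, "NOT RUN") for t in test_ids]
    if all(v == "PASS" for v in verdicts):
        return "PASS"
    return "FAIL" if "FAIL" in verdicts else "NOT RUN"

# Emit one consolidated matrix: requirement, mapped tests, and overall verdict.
with open("conformance_matrix.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Requirement", "Tests", "Verdict"])
    for req_id, test_ids in sorted(cvm.items()):
        writer.writerow([req_id, ";".join(test_ids), requirement_verdict(test_ids)])
```

The design choice worth noting is that the matrix and the results are kept as separate inputs and only joined at reporting time, which is what lets results from many different tools and labs be merged into a single conformance picture.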
I will say that when you go to hardware testing, the cost of the hardware test stations can be somewhat significant, so the key is most likely to have certain labs that are able to do that kind of verification or development testing.

So ultimately, the tagline is that this is a feasible approach. We looked at this very hard and considered how to do it soup to nuts: that holistic approach of managing the complex requirements of these somewhat disparate standards, and how they build on each other as well as overlap. It's really key to make sure it all works and really provides that solid foundation on which systems can be built and verified. That is the end of our presentation. Any questions?

Okay, thanks. There is one question I see here: when you say Harmony, you mean the test process, not the IBM Harmony process, right? Correct. Harmony is a product solution built as part of the ASM product suite, so it's ASM Harmony.

Okay, one other question: how do you handle versioning issues? If two systems implement different versions, or one system wants to update its version and field names change between the versions, they may lose their interoperability with each other. That may have been a question for the previous presentation, but I'd like to take it anyway, because this is actually something we considered. We do have a server component, a cloud-based component, as part of ASM and Harmony that allows us to manage multiple vendors and multiple card solutions, as well as the test suites and standards. The idea is that it has versioning built in within the cloud database system, so all of your requirements, design, test cases, procedures, and results are versioned; it's all built into the tool and the process that goes along with it.

Great, thank you. I don't see any other questions coming through, but folks, if you do have questions later on, please email them to us and we will definitely get back to you. Thank you again, Ken. Great presentation.
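For readers who want a picture of what that versioning answer means in practice, here is a toy Python sketch. It is not the ASM or Harmony cloud database; the class, method names, and example records are invented. It only illustrates keeping every edition of an artifact (a requirement, test case, procedure, or result) retrievable by version so that configurations built against an earlier edition remain reproducible.

```python
# Illustrative only: a toy, in-memory versioned store for conformance artifacts.
# Not the ASM/Harmony cloud database; just a sketch of versioning by artifact id.
from collections import defaultdict

class VersionedStore:
    def __init__(self):
        self._history = defaultdict(list)   # artifact id -> list of saved versions

    def save(self, artifact_id, content):
        """Append a new immutable version and return its version number (1-based)."""
        self._history[artifact_id].append(content)
        return len(self._history[artifact_id])

    def get(self, artifact_id, version=None):
        """Fetch a specific version, or the latest if none is given."""
        versions = self._history[artifact_id]
        if not versions:
            raise KeyError(artifact_id)
        return versions[-1] if version is None else versions[version - 1]

store = VersionedStore()
store.save("REQ-PWR-12", {"text": "Card shall draw <= 45 W", "standard": "edition X"})
store.save("REQ-PWR-12", {"text": "Card shall draw <= 40 W", "standard": "edition Y"})
print(store.get("REQ-PWR-12", version=1))   # the earlier edition stays retrievable
print(store.get("REQ-PWR-12"))              # the latest version
```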