Hey, really glad to be here today. We have had an interesting journey over the last few years. In our assessment, NAVAIR and the Army are a little bit ahead of the surface Navy in terms of MOSA. And we're right now looking at FACE and the principles contained within it for lessons learned, to start thinking about what we would do for a MOSA approach for ship systems. My contact information is there, but I'm just really grateful for the opportunity to come and present this information to you all.

Just to introduce us, the Naval Surface Warfare Center, Dahlgren Division is one of the warfare centers in the Navy. We are the big participant in the area of warfare system and combat system design. So if you look at Aegis and things like that, all of that in some way is touched by us. We are a research facility, and our job is to be thought leaders in this space and to help the programs of record on the ship side understand what's up and coming and what's ready to incorporate.

We decided to present this work to the consortium because we wanted to let you all know that communities besides the air systems and avionics communities are starting to look at the things you're doing, and there's potential for synergy in the future. We also wanted to present it because we were newbies in the area of MOSA when we began this project. We had only heard of FACE; we didn't even really have the words to express it. So this will give you an idea of what it takes for a group that doesn't have a lot of exposure to the kinds of things you're working with to get up to speed, and some of the challenges folks will encounter as they do so.

So Dahlgren set up this future warfare system architecture project, and we investigated FACE. We did a few other things, but really we wanted to look at FACE for MOSA and see if it could be helpful. The basic premise is that integrating a sensor onto an aircraft really isn't that much different from integrating the same onto a ship, although the things that get integrated on ships tend to be a little bit bigger and more complex in some cases than a GPS sensor.

But we have a tough job over there in warfare system integration. There are a lot of heterogeneous system types that come together, many organizations are involved, and there are many points where data currently converges across the architecture. So the integration of all this is pretty hard. It takes years, quite frankly, to integrate a lot of these systems, and reuse, portability, and interoperability are virtually nonexistent today. So folks have started to think, hey, we need to do a better job of this. Our team was one of the ones leading the way to say we really need a MOSA approach here to improve this situation and start to bend the capability curve higher.

So we looked at FACE; we looked at several standards, quite frankly. And we were really impressed with what we saw at a high level of what FACE is doing. The advertising brochure was pretty compelling, I'll put it that way: a lot of flexibility, decoupling of the systems, portability, reuse, and especially the computer-aided integration aspects that y'all are pushing towards. Really very interesting.
The other piece of it is that if we have systems from multiple domains using the same data architecture and interface governance, there's this idea that we may be able to unify interface management across the entire DoD. A lot of the past efforts we've seen really didn't have the scalability necessary to accomplish that, but we were hearing things about FACE that made us think maybe FACE and its data architecture would start to fill that need.

So our basic hypothesis was that FACE is a good starting point for warfare system MOSA, and we set up an experiment to test that. It was really just a representative integration effort on a small scale, where we wanted to assess how FACE integration went as the system of systems evolved over time. We focused on the area of sensor and track picture integration. The team working this was made up of extremely experienced integrators from the legacy thought process paradigm, the old way we've done integration. The team was full of horror stories about integration done the old way and was very interested in seeing what improvements FACE might bring. And sensor and track picture integration is an area that has been plagued by very difficult challenges in the past, so we picked about the worst-case area to work on to see if we could get some advancement there.

We really wanted to investigate: can we integrate legacy systems that were not originally built to talk to one another, without code change? That is an acquisition goal in most integration scenarios we face in the surface Navy. Usually we will spend ten years developing the technology for a new weapon or a new sensor, and then everybody is all kumbaya, rah-rah, we just finished. Then it's handed over to the integrators, and they say, OK, great, it'll just take us another ten years to get that integrated with our ships' warfare systems. And everybody's like, what? So, can we do better with FACE? If we could integrate those legacy systems without code change to the systems, we believed that would get us a leg up. We also wanted to assess the overall maturity of the FACE approach and what the state of the practice is.

We set this experiment up as five releases to incrementally add capability. We basically built one data model per system and then merged them together as needed for the integration, very similar to what the previous presenter said they did, if I understand them correctly. So each UOP had its own conceptual data model, and then we merged them as part of the integration process. We used FACE Technical Standard 2.1 with Shared Data Model (SDM) 2.1.36, and later we switched to SDM 2.1.39. We used Sparx Enterprise Architect version 14 with the plugin that the consortium built, and we used the Conformance Test Suite (CTS) to check what we had done. And so we wanted to see how things shaped up as we used FACE for something like this.

For release one, we really just wanted to get our feet wet and demonstrate that existing systems can be integrated without code change. We created a simple data model, the conceptual, logical, and platform models with the SDM imports, the whole caboodle, for track display. Then we integrated an existing FACE conformant aircraft position sensor with our track display. So this was modeling an aircraft saying, here I am, here I am; we'd basically put a dot on the screen. We auto-generated the TSS interface code, and we manually wrote the integration code. There was still a manual aspect, mostly because the tools don't exist yet to do a lot of the automatic generation of the integration piece.
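To make release one concrete, here's a minimal sketch of the shape of that integration, with hypothetical names throughout: AircraftPosition and TrackDisplay stand in for types that would come out of the generated TSS interface code, and on_position_report stands in for the hand-written integration glue. This is illustrative, not the project's actual code.

```python
# Hypothetical sketch of the release-one integration: an aircraft position
# sensor UOP publishes position reports, and the track display consumes them.
# Names and the callback wiring are illustrative stand-ins for the real
# generated TSS interface and hand-written glue.
from dataclasses import dataclass

@dataclass
class AircraftPosition:          # shape would be derived from the platform data model
    track_id: int
    latitude_deg: float
    longitude_deg: float
    altitude_m: float

class TrackDisplay:
    """Consumer-side UOP: keeps the latest dot per track."""
    def __init__(self):
        self.dots = {}

    def update(self, pos: AircraftPosition) -> None:
        # "Put a dot on the screen": store the latest report per track id.
        self.dots[pos.track_id] = (pos.latitude_deg, pos.longitude_deg)

# Hand-written integration piece: wire the producer-side callback to the
# consumer. In our experiment this wiring was the manual part; the message
# types above are what the auto-generated interface code would provide.
display = TrackDisplay()

def on_position_report(pos: AircraftPosition) -> None:
    display.update(pos)

# Simulate one report arriving over the transport:
on_position_report(AircraftPosition(42, 38.34, -77.03, 3000.0))
```

The point for us was that the message types fall out of the data model; only the wiring at the bottom had to be written by hand.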
We really enjoyed the precision that FACE data modeling brought. We really felt it forced us to build cleaner and more thoroughly documented interfaces. As part of that, of course, we actually had to go back to the SMEs who built the systems we were integrating and ask, hey, is this what you really meant by this piece of data? Because the source documentation we had to rely on was not always as thorough as it needed to be to populate a full data model.

It was really hard, though, I'll tell you, to build out the logical and platform models and views by hand. So we're really hoping that tool sets will come along and start to make that a lot easier. We also found that the EA plugin will allow export of non-conformant models. We didn't realize this at first, so we thought we were good and proceeded onward. Then we ran the CTS checks a little later than we probably should have and found out we didn't even have a valid model, so we had to go back and redo a bunch of stuff several times to get that fixed.

We also found that model quality matters when merging models. Different teams will apply different thought processes in terms of how to represent the semantics, and if those don't match, it becomes very difficult to do the merge and still comply with a lot of the constraints that exist. And we saw cases where we really needed to do cross-observable mediation, which is not something FACE currently supports: duration versus calendar time, and name versus unique ID (there's a small sketch of the duration case below). When we really dug into why this was happening, we thought it came down to confusion over either the definition or the usage of some of these pieces of information. One of the cases was later resolved by an SDM change; we submitted a CR against the other one, hoping for a change to the SDM measurement to resolve some of that confusion.

So if you're a tool vendor, listen up: you're going to need support for embedded conformance checks within your tool. That is a huge competitive advantage if your tool can do that. We also saw that there will probably always be a need for some kind of manual data model mapping. I know that FACE is trying to head towards full automation of integration, but we really believe there's always going to be a need for some kind of manual override when nuanced situations arise. And we think the FACE community should come together and standardize patterns and rules for model construction, so that models end up in the same basic shape as they're built, because that really makes the integration piece easier.
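As a flavor of the duration-versus-calendar-time case mentioned above, here's a minimal sketch of the kind of hand-written mediation this forced on us. The function and field names are hypothetical, and the real mediation sat between two modeled interfaces rather than two bare timestamps.

```python
# Hypothetical sketch of cross-observable mediation: one system reports a
# track's freshness as a calendar timestamp ("time of last update"), while
# the consumer expects a duration ("seconds since last update"). FACE data
# modeling treats these as different observables, so the conversion had to
# live in hand-written mediation code.
from datetime import datetime, timedelta, timezone

def age_seconds(time_of_update: datetime, now: datetime) -> float:
    """Mediate a calendar-time observable into a duration observable."""
    return (now - time_of_update).total_seconds()

now = datetime.now(timezone.utc)
report_time = now - timedelta(seconds=2.5)   # report stamped 2.5 s ago
assert abs(age_seconds(report_time, now) - 2.5) < 1e-6
```

The name-versus-unique-ID case was the same kind of thing: a trivial conversion in code, but one the model alone couldn't express.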
For release two, we're modeling the system of systems as it expands and grows over time. We wanted to expand both the width and the depth of the integration. So we incorporated a ship-mounted battlespace sensor, and we expanded the UOP definition of what our track is, because now we had a source that could actually provide better information for tracks. We wanted to maintain the previous integration, and we also did an SDM swap-out manually. That was interesting: as you upgrade between SDM versions, what's it going to take, and what are the impacts as these things shift and change? The data models will shift and change over time. We were also incorporating some information from interfaces that we didn't end up integrating, because they were interesting challenges in the area of semantics. We found some tough interfaces just to represent and play with and ask, hey, can we represent these semantics in the form of a semantic data model?

Release two was completed, though we modified our scope in the end. It was really quite difficult working with the data models in Enterprise Architect, so we ended up having to down-scope a bit just to be able to cut some software at the end of the day. We also had to bring in an ownship navigation sensor later, because we discovered a dependency we hadn't realized before. So we actually integrated two new sensors, not just one, which had an impact too.

We found some interesting things. Upgrading from SDM 2.1.36 to 2.1.39 was pretty straightforward; doing it manually was pretty intensive as far as the number of mouse clicks, but it was straightforward. We had a lot of trouble with generalization and specialization relationships as we were merging these models, because of the requirement in FACE 2.1 to replicate associations. When you replicate an association, just because it's a rule that you have to, you immediately run into a bunch of uniqueness checks, and we were failing the automated checks because of it. I think some of that has been improved in FACE 3.0. We haven't really looked at FACE 3.0 yet, because the state of the practice is FACE 2.1, really because of the tool sets. That's one thing we'd emphasize, and you'll probably hear me say it multiple times: there's the state of the art that y'all are working on, but the state of the practice is well behind that, because when the modeling formats and the rules and guidance for construction change, you have to give the tool vendors time to catch up. So between what you want to do and what you can do today, there's a bit of a mismatch, in our opinion.

And we did find that systems can be integrated without code change, so long as the producing interface's conceptual content and its performance parameters are greater than or equal to those of the consuming interface. That was a big win, and it makes a whole lot of sense. We were also starting to see some tools emerge throughout this process. We've been keeping tabs on a couple of tool vendors building tools here, even though we were using EA for this effort. We think there are tools coming online soon, or perhaps already here, that give us the ability to query for those comparisons, so we can quickly answer: is the data sufficient or not? That's very useful in integration in general. Right now you've got to compare PDFs that are a thousand pages each by hand, or keep it all in your head, and that's just very difficult to do. Doing that in a model-driven fashion, we believe, is very useful.
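That producer-greater-than-or-equal-to-consumer condition is exactly the kind of query we'd like tools to answer from the models. Here's a minimal sketch of the sufficiency check, assuming a much-simplified summary of what a platform data model carries; InterfaceSummary and its three fields are invented for illustration, and a real tool would walk the actual model.

```python
# Hypothetical sketch of the "can these integrate without code change?" query:
# a producer is sufficient for a consumer when it offers at least the
# conceptual content the consumer needs, at performance parameters that meet
# or beat the consumer's requirements.
from dataclasses import dataclass

@dataclass(frozen=True)
class InterfaceSummary:
    entities: frozenset[str]   # conceptual content carried by the interface
    rate_hz: float             # update rate offered / required
    latency_ms: float          # worst-case latency offered / tolerated

def producer_sufficient(producer: InterfaceSummary,
                        consumer: InterfaceSummary) -> bool:
    return (consumer.entities <= producer.entities           # content superset
            and producer.rate_hz >= consumer.rate_hz         # at least as fast
            and producer.latency_ms <= consumer.latency_ms)  # at most as late

sensor = InterfaceSummary(frozenset({"Track", "Position", "Identity"}), 10.0, 50.0)
display = InterfaceSummary(frozenset({"Track", "Position"}), 1.0, 200.0)
assert producer_sufficient(sensor, display)  # integrate without code change
```

The comparison itself is trivial; the hard part, and the reason we want it model-driven, is getting trustworthy summaries like these out of thousand-page interface documents.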
Another thing we saw is that the data model names were ending up in the generated interface code. As we refactored our conceptual models and flowed the naming convention through at various points, conceptual, logical, platform, some of those elements were being picked up, and you'd end up with a name in the code itself, and in the documentation of the code, that no longer matched the data model because we had just done a refactor. Even though the code would still technically work, you end up with confusion between the software engineer who owns the UOP code and the conceptual modeler who kind of owns that integration space, because now they're not talking in the same terms. So we would recommend finding some way to decouple the data model names from the interface code. We're not sure what that would look like, but it would be helpful.

We did see that the CTS caught a number of reasonable overlaps as we began to merge these models. Now, it just complained and said, hey, you've got a uniqueness issue here. But the cool part is that that's actually the first data point you need in order to integrate two data models together. The fact that the CTS can catch those overlaps and complain about them is the beginning point for tools to be able to help us manage these merges.

So we had other releases; we had planned for five. Quite frankly, we said, we're not going to make it. If we had been doing the integration the legacy way, we would have made it through all five releases, but because of the problems we were encountering with the data modeling, namely that it was just so many mouse clicks in EA to get from the model to the code, we didn't think we'd make it. We also felt we had exercised as much as we could of the software generation from the data model; doing that extra bit just to have a more capable system wasn't going to teach us much, and it was a research project. So we said, you know what, we're not going to cut code anymore from this. We're going to stay at the conceptual level and keep assessing whether we can represent these semantics in the form of a data model.

Through that process we had some questions arise about how to model certain complex semantics. There's still a debate going on within the FACE community about how that takes place, and we're interested in seeing how some of those things shake out. It seems like in a few cases we sacrificed human readability for machine readability. We really need both, though, because it's humans that are building the data models, and humans that are writing the code that gets cut from the data model, or writing code to match the UOP interface that gets cut from the data model. So we really need to make sure we balance human readability with machine readability as we do this.

There were certain patterns we were using, patterns we'd actually heard consortium members using, that were failing uniqueness checks. I believe this has been fixed now; there was a CR, and I can't recall the exact details, but I believe it has been fixed. It was an interesting data point, though, because it showed there's just a lot of discussion, a kind of emerging art here, about how to build and use these data models. And converting legacy ICDs to data-centric interfaces is hard (there's a small sketch of that below); this goes to what the previous presenter said, that systems are not built to do this today. I see I'm running out of time, so I'm going to skip through a little bit more of this.
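To show why that legacy-ICD conversion is painful, here's a minimal, hypothetical sketch: a bit-packed legacy record whose field order, scale factors, and units live only in prose documentation, unpacked into the named, unit-bearing fields a data-centric interface wants. The layout is invented for illustration, not taken from any real ICD.

```python
# Hypothetical sketch of converting a legacy ICD message to a data-centric
# interface. In the legacy world, this 8-byte record's meaning (field order,
# scale factors, units) lives only in the ICD document; the data-centric
# version makes the semantics explicit.
import struct

LEGACY_FORMAT = ">HhhH"  # track number, scaled lat, scaled lon, scaled speed

def decode_legacy_track(payload: bytes) -> dict:
    track_no, lat_raw, lon_raw, speed_raw = struct.unpack(LEGACY_FORMAT, payload)
    return {
        "track_number": track_no,
        "latitude_deg": lat_raw * (90.0 / 32767),    # scale factor from the ICD prose
        "longitude_deg": lon_raw * (180.0 / 32767),
        "speed_m_per_s": speed_raw * 0.1,            # ICD says tenths of m/s
    }

payload = struct.pack(">HhhH", 7, 13959, -14023, 120)
print(decode_legacy_track(payload))
```

Every scale factor and unit in a translation like that is a question for an SME when the source documentation is thin, which is exactly where a lot of our time went.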
Basically, we need better data modeling guidance throughout. So in a nutshell, my summary is: we really like FACE. It shows a lot of promise, and we recognize now that this is an emerging practice surrounding an emerging state of the art. We really like the reference architecture and the mediation approach; we really do believe it brings this decoupling of the systems, and we like that it plans ahead for integration later. And semantic data modeling, we do believe, has the potential to unify interface management across the DoD, but it's got to get easier to do. We're hoping tools will come forward that make it easier to do all the different kinds of things you heard we had to do as we managed this evolution of a small system of systems over time. We're going to have to do it at a much larger scale to continue that activity forward.

So we want to stay involved here; we want to keep watching FACE as it emerges. We pursued some partnerships. We set up a cooperative research and development agreement with Skayl, for example, to get our hands on their tool for free and watch how things are evolving. It was a special favor they did for us. And we're also interested in efforts to build out some of the key reusable components of these data models. If you're a tool vendor, or if you're somebody interested in expanding this, or in seeing the expansion to domains besides air systems, we could set up a little experiment to assess things, and at the end of the day hopefully have some reusable components that we could perhaps stick back in the SDM. Because there is a big upfront cost to building the data models. A lot of that has been tackled already in the air domain, but there were a lot of measurements and things like that that we needed to build that were specific to the language that ships speak. So adoption at this point would be a little bit costly, and there are some long-lead items to work through. Little experiments like that can maybe bite away at that as we look at this moving forward. So that's all I got.