Hello, I'm Steve Nunn, President and CEO of The Open Group. Welcome to Toolkit Tuesday, where we highlight the various components and leading experts of the Architects Toolkit, a collated portfolio of the most pertinent technology standards for enterprise architects. During the series, I'll be calling on a number of recognized experts who will bring their particular insights on how to most effectively use the various tools in the Architects Toolkit. We'll have a mix of interviews, panel sessions, and pre-recorded presentations along the way. While all standards of The Open Group are designed so they can be adopted independently of one another, the greatest value for an organization is derived when they're used in unison: the whole should be greater than the sum of the parts. In the Architects Toolkit, we have collated a portfolio of the most pertinent ones for architects, all in one place. For most of these tools, certification from The Open Group is also available, so practitioners can demonstrate that they have the skills required and recruiters can take the guesswork out of the recruitment process, all backed up by our Open Badges program. For architects, the most important thing to do in the architectural cycle is think. And not just up front, but all the way around and all the way through. Why? So that we as architects genuinely understand. For me, this starts fundamentally with the context. Do we truly understand what we're looking at and what the ask is? To answer that as an architect means exploring and challenging our own understanding of what the situation is and what we've been asked to look at. Any options that we consider, any decisions we make from then on are reliant upon that understanding. So how can you test your understanding? Well, one way is: could you explain it to many different audiences? The client, the customer, the development team, the executive, of course. But what about your best friend?
A child or an adult in your family? Hello everyone, and welcome to this week's Toolkit Tuesday. Great to have you here, wherever you are in the world. I can see the chat function is already getting going with people telling us where you are, which those of you who are regulars will know we love to do, so please keep that going. It's a special Tuesday today, isn't it? Wherever you are in the world, apart from those of us who've already moved on to the 23rd of February, it's either 22/2/22 or 2/22/22, depending on where you are. One of those special days, date-wise. But glad that you're spending some of yours with us today on Toolkit Tuesday. We'll kick off shortly with our main presentation, but just before we do, for those of you who are not familiar with the WebEx tool or with our Toolkit Tuesdays, the way we'd ask you to put questions to our main speaker today is through the Q&A channel, not the chat channel. So keep the chat going amongst yourselves for whatever you want to communicate with your fellow participants today, but the Q&A channel is the one where we will look first for questions for the speaker. And if you can't see the Q&A channel, click the three dots in the bottom right-hand corner of your screen, you'll see the option for Q&A, and click on that and you'll have it there. So please ask questions that way. And without further ado, I'll move on, except to thank Paul Holman of IBM, one of our regulars, one of our experts on Toolkit Tuesday, for stressing the importance of thinking, particularly as an architect. We all need to make time for thinking, don't we? When we're bombarded with emails and Slack messages and other kinds of messages, it's sometimes hard to find the time. So a great reminder, Paul, thank you. So on to the main presentation today, and I'm delighted to welcome to Toolkit Tuesday Dr. Stephen Davidson.
He's Chief Scientist for Systems Architecture at the MITRE Corporation and has been the Chief Architect on MOSA-based projects relevant to the US Air Force, the Navy, and the Office of the Under Secretary of Defense. Steve has been called one of the founding fathers of SOSA, which is the Sensor Open Systems Architecture, and served as the Chair of its Architecture Working Group and as Vice-Chair of the SOSA Consortium Steering Committee. He was also deeply involved in The Open Group's FACE Consortium and served as the Chair of its Enterprise Architecture Steering Committee for several years. Steve has also helped several other open systems architectures, including DirectNet, OMS, and VICTORY. Today, Steve is going to talk to us about MOSA, explain what that is, and take us through a best practice for developing MOSA-based reference architectures. So a warm Toolkit Tuesday welcome from The Open Group for Dr. Steve Davidson. Welcome, Steve. Great to see you again. Thank you. It's good to see you too, Steve. So I'm going to be talking to you about a best practice for developing MOSA-based reference architectures. First, I'm going to explain that a reference architecture is absolutely crucial to achieving the goals of the modular open systems approach, which is what MOSA stands for, and most open system architectures are actually reference architectures. As Steve mentioned, I've been witness to and involved with the development of many open system architectures, and, as you might expect, different teams creating these open system architectures have taken very different approaches, and often a few repeated mistakes have been made. So part of my reason for giving this presentation is to help future reference architecture developers avoid making certain very hard-to-recover-from mistakes, in particular, hacking out interfaces and defining messages way, way too early in the process. So I'm going to begin by defining MOSA.
This is not my definition. This is the first and, I'd say, best formal definition of it. You can read it for yourself; I'm just going to boil it down to its essence. MOSA is about modules that encapsulate functionality and the interfaces that connect them. So think of them as building blocks and the glue that binds them, or bricks and mortar. MOSA started initially as a way of getting the U.S. government out of a mode of buying monolithic systems, which were hard to upgrade incrementally. It's very appealing because, through modularization and abstraction, it allows you to make modifications or upgrades to a system without impacting the rest of the system. That is, you can replace an individual element or module with minimal or no alteration to the rest of the system. You can replace that incandescent light bulb with an LED light bulb without having to replace or change the lamp in any way, shape, or form. As I mentioned, MOSA was originally intended to benefit the U.S. government, but it turns out to be a win-win for both the procurer (government acquisition) and the supplier (developers, contractors, vendors). For the procurer, MOSA enables technology refresh, it can be used to extend the life cycle of the system, and it is also useful for creating a more competitive landscape. From the producer or developer perspective, it helps with strategic sourcing. It reduces the risk that R&D investments will go in the wrong direction, because you have a template, the open system architecture, to work with. It also creates a more egalitarian environment, in that developers have a seat at the table as open system architectures are defined; open system architectures are typically defined through either industry consortia or government-industry collaboration. MOSA as a concept had been kicking around for a very long time, and there were a number of attempts at getting traction, but adoption was slow.
It involved changes to the way systems were procured and defined, and plain inertia held things up, so all of these efforts to drive MOSA adoption didn't go anywhere. The U.S. Congress got tired of waiting and essentially baked it into law, that is, the National Defense Authorization Act, which funds the U.S. Defense Department. It essentially said that all systems undergoing certain developmental steps, at the time only major programs, had to be aligned with the modular open systems approach as of January 1st, 2019. Now, right after that law went into effect, the secretaries of the Army, Navy, and Air Force all got together and produced a joint memorandum that said, you're going to do it. Shortly after that, the Army, Air Force, et cetera, issued their own memoranda and made it very clear that MOSA was not optional. So I mentioned before that MOSA is about modules and the interfaces. This would be, for example, the diagram of a MOSA-oriented system. You have a set of modules, and these modules are connected through key, or well-defined, interfaces, such that module one in this particular picture could be replaced by a third party, as long as its functionality and interfaces align with the standard definition. So a module encapsulates functionality and exhibits behaviors at the interfaces. The interfaces are the physical and logical touch points. They are where signals or information are exchanged; it could be a mechanical, electrical, or thermal connection, all of which are interfaces. And I should point out that in many communities the word module is synonymous with a circuit card. A module doesn't have to be a physical entity. It could also be a chunk of software. As long as it's encapsulating functionality and has well-defined interfaces, it is a module. The gray-box concept is fundamental to the adoption of MOSA and the win-win for industry as well as government.
In a nutshell, it says that while what a module does (the functions it encapsulates), its interfaces, and how it appears at those interfaces are fully defined, what goes on inside, that is, how those functions are instantiated, is completely up to the developer and is part of the developer's intellectual property. It's nobody's business but the developer's. So it's not a black box, it's not a white box or a clear box, it's a gray box. So, for example, the architecture can specify that a light bulb produces a certain type of light when powered on and screws into a certain kind of socket. How that light is generated is the choice and purview of the developer. And if an organization invests their R&D money and develops a better light bulb, they can do that without fear of intellectual property theft, and they can reap the reward, the return on their R&D investment. I should also give you a little bit of background on reference architectures. Reference architectures define, constrain, and guide the development of instance architectures. A reference architecture is typically very broadly defined. An instance architecture is unique to a particular environment, let's say air, land, or sea, but a reference architecture identifies those common elements across all of those instance architectures. And then from those instance architectures, you may have specific designs, and one design might be selected by an acquisition organization, and that becomes the basis for a contractor developing that particular system. The key aspect of the reference architecture is that, and I have my little laser pointer here, so, for example, this instance design and this instance design, where this one might be for the Navy and this one might be for the Air Force, have lineage and connection. There are aspects, architecture features, that this one has in common with that one, such that, for example, the light bulb used in this one can be transferred and used in that one.
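The gray-box and light-bulb ideas map naturally onto interface-based programming. Here is a minimal Python sketch of the concept; every name and value in it is illustrative, not something defined in the talk or by MOSA itself:

```python
from abc import ABC, abstractmethod

class LightBulb(ABC):
    """The architecture's view of a bulb: what it does and how it
    appears at its interfaces, never how it works inside."""

    SOCKET = "E26"  # the physical interface (illustrative value)

    @abstractmethod
    def lumens(self) -> int:
        """Behavior exhibited at the interface."""

class IncandescentBulb(LightBulb):
    # HOW the light is generated is the developer's IP.
    def lumens(self) -> int:
        return 800

class LedBulb(LightBulb):
    # A drop-in replacement: same interface, different internals.
    def lumens(self) -> int:
        return 800

def lamp_reading(bulb: LightBulb) -> str:
    # The lamp depends only on the interface, so swapping the
    # incandescent bulb for an LED changes nothing here.
    return f"{bulb.lumens()} lm via {bulb.SOCKET}"
```

The lamp code never touches a concrete bulb class, which is the gray box in miniature: the interface and behavior are public, the implementation stays private.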
So you can have common sourcing of parts and common suppliers, but it also opens up the environment to third parties. So the reference architecture is defined around a set of domain-wide needs, the objectives of the reference architecture, and we're going to cover that in just a moment. Now, it's important to recognize that the reference architecture has to be treated as a superset architecture. What I mean by that is there may be a function in this instance architecture that finds its way into this instance design but isn't needed in that instance architecture and that instance design; the reference architecture, though, has to accommodate all of it. One way of looking at that, as an example: my car is not a convertible. That is, it doesn't have a cloth roof that retracts so I can enjoy the sunshine on a nice sunny day. But the dashboard of my car has a place where, had it been a convertible, the buttons would have been; right now it's just blanked off. So the dashboard for that car is a superset dashboard that accommodates both the convertible version and the hardtop version. In the same way, the reference architecture has to accommodate all functionality that could be instantiated, but some functionality is optional and doesn't have to be used in particular environments. Now, I mentioned that my reason for giving this webinar is that there are pitfalls that people often fall into. Two of them are listed here on the screen: defining interfaces before you clearly understand what's at either end of the connection, and going into messages without having an underlying data model. If you create an architecture with a pile of messages and there's no organizing principle or data model behind it, it quickly gets out of control.
And if you define interfaces before you have a clear idea of what's at either end of the connection, you're not necessarily creating an architecture that's responsive to the needs for which the architecture was defined in the first place. So the process, this best practice that I'm going to describe, is illustrated here. We often call this the blue box process, and it consists of three phases. The first phase is establishing the foundation, and I'm going to deep-dive into each of these in a moment. The second phase is defining the modules in a way that is aligned with the goals and expectations of the foundation. And third in the process is defining the interfaces. The red text on the right-hand side of your screen says it all: interfaces exist in service of the modules, not the other way around. So let's talk about the foundation. First, the foundation is stakeholder-driven. A stakeholder is any entity that uses or is impacted by an architecture, such as the developers, the acquirers of the systems, or the end users of the systems that will result from the architecture. You've probably heard of Stephen R. Covey's The 7 Habits of Highly Effective People. Habit number two is begin with the end in mind, which is why use case understanding is crucial. So it's a stakeholder-driven process, and what we do is ask: what do the stakeholders want out of this architecture? The answer is a vision, a set of goals, et cetera. What constitutes good for the architecture? These are quality attributes, quality attributes like portability, modularity, and reusability; there are a variety of quality attributes that are common in most environments. There are the architecture principles, which are the ground rules for how the architecture is developed, and then those all-important use cases that identify how systems based on the architecture are going to be used.
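As a sketch of the bookkeeping involved, the foundation elements just listed could be captured as structured data before any module work begins. This is a hypothetical structure with illustrative values, not an artifact the talk defines:

```python
# Capture the stakeholder-driven foundation as data before any
# module or interface work starts. All values are illustrative.

foundation = {
    "vision": "incrementally upgradable sensor systems",
    "goals": ["technology refresh", "competitive landscape"],
    "quality_attributes": ["portability", "modularity", "reusability"],
    "principles": ["interfaces exist in service of the modules"],
    "use_cases": ["airborne surveillance", "maritime patrol"],
}

REQUIRED = ("vision", "goals", "quality_attributes",
            "principles", "use_cases")

def foundation_complete(f: dict) -> bool:
    """'Begin with the end in mind': every foundation element must
    be populated before module definition starts."""
    return all(f.get(k) for k in REQUIRED)
```

A simple completeness gate like this makes the top-down ordering explicit: no module definition until vision, goals, quality attributes, principles, and use cases are all on record.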
That information is used to define criteria for functional aggregation into the modules. There are many ways of skinning the cat, and if you've got five different developers in a room, they will probably affinitize functions into modules in very different ways. So the key to doing this well is defining the criteria by which functions are aggregated into modules. Now, let's talk about what this function-to-module relationship looks like. A module consists of one or more functions, and the idea is there are different ways you could have defined the modules, say so that functions H, D, and M are in module one. Well, why would you do it the way it's illustrated here? There's a reason, and the reason has to do with the criteria. So the process starts with identifying, based on the use cases, how a system is going to be employed and all of the functions that the systems perform. And again, this is a superset, so not all use cases are going to require all functions; you take them all together. You then aggregate your functions based on a set of criteria, and we'll talk about that in a minute. And then you analyze, you iterate, and you test. That takes time and effort, but once you've got your modular decomposition nailed down, you have a solid foundation for defining your interfaces. The criteria that you might use would be based on what you're trying to get out of the architecture. Now, there are techniques that people use for defining modularity that are based on simple things like the number of interfaces. Well, the number of interfaces is a criterion, but it's a brute-force criterion. If you're trying to do things like maximize operational life, make your modules independently procurable or testable, or, very importantly, protect intellectual property, so that functionality that might be part of someone's IP isn't spread across modules and therefore exposed at the interfaces.
You're going to create a different set of criteria for how you aggregate those functions into modules. So you take those criteria, you look at the functions, and through a series of iterative processes you affinitize the functions into a set of modules. Those modules then become the foundation for what you use to define the interfaces. Every function in a module has some sort of input requirement, and it produces some kind of output product. So the process by which you develop your interfaces is: you look at what the input needs are, you look at the output products, and you look at what is the producer of those input needs and what is the consumer of those output products. Here's a very, very simplified version of the same three functions and modules I described before. In this case, function H has an input need we'll call a tasking, and it's provided by function F. It then produces a measurement request, and that goes to function C. Well, it turns out that function C is inside the same module as function H, whereas function F is in a different module. What does that mean? These guys that are highlighted in blue are inter-module interactions, and the combination of all of those interactions then becomes the foundation of the interface definition between modules. In a MOSA-based open reference architecture, you don't care about, or rather you don't specify, processes internal to the module; that's the developer's IP. What you care about are the inter-module interactions, the interactions that go between modules, and the combination of those interactions becomes your interfaces. So let's do a toy example. Let's say we have some system which has a pile of functions. We use a set of criteria and we affinitize those into that same set that I showed you before. Each function has its input and output requirements, which you use to define the interfaces. So let's define what those interfaces are.
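The derivation just described, going from each function's inputs and outputs plus the module assignments to the inter-module interactions, can be sketched in a few lines of Python. Function names, module numbers, and payload labels here are illustrative, loosely following the F/H/C example:

```python
# Which module each function was affinitized into (illustrative).
module_of = {"F": 2, "H": 1, "C": 1, "M": 3}

# (producer, payload, consumer) triples taken from each function's
# input needs and output products.
interactions = [
    ("F", "tasking", "H"),              # F provides H's tasking input
    ("H", "measurement_request", "C"),  # H's output goes to C
    ("C", "measurement", "M"),          # C's output goes to M
]

def inter_module(interactions, module_of):
    """Keep only interactions whose endpoints sit in different
    modules; intra-module traffic is the developer's business."""
    return [(s, p, d) for s, p, d in interactions
            if module_of[s] != module_of[d]]

# The surviving interactions are the raw material for the
# interface definitions between module pairs.
for src, payload, dst in inter_module(interactions, module_of):
    print(f"module {module_of[src]} -> module {module_of[dst]}: {payload}")
```

Note that the H-to-C measurement request drops out because both functions live in module one, exactly the point made above: internal interactions never surface in the reference architecture.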
So in this example, module one and module three connect to one another through interface A, and there are a number of interactions on that interface. It doesn't matter what they are; this is a toy problem, so these things can be anything. What we're showing here is that interaction A1 on interface A has a source in module one and a destination in module three, and likewise for these others. By building up a table like this, you now create the definition of the interfaces. So now we know what gets conveyed. The next question is how. Every interface should be treated as three separate aspects: the physical, or the medium (the wire, the connector, etc.); the protocol, which is the method used to convey the data or signal; and the payload, which is the signal or data itself. When you've defined those informational requirements that go between module one and modules two, three, et cetera, you have defined the foundation of your interface definition. And now the question is, well, what's the story behind the signal or data structure? Another best practice, and one of the reasons why I don't recommend jumping into message land (by which I mean just throwing messages around and creating a pile of messages), is to develop a data model. I don't have time to get into the details of data modeling, but it typically operates at three levels: DIV-1, DIV-2, and DIV-3 are the DoDAF (Department of Defense Architecture Framework) designations for those. The conceptual data model identifies the type and nature of the data. The logical data model addresses content, things like whether your angular measurement runs, you know, zero to pi, or minus pi over two to plus pi over two, et cetera. And then the physical data model is the actual structure, and it's important to recognize that for every one of the logical entities, you could have more than one physical data model.
That is, if you're conveying information about position, you could have three or four different ways of representing it, depending on the precision and the requirement; depending on the precision the data needs, you could represent it in one case using an integer notation and in another case using a double-precision floating-point representation. The protocols are the means by which you're carrying the data. A simple example might be: if you're conveying a phone number, do I pause after each field, or do I give you the whole number? And your messages then become the combination of the protocol used to convey the data and the payload itself. So, just to quickly wrap up, the best practice is: first, take a top-down approach and establish the foundation. If you don't know where you're going, any road will get you there, and that's not a good situation, so you have to clearly define what your objectives are. You then define the modules, and after that you use the module definition and the input and output requirements to define the interfaces. So that's my quick summary of the best practice, and at this point I will yield the floor to Steve. Thank you, Steve. I know you could talk for several hours on this subject, and our audience would love that to be the case, but we only have a certain amount of time. Thank you very much for that, great stuff, and some great examples along the way. I'll go straight into some questions. Something you said fairly early on in your talk was about the color of boxes: it's not a black box or a white box, it's a gray box. The question came in: why is it a gray box and not a black box, if I don't know what happens inside the box? Well, you do know what it does. You do know its functionality, because the functionality is defined by the reference architecture. The functionality says, for example, it associates data or detections and forms a track; it combines certain pieces of information and produces a product.
Those are defined by the architecture. The methodology used for producing the product, however, is the gray part. You don't know that, and it's none of your business; it's the developer's decision. Thank you. We've got a very international audience for this broadcast today. Has there been interest in MOSA from other countries, to your knowledge? Yes. I know that the UK Ministry of Defence is very interested in it. My understanding is that there's interest and implementation both there and in NATO. I don't have specifics, only because my orientation has been US-centric, so I can't tell you specifics, but I do know that, for example, The Open Group is working with the Ministry of Defence to try to incorporate MOD personnel into the FACE Consortium and also SOSA. Great stuff. Next question. I can see there are some coming in on the chat now. Does MOSA apply to COTS packages? Apply to what? COTS. In fact, COTS is enabled. On one of the first slides, we talked about the fact that MOSA is based on widely available, consensus-based standards, and the COTS community has developed things like OpenVPX, the VITA standards, and so forth, which then become part of the definition of these open systems architectures. What that does is enable COTS developers, instead of being in a position where they have to develop custom cards for every application, to develop COTS cards that are aligned with the open systems architecture. And now, instead of building custom versions, off-the-shelf becomes extremely viable. Good stuff. This is a broadcast from The Open Group, so we were bound to get this question, I think. How well would you say TOGAF supports the creation of MOSA? The methodology defined in TOGAF aligns pretty well. For example, understanding the mission and understanding the objectives is fundamental to the foundation of the methodology that I described. You don't jump into the technical architecting until you understand what you're trying to do.
Requirements are at the center of all of these things, and an open system architecture that's well written will define, in a normative way, the requirements for how the module is defined, the performance, the behaviors of the interface, et cetera. So it aligns quite well. Great to hear. Quick one: do you think that Zope is a good technology to realize MOSA? I'm not familiar with it. Not familiar; that's the trouble with specific technology questions. Last question, then, Steve, as we'll be respectful of your and other people's time. It's a comment, but I'm sure you'll have a comment in return: I do worry about this clean-sheet approach being a barrier to adoption by companies with existing products, looking for an upgrade path that doesn't throw away too much of existing investments. Can you comment on that? Yeah, so I mentioned at the beginning that industry has a seat at the table, and so a well-defined reference architecture or open systems architecture engages stakeholders from the beginning and therefore has to be aware of and cognizant of developments that are already underway. It's also very important to make, I'll say, a gradual transition. There are a lot of systems out there, and if you want to incrementally upgrade them, you don't throw them away, because that's costly and wasteful. What you do is identify where you can mosify, which is not a real term, I just made it up, but mosify a system by incrementally enhancing it: defining and carving out a modular portion, defining a clean interface around that module, and then gradually and incrementally bringing it into compliance with MOSA. Okay, great. Thank you. I said last question, but this one I had seen come in and it's of interest to myself, so I'm going to pick it. How is conformance or compliance against the reference architecture assessed? At the module level.
So what we typically see is that a module is aligned with a standard or an open systems architecture, and it can be individually tested at the interfaces for functionality and adherence to the API, the connector, the signal, the protocol, et cetera. That's how you do it. I know that some people think in terms of, well, you can also assess an entire system, and yes, that can be done, but MOSA is fundamentally based on modularity: if the building blocks are all aligned, then the system as a whole will be aligned. Yeah, thank you, Steve. For me, MOSA is such a great example of something we've seen elsewhere over the years: customer pull. If the customers say to the suppliers, you are going to do this, and they have enough pull, then it happens. And it's great to see the consistency of demands from the major customers for this. So great stuff, and thank you in the meantime for all your work in the FACE Consortium and, more recently, the SOSA Consortium. Great stuff, Steve. A number of comments are coming in saying great presentation; couldn't agree more. Thank you for joining us today, and a big virtual round of applause for Steve Davidson. Thank you, Steve. So that almost wraps things up for today. Just a few thanks: obviously to Steve, and to Paul Holman for his EA Minute video at the beginning, and to thank you all for your attendance today and for the great questions. Plenty of questions came in; sorry I couldn't get to all of them, but we're trying to keep it close to 30 minutes. A little over today, but it was worth it; we could obviously have spent more time on questions. So thank you very much. Please join us again in two weeks' time, that's March the eighth, for our next Toolkit Tuesday, where my colleague Dr. Andres Sokal will be talking about Actionable Supply Chain Security. Important to everyone, and some lessons from the trenches to tempt you into that one. So please join us in two weeks,
March the eighth for a session on Actionable Supply Chain Security on Toolkit Tuesday. Meanwhile, thanks for joining us today and take care wherever you are. Have a good day. Bye bye.