Hello, my name is Rich McCabe. I'm a project manager at the Software Productivity Consortium in the area of reuse. It's my pleasure today to introduce Casey King, a project manager at Boeing, who is currently in charge of the STARS Navy demonstration project down in Orlando. Casey is here to tell us about the progress they've made on this project and to show how they're approaching reuse in this particular situation. Thank you. Casey?

Thank you very much, Rich. STARS is a program to create a paradigm shift in the way we do software, to reinvent the whole process and try to make it an order of magnitude faster, better, and cheaper. Boeing, as a STARS prime contractor, and the Navy, as one of the cooperating services with the ARPA STARS program, looked at two things. First, we looked for a process that not only embodied the STARS reuse vision but also its vision for process improvement. The Navy and Boeing found that the Synthesis process answered the mail in both of those areas. The second thing we looked for with the Navy was a project on which we could demonstrate the benefits of the STARS vision as implemented in the Synthesis process. We found that air vehicle training systems, or what we call AVTS, were a particularly ripe candidate, in terms of the maturity of the domain and the low risk involved in buying these. We thought it was really time to apply the benefits of the STARS and Synthesis vision to this particular marketplace. The Navy designated that family of air vehicle training systems in general, and in particular picked something called the T-34, which is the plane that Navy pilots learn in. So there you go: a flight training system for the T-34, particularly around teaching pilots how to fly on instruments.
In putting this project together we decided to do it in several phases, because basically this was a new process for us, and we anticipated some risks, not only in the technology of training systems but in the organizations involved learning this process. So we put together a pilot project designed specifically to touch every one of the processes within Synthesis: we would go through the process, produce the work products, figure out how we did, and then apply it in a broader area. In order to do that in a timely fashion, we had to reduce the engineering burden, so we could do it in a relatively short period of time and mostly touch the processes. So we took this whole family of flight training systems and narrowed it down: first to a sliver, which was the navigation/communications part of a training system, and then to an even finer area, which was the training-system aspect of a radio navigation aid, primarily TACAN and VOR. What we wanted to do was capture the commonality across the family of all TACANs and VORs, which are pretty common anyway; there's not a whole lot of variation as you go from one aircraft to another, one training system to another. We basically have the problem of saying there's a radio emitter on the ground, and as an airplane flies around, it should have a little needle that points back to that radio station, and if you're lucky it will give you not only the bearing to that station but the range as well. We wanted to simulate that behavior in a little piece of the training system. So that was our pilot activity for what is, in fact, this demonstration project.
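To make that simulated behavior concrete, here is an editor's illustrative sketch (not the project's actual code) of the bearing-and-range computation a TACAN/VOR model has to reproduce, assuming a flat-earth approximation and made-up coordinates:

```python
import math

def bearing_and_range(station, aircraft):
    """Bearing (degrees clockwise from north) and slant range from the
    aircraft to a ground radio station, flat-earth approximation.
    Positions are (east, north, up) tuples in nautical miles.
    A VOR would drive the needle with the bearing alone; TACAN adds range."""
    de = station[0] - aircraft[0]   # east offset to the station
    dn = station[1] - aircraft[1]   # north offset to the station
    du = station[2] - aircraft[2]   # altitude difference
    bearing = math.degrees(math.atan2(de, dn)) % 360.0  # needle direction
    slant = math.sqrt(de**2 + dn**2 + du**2)            # range readout
    return bearing, slant

# Aircraft 10 nm due south of the station, 1 nm above it:
brg, rng = bearing_and_range((0.0, 0.0, 0.0), (0.0, -10.0, 1.0))
# The needle points north (bearing 0); slant range is just over 10 nm.
```

The function names and coordinate convention here are invented for illustration; the point is simply that the simulated instrument behavior is a small, well-defined computation, which is part of why the domain was a ripe pilot candidate.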
We went through all the processes, and what I'm about to show you is the evidence, the tracks that we laid down as we went through this process, to give you some understanding of the lessons we learned as we touched each of these processes and produced each of these work products, and some of the things we'll be incorporating in our next iteration, which will cover a much broader area. So, the context: again, this is the STARS demonstration project. STARS stands for Software Technology for Adaptable, Reliable Systems. It's been around for a good number of years as an ARPA program, and it's in its own end game now as a demonstration project. Each of the services is paired with one of the STARS primes; Boeing is paired with the Navy. We picked this training systems world and focused in on the radio navigation aids, specifically TACAN and VOR. Now, we set ourselves up with some success criteria before we went into our pilot project, so we could actually be measured on whether we succeeded. We did this last July, July of '93, had a review in January of '94, and we checked ourselves off, and our management checked us off, against these success criteria. We had two categories: some explicit ones, which were the technology-oriented criteria, and some implicit ones, which turned out to be all of the management criteria. And of course we had the challenges we thought we understood going in. One aspect was the lessons-learned process: we were doing this just to learn lessons, so we had to have a lessons-learned process in place before we started and make sure it worked, and it did. Of course, Synthesis is a highly well-defined process with well-defined work products, so we needed to check whether we did in fact create all of those work products.
The ultimate end of the whole process is: did you produce yet another process that application engineers could use? By the way, we probably spent about 98% of our time in domain engineering and 2% in application engineering for this radio nav aids work, an interesting kind of pie chart. Was the process followed, as evidenced by whether we met all of the exit criteria on the way out? We had mixed results there; we learned a lot of lessons, probably around a hundred significant lessons documented as a result of this. And at the end, did we produce a working system? We ran out of time, but we finished the pilot, so we did meet our time and budget. Metrics were a challenge: we didn't have what we would call a strong metrics process throughout; it developed as we went. We did collect basic effort, schedule, and cost, but again, the object here was not to use this as a predictor of what the effort would be, just to learn the process, so our emphasis was not on that. We did a lot of guideline and detailed process-and-methods development during this work, and that was essentially one of the questions going in: what did you need to decide on and establish? In fact, this next chart over here, our document tree for the Reuse-Driven Software Process Guidebook, shows that we found we had to develop methods and standards and practices and procedures. If you get a chance, you might want to take a look at it; it's representative of the beginning of a tree that we anticipate will grow as we evolve the domain. We didn't have a clear view going in of what these were going to be, but we have a much clearer view now, I think. Just to finish off the success criteria: a lot of emphasis here on teaming. We took a multidisciplinary approach; we had domain experts and process engineers and environment engineers, and the challenges of building and working a team that communicated were the same in our area as they are for anybody else. I think Catherine's point about
having to be a level-three organization really underscores that; we tried hard to behave as if, to pretend, we were one. All of these implicit criteria dealt with some of those side management issues, and teaming is at the core of all of them. This chart shows two views, a very high-level view of the Synthesis process. One of the challenges we found in working with engineers who were brought onto the project without the full advantage of studying and steeping themselves in Synthesis is that just understanding the two-life-cycle strategy was an essential element, and a very difficult element, of doing business a new way. Once you can get the light turned on, once somebody says, "Aha, I understand how that's going to work, and I can visualize how it's going to change the way I do business," you can get participation. It's a top-level, very counterintuitive behavior in this kind of marketplace, where you classically have fixed-price, throw-it-over-the-wall contracting: money goes over the wall one way, a trainer comes back the other way, and they're very expensive. So here's the process. We went through all of these steps; as I said, we spent 98% of our time in domain engineering and 2% over here, because the size of the application, the amount of leverage we were able to achieve, and the automation we applied meant that this becomes a very fast process. By the way, if anything proves the concept underlying the two life cycles, Synthesis, and the whole megaprogramming paradigm for ARPA, that proves it: if you plan up front and invest, you're going to get return relative to that investment. Now, the question we didn't answer on this iteration, the question our whole demonstration project is supposed to answer, is: is the investment here going to generate return? Can you do enough of these, fast enough, to have a business case? So one of the
other things, I think, is that like good engineers we sort of blinked at the management process: "This domain evolution stuff, we'll get to that later, that's not real important." That's really where we missed having the vision of the two life cycles, and there was confusion as we went down the path, with engineers themselves not fully understanding where this stuff was leading, because we hadn't done better domain planning, even for a pilot. So our lesson learned was: don't skip past this. It's an essential element, not just for your own strategic selling, but for getting a shared vision of why you're doing some of these things very differently than you're used to. One of the contrasts you see is this complex pattern: this is the Synthesis process taken down a couple of levels of detail and laid out as a network. One person could say it's horrendously complex; another could say typical projects look like this. The whole point, if you can stay with me on this, is that what we're trying to do is change each trainer development from this into, way over here, something that looks like falling off a log: a simple, trivial process to develop a component of a trainer, or a whole trainer, whatever the case may be. Going all the way from that level of complexity to that level of simplicity is, as we reflected back on what we really did, essentially what we wound up doing. One of the things you'll see on this wall is that we've got a top slide that says "this is what we did" plus some of the key lessons learned; below that, what we hope is an example of it; and at the bottom, the work product we actually used or generated on our demonstration pilot. As we go through the thread here, this is probably one of the key things we wanted to illustrate: having done this, we then went back and picked a small piece of even the stuff that
we did, and looked at the notion of commonality, which we've represented in blue, and variability, which we've represented in red. That's the key. This commonality and variability, Synthesis tells us, is the heart of a domain engineering effort: how it shows up in the various work products, its relationship from one work product to the next (and there is a relationship), which areas are variability-intensive, which are commonality-intensive, and generally understanding how that rolls up. As we go through this, I'll try to pick up where this thread is illustrated. For the sake of presentation, we divided it into domain analysis, which as you know is the top-level process, and domain implementation. We put a little seam in here just for illustration, treating design, which is really under domain analysis, as a second part; representationally, I think that helped us understand it a little better. Let's start at the beginning of domain analysis, with domain definition. The guidebook suggests, in its incremental, risk-averse strategy, to do a little bit, iterate for a while, then go back. We found that in this iteration it really made sense to have very fine-grained variability assumptions and commonality assumptions, and to keep a record of those domain-expert assertions. This, and later the decision model, but primarily the variability assumptions, is where you separate the domain experts from the fakers, from those who say they can but can't. We had some experience with this very early in the project: this is where you put a blank piece of paper in front of somebody, and the qualified domain experts can fill out that piece of paper. Do they understand their field with enough depth and richness? Do they have the experience of multiple projects, and do they know how to analyze that
experience and abstract what they saw as patterns and as threads of commonality? One result of this is that we now have a domain-expert profile that we are continuously upping the ante on. It started out with a master's degree, six years of experience, and three systems, and it keeps inching up toward four and six systems. We're almost insisting now that people have published something, which demonstrates an ability to abstract and a passion to communicate; we're trying to find the right level of horsepower, commitment, talent, and experience. If we learned anything here, and it showed up right at the beginning of the process, it's that the talent for making these assumptions that characterize the system is where it really counts. If they can't get past this, don't let them continue, because they're probably not going to contribute later down the line; they're not going to have the kind of insights in domain engineering that you can take out and sell to the application engineers. Even if you say, "I have a policy: you will do domain engineering and you will do application engineering," if the quality of the domain engineering is not the kind that would be produced by your real waterwalkers, you're probably not going to get application projects to buy it. So this whole notion that domain engineering takes scarce resources and leverages them really comes out in this particular area. We found that to be the case, and we had some areas where we were very successful. It's not that nobody could do it; it depended on the depth and skill of the individual. Deep, skilled individuals were successful, and the assertions they put down in the beginning lasted through the duration; those who weren't just never got there. Essentially, this is where you milk your domain experts dry, because everything else follows. If you produce a definitive, fine-grained
statement of variability, sure, you'll miss some things and get some things a little bit wrong in all the subsequent steps, but the real invention and creativity happens here. We went on from there to the decision model. The decision model, which formalizes those assumptions, is really the heart of this process. We kept going back to it throughout the whole domain engineering cycle, and even referred to it to understand the application engineering cycle, so we think it becomes sort of a baseline, a durable work product that a lot of people are going to reference. It's also where you find out whether you've got the kind of predictable, definable variability that yields leverage. If you ask the hard questions up here and you get real soft, fuzzy answers, like "Gee, I don't know how that varies; I know it varies, but I don't know how," then the likelihood of getting high-leverage, mechanistic reuse is probably low. We kind of wired this one, because we thought we could really nail TACAN and VOR, so we factored that problem out. But there were areas we looked at, like propulsion in training systems, where the solution is non-deterministic; I mean, "that engine works like that engine feels," rather than by any definable parameters. So even within a given family you may have areas that are well behaved and lend themselves to mechanistic development, and others that you can cauterize and say, "Put your code in here; we can't interpret it." So anyway, the decision model was the heart of what we did. As I said, the expertise and insight first evidenced in the variability assumptions were really reinforced here: you could definitize how each variable behaves and what all the legal values for each assertion were. And of course you want to work on this in nice bite-sized chunks; you want to package your decision-modeling effort so that you can validate and verify
little bits and pieces of it at a time, because when you come up with the wrong answer, you've got to go back and refine your assumptions. So we worked a lot of this in a tight loop. Again, that was a key area. Once you've got this finished, your reliance on domain expertise, what you can milk out of the experts, cuts down. It doesn't go away, because there are other things they help you with in other stages, but that basic insight into variability you've now got bottled, if you've finished the product. Product requirements: if there was one area where we had the easiest time communicating, that was least difficult for people schooled in traditional development, it was product requirements. We just said, "List all your prior requirements," and some of them are going to be conditional, so you've got a whole bunch of nested if-statements in there, and people had a relatively easy time relating to that, at least at the level we tested it. So of all the phases, we went over product requirements the quickest, and we think we still did the right thing. Process requirements: we were looking at a very small piece of this, so it was not a major area where we had an opportunity to learn. Essentially, we used it to take the decision model and ask: what's the right sequence in which to ask these questions? What's the right sequence to walk down through what is essentially a tree, pruning off branches as you go? Since we were starting with a very small branch of this TACAN variability, we still have a lot to learn here, but that seemed to be the right approach. Some other things probably ought to begin to be understood in process requirements, such as what has been called the presentation paradigm: how do people actually build this, where does the data come from, and in what sequence are you likely to get it?
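As an editor's toy illustration of what such a decision model might look like (all names and decisions invented, not the project's actual model), each decision carries its legal values plus a guard that says when the question applies, so answering in sequence prunes branches of the tree, exactly the walk-and-prune idea described above:

```python
# A hypothetical decision model for the radio nav-aid slice.
DECISIONS = [
    {"name": "navaid_type", "values": {"TACAN", "VOR"},
     "applies": lambda a: True},
    {"name": "has_self_test", "values": {True, False},
     "applies": lambda a: True},
    # Only asked if the unit has a self-test at all:
    {"name": "self_test_initiation", "values": {"push", "push_and_hold"},
     "applies": lambda a: a.get("has_self_test")},
    # A range readout only applies to TACAN, not VOR:
    {"name": "range_readout", "values": {True, False},
     "applies": lambda a: a.get("navaid_type") == "TACAN"},
]

def elicit(answers):
    """Walk the decisions in order, skipping pruned questions and
    validating each supplied answer against its legal value set."""
    taken = {}
    for d in DECISIONS:
        if not d["applies"](taken):
            continue  # branch pruned: this question is never asked
        value = answers[d["name"]]
        if value not in d["values"]:
            raise ValueError(f'{d["name"]}: {value!r} not a legal value')
        taken[d["name"]] = value
    return taken

vec = elicit({"navaid_type": "VOR", "has_self_test": True,
              "self_test_initiation": "push"})
# range_readout is pruned away for a VOR, so vec holds three decisions.
```

The soft-fuzzy-answer test mentioned earlier shows up here too: if a domain expert cannot enumerate a closed value set for a decision, it cannot be captured in this mechanistic form.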
It's one thing to say what kind of instruments are going to be in a given trainer and check that off; you know where that data comes from. But any kind of delay or dynamics in how the needles move may be a whole other data collection effort, a whole other process that tags along behind. The basic decision grouping you established up here may not give you visibility into that; you just may not have the data at that time. So there may be other demands on process requirements that we haven't fully understood yet. All right. Essentially, everything we've done up to now, which we've called domain analysis, has given us a good, rich set of requirements. We've almost been able to put our domain experts on a part-time basis; they'll show up again when we detail the process requirements in particular, but we milked them dry for variability assertions, definitized those, and sequenced them in the beginning part of domain analysis. The next part, product design, has three major components as we considered it: architecture, component design, and generation design. We were very fortunate on our project to have what we viewed as a very durable architecture, one that part of the team, Boeing in particular, had a lot of experience with; in fact, we had built it for the Air Force over the last ten years. As architectures go it was okay, very conventional, not very flashy, and it packaged all the functions of a flight training system into some very conventional functional parts. It was a functional architecture. We had all of the elements: the flight station; the visual systems, which were a separate architecture element; the physical cues, which in trainers are the motion base that gives you the feeling that you're moving; propulsion, which simulates the effects of the engine; and the navigation/communications piece, which does radios and radio navigation aids. That's where our little slice lived. So this gives you a feel for what we did: a
piece of that, maybe about 20 percent of what would eventually be in the total, and this is one of the smaller pieces of the overall architecture. So we had this architecture going in, cheating a little bit, since we didn't have to build our own. But one of the things that happened is that we wound up validating that this was a good architecture, not only for training systems as an abstraction, but also a process-friendly architecture, and that became a much more important consideration, because what we were able to do was divide the work packages for the rest of domain engineering into process-friendly pieces, so you could have definable skills and a rational team working on each. Granted, for the pilot we only had one team, but as things developed it became clear that this was going to be important when we tackled the full scope. We actually developed a subset of this architecture for what we call the slice of the demonstration and broke it down into specific pieces. Here you see navigation/communications, here TACAN controls, and here the TACAN self-test. That self-test string stands for a piece of commonality that says there is always going to be some self-test, or at least, if there's a self-test procedure, it's going to live in a package by that name. And the variability we've established back here says that one of the ways self-test can vary, if you have self-test, is the effect of the push-and-hold button: when you push the self-test button, does it run one self-test cycle just from the push, or does it cycle for as long as you hold it? That's one of the variability factors in how self-test initiation works. So we made an assertion about that, we definitized it in our decision model, and we put conditional requirements against the product: if that were a
characteristic, then the product had to do this. We said, in the process requirements, that to build one of these things you'd better ask that question, and you'd better ask it after you've asked certain other questions at a higher level. And then finally we put it in an architectural context that says: here's the block of the system where that's going to fit. So that picks up the thread: the blue is the commonality and the red is the variability coming down from the architecture. Very closely related to this was the domain engineering notion of component design. As a matter of fact, one reaction from our experience base was that people weren't clear where architecture left off and components began. In some cases people would say, "I thought architecture and the words 'high level' were synonymous," but in fact we're finding that architecture can be as detailed as you need it to be, or as it should be, and design, where you're making trade decisions, probably happens at a much lower level, so there's a lot of looping back and forth. One other thing became apparent here, and this is just a perception from learning this lesson: if you were to come up close and look at this component design, you'd see a lot of if-statements. It looks like pseudocode: "if it's push-and-hold, do this; else if it's that, do something else." That sounds like process stuff. So we found that not only were the lines between design and architecture blurred; the lines between process and product became blurred too. Helping our product domain engineers understand that they weren't designing a product in the classic sense, but designing something that had the seeds of process wrapped inside it as well as around it, was a significant learning challenge and experience. Once they'd overcome that, and we'll see that happen in several areas down here, we had component design. Now, the one area that is almost pure process
is generation design. Up till now, component and architecture sound like pretty ho-hum, vanilla software development, but when you add generation design you throw the traditional mindset for a real loop. One of the things incorporated under generation design is something that has always been a challenge in software and systems engineering: keeping the mapping between your requirements world and your solution world. But we've added to that the complexity of mapping variability in both requirements and solutions. So this triple set of mappings, with the architectural and component mapping being product- or commonality-intensive and the decision mapping being variability-intensive, locks variability in alongside the problem-to-solution mapping. We found this to be an extremely effective treatment. We had some good people on the project who understood it a lot sooner than some of the rest of us did, and we were very happy with it: it rationalized a lot of the loose ends that had been flapping for us in earlier work products, because it brought commonality, variability, requirements, and design into one tractable, complex but tractable, piece. We spent a lot of time looking at this and building our own record-keeping system to capture the mapping. It seemed to be a very effective mechanism for us and embodied a lot of what we were trying to achieve, so I commend you to look very carefully at these mappings and put a lot of emphasis on them. Okay, so up until now we've gone through our requirements analysis; we've turned the corner and gotten a design that doesn't look anything like the requirements, which is a test of a good design for a complex system; we've kept the variability/commonality thread; we've differentiated product from process; and we've been able to figure out how they interact with each other and how they are different now.
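As an editor's toy illustration of the kind of mapping generation design maintains (all names and the template invented, not the project's actual tooling), a decision from the model can be mapped onto a concrete adaptation of a stored component, with the if-statements living inside the generation step rather than the product:

```python
# A hypothetical "decide, then adapt" mapping for the self-test example.
# The stored artifact is an adaptable Ada-like text with a slot to fill.
ADAPTABLE_SELF_TEST = """\
procedure Initiate_Self_Test is
begin
{body}
end Initiate_Self_Test;
"""

def generate_self_test(decision_vector):
    """Map variability decisions onto one concrete component instance."""
    if not decision_vector["has_self_test"]:
        return None  # the decision prunes the whole package away
    if decision_vector["self_test_initiation"] == "push_and_hold":
        body = "   Run_Cycle_While (Button_Held);"
    else:  # plain "push": one cycle per press of the button
        body = "   Run_One_Cycle;"
    return ADAPTABLE_SELF_TEST.format(body=body)

code = generate_self_test({"has_self_test": True,
                           "self_test_initiation": "push_and_hold"})
# 'code' now holds the adapted text for a push-and-hold unit.
```

The point of the sketch is the triple mapping: the decision (variability), the requirement it conditions, and the component slot it fills are all tied together in one record.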
We're ready to do serious development in the implementation phase. Although component and generation procedures are indicated as separate, and we actually had two teams ("they're two processes, a component process and a generation process; we'll have two teams go off and do them"), those teams wound up sitting in each other's laps, not able to work separately from hour to hour, let alone day to day or week to week. There was a lot of interaction. It's probably one of the areas we're going to have to rethink: how we organize for that. Maybe it means having somebody with process skills on a single team; we're not quite sure how we're going to do it, but we probably didn't organize ourselves correctly. This is also the area where our team set out to put something out there that will be the beginning of, hopefully, a healthy market of software engineering environments for megaprogramming and Synthesis. Digital Equipment Corporation has been supplying platforms and some of their common data dictionary, repository-based products, and the Boeing team in Seattle has been working hard to add value, to put in the notion of how to represent this stuff. This is where we started to use that environment very extensively; we did some use of it earlier, but one of the key things you're going to see is how we broke this process down, how we loaded the SEE up with these implemented components and generators, this mixture of components and generators, so that somebody could then do application engineering down the line. Essentially, using some of the information that came from our process requirements and was designed into the generation procedure, we visualized that an application engineer sitting down to build one of these TACANs of the future will sit down and answer a set of questions. Now, the fact that
they're not going to build a TACAN as an end product means that they've already answered a whole bunch of other questions beforehand, put it in the context of a training system and the family, and named what this training system is. The heart of the whole implementation result for us was what we call a decide screen. Now, the decide screen is something generated by CAMEL. (Maggie, what does CAMEL stand for? One hump or two? One for process, one for product.) CAMEL is the function we put into our software engineering environment that takes the decision model, as it has been adapted by the generation design and the component design, and poses the right questions in the right form, at the right time, in the right sequence, with all the right validation and verification underneath it, to help capture the variability for an instance. So domain engineers build something that will generate a CAMEL screen, which is a lot of CLIPS, which stands for the C Language Integrated Production System, a NASA public-domain tool, by the way. CAMEL sits on top of CLIPS and asks you a bunch of questions; a lot of engineering goes on underneath, but for this stage, that's essentially the model. So this defines what the screen looks like. The output of the screen takes the answers to the questions and builds a retrieve function, kind of an adaptable process that is then executed: it's split into the retrieve function and the values by themselves, values coming from the decide screen, which are used by that retrieve function. Together, the values and the retrieve function go get some adaptable code. This is the closest thing we have in our system to what people would like to think of as a component, but if you look at it in detail, one of the things you'll see is a lot of process statements embedded in it: if one of the values you got on this screen is this
value, throw out this code; change that value. That's what this whole CAMEL retrieve, adaptable-code package tends to do; it's part of the process. So you look at this and see that the areas with a lot of variability dots are the process-intensive parts of the process, and the ones with the blue pins and threads, the product ones, are the product-intensive parts; the whole thing together in fact comes out the end as a process. This was another key learning element. We had people writing adaptable code who thought they were doing traditional Ada coding for traditional components, and they found that in fact they had to think about the process, about the code being adaptable. Developing that awareness and appreciation was perhaps one of the more frustrating elements, and it threw some people off stride into coding practices that weren't particularly good. People whose coding capability was not strong to begin with got really stressed and almost fell off the log. So we did a lot of coaching here, and as we go through this another time, we know we're going to have to do a lot of training and coaching in understanding the nature of adaptable code, and why and how it's different from regular code, at least in this particular version. All of this stuff resides in ROAMS, our reusable object access and management system; logically, there's one repository across the whole family; it's the repository piece of this. An interesting feature we found, particularly looking at other reuse strategies: other strategies emphasize reuse librarianship and the notion of browsing, and like to keep metrics about hit ratios. Essentially, in our mechanistic process, every time we go to the library we get a hit, because we went there because a process said, "Go there and find exactly
one and only one product; pick it out, adapt it in predictable ways, and send it to me. So some of the metaphors that are used in the reuse community, such as browsing and searching and candidate components, don't apply here. This is really the heart of leveraged reuse: leverage says, every time I go to the library, I get something, because I wouldn't have gone there if I wasn't going to get it. Simple and straightforward. It probably makes it more investment-intensive, and your investment has to be a little smarter than it otherwise would be, but that's the real payback: you don't waste a lot of time looking at things you're not going to use. And that gets us to adapted code. Everything up to this point on the top, these are files that these processes use, and this is a screen that is generated. All of this code, the make function, the retrieve function, and the adaptable code, is in ROAMS to begin with, before you do the first bit of application engineering. The things that are created as a result of exercising this process are this decision vector, the generated retrieve file, and of course the adapted code. So these are the pieces that are transient for this particular system; the rest of it is part of the furniture for the family. And in fact, against the process definition we anticipated for doing all of this, the amount of time it took to actually execute it was on the order of an embarrassing few minutes: setting up some of the context, the higher-level questions that had nothing to do with radio nav aids, and then the radio nav aid part of this, ten or fifteen minutes. That was the payoff for the several months of investment. Clearly, we need to get smarter about doing the front part of this, the domain engineering, so that ratio comes down a little, but I think we proved to ourselves and to our community that if we spent the time to get this under control, we in fact could
take this complex spaghetti bowl of a project and reduce it to the simplicity of a well-behaved, repeatable, defined, optimizable process. Are there any questions? We actually had more variability in this than we had planned; it got more complex. Just the variety of ways that even some of the simplest instruments can behave, particularly when you're simulating faults and training conditions, that's a challenge. We're not sure what the total number will be and whether it will get out of hand. But it did lend itself: we found that some people, at least, based on their expertise, can pretty well definitize how these things vary, and the people who are expert know there are only three manufacturers of TACAN, and if there's ever another one, they'll know about it three years before we have to worry about it. Some of our other experiences that were not part of this pilot: an earlier trial use, as I said, looked at some of the propulsion areas and said, there's this big box, and I cannot give you a deterministic, mechanistic way to solve that, but I can bound it for you, which is almost as good. Where you lose is if you assert there's definitive variability and there isn't; then it's surprise time. If you assert that there is boundable variability, you're still ahead. If your variability is not definable or boundable, those are domains you don't want to do any investment in at all, and they're probably not good candidates for this high-leverage synthesis approach. Yes, Rich? The question, and I may be putting words in his mouth here, but I'll ask it anyway, had to do with what happens when you do a variability analysis early on and get it wrong the first time: you get down into the implementation and go, oh my gosh, I've got some other variables here I haven't taken into account yet; I've got to file an engineering change request or something and go
back and expand my domain, or do something like that. Did you have that experience? Well, one of the things we had done, we actually did two iterations; I thought it was hard enough to describe one iteration rather than two, but we actually did this in two iterations, in which we got to about here and learned a whole bunch of lessons, such as: whoops, there's more variability in here, or there's different variability, or some of the assumptions were off. So we jumped back, reset the clock, and started over here, and we actually even planned to do that. So in the overall process there were two iterations. The other thing we did, which may be challenging, and which we're looking to our software engineering environment and its underlying information model to help us with, because you're right, once we get into some of these designs we'll get some surprises: part of the challenge on the management side is that you are doing incremental, evolutionary development even in the pilot project. And of course, one of the learning things is, we didn't see a whole lot of benefit in separating them, and maybe it was just the people and the perspective, but it seemed to make sense: we wanted to make these assumptions really definitive, so we looped between the decision model and the assumptions, just like we looped between design and architecture. So we got to some pairs of processes where we kind of threw the configuration-management blanket around the pair, where there was a particular trade-off between discovery and formalizing, which is really what that decision model is: a formalizing of your insights. So it seemed like going back and forth between these, pairwise, was a good strategy at the time. Thank you very much. I'm Jerry Turner; I'm a software engineer with Rockwell. Three years ago, Rockwell's Command and Control Systems Division in Richardson, Texas, established an internal research and development project to
investigate software reuse. This project became a synthesis pilot for the Software Productivity Consortium. The objective of the project is to institute a reuse-driven software development process for Rockwell's Command and Control Systems Division; this would result in timely, lower-cost development of high-quality, reliable software. In this segment, I will discuss our experience with synthesis and show examples of synthesis work products. Because the work is proprietary, only segments of products will be shown. We have developed three domains at Rockwell CCSD. The first domain was a communications management and control domain. The second was the MIL-STD-1553B communication domain, which is a subdomain of the communications management and control domain. Our current domain is a message handling system domain. Before we started synthesis, we realized it was essential to establish a foundation of disciplined methods and automated tools. Disciplined software development methods, incorporated into the domain development process, allowed us to produce reusable work products. Weekly product reviews enhanced communication between domain engineers. Tools were used to support the adopted software development methods, to adapt components, and to produce an interactive application engineering environment. The methods used were real-time structured analysis, a requirements method by Ward and Mellor; ADARTS, a design method by the Software Productivity Consortium; and departmental coding and documentation standards. We modified the ADARTS development process to incorporate the work products created, because we were using synthesis and because we must capture variations. I borrowed this picture of the ADARTS development process and modified it to show the incorporation of synthesis. You can see that, as domain engineers, we began our software development process with domain analysis. Domain analysis is done prior to other activities, and iterations are made to the domain analysis work products
throughout the process. Our experience showed that as we progressed through the process, the decision model changed the most; the other domain analysis work products seemed to stabilize. Our engineers needed a way to manage the changes in the decision model, so we added to our synthesis work products a graphical representation of the decision model that can be maintained with available automated tools. We also added work products to capture variations. Next, I will show segments of work products. The synopsis allowed our domain engineers to set a boundary around the domain and identify systems that were in the domain and systems that were not. This segment of a synopsis specifies the systems that are in the MIL-STD-1553B subdomain; notice that bus controllers, remote terminals, and bus monitors are in the domain. The commonalities allowed our domain engineers to specify the requirements of every system in the domain. This example shows a commonality; our domain engineers begin commonality statements with the word "every" to emphasize the concept that all systems in the domain must satisfy the requirement. The variabilities allowed our domain engineers to specify the requirements that may vary from system to system; this example shows how system requirements may vary. The decision model allowed our domain engineers to elaborate on the variations. This example shows an elaboration on the previous one: in this decision model segment, we show the variations, the specific decisions, and the mnemonic that will be used for adapting components. This is a segment from an adaptable document. We added a graphical representation of the decision model to our synthesis work products; it is maintained with available tools and allows the domain engineer to manage the hierarchy of decisions. We captured variations in our requirements model by creating non-deliverable variation data flow diagrams; deliverable products will have data flow diagrams on the levels below
based on customer requirements. This is an example of a variation diagram. We captured variations in our design by showing the common parts of the design and introducing overlays for the variations; this is an example of how we showed a design model with variations. We decided an interactive application engineering environment was the most attractive for our projects. This is a screen from an application engineering environment; it allows an application engineer to specify product requirements. The user interface allows the domain engineer to bound the current decisions and to add decisions as the domain evolves.
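To make the Rockwell work products concrete, here is a minimal sketch of one decision-model entry and the two roles around it: the domain engineer bounds the decisions (and can extend them as the domain evolves), and the application engineer specifies a product only within those bounds, with the mnemonic carrying the choice into adaptable work products. All names and field values here are invented for illustration; this is not Rockwell's actual notation.

```python
from dataclasses import dataclass

# Hypothetical decision-model entry: a variation, its bounded set of
# decisions, and the mnemonic substituted into adaptable work products.
@dataclass
class DecisionEntry:
    variation: str   # the requirement that varies across systems
    decisions: set   # the bounded set of legal choices
    mnemonic: str    # token used when adapting components

entry = DecisionEntry(
    variation="terminal role on the MIL-STD-1553B bus",
    decisions={"bus_controller", "remote_terminal", "bus_monitor"},
    mnemonic="BUS_ROLE",
)

def specify(entry, choice):
    """Application engineer: pick a value within the bounded decisions."""
    if choice not in entry.decisions:
        raise ValueError(f"{choice!r} is outside the domain boundary")
    return {entry.mnemonic: choice}

def extend(entry, new_choice):
    """Domain engineer: add a decision as the domain evolves."""
    entry.decisions.add(new_choice)

binding = specify(entry, "remote_terminal")  # within bounds, so it succeeds
```

The point of the sketch is the boundary: an out-of-bounds choice is rejected until the domain engineer deliberately extends the domain, mirroring the "bound the current decisions, add decisions as the domain evolves" behavior described above.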
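The decide-screen, retrieve, and adapt flow described earlier in the Boeing pilot can also be sketched end to end: a decision model poses validated questions and yields a decision vector; the vector deterministically keys exactly one adaptable component (every library access is a hit, no browsing); and embedded process directives in that component are resolved against the vector. The decisions, directive syntax, and repository contents below are all invented for illustration; they are not CAMEL's or ROAMS's actual formats.

```python
import re

# Hypothetical decision model: decision name -> bounded set of answers.
DECISION_MODEL = {
    "emitter_kind": {"TACAN", "VOR"},
    "range_available": {True, False},
}

def decide(answers):
    """The 'decide screen': validate each answer and build the vector."""
    vector = {}
    for name, allowed in DECISION_MODEL.items():
        if answers[name] not in allowed:
            raise ValueError(f"{name}: {answers[name]!r} not in {allowed}")
        vector[name] = answers[name]
    return vector

# Invented repository: each vector keys one and only one component.
REPOSITORY = {
    ("TACAN", True): (
        "bearing := station_bearing(${emitter_kind});\n"
        "%if range_available\n"
        "range := slant_range(${emitter_kind});\n"
        "%endif"
    ),
}

def retrieve(vector):
    """Deterministic retrieval: no browsing, no candidate lists."""
    return REPOSITORY[(vector["emitter_kind"], vector["range_available"])]

def adapt(template, vector):
    """Resolve embedded process directives: keep or drop guarded regions,
    then substitute decision values into the surviving lines."""
    out, keep = [], [True]
    for line in template.splitlines():
        if line.startswith("%if "):
            keep.append(keep[-1] and bool(vector[line[4:].strip()]))
        elif line.startswith("%endif"):
            keep.pop()
        elif keep[-1]:
            out.append(re.sub(r"\$\{(\w+)\}",
                              lambda m: str(vector[m.group(1)]), line))
    return "\n".join(out)

vector = decide({"emitter_kind": "TACAN", "range_available": True})
adapted = adapt(retrieve(vector), vector)
```

Note how the "component" here is really part of the process: the `%if` directives are the embedded process statements the speaker describes ("if one of the values you got on this screen is this value, throw out this code"), which is what makes adaptable code different from traditional component code.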