My name is Michael Davis, and I am here to call bullshit. Open source security is based on the concept that because the source code is openly available, numerous programmers around the world are reviewing your code and letting you know about the vulnerabilities in it. Bullshit. First a word from my sponsor: they won't even claim the body. I am here because I believe in the open source movement, open source software. I am here because I believe blindly. I ignored my education and my experience, and I believe. I had a customer ask me (let me know if you can't hear me, because I'm moving around), I had a customer ask me for a recommendation on a tool to sanitize hard disks before they turn computers in. I said there's a great open source tool out there, Darik's Boot and Nuke, DBAN. He said, can I trust it? I don't know. How will I know? We said we would look at the software and make a recommendation on whether we could trust it or not. And so we took on the task to review the software for DBAN and other open source solutions and make a recommendation on whether it could be trusted or not, and what kind of review had been done. We couldn't find any evidence of any reviews for DBAN or the other tools that we had found. One of the primary hurdles we faced in doing our review was a lack of developer documentation. What we found was a high-level functional description, why you'd want to use this program, and in most cases commented source code, and nothing else. So essentially, if you're trying to understand the program, you have to read the whole program. So there are a lot of reasons why we need to review open source security solutions. My focus today is the lack of developer documentation, why we need it, and who can benefit from it. And I'll talk about two different communities who need that kind of information.
After presenting some benefits and hopefully getting some buy-in, I want to discuss the difference between functional, which is what most people look at when they talk about security, versus assurance, confidence in what's being done. I'm going to use a model. I am not endorsing Common Criteria; I am just using the assurance piece of it as a model to discuss how to gain confidence in an open source security solution. Then I'll be focusing on the development piece in depth, what I see as the path ahead (and my limited view of the path ahead), and then I'm going to wrap it up. First thing, confessions. The title of the presentation was meant to antagonize, to draw a crowd. For a Sunday afternoon, I think I got it. As I said, I am a believer in open source. I use the solutions when I can. I recommend them when I can. I was just challenged and wanted to share my frustrations. One of the things that seems obvious, but maybe not so obvious, is that just because the source code is available does not mean it's being looked at. In some cases, it is being looked at. In some cases, what people think is a review is just a spotty, not comprehensive, look for vulnerabilities that can be easily exploited, ignoring other vulnerabilities that could be exploited but were more difficult to develop. We need to appreciate that there's a difference between just having the source code available and actually having someone look at it, preferably someone other than the person who wrote it. There are some challenges to that. A recent survey of what's on SourceForge showed that about 98% of the software out there has one or two developers on it. And I'll highlight an issue with that later on. This problem, not documenting how you develop the software, is not an open source problem. It is a software development problem. It doesn't matter whether you're commercial or open source; you face the same issues. You just have the ability to solve them in different ways.
I'll tell you up front, I think the open source community may address this faster than the commercial world can, simply because of our willingness to share lessons learned, the pain we felt, and to try to keep other people from experiencing that same pain. Yes. That is one of the things I'll highlight again later in the presentation. But for commercial software, the schedule is primarily driven by a marketing schedule. It's a business decision when to ship, not a technical decision. This is different from the open source community, which tends to ship when they think it's ready. Different philosophy, different pressures, different motivations. And the software lifecycle. We need to respect that software is never done, just shipped or posted. It is a continuous process to improve that software. The other thing that maybe isn't appreciated, and this goes back to the one or two developers on a team, is that development teams have life cycles of their own. On larger teams, you'll see people come in and then walk off. And so we need to look at that transition. Most of the mistakes that are in software were not put there deliberately. They're screw-ups, "oh shits," whatever term you want to use; they're there because someone wasn't thinking at that moment about that particular issue. You'll also find that we discover new vulnerabilities that we didn't really think about when we were doing reviews beforehand. So a review five years ago might not have thought to look for integer overflow situations, whereas if you're doing that review today, you might. Maybe you should. Repeating again, the status quo for most open source projects is a high-level functional description, why you'd want to use the software, and then comments of varying quality in the code itself. There's typically no supporting documentation in between, except for the occasional user guide. There are some exceptions. Mozilla is probably one of the best that I can point to.
It's a larger team obviously, so I'm somewhat cheating. But they have a site dedicated to developer documentation, and they cover it so you can kind of wade yourself in. I'm not suggesting that all projects need that level. The other one I heard about today was Tor, the onion router, or routing; I can't remember what the R was specifically. They have design documentation, and that documentation allowed Tor, which I think is written in C, to be ported to Java over in Germany. I forget what that acronym stands for. Why do we want developer documentation? There are two distinct groups who need this documentation. The first is the development teams themselves. And it's kind of hard, when you're doing the solution yourself, to think that you need documentation for yourself. But if you go back to my point about how teams grow, people roll in and roll off as they have different needs. You can see that having information about design decisions recorded somewhere would be extremely useful. If you have developer documentation, you lower the bar for project participation. You can bring more people in because they can understand what your tool does and how it works more easily. I have to put out a word of caution. I was introduced to a term that I had not seen before: a concept known as a net negative programmer. That basically means that at the end of the day, this individual has done more damage than good, no matter how many times you sit with them. So you want to lower the bar, but you can't do it blindly. In that developer documentation, you're going to communicate design decisions. One of the more important things that you might not intuitively think to document is the alternatives that were considered but not implemented. Because this becomes very valuable down the line when someone says, oh, I have a great idea, let's do it this way. If that path was previously explored, you have a record.
And if the team members have changed, you're able to see that. Those benefits all accrue to the team that is developing the open source software. There's another group who's interested in looking at the software: those who are trying to say, yes, we can trust it. Not just for what it does, not just the function, but that we can trust it. We need to gain confidence that if it says it does A, it does A. But we also need to gain confidence that it doesn't do B when it's not supposed to. And that's a much more difficult task. It is not a trivial task. It is, well, never mind. I think I'm preaching to the choir on that point. We need to separate two concepts which may or may not be well understood: what is a functional requirement versus an assurance requirement? A functional requirement says, this is what it does. It does not imply any confidence in how much we should believe that it's going to continue doing that. This is contrasted with an assurance requirement that says, this is how I can show that you should have confidence it's going to do whatever I said it was going to do. It depends on your model. There are assurance requirements that cause functional requirements to be created in the Common Criteria model; I guess that's the easiest way to say it. From my perspective, the easiest way to separate the two mentally is this: I want to talk about what it does, and then I want to talk about what I'm going to do to give people confidence that it does that and doesn't do anything else. As I said, this is not an endorsement of Common Criteria. It's a model I'm familiar with. They have a vocabulary to define what the thing does. They have a vocabulary to describe what assurance requirements are there. They have a methodology for evaluating someone who says this is what I want to build, and someone who says I built this.
The only piece that I'm grabbing right now is, and I'll caveat it, most of the assurance requirements, and the reason I'm grabbing that is just to have a model we can look at. The majority of the assurance requirements are the ones here. The one I'm skipping is the concept of assurance maintenance, which is a bit overwhelming at this point. We'll look at configuration management, delivery and operation, development, guidance documents, life cycle support, tests, and vulnerability assessment. My focus again is on development and developer documentation. What I'll try to do is point out, for each one of these, how we in the community have tried to solve that particular aspect, and the only one I'm going to talk about in more depth is development. Configuration management. Vocabulary first. The definitions that I'm grabbing here come right out of Common Criteria, and you're going to need to know three definitions as we go through this. The first one is TOE, target of evaluation. Just read that as your software, because that would probably be the correct answer. When you see TOE security functions, TSF, that just means the security functions that your software performs, as differentiated from its other functions. And if you see ST, that's the security target, and that's just the statement of what your software is supposed to be and do. For configuration management, managing the versioning and the changes that are made to open source software, the de facto standard is CVS. That seems to be one area where we get it. We have version control. We have a way to control who can make changes. We seem to be on track with this one. Delivery and operation. This deals with having your software go out, get installed, and come up operating in the way that you thought it would come up and operate. Right now the software is typically distributed from a website. The concern you could have is that the software could be swapped out, compromised, et cetera.
The way a lot of organizations have dealt with that is they pull the software down to a mirror, and then they wait a few days, because people care about their software. If the web server that was providing the software got trojaned, you'll know about it rather quickly. The word will get out. If they see that, they know that their internal copy's got... The issue with MD5 is that it is just a hash. With PGP, you actually have someone signing the hash. If I've compromised the software on a website, I'm going to put my trojaned software and my trojaned MD5 up there. You can see the difference. Some people don't even have the MD5s up there. MD5 provides a low level of assurance. You have some confidence the software hasn't been compromised, but not a great deal, because the web server is providing you both the digest and the software, so if one was compromised, both were compromised in the same place. With PGP, it's a little bit higher. It's a whole big area to get into, and I do not see a lot of widespread use of digital signatures to assert that the code is correct and came from who it was supposed to come from. Development. This is the area that I'm going to want to focus on, and the area where we have, I'll use the term, the least documentation. What we need to start with, in theory, is a high-level description of what we want. That's going to be turned into requirements that get turned into security functions; that's the TOE and the TSF. Then we're going to take those requirements and design some software based on modules or some sort of subgrouping. We're going to explain, as we go down each level, all the way until we get to the source code, how that function is being handled. You can see that right now we have the high-level functional description and the source code. Without really knowing the code, you're having to jump from the top to the bottom.
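To sketch why a bare MD5 digest gives only weak assurance, here is a minimal, hypothetical example in Java. The class name, the file contents, and the idea of reading a "published" digest are all made up for illustration; the point is that the check passes whenever content and digest agree, which an attacker who controls the web server hosting both can always arrange.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

/** Minimal sketch: verify a downloaded file against a published MD5 digest. */
public class DigestCheck {

    /** Hex-encode a byte array. */
    static String hex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    /** Compute the MD5 digest of the given content as a hex string. */
    static String md5(byte[] content) {
        try {
            return hex(MessageDigest.getInstance("MD5").digest(content));
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError(e); // MD5 is always present in the JDK
        }
    }

    /** True only if the content matches the digest published alongside it. */
    static boolean verify(byte[] content, String publishedMd5) {
        return md5(content).equalsIgnoreCase(publishedMd5);
    }

    public static void main(String[] args) {
        byte[] release = "release tarball contents".getBytes(StandardCharsets.UTF_8);
        String published = md5(release); // in practice, fetched from the same web server
        System.out.println(verify(release, published)); // true

        // The weakness: an attacker who controls that server can replace BOTH
        // the tarball and the .md5 file, and this check still passes. A PGP
        // signature raises the bar because the signing key is not on the server.
        byte[] trojaned = "trojaned contents".getBytes(StandardCharsets.UTF_8);
        System.out.println(verify(trojaned, published)); // false, but only because
        // this attacker forgot to also swap the digest
    }
}
```

The design point, matching the talk: the digest only detects accidental corruption or a careless attacker; a detached signature moves the trust anchor off the distribution server.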
More specifically, you're starting at the bottom, and then once you understand the code, you'll understand how they implemented that high-level function they're describing. Guidance documents. There is a variety of guidance documents out there. In general, user documents are common. Administrator documents are common. Some organizations have technical user documents. Some projects have additional ones. This is just use documentation: how do you run the software? Life cycle support. The slide says ad hoc, but if you look in the middle of the paragraph, you'll see flaw remediation. That is one area where you cannot argue against the open source community having a real advantage, because when a vulnerability is identified, the ability to pull people in, especially if you say "help," has no parallel in the commercial world. That is one area where we do have an advantage, and your ability to pull people in depends on the size of the project. That's the key issue. Should they use them? Yes. Let's see. Testing. You can argue this both ways. You could say that we released the software, we let it run, we let people feed back when it doesn't work, and that's our test model. You may have similar projects that are extremely well organized, where they define test cases, and every time they make a change, they do regression testing to ensure that they haven't broken something else with that change. I did not see a lot of cases where that was there, or where they provided the test cases that they used or the results from those test cases. Using the Common Criteria model, that kind of evidence would be provided to some sort of reviewer to show. Going back to the assurance team trying to say, yes, you can trust this: that information, the test cases and the results from those test cases, isn't available. Let's see. Vulnerability assessment. Really interesting topic. It's where you're looking at your design, and you're saying, how can someone screw with me?
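As a sketch of the regression-testing idea just described, here is a hypothetical, self-contained Java example. The function under test, `checkedAdd`, is made up; it stands in for any project function, and the overflow case illustrates the earlier point that a review from years ago might not have included such a test at all.

```java
/** Sketch of a tiny regression suite re-run after every change. */
public class RegressionDemo {

    /** Addition that rejects integer overflow; a stand-in for a real project function. */
    static int checkedAdd(int a, int b) {
        long r = (long) a + b;            // compute in a wider type
        if (r != (int) r) {               // result doesn't fit in an int
            throw new ArithmeticException("integer overflow");
        }
        return (int) r;
    }

    public static void main(String[] args) {
        int failures = 0;

        // Ordinary cases, kept from release to release.
        if (checkedAdd(1, 2) != 3) failures++;
        if (checkedAdd(-5, 5) != 0) failures++;

        // Overflow case: the kind of check a review five years ago
        // might not have thought to include.
        try {
            checkedAdd(Integer.MAX_VALUE, 1);
            failures++; // should have thrown
        } catch (ArithmeticException expected) {
            // pass
        }

        System.out.println(failures == 0 ? "all tests passed" : failures + " regressions");
    }
}
```

Under the Common Criteria model described above, it is exactly this kind of artifact, the cases plus their recorded results, that a reviewer would expect to see.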
And then you document how they can't screw with you, based on how you thought they could screw with you. A philosophy called abuse cases is the one I was introduced to to explain this. I haven't seen a lot of that kind of formality in the open source community. This is not to be confused with what's going on where you have people looking for holes that they can write exploits against. That's different. This is more, I'll use the term, internally driven, versus someone trying to punch holes in it. If folks are interested, this sounds like something that could be worked on and built into a presentation for next year. But this is this year. Let me talk about developer documentation. We need to understand what the requirement is. What am I looking for? From my perspective, I represent the teams that would look at open source software and say, yeah, we can trust it. What we would be looking for is two things: one, some document that captures the design philosophy and then traces it down to the software; and two, what tools are available. Before I start digging into that: I had someone look at my presentation who was involved in a couple of major open source projects and gave me some insight into how not to do developer documentation. Before I get into how I think it needs to be done, let me just emphasize that I'm acknowledging that there are a lot of ways to screw this up, a lot of really painful ways. This individual worked in an organization where the documentation consisted of source code comments, which were pseudocode, and only the pseudocode, before the actual code. It was too obvious to be useful, and annoying. On the IC# website, Bernhard Spuida (I'm mispronouncing the last name) wrote a paper in 2002, which kind of amazes me. It was on how to comment your code well.
I was introduced to commenting back in 1982 in my first programming class, and it just surprises me that we're still writing papers on how to comment our source code well. Obviously, we haven't fixed that, and that helps me manage my expectations not to expect to be able to run out and develop developer documentation tomorrow. Baby steps. One of the things he emphasizes is: don't bury comments in your source code that deal with configuration management issues, comments that say, I did this to fix this. He recommends those go up front, so that you have a version history for the file. It amazes me. Don't include comments that address the obvious. Apparently, some organizations have a real issue with this, where they want everything commented. Don't, and this reflects an earlier comment, don't leave the design and the code out of sync. There may be times when you realize that you need to fix something in a way that doesn't match the design. At the end of the day, they need to match. Don't wait until the end of the project, when the code's written, to write the design documentation that explains how you ended up with what you got. They need to be kept in sync. Most importantly, do not assume that well-commented modular code is enough. If you're looking for your project to be survivable beyond the one or two developers that are supporting it, that is not enough. You need documentation so that more people can pick up and run with your solution if you can't. So what am I looking for? In the developer documentation, we need to understand the flow and the purpose of what is there. It doesn't help us run the software any faster. It doesn't help us configure it. This is extra documentation I'm asking for. Years ago, they said put comments in your source code. Then they said, make user guides. Well, now I'm here saying, and make developer documentation.
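To illustrate the commenting guidance above, here is a small, hypothetical Java example; the function, its name, and the design history described in its comment are all invented. The contrast is between a comment that restates the obvious and one that records a design decision and the alternative that was considered but not implemented.

```java
/** Sketch: comments that record rationale, not the obvious. */
public class CommentStyle {

    // The kind of comment the 2002 paper warns against, restating the obvious:
    //     i = i + 1;   // add one to i

    /**
     * Retry delay grows exponentially rather than staying fixed.
     * Design decision (hypothetical): a fixed one-second delay was tried
     * first and hammered the server during outages. Alternative considered
     * but not implemented: randomized jitter, deferred as premature.
     */
    static long backoffMillis(int attempt) {
        // cap at 6 doublings so the delay never exceeds 6.4 seconds
        return 100L << Math.min(attempt, 6);
    }

    public static void main(String[] args) {
        System.out.println(backoffMillis(0));  // 100
        System.out.println(backoffMillis(10)); // 6400, capped
    }
}
```

The second comment is the kind that survives a developer rolling off the team: it tells the next person not just what the code does, but why, and which path was already explored.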
To do a security review in a cost-effective manner, which is the reason I'm here, there needs to be traceability from the functional description (what you're wanting to do) to the high-level design, and then from the high-level design down to the low-level design, which for software is essentially your source code. It is important to capture your design philosophy in a document other than the source code. We need the overall architecture (pictures are nice) and some sort of explanation for why subsystems are there, why modules are there, and why they're called, without having to go through the source code to understand that. Let us develop some expectations up front, and then as we go through the code, we'll understand what's written there. And as I said earlier, documenting alternatives is a benefit to the developer community more than the assurance community. Also, and this is significant for the comments in the source code, if you're doing protection, like bounds checking, make sure that comment is in the source code. There are tools that can highlight that. What are some of the tools available for creating developer documentation? Obviously, one of the easiest is plain text. On the Mozilla site, they have essentially a white paper that describes the design. That is an effort. You may not have the desire, the inclination, the writing skills, whatever term you want to use, to produce a document like that. What I think we need to focus on is the source code comments, and especially making source code comments that a parser will recognize, so that it can pull those comments out and put them in a separate document. You have your documentation in one file, and then if someone wants to dig in, they can see it in the source code in another file. Flow charts for certain programs. UML diagrams; I have three types of them listed here. What are the classes that you're using? How are they called, and in what sequence? And how do they interact?
I couldn't find a case where these were used, but this is a type of design documentation. Finite state machines and state charts, which are somewhat related, fit some very specific cases. I've seen people describe them as a technique to show error cases. But as the major design document for the overall software, that's probably heavyweight for most solutions. This is an example of a picture that would help someone begin to understand what the code is doing. This comes from one of the earlier versions of Password Safe. This picture is in a document that was written by Andrew Mulligan while he was in graduate school. And the good news is he's now a developer on the Password Safe project, which is, I think, the perfect situation. He goes through in comments and says that what you see here is a design that basically keys events off a GUI, and although this might be good for the initial version, it's not good for a long-term solution. For those who use Password Safe, you probably saw a big difference recently when they went to the new version, and that included his changes. I don't want to get into changing the design of the software. What I want to do is show that this is an example of a picture that lets me know what pieces I would expect and how they interact. I'm not the devil's advocate; I'm actually here to defend the devil's advocate. I talked to a developer in the commercial industry about this issue who agreed: software development is software development. We have the same problems. We just have the ability to choose different ways to solve them. And he made the argument that commercial developers are going to solve it first. I want to try to counter some of that. Actually, I'm going to counter two out of three, at least. He emphasized that commercial developers are market-driven to develop the code correctly. There is a grain of truth to that, but there is a body of evidence to suggest otherwise.
In the market, people's salaries and professional reputations may come into play, but in general, the market pace is not driven by technical considerations. In the open source community, the pressure is entirely reputation. You publish crap, people will recognize it as crap. And so if there's a hole, your ability to fix it is going to convey to people confidence in whether they can use your software. So I think that the open source community is just as capable, maybe more capable, of developing code correctly. Second, if you're paying for software, you have certain demands for reliability. I can't explain it, but I believe the same demands for reliability exist in the open source community, because I see the projects being developed with increased reliability, increased stability. There has to be some motive there, and it's us. We are demanding that reliability. The money is one thing, but I see our ability to influence the development and increase the reliability as an even match between closed source and open source. Yes. Yes, that's okay. The last is one that I don't know if I can argue, and that's the requirement to make the code approachable. If you dig down in the file that's on your CD, you'll see the actual language used, which may be cleaned up a bit. But essentially, everybody has their own perspective. If you're the one developer on a piece of software, your perspective may be that you don't understand why everybody else doesn't understand it or doesn't understand how to use it. Therefore, when someone says you need to make your software more approachable so that we can look at it, maybe use it, you may be resistant to that. Obviously, that's personality dependent, and I don't know if the commercial world has any advantage, other than market forces, in keeping their code more approachable than we do. Path ahead.
What I see happening is that the larger or more significant projects are doing more developer documentation. They have bought into the concept that for the project to survive, we need to capture why that code is there. The OpenSSL project is actually a special example for me. For those who support cryptography in the government, you may have heard of FIPS 140. It's basically an evaluation program, or validation program, excuse me, that allows the use of cryptographic items. OpenSSL would be the first open source solution validated, and validated at the source code level as opposed to the binary level. This is huge, because now you're not having to validate every different version that's compiled, which is what we had to do in the past, or what the people representing projects had to do in the past. If you go to the OpenSSL website, you'll see the effort they've had to undergo to go through that process. It's not trivial, but it is not trivial for an open source project. It is a lot easier to support if you're in the commercial world, because you can have the marketing person say, I can't sell this to the government until it's validated, and then the price tag isn't that high a barrier. It's not like a Common Criteria evaluation; you're talking five times to an order of magnitude higher cost between a FIPS 140 validation and a Common Criteria evaluation. Okay. In general, I think there are improved comments in the source code. I know I made that comment earlier about how we're still trying to teach it 20 years later. I think we'll continually teach it, because we have new students. I think we may need to look at how we peer review those comments, and that's handled differently within projects. There are human dynamics. Every team is different. I think that if we're going to develop developer documentation, the focus needs to be, as I said, let me back up a second. When I talked about developer documentation, I talked about traceability from the top down.
I talked about a high-level functional description, then a high-level design, then the low-level design and the implementation. I don't think that we can go out and do it that way, top-down. That's my opinion; I would welcome it being challenged, actually. I think that what we need to do is work from the bottom up, so that when we do start from the top down, we actually have a place to land. The focus needs to be on making the comments in the source code reflect the design decisions. The reason is, that's where we're working now. We're in the code now. I'm not asking for a new document; I'm just asking you to improve the documents that exist now. The thing is, the tools are available now to help document what's in the source code. To parse through the source code, Javadoc is the example I was introduced to; it is able to pull those comments out and pull them together. We're not talking about wrestling with some sort of drawing program to create UML diagrams. It's relatively straightforward. Learn a few tags, and all of a sudden you can create a set of documentation without a significant increase in effort. And as I said, I think the effort needs to be driven from the bottom up. To kind of wrap things up: what I don't know. I don't know how to motivate the folks who are developing code to write more documentation. I mean, I've heard your arguments before. You know, first you're commenting the source code, then we're having to create a user guide, and now you want me to create a document that no one but me is going to use. Well, the answer is yes, I do. I don't know how to motivate people past that. If the software is a security solution, like Password Safe, where someone may need to have some degree of confidence in it, it's more important than for some other solutions, at least to me. That's my bias, of course. The other thing, and this is an important issue, is I don't know how much documentation is enough.
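As a sketch of the "learn a few tags" point, here is a hypothetical Java class whose design rationale is written as Javadoc, so the standard `javadoc` tool can extract it into a separate HTML document without any extra authoring effort. The class, its method, and the design note are invented for illustration (the multi-pass pattern idea is the kind of decision a wipe tool's documentation might record), not taken from any real project.

```java
/**
 * One pass of a hypothetical disk-sanitizing routine.
 *
 * <p>Design rationale lives here, in extractable form, rather than
 * buried as ad-hoc comments: running {@code javadoc WipePass.java}
 * produces a standalone design document from these comments.
 */
public class WipePass {

    /**
     * Overwrites every byte of a block with a single pattern byte.
     *
     * <p>Design note (hypothetical): the pattern is a parameter rather
     * than hard-coded zeros because multi-pass wipe schemes alternate
     * patterns between passes.
     *
     * @param block   the data to overwrite, modified in place
     * @param pattern the byte written to every position
     * @return the same array, returned for chaining passes
     */
    public static byte[] overwrite(byte[] block, byte pattern) {
        java.util.Arrays.fill(block, pattern);
        return block;
    }

    public static void main(String[] args) {
        byte[] block = {1, 2, 3};
        overwrite(block, (byte) 0);
        System.out.println(java.util.Arrays.toString(block)); // [0, 0, 0]
    }
}
```

This is the bottom-up path the talk argues for: the documentation lives in the file you are already editing, and the generated document gives a reviewer a place to land before reading the code.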
I also think the commercial world has that problem. But I do have a theory on how to determine how much is enough, and I'll get to that last. What I do know: tools are available. They vary in quality. They vary in ease of use. But in general, as with all things, if people start using them and providing comments, the tools will improve into what we need. I believe this community is going to solve the problem better than the commercial world, and that the lessons from this community will actually be brought into the commercial world as best they can be. So I think it's our opportunity to feed different ways of handling it back. The advantage we have, of course, is the open exchange of information; that's a significant piece. We're able to say, you know, this is a bad design, and someone says, okay, it stunk, and we're able to move on from that point. It's different from a proprietary development environment, where that may be a closed community, and that conversation doesn't live beyond that moment. Feedback priorities. That deals with how much documentation is too much. As I said, there are two communities that can benefit from the developer documentation: the development teams themselves, and then assessment groups. So how much documentation is enough? The way I would determine that is: what does the team need? What can the team benefit from? And the view you have to take is, if we wanted to accelerate bringing someone on board, what would help bring someone onto the team? The second piece is, if your solution is being looked at, incorporating some of those lessons learned from somebody conducting some sort of assurance review, and letting that guide what is enough. I certainly think that what Common Criteria is looking for is heavy, and prohibitively so in a lot of cases. And so we need to be able to, I'll use the term, experiment: find out what works and what doesn't, and then share those lessons learned. I'll open the floor to questions.
Yes. Do you have any comments or questions about me opening DSP? No. I mean, I'm aware of it somewhat, but I'm not current. My education isn't current, so I don't want to say the wrong thing. Go ahead. Yes. No, we're talking about your average desktop user. Okay. I'm going to summarize your question and see if I got it correctly: basically, what's my intent with this developer documentation? And that goes back to the development of, the term I was taught was, abuse cases, and how to refute them. In the example with Darik's Boot and Nuke, DBAN, it's possible that someone could write a utility that, rather than wiping the disk, actually encrypts the information with their key, or some way to hide the key. So that if all you're doing when you're done wiping is verifying that there's just garbage on the disk, the encrypted data would look like garbage. And then somebody waits at the sales where they're getting rid of defense equipment or government equipment or health care equipment, buys up the hard disks, and sees if they have some information on them. You get one of the Simson Garfinkel papers written again. So what I would look at is, how do I know that it's actually wiping versus encrypting and then writing it back down? I look at the developer documentation, I find out where that process should be, and then I go and look at it and see how it's executed. So I'm not having to look at the whole code base. DBAN is a wonderful example for this, because the piece that is really DBAN is really small, but it sits on top of a bootable CD or a bootable floppy. And so if someone hasn't looked at the software that makes up that bootable floppy or bootable CD, guess what? Everything is in scope. You have to follow it through, and at that point there was, and still is, no developer documentation to guide you down the path to verify that. Yes? Yes.
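The abuse case just described can be sketched in code. This is a hypothetical, in-memory Java illustration, not DBAN's actual mechanism: a malicious "wipe" that XOR-encrypts the data passes a weak "is it garbage?" check, while reading back and comparing against the exact pattern that was supposed to be written catches it.

```java
import java.util.Arrays;

/** Sketch of the wipe-verification abuse case: garbage is not proof of erasure. */
public class WipeAudit {

    /** Honest wipe: overwrite every byte with zeros. */
    static byte[] honestWipe(byte[] disk) {
        Arrays.fill(disk, (byte) 0);
        return disk;
    }

    /** Malicious "wipe": XOR with a key, so data looks like garbage but is recoverable. */
    static byte[] trojanWipe(byte[] disk, byte key) {
        for (int i = 0; i < disk.length; i++) disk[i] ^= key;
        return disk;
    }

    /** Weak verification: "the disk no longer holds the original data." */
    static boolean looksLikeGarbage(byte[] disk, byte[] original) {
        return !Arrays.equals(disk, original);
    }

    /** Stronger verification: read back and check the exact pattern written. */
    static boolean matchesPattern(byte[] disk, byte pattern) {
        for (byte b : disk) if (b != pattern) return false;
        return true;
    }

    public static void main(String[] args) {
        byte[] original = "secret medical record".getBytes();
        byte[] trojaned = trojanWipe(original.clone(), (byte) 0x5A);

        System.out.println(looksLikeGarbage(trojaned, original));              // true: weak check is fooled
        System.out.println(matchesPattern(trojaned, (byte) 0));                // false: pattern check catches it
        System.out.println(matchesPattern(honestWipe(original.clone()), (byte) 0)); // true
    }
}
```

This is what the developer documentation buys the reviewer: it says where the overwrite and the read-back verification live, so you can check that one specific piece instead of auditing the whole bootable image.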
Have there been any objective studies of the relative likelihood of vulnerabilities being exploited? I don't know. Let me repeat the question: has there been a study of open source versus proprietary software and the ability to exploit the different versions? Is there a difference in how the vulnerabilities are actually exploited? Or how frequently? Okay, so how frequently vulnerabilities are exploited is the metric for comparing open source versus proprietary software. I'm not familiar with one; I can't pull a study off the top of my head. Yes, please. [The questioner suggests that in practice, well-commented code is all the documentation that will ever get written.] I understand that. That was exactly the impression given to me by the one individual who reviewed my presentation before I came here: well-commented code is enough, that's all you ever need. All I can say is that people who have done a lot more software development than I have have written that if you don't have the design captured, and all you have is comments on the source code, then when you have to go back and fix something, you will spend longer fixing it than you would have spent writing the documentation in the first place. That's the real advantage. We may have to wait for those situations and use them as teaching points: you had a hard time fixing that, didn't you? It would have been easier if we had the documentation in the first place. [An audience member describes a formal QA process at his company that can shut down a project until the work passes proper QA review, and asks about bringing a formal QA process to open source.] What companies have you worked for?
I've worked at a large company, Sun, and a smaller start-up company, so I've seen all sides of it. Let me answer with a point: in the open source community there is no QA group that would prevent you from publishing. I actually look at that as an advantage. If people who are looking at code and reviewing it, and I'm not just talking about finding exploitable vulnerabilities, but actually doing a review that is somewhat comprehensive, were to publish that and make it more open, that would be great. There was an attempt to create that in the open source community. All right. No. Okay, I can't remember the name. There was a website that was going to collect vulnerability assessments, reviews of software, and essentially what happened was they shut it down because they couldn't get funding. They couldn't get funding because the only people writing reviews were a few graduate students of one professor at one university. We're obviously not ready for that; otherwise it would have happened. I don't know how to get to that point, but it would be really nice to have a body of documentation from people who are looking at code comprehensively and saying, this is what I looked at and these are the results. Yes. Having a solution like that would do wonders, but it's going to apply to the things that get bundled with an operating system, and that's the only limitation. I still think that's a good concept. Yes?

Okay, what I want to do is revisit, at least from my impression, an earlier situation, and that's user guides. For a lot of programmers out there, when it comes to explaining how to use their software, their English skills are not as strong as their coding skills. I guess that's the polite way to say it. And so what happened? We didn't stop writing user guides.
We found people in the community whose skill was English, who would work with the programmer and translate. Well, maybe what we need to do is open up and ask: are there people out there who aren't necessarily programmers but might focus more on vulnerability assessments, on architecture, on just general discussion? Open the body up larger than just programmers. You don't need some sort of programmer badge to participate. There are no bars that I saw. There are a lot of projects out there on SourceForge where you'll see team members and their roles identified: program manager or project manager, developers, documentation, test specialists. Lots of people have different labels. This may need to be another label. This may be something where we have some people try it, and then just open it up and say, hey, comment on this. Find people who have experience doing this documentation and ask, how does this work? Yes?

Do you think documentation is enough to verify a program? Okay, let me go back to my methodology: I'm going to develop what I call an abuse case and say, this is what I think the program could do. Then what I have to do is go through the design documentation in the review. And the end result is that I'm going to look at the source code and make sure it's actually implementing what it claimed to implement. The quality of that review is obviously going to vary with the skill of the reviewer. My expertise is at the higher level, developing the abuse cases, not at the lower level.

Were there any tools for how this was done? Okay, that's good. One site that gave me a tremendous list of tools is the site for a tool called Doxygen. They have a list on that website of both non-commercial and commercial tools. As I said, I'm convinced the tools are there for the beginning steps, but obviously we need more practice working with them and refining them. One last question? Okay.
On stage, let me make a shameless plug for Geekwares: if you like this T-shirt, it's $10 in the vendor area. The vendor area is going to be closing at 3, so if you have any kind of business to do there, please do so before 3 o'clock. There's another event happening in this room at 4 o'clock. We need everybody to exit the room. We cannot have people exiting that way, as we have a very long queue for the next presentation. If you'd like to see the next presentation, we're asking that you exit. Also, I spoke incorrectly about the Critical Networks talk earlier; that talk is at 3 o'clock. The Critical Networks talk is going on. I said it wasn't, I said it was being replaced; I was incorrect.