Yeah, very good. So you missed my... never mind. Anyway, thanks for coming, bright and early on a Saturday morning. The talk this morning is about the politics of vulnerabilities. My name is Scott Blake. I'm the VP of Information Security for BindView Corporation. I wanted to give you a little bit of background here. My intent with this talk is not to espouse a particular viewpoint about the politics, but rather to have a more rigorous discussion about the factors that come into play. What are the various components of the politics of vulnerabilities? Who are the actors? What are the policy initiatives that may be having effects now and in the future? So if you're here for a rant about how software vendors suck, or about how we shouldn't be disclosing anything, that's not what I'm going to be doing. I work for a software vendor, so keep that in mind, everybody. We also do vulnerability research, though, so we're sort of on both sides of the fence here. Anyway, just a quick read-out: who has taken a political science course before? Quite a few. Pretty good, okay. Any political science majors here today? One, all right. Congratulations. I myself was a social sciences major as an undergraduate and did sociology in graduate school, which is really why I'm qualified to talk to you about information security. So what is politics? Politics is the study of power, and that is a very large topic with a lot of components to it. We use a working definition of power which I think is very useful. It's very broad, but it's essentially the ability to make someone do what they would not otherwise do. Fairly straightforward. It takes a lot of forms: economic power, social power, cultural power, military power, physical power, all sorts of things that come into play and have different kinds of effects. Sounds like there's a plane going overhead. A couple of important terms.
I'm going to go very quickly, by the way, over the initial section here, so that we get to the crystal ball section at the end, where I'll make some wild-ass guesses about what I think is going to happen, and then we can hopefully have some time to discuss those. Just some important terms to keep in mind, because I will use these a couple of times during the talk. What is an actor? It is an entity which can exercise power. It's not necessarily an individual, although it could be an individual; it could be a corporation, a government, any number of bodies or things that act, things that do things, things that have agency. An ideology is a set of beliefs or ideas, and we'll be categorizing certain positions into a set of ideologies here. Legitimacy and authority are very interesting ideas to keep in mind as we go. That, by the way, is an airplane. While we're going, I just love this venue. It's the most interesting place I've ever given a talk: in a tent on the roof of a hotel. That's great. This slide, I'm sure, covers something everyone here is quite familiar with, but it's important to remember that not everyone agrees on what a vulnerability is. Academicians will generally define a vulnerability as a flaw in software. So, for example, if you have a blank root password, they would not consider that to be a vulnerability. The software is doing what it should be doing; it is behaving as it was designed to behave; there is no bug producing the problem. You could make a case that the ability to set a blank root password is a vulnerability, but only if the software is intended to prevent that and you are able to subvert that intent through some other means. We use a broader definition here that includes misconfiguration. So in our case, the blank root password would be considered a vulnerability because of the potential for misuse. That should be fairly obvious to everyone.
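That broader definition has a practical consequence worth making concrete: a scanner that counts misconfigurations as vulnerabilities has to audit settings, not just hunt for code flaws. As a minimal illustrative sketch (mine, not from the talk), here's what a blank-root-password check against `/etc/shadow`-style records might look like; the sample data is invented:

```python
def accounts_with_blank_passwords(shadow_text: str) -> list[str]:
    """Return account names whose password field is empty.

    In /etc/shadow-format records, fields are colon-separated; field 0
    is the account name and field 1 is the hashed password. An empty
    password field means no password is required to log in -- a
    misconfiguration, not a software flaw, yet clearly a vulnerability
    under the broader definition.
    """
    flagged = []
    for line in shadow_text.splitlines():
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        fields = line.split(":")
        if len(fields) > 1 and fields[1] == "":
            flagged.append(fields[0])
    return flagged


# Invented sample records: root has a blank password, daemon is locked.
sample = "root::12345:0:99999:7:::\ndaemon:*:12345:0:99999:7:::"
print(accounts_with_blank_passwords(sample))  # ['root']
```

Under the strict academic definition, this check finds nothing "broken" at all; under the misconfiguration-inclusive definition, it finds an exploitable hole.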
So these are the ideologies that I'm going to talk about, and I'll go into a little detail on each of them. Full disclosure has a couple of basic ideas behind it. One of the foundational ideas is that information should be free, that it wants to be free, that we want to share information as broadly as possible for the greatest public good possible, and that that's a good thing in and of itself; there's a moral good associated with it. There is also a use of power associated with full disclosure, which is essentially to capture the power of public opinion in order to cause security to improve. That is generally what those who advocate full disclosure are after. The critics of full disclosure often take the point of view that what is going on here is really causing security to be worse than it would otherwise be, but that's not part of the intent, and it's important to keep that in mind. Most full disclosure includes the release of exploit code. And by the way, the terms that I'm using for the various ideologies are debatable and loaded. There are reasons that people pick the particular words they use to describe certain things, because those words produce an effect. It is essentially a use of power to decide which term to use, and to try to capture certain terms for one's own use rather than letting somebody else use them for something you may not want. But I'm not gonna go into that particularly. We're just gonna take these as given categories; you can call them something else. You don't have to call them the things I call them here. Most of the adherents of full disclosure are nonprofit researchers, which I'll talk about in a little more detail shortly, and a few, but very few, commercial researchers also adhere to it. That's pretty much it; mostly it's researchers who are on the full disclosure side.
So-called responsible disclosure has been somewhat popularized in the last year, year and a half, two years. The major distinction between so-called responsible disclosure and so-called full disclosure is essentially the presence or lack thereof of exploit code. There are also some timing issues with notification of vendors that we could talk about, but the exploit code is probably the primary thing. Full disclosure adherents generally believe that exploit code has a greater good associated with it, that it has utility in being able to test security infrastructures, firewalls, IDSs, et cetera, and that therefore there is a good to be had in having that stuff out there. Responsible disclosure generally recognizes that good as well, but basically says that it is outweighed by the potential for damage that can be done with exploit code being broadly available to the public: the bad guys get their hands on it too, and that outweighs the good associated with having it available. It's also important to remember that responsible disclosure adherents are generally also trying to use the power of public opinion to improve security, by making people look bad for not having their security up to snuff, in the hope of improving it. And there's actually, historically, quite a bit of evidence to suggest that that is in fact the case. The past three or four years have indicated that major software vendors have become much more responsive to security vulnerabilities being announced than they had been in the past. I don't think there'll be much dispute about that. And the case is generally made, and I think rightly so, that that is the result of full disclosure and responsible disclosure being used to cause them to do something they would not otherwise do, using the power of public opinion.
Most of the adherents of responsible disclosure are commercial researchers, and there are a number of categories in there as well, which I'll talk about when I get to the actors, as well as some notable software vendors. And it's important, I think, for us to give them a little bit of the benefit of the doubt. We have a tendency to think of the vendors as being fairly strictly motivated by their economic interests, and it's actually somewhat more complicated than that; I'll talk about that in a minute. It's really hot in here, and really dry. Is it gonna get worse? Great. Well, I'm glad to be going early this time. Jeff always gives me an early time slot, and usually it pisses me off, but today I'm actually happy about it. Anyway, zero disclosure essentially tries to limit the availability of vulnerability information to anybody. A discoverer of a vulnerability basically takes it to the software vendor and is then done. Or they may assist the software vendor in some way, but it is then the software vendor's responsibility to issue patches and do whatever they're gonna do, whether that's issue a patch or just fix it in the next version or whatever. The zero disclosure ideology, pardon me, basically says that there is no public good to be had in the release of vulnerability information; all you do when you release it is make it easier for people to break security in illegitimate ways. The adherents here, by the way, are many software vendors and most government actors. Richard Clarke's speech at Black Hat on Wednesday notwithstanding, most sections of the federal government at least generally think that there is not much good to be had in the release of vulnerability information, as I'm sure most of us are aware. And then there's the public. This is something we can discuss in the question-and-answer at the end, if I have time; I'm gonna try to keep my pace up here.
There's some debate about this, but I basically take the point of view that people who think their computer is essentially magic, which is the majority of the public, don't see any need for vulnerability information to be out there and don't see any benefit to it. That's probably a relatively unreflective opinion, but I think it's there nevertheless. There's another variant of zero disclosure, somewhere in between zero and responsible disclosure, which I call limited disclosure: disclosing vulnerability information within closed communities that limit the ability of the information to propagate outward. The easiest example of that is the ISACs, the Information Sharing and Analysis Centers. There are a number of them organized around vertical industries: the financial services industry, electrical power, there's one forming up on healthcare, there's one for information technology, there's a whole bunch of them. These formed out of, I believe, Presidential Decision Directive 63, several years ago under the Clinton administration, and we started seeing the formation and real propagation of them then; the Department of Commerce actually takes a leading role in trying to get these things organized. The idea here is basically that, again, there is no public good in the sharing of vulnerability information, but there can be limited good in sharing information within these closed communities. So if I'm Bank A and I share something with Bank B, and we are therefore, combined, more secure than we would otherwise be, that's an example of the benefits of limited disclosure. They keep things from being told to the bad guys, but don't necessarily keep them from the good guys, and it's an interesting idea. I'll talk a little bit about some of the pros and cons of those in a bit.
You guys have this slide on the CD, hopefully. It's good for reference more than anything else; it lays out the major divisions between these various ideologies. Okay, so who are the actors? I've put them into five major categories, plus a controversial, or problematic, category that I refer to as the underground, and I'll tell you what I mean by that in a minute. The vendors are pretty much the people that we're all familiar with. I'm mostly talking here about commercial software vendors as opposed to open source vendors. It's more problematic to discuss the others, mostly in terms of the financing, because it's not really present there in the same way, although the open source software vendors are generally more akin to researchers in terms of motivations and the factors that go into making them do what they do. That'll become clear, I think, in the next slide. So for now, just consider these the commercial vendors. The points here on the slide should be fairly obvious, I think, to everybody. One that I'll call your attention to is the second bullet under interests, which is limiting the vulnerability of customers. It's questionable whether that should be broken out as a separate point. You can make the case, I think, that their only interest in limiting the vulnerability of customers is insofar as that limits the damage to their brand value and allows them to sell more software; that if there were no negative economic value in the release of security vulnerabilities, they wouldn't really care; that the customer being vulnerable, the customer losing control of private information and all that kind of stuff, is not a particularly motivating factor. Here I actually give the vendors the benefit of the doubt.
I think there is evidence to suggest that software vendors as a whole do in fact consider this to be part of their interest, not only in terms of its economic value but in terms of it being a good thing in general, and that they would probably be at least interested in improving the security of their software even if there were no economic benefit or avoidance of economic loss. There are a couple of anecdotes I could tell you about those things, and if we have time I'll get to those; if somebody's interested, make a note to ask me a question at the end about that. Okay, so researchers. The motivations here are also fairly straightforward: trying to advance the state of the art, building more security, and of course reputation. There's a parallel between the corporate desire to build and strengthen a brand and limit damage to it, and the building of recognition and respect for researchers among their peers, and this applies to all sorts of them. There are several subgroups within this category, broken down basically along the lines of the various financing sources they have. What I referred to earlier as non-profit researchers are generally either academics or hobbyists, and by hobbyists I'm referring to people who have a day job but are not getting paid to do the research per se; they're getting paid to do something else, and they do this on their own time, not as part of their job description. The academic researchers are generally funded by grant sources, granting authorities, and/or contracts. Commercial researchers are also funded by contracts and by software sales. There are a number of companies, BindView included, that are predominantly software sales companies, product companies, but that also maintain security research organizations within them and publish security vulnerability information. There are also others that take money to find these things.
Consulting organizations and some other groups take contract work to find security vulnerabilities. Somebody comes to them and says, we're thinking about using Firewall X, is that okay? And they go and look at it and see if they can find vulnerabilities in it. Some interesting things have been published from some of those sources, but a lot of that work is actually done under NDA and never sees the light of day. The primary interest on the part of researchers is to continue the funding source. No researcher really wants to jeopardize their ability to pay their rent or their mortgage, as the case may be. So whether it's their day job, the contracting they're getting from their customers, the software they're selling to their customers, or the granting authority, whether that's the Feds or some other organization, generally researchers are constrained by that source of funding and will not jeopardize it. This is actually one of the major reasons that we have had in the past, and continue to have now, lots of people who are essentially working under pseudonyms: because they are required to, or find it prudent to, separate their real-world identity from their security research. Primarily that's a financial decision that has to get made; not exclusively, but primarily. So these are the points here, some of the power relations that researchers have with some of the other actors involved. I don't need to read them to you; you guys can all read, I assume. Anybody can't read the slides? Okay. The underground, as I mentioned, is something of a problematic subcategory. There are a number of ways of defining the underground, and the points that I've put up here are somewhat inflammatory as a way of categorizing it, particularly in our present company.
However, what I'm doing here is essentially trying to break out what we might refer to as the black hats, or the malicious hackers, or the attackers, or the crackers, or whatever term you wanna put on it. In this case, I've chosen to call them the underground; it doesn't have to be that, it could be something else, and I'd be happy to talk about it afterwards if anyone has any questions. But there are some important differences between other researchers and what I'm referring to here as the underground. The most important one, actually, is that these are generally people who observe either zero disclosure or limited disclosure as an ideology, as opposed to the rest of the researchers, who adhere to either full disclosure or responsible disclosure. And the major reason for that is that in most cases, we're talking about folks who don't want anyone to come spoil their party. They don't want the vulnerabilities to be fixed. Improving the security of the software is generally not an interest for the folks we're talking about here. The funding sources are generally coming from the same places that we've seen with the others, but there is an important additional piece here, which is crime, criminal activity, as a funding source for some of these things as well. And the interest here is basically in the maintenance of vulnerable software. Governments are another important category of actors, and their primary motivation is what I refer to as the technocratic perception of public good. I think there can be little dispute that governments are trying to do what they think is best for everybody, balancing the competing interests of the various constituencies that they have. Now, how they arrive at what that is is a very interesting question.
Those of you who have taken a political science course know that we could talk about this particular aspect all day long, if not for an entire semester or longer. People have written big books about how governments come to decide what it is they're trying to do. But I think it's just important for us to consider here that the public good is in fact what the government is trying to accomplish. We may, and often do, disagree with what they think the public good is, but that is nevertheless what they're trying to do. There are a couple of points here on financing. Both taxes and campaign contributions are important pieces to consider, at least in our particular form of democracy, such as it is, and in some others as well. The campaign contribution is an important component of funding, and of deciding what the activities of the government are gonna be. You guys can read through all this stuff; if you have any questions about it, be sure to ask me at the end. The media is also a very important player in terms of wielding power in the whole disclosure debate, if you will. What they're trying to do is fairly straightforward: they're trying to get readership. They're trying to maintain and expand their revenue stream, which comes from either subscribers, people who want to read what they're printing, or from advertisers. Most media outlets have some mix of these funding sources; some lean much more heavily toward one than the other, and these produce different effects in the decisions about what news is fit to print and what angle a story should take. Financing and the interests of the readers are major components of how these things work. The power relations are essentially straightforward. All vendors want good PR. All governments want good PR. Most researchers want good PR. Most of us want the media to say good things about us and make us look smart and cool.
And the media have the power to do that, to decide who gets to look smart and cool, and they are very strong influencers of public perception. There's an interesting note there: there's a tremendous degree of fear in the general populace. Something like 70% of the American population believes that it is unsafe to buy things online. Who's bought something online? Pretty much everybody. Not everybody, but almost everybody. Did you feel safe when you did it? Who felt safe when you bought something online? Most everybody. Yeah, Mark Loveless, our illustrious RAZOR team member, was pointing out that he bought on his corporate card, so he didn't care. And you shouldn't care either, actually, even if you're using your own credit card, because your liability is limited by law, at most, to $50. And by standard trade practices, most credit card companies, unless they think that you really did buy it and you're lying, won't even charge you the 50 bucks of liability for misuse of your card. Most of them will give you full refunds of any charges that weren't yours. So the question then becomes, what are they afraid of? Whatever the media says they're afraid of, I would submit. But we can talk about that later too. The public: I mention it because it's an important component of what we're talking about, with the various perceptions of public good and people wanting to do what's best for everybody. It's, however, very difficult to talk about the public as an actor, and I've put these two points here: too chaotic to be relevant. You can't really talk about any one motivator, or even a set of motivators, that would fit onto PowerPoint slides in any meaningful way; there are just too many of them for it to make sense. There are competing interests, and the public doesn't necessarily act consistently, except insofar as they continue to buy software that sucks, which they do at length and with lots of money.
So that's an important thing to consider in the vulnerability debate. I'm gonna go quickly through some of the policy initiatives. I'm doing pretty well on time, so we should have plenty of time for questions and for talking about some of the crystal ball stuff at the end. Okay, first off is the Council of Europe's Cybercrime Treaty. This was passed a couple of years ago. The intent here was to harmonize and update European computer crime laws. The US actually participated quite extensively in the drafting of this treaty and has actually signed onto it, as have a number of non-European countries. Basically, all the treaty really is is a set of guidelines for what kinds of computer crime laws the various countries should adopt, and by signing on, they basically say, yeah, we're gonna do that. From a law enforcement point of view, you can see the benefit of knowing that just because the person who broke into your system is located in Germany or the Netherlands, I don't wanna pick on anybody in particular, Switzerland, whatever, you have fairly good confidence that you're gonna be able to prosecute them under similar, if not the same, laws as you have. So it makes sense from a law enforcement point of view and from a governing point of view. One of the components of the cybercrime treaty that is highly problematic is the provisions for the criminalization of the possession of so-called hacker tools. Is there anyone here who is not in possession of a hacker tool? Yeah, good question. The question was, what's a hacker tool? And that's not defined particularly well. Essentially, what's defined in the treaty is software, and hardware, actually, that is used to subvert the legitimate security mechanisms on computer systems and computer networks.
There is an important caveat within the treaty, lest anyone get too bent out of shape and go deleting stuff off your hard drives while I'm talking: the intent to use these tools in a malicious fashion is a necessary prerequisite under the law. The problem, of course, is that intent is very difficult to show, and it's also very difficult to establish that the intent was not there if something bad happened. There is no case law here yet; it'll be interesting to see how it forms up. But I think this particular provision will have a tendency to push us in the direction of certification requirements for security practitioners, insofar as you want to be able to define who has a legitimate interest in possessing these tools and who does not, and who has the legitimate authority to perform certain actions, like cracking password databases. If you're a security practitioner, the law very specifically says you do have the right, you are allowed under the law, to test the security of your systems by hacking them. It says that very clearly. But the question, of course, is who are you, and what's your authority in any particular case, and how do you establish that? On to the information sharing policies. I talked a little bit earlier about the ISACs. These have been moving along quite well; several of them have been running, and more are coming along all the time. The idea here is basically to get better intelligence within these communities so they can have better predictability of the attacks they may be facing and a better idea of which vulnerabilities are likely to be exploited in their environments. The idea is to help them stay a step ahead of the bad guys.
The problem, of course, is that the organizations that are members of ISACs have very little interest in discussing and sharing any of their information outside of the ISACs, which is problematic for security research, particularly academic research, which depends upon the free flow of information so that security researchers can know what is taking place out there and be able to improve the state of security. These ISACs have a tendency to keep the information enclosed within them. The information does also propagate up to the government, but it very rarely gets outside of those communities, which is highly problematic for public discussion and particularly for academic research. There are also what I call information haves and have-nots. People who are in ISACs have access to all of this information; people who are not in ISACs don't. They don't know what's going on, and they won't know what's going on. Now, the vision for the ISACs, to be fair, is that once these are all up and running, most companies will be part of them; the information will flow up to the government, and the government will then give it back out. Well, government doesn't really have a great track record on giving it back out. I see a couple of people laughing in the audience, right? And that's fair. One of the major criticisms of all of the public-private partnerships, like the NIPC and InfraGard, is that the private companies give their information to the feds and nothing ever comes back. The feds will defend themselves, if there are any present, by saying that they're restricted when there are prosecutions; they have laws that govern what they're allowed to tell other people, and that's cool. But then what's the benefit for us in giving them information in the first place? The disclosure forums have been a hot topic in the last couple of weeks, at least one of them anyway.
These are an important factor in putting all of this together. The idea, of course, is to get information out to everybody who needs it, to everybody who's running a system and wants to know what security issues are gonna be present on their systems. The other side, of course, is that these are open communities, open mailing lists. Anybody can get on. And often you have people who simultaneously have a legitimate interest in the information and are intending to use it for illegitimate purposes. That's problematic. One of the reasons we've seen these things grow, I think, and become the major source for information dissemination, is that it is really essentially impossible to make the distinction between who the good guys and the bad guys are. There's no good way to do that, as far as we know, anyway, so far. So it'll be interesting to see, I think, what happens with Bugtraq under new management. The Organization for Internet Safety was announced last November and hasn't really done much in the meantime. This organization's idea is essentially to promulgate responsible disclosure: to help vendors and researchers get together and be collegial with each other, to share information in the appropriate ways ahead of time, before public disclosure, and to make sure that the information going out to the public is what the public needs and not too much. They're trying to walk a fine line here, and it's an interesting idea, the idea being, of course, to limit the amount of information, as opposed to the classification of information, that's available to, again, the so-called bad guys. Of course, what they're doing, and I should say "we," by the way, since BindView is a member of this organization...
What we're doing is having the effect, I think, of limiting the information, again the quantity of information more than the quality, that's available to everybody: not just the good guys and the bad guys, but everybody. And I think there is some legitimate concern about the extent to which it will have a chilling effect on research in general as we move forward. The polite thing to do with vulnerability information, as we all know, is to let the vendor know ahead of time. And that takes time, it takes effort, it costs money in a lot of cases, not necessarily cash, but time that somebody is paying for. Pardon me. And it's basically a real pain in the ass. So there's certainly some extent to which researchers will, or may, decide not to do the research, or not to contact the vendor, or even not to publicize the vulnerability, because of the requirements placed on them by the kinds of policies that are gonna be advocated by the Organization for Internet Safety. There's been a lot of legislation in the U.S. that's come along in the last couple of years. Some of this stuff, like the FOIA and antitrust exemptions, is pending; they're not currently passed. And actually the FOIA measure is being characterized not as an exemption but as a clarification of the existing provisions within FOIA. The idea here is that when private companies and private parties give information to the government about security, about incidents and about vulnerabilities, their private information does not then become subject to the Freedom of Information Act and get exposed to the press and to their competitors in ways they don't want it to be.
FOIA actually already has a provision that private information shared for national security and law enforcement purposes is generally not releasable under the Freedom of Information Act. The idea here is to clarify that provision to include networking-type and system-type information, so that companies will feel more comfortable sharing this stuff. The antitrust provisions are also important for the ISACs, so that companies all operating within the same industry aren't open to charges of collusion for sharing security information. This stuff is somewhat controversial; there's been some press coverage about it. There's potential for misuse, of course: the FOIA provisions might be extended to cover things they weren't intended to cover, and the antitrust exemptions could likewise be used to allow things the law is intended to disallow. So it'll be interesting to see how these work themselves out. Given the current environment of corporate reform, I would actually predict that we'll probably see both of these pass, particularly in the current public climate regarding security, which doesn't show any sign of changing anytime soon. Legislation passed the House earlier this year to increase funding for both NIST and NSF, with additional research money made available for graduate fellowships and other kinds of research grants for improving security. The NSA, NIST, and a couple of other organizations are working on a single "gold standard" configuration. They're doing Windows 2000 first and will add other systems as they go along. The idea is to establish a baseline configuration that's better than the default: not necessarily highly secure, but a lot better than what it was.
They're suggesting they may be able to eliminate perhaps as many as 80% of the vulnerabilities likely to affect their systems just by establishing this baseline security configuration. FISMA is the Federal Information Security Management Act. It's intended as a successor to GISRA, the Government Information Security Reform Act, which was passed about two and a half years ago and has a sunset provision; it's due to expire this year. The primary provision of both of these laws is to require federal agencies to file statements of their security posture and the incidents they've had in the past year. I believe they file with the General Accounting Office, though I wouldn't be certain of that; it might be OMB. In any case, they have to file with a central authority within the federal government to make sure they're doing the right things. FISMA actually just passed the House as an amendment to HR 5005, the authorization bill for the new Department of Homeland Security. The DMCA and the Patriot Act have both been covered in other presentations this weekend, or will be covered later today, so I don't want to go into any more detail on them. I did hear this morning, by the way, that the complaint HP had filed against SnoSoft has been withdrawn. They didn't actually get to a formal complaint with law enforcement, but they've withdrawn the threatening letter they sent. And there's an interesting legal debate, I think, as to what extent and how the DMCA applies to security research and the reverse engineering of mechanisms that aren't protecting copyright. Okay, I went really fast through that. That's good. I've done this presentation a couple of other times, and it usually takes me about 50 minutes to get this far, so I'm going to try to get through the rest pretty quickly.
There are a couple more slides here, and then hopefully we can have some questions. So, trends: increasing legislation, clearly. Just on the previous slide there are a bunch of things, and I suspect we'll probably see more. The primary thrust of a lot of the legislation is actually to improve the definitions of cybercrime, to make sure the laws actually reflect current technology, and it'll be interesting to see whether they manage to get it right; the legislators aren't exactly tech-savvy, for the most part. We'll see more, and improving, communication channels, largely in the form, at least in private industry, of the ISACs, and we'll probably see increasing attempts to improve communication between private industry and the public sector. I think it's unlikely those will be particularly successful, at least in the short term. We'll see more and more research being done, and more and more software being put out there that has security flaws. The rate of new vulnerabilities being announced has been increasing at approximately 90% per year since 1992, and it shows no signs of abating, which puts us on target for on the order of 2,000 new vulnerabilities announced in 2002, and we'll expect about 4,000 in 2003. That'll be fun. I think we'll probably see more vicious attacks. We've seen attacks getting more aggressive and more automated: they're going faster and becoming more complex, multi-pronged or multi-vector. We'll probably see more multi-platform worms and more multiple-vulnerability worms coming along. And I think we'll start to see stuff, maybe not by the end of this year but perhaps next year, that is more destructive than what we've seen in the past in terms of the mass-attack worms.
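As a back-of-the-envelope check, that growth rate is easy to project forward. The short sketch below uses only the figures quoted in the talk (the ~90% annual growth rate and the ~2,000-announcements-in-2002 baseline are the speaker's estimates, not independently measured data) and shows how the "about 4,000 in 2003" figure follows from compounding.

```python
# Back-of-the-envelope projection of new-vulnerability announcements.
# The growth rate and baseline are the estimates quoted in the talk,
# not measured data.
GROWTH_RATE = 0.90       # ~90% more new vulnerabilities each year
BASELINE_YEAR = 2002
BASELINE_COUNT = 2000    # ~2,000 announcements expected in 2002

def projected_count(year: int) -> int:
    """Project announcements for a year by compounding the annual rate."""
    return round(BASELINE_COUNT * (1 + GROWTH_RATE) ** (year - BASELINE_YEAR))

if __name__ == "__main__":
    for year in range(2002, 2006):
        print(year, projected_count(year))
```

At 90% a year the count nearly doubles annually: 2,000 in 2002 compounds to 3,800 in 2003, which the talk rounds to "about 4,000."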
Nimda, for example, is sort of our poster child. It was very expensive to clean up, because it would scribble all over the system, but it didn't really destroy anything, or much, in the course of doing its thing. So in terms of damage it didn't do that much, but it was still very expensive. The continuing penetration of internet access will have an effect on this, too. It will continue to raise the profile of information security with the public and in the press, or perhaps I should say in the press and subsequently with the public. Here are some wild-ass guesses for you. Will the public demand security? Probably not. There's no sign that they have in the past, and no sign that they will, except insofar as they may demand security in the context of privacy. That's possible. We've seen legislation, of course, with GLBA and HIPAA, which create privacy standards for financial information and healthcare information respectively, and the public seems to like that kind of stuff. We may see more of that for more kinds of information, perhaps even e-commerce at some point, where there will be regulations dictating what kinds of privacy practices, and the corresponding security practices, you need to have in place if you take a credit card online, for example. That's an interesting idea, but I don't think we're going to see it within the next three years. Who will pay for security? That, I think, is a very interesting question. Consumers do not appear to be willing to pay for security. They certainly don't make the more secure choices in the operating systems or software they purchase, and that includes corporations, by and large, with a few notable exceptions. Interestingly, they will pay for security devices: software and hardware that will help them improve their security, that will do security things for them.
But you'd be hard-pressed, I think, to give examples of where people will actually take the more secure software choice. You want to address that? [Audience: How do they know which one is more secure?] Yeah, that is a question; did you ask that the other day, too? Somebody asked me the same thing when I gave this at Black Hat. How do they know which one is more secure? It's an interesting question, and by and large they don't, of course, because there's no easy way to do that. But they don't have to know which is more secure factually. All they really have to know is which one is marketing itself as being more secure, because, as we all know, technology doesn't drive the decisions; marketing drives the decisions. Somebody is sure to raise their hand on that one. But anyway, there are cases where software vendors have essentially advertised, "this is the secure stuff," and it does not drive software sales. Well, that's a more extensive discussion than we have time for right now; we can talk about it afterwards if you want. There is some indication, by the way, that the government may step in here. It's very vague at this point, and I think it's highly unlikely, but it is possible that the government will help subsidize secure software engineering practices at software vendors, which is a very interesting idea. I don't know if it's going to happen, and I think probably not, but there are some rumblings around Washington on this. I didn't delete this slide from Thursday, so: lessons from recent events, the HP DMCA threat.
For those of you who don't know, by the way, a vulnerability was published in Tru64, the UNIX operating system that came to HP through the Compaq acquisition, and HP's response, rather than fixing it, was to send a threatening letter saying, you may be subject to five years' imprisonment and a $500,000 fine. That generated a ton of bad press for them, creating a story where there wouldn't otherwise have been one. The security community, of course, would all have said, oh, HP, they suck, they didn't fix their thing, but nobody else would have cared. Instead, they got mainstream press coverage over the issue for threatening the security researchers. And it was an interesting contrast for this to come the day before Richard Clarke stood up in front of Black Hat and essentially told the assembled throng that it was everyone's moral obligation to go find vulnerabilities in software. It's an interesting contrast between that and the possible provisions of the DMCA. I am not a lawyer; there are lawyers here, I'm sure, who can address this in more detail. Everyone asks me, when I talk about this stuff, about liability laws and how they will affect the security of software: whether software vendors will be held liable, or will have to give up their effective exemption from product safety law, and so forth. I don't see it changing anytime soon. I think it will change eventually, but probably not for a long time, and I think there are a couple of reasons for that. Primarily, the economic growth of the last 15 years, if you look at it, was driven largely by increases in productivity that were generated by improvements in information technology. The extremely rapid rate of innovation was largely responsible for the economic growth that we've seen and continue to see, just at a smaller rate.
And the case is made, rather strongly if not necessarily factually, that increasing liability would reduce the rate of innovation to the point where information technology would no longer be able to be an engine of growth within the economy, and no politician in the world wants to do that. Lots of people in the public don't really want their politicians to do that either, we should point out. So, to wrap up: I don't think there are any major changes on the horizon. I think we are drifting in the general direction of more secure software. The state of the art of software engineering is improving, and we are seeing people take security more seriously, particularly at major software vendors, not to name any names. But I think that progress in security is largely offset by the increase in the complexity of the software. There's more stuff out there, and the stuff that is out there is far more complex than it was. I don't think anyone would dispute that complexity is the antithesis of security, and that's what accounts for the continuing growth we're seeing. We are improving the software engineering, but it doesn't really matter in the big scheme of things, because we're making other things worse at the same time. So, we've got time for questions. Yeah. [Audience question.] Yeah, the questioner is pointing out a potential omission in my slides: the funding of research by the selling of exploits and the selling of vulnerability information. I think it's a good point, although I don't think it's significant in the overall picture right now. There have been a couple of examples of places where people have tried to do that. In most cases, those have really been considered more blackmail than business; most of the cases have been extortive in nature. So someone calls up the company and says, I found a flaw on your website. There have been a couple of cases of this.
I downloaded all your customer credit card information, and pay me money or I'm going to publish it and make you look bad. That we have definitely seen. We haven't seen anybody yet make a legitimate business out of it, except insofar as they're doing the research on a contract basis, evaluating the software. If you want to spin it that way, you can say they're not being paid to do the research, they're being paid to supply the vulnerability, right? But generally the customer for that is using it to leverage the vendor into improving the software. So I understand your point; I think it's a minor concern, and I don't think we're going to see growth in that in the future. Let's get a couple of other people in before we wrap up. Yeah, good. The question is, how do you get into an ISAC? I am not an expert on ISACs; I don't actually know. There are probably folks here who can answer that question better than I can. But if you're in an industry that has one, do a web search; they probably have a website or some contact point. The office that coordinates them is, I think, the Critical Infrastructure Assurance Office, out of Commerce, and they could probably point you in the right direction too. I think it's CIAO that does it. Yeah. [Audience comment, partially inaudible, suggesting the term "product defect" as a consumer-protection matter.] Can everybody hear back there? The comment was that, rather than "security vulnerability," which I'm generally using to refer to the entities within the software that are at issue here, "product defect" was a better term, and more factually accurate.
On that view, what we're talking about here is not flaws in software but defects in consumer products, just like what we saw recently with the Ford/Firestone affair, the tires failing and so forth. And it's a good point; that certainly is a debate. One of the reasons I didn't frame it that way, though, is that I think it's a different debate than the one I'm having here. That is a particular political strategy: to attempt to rectify the problems we've been talking about through, essentially, product liability law. I alluded to that briefly at the end, and I think it's a reasonable tactic, a good thing to be trying in order to accomplish something. But I specifically avoided it because I don't want to be advocating that point of view. My purpose here was really to lay out the terrain rather than to advocate a particular position, and I tried, with mixed success, to avoid terms like "product defect" that are somewhat inflammatory in this kind of talk. But it's a great point. Yes, the gentleman back here. Right, yeah. I think that's probably the only way we will see it happen. The comment was that consumers never demanded seat belts and airbags in cars, right? Public interest research groups made the demands of government, lobbied government, government made the laws that required the manufacturers to provide the safety equipment in automobiles, and the consumer, of course, ends up paying for it in the purchase price of the vehicle. The question is, do I see that happening in software? The answer: not anytime soon. I think we're looking at probably a five-to-fifteen-year timeframe before we get there. Keep in mind, automobiles were being purchased without seat belts for at least 50 years before any of this happened, right? And there's a way in which the analogy with the automobile breaks down, I think. And I've got to wrap up after this one.
I apologize for the questions I didn't get to; I can take them afterwards. There are two major things there. First is the physical safety of the user. I think we'd be hard-pressed to produce an example of someone being physically injured by their computer crashing, okay? Now, there are special-purpose computers that may be exceptions to that, where the computer is part of another device that is involved in personal safety, in a car or something like that. But for a PC, a desktop workstation, or even a file server at a corporation, a crash is not putting somebody in physical jeopardy. Again, there are specific examples where that's not true, whether it's a control system for a nuclear power plant or information being used in surgery or something like that, but those are different cases, I think, from the bulk of what we're talking about in the consumer market. So that's where I think the analogy breaks down a little bit. The other thing, and this is the last point of the talk, is that the science of mechanical engineering is far older than the science of software engineering. Mechanical engineering had been around for a long, long, long time before the automobile, let alone the safety applications within the automobile, and we understood what we needed to do in order to make people safe. It's not clear to me that we understand what to do in order to make software secure. Okay, that'll have to be it; we're out of time. Thanks, everybody, for coming; great questions. Thank you.