This is the opening keynote from Black Hat that Niko asked me to repeat here. It's a pretty corporate topic; if that's not your thing, Bruce is across the hall and is always an amazing speaker. So what I want to talk about today is the way that enterprises and security folks misunderstand each other. I started my career at the RAND Corporation as a security researcher and engineer for various agencies. After that I became a consultant for a few years. Then I went to run security at Charles Schwab. While I was at Schwab I changed careers from focusing on security to focusing on broader areas of technology. I left Schwab to become the CIO of Google, and I was there for a long time. About a year ago I left Google to go to EMI Records as president of New Music. So I've done a pretty broad career arc, from a pretty classical engineer to a senior executive at a Fortune 500-scale company; EMI Records is several billion a year in turnover. What I have learned over those years is that we all talk to each other wrong. Security guys talk to executives and executives don't understand. Executives talk to security guys and security guys don't understand. And we end up wasting a lot of money and doing a lot of the wrong things. What I want to talk about today is why that happens, in terms of incentives, psychology, and structure. So, the title of this talk is stupid: Is That You, Baby, or Just a Bridge in the Sky? Hang on while I do this; I need two hands. So the good news, bad news story of security and corporations today is that CEOs are listening to us. On average, 80% of... Sorry, one more thing before I start. I'm going to quote a bunch of statistics in this talk. The statistics come from a bunch of different sources. The research scientist in me thinks that each one of those statistics is somewhat wrong. Some are very wrong, some are slightly wrong. There's bad research methodology; in one case, the math was done wrong.
But generally, instead of worrying about the specific number I quote, I want you to think directionally about what it suggests. Caveat. And: 80% of CEOs surveyed believe they have been the victim of a security breach. Eight-zero. That can't possibly be right. If you look at the Privacy Watch data for the last few years, even if you add them all up and multiply by 10, you still don't get 80% of major companies. So CEOs think they're in incredible danger, even though the math doesn't suggest they are. There's a famous set of decision-making studies done by Kahneman and Tversky; that's Danny Kahneman and Amos Tversky. They studied decision-making biases, which they called heuristics. And there's a whole bunch of them: systematically, things that you do wrong when you think you're doing something right. In this case, there's one called the availability heuristic. The availability heuristic works like this. If I ask you to tell me about a time when you went to a restaurant, ordered your food, and got something else, did not get what you ordered, some number of you will come up with a story: oh right, that happened to me at that Chinese restaurant in Evanston. If after that I ask you, how likely is it that you will go to a restaurant, order food, and get something you did not order? Those of you who came up with a story will estimate the likelihood of that happening vastly higher than it actually is. Because you have a story available to you, you will think that it happens more often and to more people than it actually does. CEOs spend all their lives being told by people: oh, security is incredibly bad; oh, security in the world, you're in trouble. So they have lots of stories available to them. So they compute the risk, that 80% number, vastly higher than it actually should be. So basically they're terrified of us. And as a result, they're writing us bigger checks.
About 30% of all companies in the Fortune 1000 have security budgets growing by 5% or more this year, while, concomitantly, IT budgets are going down by something on the order of 7% to 10% on average. So security spending is going up markedly as a percentage of IT spending. So apparently our fear is working. We're scaring the CEOs and they're writing us checks. The problem is they don't have any idea why they're writing us checks. CEOs were asked some combination of the following. What's a security vulnerability? Give an example. What's a breach that you've experienced? Give us an example. Basically, just tell us anything you know about security. More than 35% of CEOs could not say anything about what security actually means. What's a vulnerability? Anything. They have no idea why they're writing us checks. When I first read that number, it scared me a lot. How can one in three not know what they're doing? There's a group called FindLaw, which studies legal understanding in the United States. They did a survey recently of ordinary people in the U.S., asking them to name one Supreme Court justice. 45% of respondents couldn't name a single Supreme Court justice. The number one answer, the modal answer, was Judge Judy. Turns out she's not actually on the court, in case any of you were wondering. The problem with scaring the heck out of our CEOs when they don't understand the subject is that fear is a bad long-term motivator. Ask Louis XVI. That was a joke. Time to laugh. Well done. So we decided that we needed to learn to speak CEO better. The fear wasn't going to work, so we learned to speak ROI, return on investment. The problem is we can't compute it. We're quite bad at it, as a matter of fact. We're bad at it for two reasons. One, we measure return on investment only in terms of downside protection. And second, we don't compute downside protection very well anyway.
So if you look at the best practices out there for computing security ROI, they list a whole bunch of things that you should count. One of the very, very top ones they list is avoidance of reputational damage. What is reputational damage? What does that mean? Who does that hurt? Let me give you an example. There's a company that made children's clothing called Carter's. In the 1970s, they released an advertising campaign targeted at the people who normally bought children's clothing, which at the time was women between 35 and 50. Their ad was very soft color, slight focus, really pretty music, and a picture of a mother watching her child play on a playground. And the tagline was: if they could only stay little 'til their Carter's wore out. The idea being: don't care about the kid, care about the clothes. Guess what? Everyone hated it. They did a survey of those target people, the 35-to-50-year-old women, and they found that an incredibly substantial number of them had a much worse impression of the Carter's brand, and a hugely large number of them said they would never buy Carter's again. Guess what happened to their sales? They went up by almost 30%. I don't know what this reputational damage thing is. The second component of ROI is: oh my God, you're going to have lost productivity. The systems go down and the world ends. I used to be responsible for an NYSE trading system, a trading system that executed trades on the New York Stock Exchange. It is a big, complicated system with lots of moving parts, and it broke sometimes. If it broke at 2 in the morning New York time, did it matter? No, nobody cared. If it broke at 9:30 a.m. New York time, which is market open, did it hurt anything? Normally people are nodding their heads right about now. Oh, finally, I find someone nodding their head. The problem is, actually, it didn't. When the system goes down at market open, it's a pain in the neck. Damn it, IT sucks.
And the brokers start pulling paper tickets, and they talk to their customer, who says, I want to buy a thousand shares of Google. They say, right, I got that. They write out a piece of paper and they set it on the side of their desk. Exactly the way the NYSE ran until the early 1980s. So what happened when we lost that system? It was annoying. The whole process was far less efficient, but it didn't actually hurt anything. Lots and lots of systems turn out to be able to run just fine without the technology; they just run less efficiently. So the ROI move of taking the annual revenue for a system, dividing it by hours, and then multiplying that hourly cost by the number of hours of downtime doesn't make any sense. It doesn't work. And finally, we compute ROI without counting sales that are enabled by security. Let me give you an example. When I was at Google, some engineers did a really interesting study. They looked at online retail, particularly small-store online retail, and they tried to track through what happened during the sales process. So some arbitrary web surfer finds a site. They look around the marketing material and say, oh right, I want that bright green Adidas shirt. Thank you very much; I'm going to pick on you the whole talk. I want that bright green Adidas shirt. And they go through and they figure out how to add it to their cart and see how much it's going to cost. And then two out of every three bail out. Two out of every three carts with at least one thing in them were abandoned. That's of course the worst of all possible worlds for small retailers, right? They've spent all the time getting you to their site. They've spent all the time getting the marketing material up. They've convinced you to buy, and then you go buy at Amazon. All the cost, none of the revenue. There's a whole bunch of reasons why. (Apparently they're adding emergency exit signs; in case there's a fire, please find the red sign.)
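The downtime-ROI arithmetic being criticized here can be made concrete. This is a hypothetical sketch; the revenue and downtime figures are made up, and the point is that the formula assumes every lost hour destroys a pro-rated hour of revenue, which the trading-desk story shows is false.

```python
def naive_downtime_cost(annual_revenue, downtime_hours, hours_per_year=8760):
    """The 'best practice' ROI formula: revenue per hour times hours down."""
    revenue_per_hour = annual_revenue / hours_per_year
    return revenue_per_hour * downtime_hours

# A $1B/year trading system down for 4 hours "costs" roughly $456,000 by this
# math, even though, as in the NYSE story, brokers fall back to paper tickets
# and little revenue is actually lost.
print(round(naive_downtime_cost(1_000_000_000, 4)))
```

The formula is easy to compute, which is exactly why it gets used; it just measures something that often is not real.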
When customers dropped their carts, we went off and asked: why did you do that? There are all kinds of reasons, as you'd expect. But the number one reason was: I don't want to create another account. I don't want to have to re-enter my shipping information again. I don't want you to send me emails later. I don't want to give you my credit card information. I've got a thousand accounts; I don't want a thousand and one. So this group of engineers came up with a product called Google Checkout. Google Checkout allows you to enter your credit card data, your shipping information, et cetera, once, into Google. And then, with about ten lines of JavaScript, the retailer can use Checkout as a checkout provider. So from the user's perspective, they click on one button. They enter their Google password. Their credit card gets billed. Their shipping information was entered once. They can choose whether they get, you know, spam from the retailer or not. So, not a bad product. So what do you think happened to cart abandonment rates? They went from two out of every three carts abandoned to less than one out of every three. So the retailers, simply by getting rid of the need to make accounts, doubled their sales. Well, doubled their sales or halved their failure rate, depending on how you count. So that's an example where a little tiny bit of security, namely the creation of Google Checkout, created a vast quantity of additional revenue. That's also never computed in ROI. We're computing ROI to speak executive, and we're doing a very bad job of it. And then, of course, when the CEOs are done being scared of us and our made-up ROI doesn't work anymore, we pull out the tactical nuclear weapon of information security. We say: but we're going to have a security breach. And then of course everyone quakes in their shoes and runs to the hallways.
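The "doubled their sales" arithmetic above is worth spelling out. This is a minimal sketch with a made-up visitor count; only the abandonment rates come from the talk.

```python
from fractions import Fraction

def completed_carts(carts, abandonment_rate):
    """Carts that actually check out, given an abandonment rate."""
    return carts * (1 - abandonment_rate)

before = completed_carts(900, Fraction(2, 3))  # two of three carts abandoned
after = completed_carts(900, Fraction(1, 3))   # one of three carts abandoned
print(before, after)   # 300 completed carts before, 600 after
print(after / before)  # completed sales exactly double
```

Halving the failure rate and doubling the success rate are the same event here only because the rates happen to be 2/3 and 1/3; with other numbers the two framings diverge, which is why the speaker hedges "depending on how you count."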
So it's interesting: if you look at Verizon's study of security breaches, they counted breaches pretty generously, and even so, more than 90%, nine-zero, of all breaches were pretty trivial things. So the incredibly cool security we're teaching and telling and selling to try to avoid security breaches by and large doesn't really matter. If you look at the Privacy Watch numbers for 2009 (obviously a partial data set, because it turns out 2009 is not over yet), 16%, one-six, of all security breaches are from stolen laptops. 11% are from people putting paper copies of user data on the curb. I mean, seriously, buy a shredder. So by buying your employees shredders, you could eliminate 10% of all breaches. By doing something cute with laptops, you could eliminate another 15%. So a quarter of all breaches could be fixed trivially. Again, we're using this data to suggest that hugely expensive projects get run and lots of compliance work gets done, when in fact we could make a hugely material decrease in our breach risk by buying shredders. We're trying to scare our executives into spending money, and they shouldn't spend it on us. And to some extent they agree. So, do y'all recognize the reference? This one goes to 11. Spinal Tap. The funny thing about this slide, which you can't see because the resolution's not high enough, is that it's labeled in a thousand-point font: this one goes to 11. You know what's interesting? The dials only go to 10. Seriously. Anyway. So even though we're scaring our executives, and they're writing us bigger checks, and we were pretty able to scare them even more with bad security-breach numbers, we're unhappy with the amount of money they're giving us. Security officers are about half as likely to say that security spending is adequate at their company; CEOs are twice as likely to say security spending is adequate at my company. That seems odd. Doesn't the CEO get to decide what adequate is?
I don't know. So why are we so different? We're so different because we're trying to spend money on things the CEO doesn't care about. CEOs are three times as likely to say that they want the biggest focus of security to be on business continuity planning. They want our systems to stay alive, because, of course, you know, fires happen. In one of my data centers, one time a car drove through the wall. I mean, stuff happens. I'm not kidding; that's actually a real story. Note to self: armor your walls. So the CEO is worried about keeping the business alive. We're worried about what? Compliance. We believe that the number one spending opportunity for security is compliance. It's interesting that compliance barely makes the CEOs' list, because the CEO is criminally liable for compliance. And they don't care. Yet we care. Interesting problem. I had a friend who made a very, very, very healthy living selling HIPAA compliance for a while. He would show up at customers and give them a sheet of paper, his gilded list from above, that said: here, you must buy these pieces of technology to be HIPAA compliant. And it was a pretty expensive list. The problem? It's not right. If you look at the Health and Human Services statement of the security rule, the final statement of the rule, part of what they do is list comments. People can write in and say: hey, you know what, I think this rule should have this or that, et cetera. And the committee responds to each of those comments. In the HHS comments on the security rule, there were several thousand pages. Very, very high on the list is a set of comments that they summarized by saying: the U.S. government has no interest in specifying technology to create HIPAA compliance. So my friend was selling a list of technology to become HIPAA compliant, an approach which is explicitly ruled out by the standard.
After he convinced them to buy all this technology, he came back and said: hey, by the way, you need two-factor authentication. It turns out I'm an expert on smart cards and PKI. So here, please write me another check for a couple of million dollars. If you look at the NIST standard itself, section 4.4.2 I think, it directionally includes the statement: systems must be able to authenticate themselves either at the user level, the process level, or the role level. Which means HIPAA compliance could be achieved with a shared password. The compliance rules don't actually say that we need to spend the money, or have the focus, that we're convinced we need. CEOs don't agree with us because they think we're wasting money on compliance. Now, the theory of corporations. Corporations exist primarily to create value for their users or clients, to take some of that value back as revenue, and to do so in a way which is respectful to their employees and their shareholders. Companies exist fundamentally to make revenue and profit. We can argue about the morality of that, but that is sort of ground truth. We like our corporations to make money and profit because it means we have jobs, and it means we get bonuses and our stock options go up. So, like anyone else in the company, we'd like to do things that help our company be more successful. If you look at what makes companies more successful, there's been some interesting research over the years, like probably 500,000 papers on it. There was a study not very long ago about returns to companies that were on the Fortune best places to work list. Have you all seen this list? They do it once a year. It's the top 40 places to work, or best places to work, or something. It's a pretty good study, but a lot of things go into being on the top-40 list. A really big one is employee satisfaction.
A recent study of the companies that made the top-40 list showed that if you were on the list, you made about twice the returns of a matched company that wasn't on the list. So if there's a company like yours that's not on the best-places-to-work list, and you're on it, you are likely to make twice the return. Sounds like a nice thing to do. How do you generate employee satisfaction? There's a long series of studies showing that employee satisfaction is largely driven by employees' ability to innovate, which is often described as their work flexibility. So if you allow your employees to innovate more, you're materially more likely to make twice the return. It's a nice problem, right? However, the CLC, the Corporate Leadership Council, did a study last year asking more questions about what employee flexibility looks like. And what's interesting is they found that, at a large number of companies, employees would say: yes, I have the room to innovate; however, when I innovate, I feel like I'm breaking the rules. I feel as if I'm acting irresponsibly, okay? In those cases where employees have the freedom to innovate but perceive themselves as being irresponsible as a result, there was no additional return. Those companies looked exactly like the companies where no innovation is possible. So if you make your employees feel like thieves for innovating, you lose all the benefit of it. Interesting. So that suggests we should find a way to make our employees able to innovate and do cool things, and to do so in a way which doesn't make them feel like they're breaking the rules. That is, we should provide flexibility, particularly around security. Except that breaks all the rules we understand; that pushes all of our buttons. It pushes all of our buttons because we're fighting the last war. We all grew up in a world where people would drive to their offices, open their office doors, turn on their computers, and they're suddenly at work.
And at night, they turn off their computers, they get back in their cars, they go home, and they're suddenly at home. So that's the notion of work-life split, primarily around showing up to an office to work. This notion of work-life separation is actually a pretty new notion. It first appeared with Taylorism; Frederick Taylor was the scientific-management guy from the early 1900s. Before that, the notion that work and life were separate didn't really exist. You still had work-life balance, but it was done differently; work and life were far more integrated. This view that work and life are separate isn't supported by anything today, either. 50% of employees report having used their personal IM accounts to do work. That's kind of the worst of all possible worlds, right? They're using some third-party system that we don't understand, in an insecure manner, to talk about work. Why are they doing that? They're doing it because our systems stink. When I was at EMI, we used Exchange as our calendaring system, which is a great system, obviously, but we used a particularly horrific remote-access methodology. Horrific in the sense that I couldn't get on a materially large number of times. And I spent a lot of time on planes. I wanted to be able to check my calendar from the Singapore airport. So what did I do? I had my admin copy every single work appointment I had into my private GCal. Yes, since I was accountable for security policies, I needed to censure myself for doing that. But all of a sudden, by spending a little bit of money on salary for my admin, I got a vast improvement in my ability to work. I found a way to be flexible by breaking my own rules. Not surprising EMI is not doing all that well; but then, on the other hand, EMI is a record company. The reason people like to use consumer software, like Google Calendar and versions of Gmail, to do work, as I said, is because enterprise software hasn't been that great lately.
Twenty years ago, the best and brightest engineers went to work for enterprise software companies. The availability of enterprise software was hugely greater than the availability of any consumer software. The functionality of enterprise software was great. You could do anything you wanted, all the time. It was great. Over the last ten years, a large amount of innovation has moved from enterprise software to consumer software. The availability and security of consumer software have spiked. I mean, the availability of Google.com is pretty high, certainly higher than my Exchange instance was at EMI. People want to get their work done there. They want to get their work done in a way that makes them feel good. And increasingly, consumer software makes that easier to happen. Easier to happen; that's not even English, is it? We don't really want that to happen. We really don't want people to merge work and life. We all grew up in a world where what we were trying to do was build a ring of security around our systems. We're trying to build great boundary security. So we build firewalls. We do all this trying to keep the people on the outside out. However, we don't believe in our boundary security. If we did, we wouldn't spend millions of dollars on intrusion detection systems. Our employees don't believe in or like our boundary security model. If they did, we wouldn't be hunting for rogue Wi-Fi access points all the time. No one believes in the system we have, and yet we're still doing it. If you look at some of the best-practice security controls that are floating around out there, the number one best-practice security control is: turn off wireless networks. That's clearly not the right answer. Another in the top ten, commonly, is: add restrictive proxies so people can't use instant messaging. See earlier point.
If you ask employees why they're using instant messaging, the number one reason, the modal reason, is that they want to announce presence across their work and their personal lives. Putting restrictive proxies in place to block that means that all those people are annoyed, and they're going to find some way around it; they're going to IM inside a Gmail window. They're going to find some way around what you want and what you're doing. So we need to change our tactics. We need to be flexible in a way that doesn't make them feel like they're cheating, because we want those outsized returns, right? We want our bonus to grow, and that means our tactics need to change. We can't fight the last war anymore. We need to fight the next war. The next war is this mixture of work and life. The next war says that Taylor was wrong, and separating work from life isn't the right approach. At this point in the talk, I'd like to use an analogy which is almost certainly apocryphal. There's a long-standing story about building college campuses. It starts like this: architects build these beautiful buildings, and students have to get from one building to the next, so the architects put concrete sidewalks between this building and that building and plant grass everywhere else, and it's lovely and practically bucolic. After a few months, the architects notice that there's dead grass along paths between buildings, because the students are walking across the grass rather than walking up and down the sidewalks. So what do the landscapers do? They put up poles and chains to block the grass. You can't walk across the grass. They're trying to force people to walk on the sidewalks, and they replant the grass. After a few more months, it turns out the grass died again. Oh, people must be stepping over the ropes. This is terrible. What should we do? I've got it: let's plant hedges and put out flower pots and all these other elements to keep people on the sidewalk.
Suddenly, you've gone from this nice open bucolic campus to this thing which is chopped up by all these different barriers. You've got ropes over here, you've got hedges over there, and all of a sudden it no longer looks like what we tried to build. And all those things got added because we wanted to convince our students not to do what they wanted to do. That's pretty clearly the wrong thing. Suddenly, you're wasting vast quantities of money rather than simply listening to your users. An alternative way to build campuses is to put up the beautiful buildings and plant grass everywhere. Then, in four or five or six months, notice where the grass is dead, and put sidewalks there. You accomplish the same goal, right? You have sidewalks to keep you from killing grass. But you go at it entirely differently. You go at it by asking: what do your users want to do? And then you find some way to secure that, find some way to make that work. That is the next battle for us. What do our clients want to do? What do our employees want to do? That is the challenge for the next generation of security. We face this challenge not because we're stupid, but because we, too, are affected by psychology and incentives. So let's assume you're a security officer and there's a major breach. You shuffle into the CEO's office and say: hi, boss, how's the coffee? Listen, it's been kind of a bad day. It turns out, well, someone got hold of all of our credit card records, and about half of our users now have credit card fraud, and it's really expensive, and my day is kind of bad. What do you think is going to happen next? Your CEO is going to say: well, thanks for the information. Your day is about to get a lot worse. Go see HR. We are heavily incented to protect against downside, because we're going to get blamed for it. And of course, we have the availability heuristic as well, right?
Every day you're walking around, going to the bathroom, and someone grabs your arm and says: hey, did you hear about this problem, or this breach, or I don't understand why SMS isn't secure? (A reference to a talk yesterday. Anyway.) So every time someone asks us about security, we have all kinds of stories available to us. So again, we're going to overestimate the likelihood of a problem. Our incentives are wrong and our psychology is wrong. On top of that, every day you have a call from a vendor who comes into your office and legitimately tries to sell you a solution. They're not being sleazy; that's their job. Their job is to sell you stuff. To sell you something, they have to convince you that their solution solves a problem that you have, which may involve convincing you that you have a problem in the first place. This adds to your fear. So basically you live in this miasma of information: press stories telling you bad things, people stopping you in the hallway telling you bad things, vendors telling you bad things. So of course (and of course something plays a loud noise in the middle of a talk), of course you're going to overspend on security. You can't help it. So we have to have a different way to think about security. Fundamentally, security needs to no longer be your problem. There are several ways to do this. You can change jobs, change careers, work for a company that's in a different country, and then ten years later you're not in security anymore. Of course, at that moment you're me, giving a security talk ten years later. So that approach doesn't really work. There is a different approach, however. You can stop making yourself solely responsible for security. The security team at Google did a great job of this. I was really, really impressed by what they did. They fundamentally distributed security across the entire organization. Every piece of code that got pushed to production had to be code reviewed.
So if someone wrote the code, someone else checked it. Part of what they checked for was common security problems. And so basically when they reviewed, they would say: yep, this isn't a problem, this isn't a problem, looks good to me, next. QA did their testing, all the functional testing, the regression testing, et cetera, and they also did security testing. Again, you know, not perfect, but materially raising the bar. We also had a different working structure. We didn't specify the endpoint that our engineers could use. Some used Windows machines, some used Macs like me, some used Linux machines. People used everything they could possibly think of. So we didn't believe we could have endpoint security that worked, because we didn't have any endpoint control. So we built security far, far into the infrastructure. We didn't run antivirus on endpoint machines. We ran it on the mail servers, which of course are the primary virus delivery path. We ran applications near our routers to say: if some link suddenly has a very strange traffic shape, something's probably wrong; go check that machine. We had automated systems to track confidential information, et cetera. A study recently showed that 50% of organizations have at least one person largely dedicated to checking employee behavior on the internet. One out of every two of you is paying narcs to watch your employees. There's no one from the DEA in the room, right? Okay, good. So, before I get arrested for using the word narc: the point of all this effort was to try to spread security across the entire organization. But then we did something which spread it beyond the organization. Larry Page came to us and said: you know, I think we should put it in Gmail. We should tell our users who else has logged on to their Gmail account. And we were like: hmm, I'm not sure what that means. So we went round and round and round, and ultimately we ended up with what's done today.
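The "strange traffic shape" idea mentioned above can be sketched as a simple baseline check. This is a toy illustration with hypothetical numbers and threshold; Google's actual monitoring was certainly more sophisticated.

```python
from statistics import mean, stdev

def looks_anomalous(history, current, z_threshold=3.0):
    """Flag a link whose current traffic is far outside its recent baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_threshold

baseline = [100, 110, 95, 105, 102, 98, 101, 99]  # bytes/sec samples for a link
print(looks_anomalous(baseline, 104))  # ordinary traffic: False
print(looks_anomalous(baseline, 900))  # sudden spike: True, go check that machine
```

The appeal of this approach is exactly the one the talk makes: it lives in the infrastructure, so it works regardless of what operating system the endpoint runs.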
If you go into Gmail and go all the way to the bottom of the page, you'll see, in a tiny little font, a thing that says: your account is also being accessed at IP blah, or: you have two active sessions from IPs foo and bar. When we put that out, my comment was: this is never going to work. No one knows what IP addresses mean; what are you supposed to do as a result? This is never going to work. It turns out, actually, that a vast number of people click on the "what's this" link associated with that. A lot of people are looking for it. And as a result, you see people killing the other sessions, et cetera. It turns out that when we asked our users to help themselves with security, even though we did it in a way which is not very easy to understand, they did it. It's sort of akin to the fact that fast-food chains have figured out a way to outsource their work to you, by giving you an empty cup and having you go fill up your soda yourself. Fundamentally, that's a cost-saving mechanism on their side. But market research shows that you are more likely to spend more money at a fast-food restaurant that has outsourced its work to you by giving you an empty cup than at a restaurant where they fill it up for you. Similarly, our users, Google's users I should say, loved the fact that they got to be involved in their own solutions, that they got to be part of the answer. So Google's approach was: security isn't the security team's problem; it's all the engineers' and all the users' problem. And as a result, the team was able to focus on what mattered. So, for example, the team spent all this time focusing on the crown jewels: user login. They built a very good, clean user login system that was super easy to use. So rather than inventing it anew, people building other products would simply link it in. Make it easy to do the right thing; most people will do it. We focused on credit card numbers and search logs.
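The sessions feature described above boils down to two operations: show the user their other active sessions, and let them kill everything but the current one. A toy sketch follows; the data model and function names are hypothetical, not Google's implementation.

```python
def other_sessions(sessions, current_id):
    """The 'your account is also being accessed at IP ...' list."""
    return {sid: info for sid, info in sessions.items() if sid != current_id}

def sign_out_others(sessions, current_id):
    """The 'sign out all other sessions' action: keep only the current one."""
    return {current_id: sessions[current_id]}

sessions = {
    "sess-1": {"ip": "203.0.113.7"},   # this browser
    "sess-2": {"ip": "198.51.100.9"},  # somebody (or something) else
}
print(other_sessions(sessions, "sess-1"))         # shows sess-2 to the user
print(list(sign_out_others(sessions, "sess-1")))  # only sess-1 survives
```

The security work here is almost trivial; the insight was in surfacing it to users and letting them act on it themselves.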
We focused on building tight, very hard security around the things that mattered, and letting the organization secure the rest. That gave us much better leverage. How many of you figured out the title of the talk? Anybody? Anyone notice the similarity of the slide titles? Okay. The title of the talk was Is That You, Baby, or Just a Bridge in the Sky. That is a commonly misheard lyric from a Bruce Springsteen song called Brilliant Disguise. The line is: is that you, baby, or just a brilliant disguise? Many, many people misunderstand that lyric. If you play it loudly at a party and people start singing along, the vast majority of them will sing the wrong lyric. Much like the vast majority of us are saying the wrong things and doing the wrong things to our companies. We're not doing them because we're dumb. We're doing them because we don't know the right lyrics. As you change from being a security officer to being an executive, to viewing the world differently, it's easy to notice who's singing the wrong lyrics and why. The right lyrics involve recognizing your own incentives, recognizing your own decision biases, recognizing that other people's incentives don't always align with yours, and trying to find a way to fight the next war, not the last war. Trying to find a way to make security actually add value. Not downside protection, but actually adding value, and doing it in a way which allows workers to be flexible and not feel like they're sleazeballs for doing it. Because, fundamentally, fighting the next war turns security from being a roadblock into being the good guys. It's much more fun to be the good guys than the bad guys. It's way more pleasant. And it increases the chances that your organization will succeed, increases the chances your bonus will be nice and your stock price will be up.
Getting the lyric right, getting the communication between you and the executives right, getting the understanding of what your users want right, is a hugely valuable outcome, and it's something we are extremely bad at. As a result of being bad at it, we risk becoming irrelevant. Almost 80% of employees at Fortune 100 companies report, at one time or another, volitionally breaking a security rule. Eight out of ten of your employees are breaking your rules at least once, knowing it, and not caring. That suggests that we've built sidewalks, and maybe even ropes and hedges and flower pots, instead of building sidewalks where the users want to go. The challenge I leave you with, at the end of my talk with a dumb title and every slide title a quote from a song, is: go figure out where your users are, tear up your sidewalks, spend money to build sidewalks where they're going, and stop paying people to be narcs. Seriously, that's awful. And with that, I'll take questions if there are any. It looks like the question mic is up here. And if there aren't any, I'll be across the hall afterwards. Thanks very much.