So the topic of this talk, or at least the official title of the talk, is Phishing Tips and Techniques. I realised afterwards that an alternative title could have been something like Hacking the Mind, because there's a lot of information in here about the way human cognitive processes work and their interaction with usability in things like web browsers. So the background for this is just some standard figures. Who knows how much money is being made from phishing? We know that it's a serious problem. We know that it works really, really well. What we don't know is the general reason why it works. A typical reason given is that users are idiots, which isn't really a good reason. So what I wanted to look at is basically the nuts and bolts of why it works. First of all, what are the actual threats that we're facing from the phishers, and what are the weak points in our defences? The obvious answer to why users can't get security right is that users are idiots, and that's an actual quote from Slashdot where someone says pretty much that, and it's a typical opinion you hear when you ask security people about the users they have to deal with. Well, users are idiots. We build all these cool security countermeasures. Users get them wrong. It's not our fault. But the problem with this is that, given how successful phishing is, that would mean that almost everyone on Earth is an idiot. Joking aside about stupid users, the fact that so many people are becoming victims of phishing attacks, and that phishing is so successful, implies that there's a fundamental problem that isn't being addressed by security technology. So what developers have done is they've created a whole pile of widgets for browsers and similar applications. I'm going to concentrate mostly on browsers because most of the phishing is done via browsers. So they've got things like the padlock icon, you've got the HTTPS indicator in Firefox and the next version of Internet Explorer.
There's a coloured URL bar, certificate warnings, and you've also got optional browser plugins that tell you, for example, this is PayPal, or this may be a phishing site, or whatever. The problem with this is that none of it was ever actually tested on users. Developers stuck it in. They thought, it sounds like a good idea, let's stick it in. They never actually tested whether it works. There's a human-computer interaction principle which says that if users don't understand it, it's not there. So if users don't understand the padlock, you may as well not have it, because it's not going to be used and it's not going to be effective. So last year a group actually did some testing on whether these mechanisms were usable, whether users were actually being helped by the padlock and HTTPS and so on and so forth. And they found out that, well, you've got the figures there: 65% ignored the padlock, 59% paid no attention to the HTTPS. More than three-quarters didn't even notice the address bar colouring, and in this particular test, of the people that noticed, only two actually knew what it meant. The rest thought it was just some sort of decoration. And most importantly, the primary security feature behind SSL, or basically web browsing, is that you use SSL, you use certificates, and the assumption is that if you've got an invalid certificate or some sort of problem, then users will know not to go to that site. What this test found is that 68% of users, once this warning dialogue popped up, clicked OK without even knowing what they were doing. So it's not just that they read the thing and didn't know what to do and clicked OK. It was a reflex action. They just got rid of the warning and connected to the site no matter what. Only one single user in this test was able to explain what they'd actually done. So a standard sort of approach to this is, well, OK, we need to educate users.
We need to tell them what the padlock means and what the certificate means and so on and so forth. The trouble is, it turns out we've been educating, or at least mis-educating, users for years. Every time you go online, you get DNS errors and 404 errors and missing plugins and JavaScript errors and "your security settings don't allow this to be run" and so on and so forth. So users expect to be constantly bombarded with all these irritating little minor warnings, and they've come to accept that if you just click OK or cancel, and in some cases try again in half an hour, it's going to work. So you've got this constant stream of warning dialogues and error dialogues, and they've conditioned users into ignoring them. Now the problem is, if you get a network attack, you get symptoms that are exactly identical to the standard three-degree background radiation of noise that users expect. So basically what happens is the browser is trying to detect these attacks with a 100% false positive rate. Here's an example. On Windows, the first time you use it, you go to eBay, you search for dog food, you get that warning dialogue. This dialogue tells you absolutely nothing about what you're doing. You're about to send information to the internet. Well, yes, obviously, you're surfing the net, you have to send information to the net. There's no context for it. If you go to eBay for dog food, you get this dialogue. If you go to your banking website, you get the same dialogue. Even the programmers admit that this dialogue is a complete waste of time. You see the checkbox at the bottom that says don't bother me again? So even the people that created this dialogue realised that you don't want to see it and it's just noise. And that's a translation of the dialogue into what the users actually take away from it. That's all that the users actually absorb.
And again, the Windows developers, and I'm going to use Windows specifically as an example because it's the most used platform, but the other platforms aren't much better, the Windows developers actually realised that users aren't going to understand this. So in Windows 95, in one of the betas, they actually put in an error message saying, "In order to demonstrate our superior intellect, we will now ask you a question you cannot answer." Because the developers knew that they were doing things that the users would not understand. So basically, over the last ten years or so, we've got an entire generation whose computing experience is based around clicking OK to error messages they don't understand. In general, if you're talking about general usability, not specifically security usability, that's just moaning about bad design. The problem is, once it becomes security usability, it's a really, really serious security flaw and it's a primary attack vector. Here's an example. There was a large banking site about a year ago whose certificate expired. So every time you went to this bank's site you got the pop-up saying the certificate is invalid, the standard browser certificate warning. During the time that certificate was invalid, 300 users visited the site; one single user turned away. So basically the certificate warning had no effect whatsoever in preventing users from going to the site. And the problem with this is, as I've already mentioned, that SSL security depends entirely on the user's handling of certificates. So if you look at this chain, you've got AES and 2000-bit RSA keys, and then this user judgement call in front of a dialogue box. Obviously no one's going to bother attacking the AES encryption or the RSA. They simply go for the weakest link, which is the user in front of the dialogue box. And that's why phishing is so incredibly successful: that is the weak link, and that's the attack vector that everyone's using.
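The weakest-link argument can be made concrete in a couple of lines. This is purely illustrative: the "bits of security" figures for the crypto are rough ballpark equivalents, and the figure for the user is an invented stand-in for "clicks OK on the warning almost every time":

```python
# Rough, illustrative strength estimates for each link in the chain.
# The crypto numbers are ballpark equivalents; the user figure is a
# made-up stand-in for "clicks OK on the warning almost every time".
chain = {
    "AES encryption": 128,
    "2000-bit RSA key": 100,
    "user judgement call at a dialogue box": 1,
}

# The chain is only as strong as its weakest link.
weakest_link = min(chain, key=chain.get)
print(weakest_link)  # user judgement call at a dialogue box
```

Which is exactly why the attacks go through the dialogue box rather than through the ciphers.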
Again, this is an example from a government site that was used to make property tax payments, so thousands to tens of thousands of dollars per payment made through this site. At the bottom you can see a standard certificate warning, a big thing, invalid certificate, a red cross. So the security mechanisms are working exactly as they were intended to work. On the other hand, the effect on users was apparently nothing. That was up there for about two months before someone told them to fix it. So either zero or close to zero users were actually deterred by this particular warning, even though it was doing exactly what the browser designers had intended. So the first phishing tip: invalid certificates don't bother users. I'll go into this in a bit more detail later on. Another problem we've got is that financial institutions are actually training users to ignore these certificate-based security indicators. These are three typical screenshots from large, well-known financial institutions. What all of them are saying is that when you go to our homepage there's no padlock, there's no HTTPS, there's no security whatsoever, but go ahead, we're a large bank, trust us, just enter your password details anyway. So again, the obvious phishing tip from that: target US financial institutions. They have just about the worst online security of any banks anywhere. And a follow-on from that is that their users are heavily conditioned towards accepting these security practices. With these sorts of messages on their home pages, they've basically trained their users to accept very poor security practices. And then, depending on geographic region, banks go from really appalling all the way through to reasonably good. The European banks are quite good.
They use things like PIN calculators, so you log into your account, you use a PIN, and then every time you move money around you have to enter a one-time, what they call a TAN, which is a per-transaction PIN, to authorise that individual transaction. So they're a much harder target to attack. So the result of this conditioning of users is that SSL security isn't really very effective. If you look at the SecuritySpace survey, they found that about 58% of all SSL certificates are invalid. That means they're expired, they're self-signed, they're signed by unknown CAs, they're for an incorrect domain, and so on and so forth. Usually people don't see this. Most people go to Amazon and Hotmail and whatever, and their certificates are valid, so they don't see this vast mass of invalid certificates. But once you go to the smaller sites you start running into these invalid certificates. The problem with this is that browser vendors can't afford to fix it anymore. The majority, or at least 58%, of SSL websites would break if browsers suddenly started refusing to connect to a site that has an invalid certificate. If Microsoft fixes it, the complaint will be that they're using their monopoly position to force everyone to buy CA-signed certificates. It won't fly. Conversely, if any of the other browsers fix it, the complaint will be, well, it works with Internet Explorer but it doesn't work with, say, Firefox, therefore Firefox is broken. So the browser vendors have a very serious problem in trying to fix this. And again, the study I mentioned earlier that showed the effects of the padlock and so on found that certificates basically had zero effect on people visiting a website. It was pretty much indistinguishable from placebo. And there's a comment from that study: users basically dismiss the error messages, and so they have very little protection against man-in-the-middle attacks.
In other words, the very thing that certificates were designed to protect against, they are not actually protecting against. There's a mechanism used for accepting small cash payments, I don't know if it's used much in the US, but it's used in some countries, where it's called an honesty box. If you've got something like a newspaper stand, you have an honesty box next to it, and you trust that most people are honest, so they're going to drop in the right amount of money before they take the newspaper. And it works most of the time. Typically it's used in situations where it's not worth having someone sitting there taking the money and making sure that people are paying the right amount. And that's pretty much the same security as what SSL certificates get you. You spend $500 on a VeriSign certificate, and people will visit your site. You use a cheap CA certificate, people will still come. You create a self-signed certificate: exactly the same as the $500 VeriSign one. You use an invalid certificate: again, they'll still come. And if you're a bank, you just put up a nice message saying, don't worry, there are no security indicators here, but trust us, we're a bank, and again, people will still come. So if you're selling newspapers for 50 cents, it doesn't matter if you lose the odd 50-cent deposit. But if you're up against crooks who are determined to be dishonest every single time, then this provides basically zero security. There's one other interesting thing that the study found. It found that users treated a site with no certificate at all as being less secure than one with an invalid certificate. What they assumed was, there's a certificate there, OK, it's expired, but, you know, it's a certificate, so it's good enough, it's got to be good for something. If you think about this: if you go into a lift, or an elevator, there's a safety certificate in there. Just to get a show of hands:
Has anyone here ever checked the safety certificate in an elevator? OK, has anyone checked that the safety certificate in the elevator actually matches both the building and the elevator they're travelling in? OK, very paranoid people. Which is basically what you need to do to validate an SSL certificate. And, you know, that's the model that people have from real-world usage. If you go into a restaurant, you may possibly glance at the food safety certificate on the wall while you're waiting to be seated. But in general, people don't check to that level of detail, and when they go to a website it's the same thing: it's got a certificate, it's good enough for me. So the result of this is that it's not only indistinguishable from placebo, it's actually worse than placebo, in that users behave less insecurely when they go to a site with no SSL than when they go to a site with broken SSL. Obvious phishing tip from that: use a self-signed certificate, because it gets you a lot more respect than no certificate at all. And the phishers are starting to realise this. Last year alone there were about 450 secure phishing attacks that are known about; there are who knows how many more that weren't actually detected. The two main techniques, apart from the obvious cross-site scripting, are using a self-signed certificate, and getting a genuine certificate for a sound-alike domain. For example, that was a real site that used SSL, visasecure.com. The thing is that Visa actually operates a lot of domains like that. So if you're used to domains like that and you see visasecure.com, there's no reason, for a typical person, not to believe that it's a genuine Visa site, and since it's got a certificate issued by a trusted CA, it has to be the real thing. OK, so that was some of the actual problems in terms of user interface.
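Checking that the certificate "matches both the building and the elevator" is, for a browser, the hostname-matching step. Here's a minimal sketch of that comparison; real browser matching (RFC 6125) has considerably more rules, so treat this as an illustration only:

```python
def hostname_matches(pattern: str, hostname: str) -> bool:
    """Compare a certificate name against a site name, label by label,
    case-insensitively, with '*' matching exactly one label.
    A deliberately simplified illustration of RFC 6125 matching."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

print(hostname_matches("*.visa.com", "www.visa.com"))    # True
print(hostname_matches("*.visa.com", "visasecure.com"))  # False
```

Note that the sound-alike trick passes this check perfectly well: the phisher's certificate really is for visasecure.com, it's just not Visa's domain, and that's a judgement no hostname check can make for the user.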
Now I'm going to look into the actual background, the nuts and bolts of why users behave in this manner, why they simply get rid of these dialogues. Up until about twenty-odd years ago, it was assumed that the human decision-making model was something called the economic decision-making model. That assumes that you generate a set of alternatives, you weigh them up, and you decide, this is the best one, I'll go with this one. The problem is that in many situations, and the particular example here was battlefield decision-making, humans make really, really bad decisions. If you followed the economic decision-making model, that shouldn't really happen. So the US Department of Defense commissioned a study to find out how these decisions are actually being made. And they found that, in this particular case, which I'll go into in a minute, the way people make decisions is that they simply generate options one at a time. They never compare any two options. They generate options one at a time, and they take the first one that matches. If it doesn't fit that well, they throw it away and go on to the next one. It's something called the singular evaluation approach, and it's very, very different from the way that people were expected to make decisions. So things like web browsers were based on the economic decision-making model, and not at all on the way that people actually make decisions. The situations where you switch from the economic model to singular evaluation are: when you're under pressure, which is pretty much automatic if you're using a computer. You want to do your online banking and some stupid dialogue pops up; you want to get rid of it as quickly as possible, so you're automatically under pressure. When you're in dynamic conditions, so you can't really perform detailed analysis, you can't sit back and think about all the possible options.
And when you've got very little basis for analysing and comparing choices, and again, in this case, with these complicated certificate-based messages, users have no idea what the dialogue is talking about. This is just another example screenshot I use: that's the American Express homepage, which has an invalid certificate. But again, it doesn't seem to stop people from going there. So when you're dealing with computers, you use this all the time. It saves time and effort. You just click OK or cancel. And in fact, the web browsing model itself encourages this kind of poke-and-hope approach. You go to a website, you click on something, it's not the right link, you go back, you try something else, you go back. So the fact that the actual environment users are working within, the web browser, is conditioning them as they use it into this type of decision-making means that an attack that takes advantage of it is extremely powerful, because web users are basically constantly immersed in this type of decision-making. The reason why humans do this is that if they didn't, they wouldn't actually get anything done. It's not some defect or bug in the way the brain works. It's what makes humans function. AI researchers have actually tried to do without it. They've tried to computerise singular evaluation, or at least to computerise common sense, I guess, would be a better way of putting it, and the software has had to grind through millions and millions of possible implications. AI researchers call this the frame problem: how do you frame a particular problem in such a way that it's easy to solve? Some humans actually don't use singular evaluation; they use the economic model for everything they do. And that's a psychological disorder, a catatonic conversion disorder, the catatonic part indicating what happens: you simply grind to a halt.
Because every single action you take, you have to go through all the possible implications, and you never actually get anything done. If humans exclusively used this economic decision-making model, they wouldn't function. It's not a bug. It's required for humans to function. So this isn't grumbling about stupid users. It's basically a law of nature. You can't educate users out of this. You can't avoid this. You have to deal with this. And more importantly, you can't patch it in a hurry. It's always going to be there. And people like salespeople already know about this. Maybe they haven't got into the heavy-duty psychology stuff, but they know that if you say something like "call in the next 10 minutes", then people switch from the economic model, where they'd realise that what they're buying is a pile of crap, to the singular evaluation model, where they say, OK, I've got to act really, really quickly or I'll miss out on the deal, and they go and buy it. Another problem that comes about from the way that humans do things is the difference between automatic and controlled processes. A controlled process is something that's relatively slow and requires a lot of mental effort, but on the other hand it gives you a great deal of flexibility. An example of this is a novice driver: you have to manually check for things and change gears and look out for pedestrians and traffic lights and so on and so forth. It requires quite a bit of mental effort, but the fact that you're specifically going through and checking for all these things gives you a great deal of flexibility. The opposite of that is an automatic process, which is what an experienced driver has. They will automatically, without really being consciously aware of it, check for traffic lights and signs and whatnot. The feature of an automatic process is that it's very quick, requires very little mental effort, and you're basically acting on autopilot.
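The two decision-making models, and the reason a certificate warning loses under singular evaluation, can be sketched in a few lines. The options and the "effort" scores here are invented purely for illustration:

```python
def economic_choice(options, utility):
    # Economic model: enumerate everything, weigh it all up, pick the best.
    return max(options, key=utility)

def singular_evaluation(options, good_enough):
    # Singular evaluation: generate options one at a time and take the
    # first one that's good enough; later options are never considered.
    for option in options:
        if good_enough(option):
            return option
    return None

# Invented options and effort scores for a certificate-warning dialogue.
options = ["click OK", "click Cancel", "read and verify the certificate"]
effort = {"click OK": 1, "click Cancel": 1,
          "read and verify the certificate": 60}

# Under pressure, "good enough" just means "makes the dialogue go away
# cheaply", so the first low-effort option wins every time.
print(singular_evaluation(options, lambda o: effort[o] <= 1))  # click OK
```

The same list run through `economic_choice` with a safety-oriented utility would pick "read and verify the certificate"; the point is that a user in front of a dialogue box is almost never running that model.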
So the thing with humans is they're creatures of habit. If you have an automatic process, it's triggered by certain stimuli, and once that trigger happens, it's very, very hard to stop the automatic process. Another thing is that you're not actually consciously aware of doing it. An example of this is locking the front door. You get halfway down the drive and you think, did I lock the front door? It's an automatic process. You do it every single day. You don't think about it. You're not conscious of having done it. And so at some point you realise you can't actually remember having locked the front door, because it was simply never recorded. The thing with this is that once users become habituated into a certain behaviour, it's very, very difficult to break them out of it. One thing that Microsoft found, for example: if you're running XP Service Pack 2, you'll note that they've actually turned the security update mechanism into nagware. Every couple of minutes the stupid thing pops up again and says, you have updates, do you want me to restart the machine? Microsoft didn't do this because they like annoying users. What they found in user testing was that people would just automatically click away the dialogue. They'd automatically download the updates, this thing would pop up, and users wouldn't even be aware they'd dismissed it. They'd just move the mouse to the OK or cancel or close or whatever and get rid of it. And so you had this massive number of security updates sitting there unapplied, and users weren't actually aware that they were preventing them from being applied. This is something that's been known about for a long time. Back in the dark ages of psychology, the very first psychologists realised that people basically resist attempts to change their behaviour. So once they've become habituated into a bad habit, even if you go to them and say, what you're doing is wrong, they still won't change their behaviour.
And software vendors have tried to work around this. The most notorious example is probably the Microsoft Office paperclip: hey, I've noticed you're doing this in a really long-winded and stupid way, here's two keystrokes that will do it. And users really, really hated it. A consequence of this habituation is that every time you go online, you have to endlessly authenticate yourself. This is a screenshot from Firefox. You've got one example which is a blog for discussing knitting patterns, and the other one is PayPal, and for either one, with no obvious difference in the security levels, you go to the site and you have to authenticate yourself. And I'm going to sort of pick on Firefox here; Internet Explorer is no better, I just use this as an example. When Firefox asks you for this master password, you have no way of telling: is this a high-security thing or is this a low-security thing? So if you participate in, let's say, blogs, every blog you go to, you have to type in your password, automatically, without even thinking about it. And again, that's something that phishers are taking advantage of: the fact that every time a password dialogue pops up, you simply type in your password as an automatic process. And in fact, even a legitimate application's requests for passwords are pretty much incomprehensible, let alone a phishing site's. I'll just read it out if you can't see it at the back. It says "Please enter the master password for the Software Security Device", that's the prompt at the top. I'll give an explanation in a minute of what that is actually saying; for the average non-technical user, it's complete gobbledygook. And that thing at the bottom is the same dialogue, mostly in Klingon: users can see "enter" and "password", and everything else they can't understand, it's just jargon. And just for the Trekkies in the audience, yes, I realise it's not proper Klingon, because you can't say "please enter your password" in Klingon.
So it's actually just garbage. So even for technical users, to understand what that dialogue is actually asking, you have to know that the Netscape-derived browsers internally use a crypto API called PKCS #11, which is designed mainly for smart cards and crypto hardware. Most people don't have those, so there's a software emulation of a PKCS #11 crypto device. And the PKCS #11 standard specifies two different types of session you can have with a device: a public session and a private session. A public session will let you read certificates and things like that. A private session will give you access to keys. And so in order to access the keys on this pseudo crypto hardware device, you have to authenticate yourself with a password. So what this dialogue is saying is: please enter a password to establish a private session with the PKCS #11 software-emulated crypto device built into the browser. Nobody will understand that. This goes beyond the average user not understanding it; even most geeks, unless they know a lot about how the PKCS #11 crypto interface works, will not have a clue what that dialogue is really saying, apart from "enter password". So users are basically habituated into entering their passwords for everything. Any dialogue that pops up, they enter their password. It's an automatic process: once the stimulus appears, once something appears with the word "password" on the dialogue box, they type in their password. Biometrics are going to be even more brilliant for this. I read an article about one or two weeks before I came here about biometrics for protecting against phishing, and they actually explained how they were going to do it; it was a long piece about how fingerprint readers work. But biometrics are going to be even worse, because biometrics are even easier to enter. You've got this fingerprint reader: a dialogue box pops up, click, authenticated. Click, authenticated.
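For the curious, the public/private session distinction that the dialogue is alluding to can be modelled in a few lines. The class and method names here are invented for illustration; the real PKCS #11 API looks nothing like this, though CKR_USER_NOT_LOGGED_IN is a genuine PKCS #11 error code:

```python
class SoftwareSecurityDevice:
    """Toy model of the browser's software PKCS #11 token: certificates
    are public objects, keys are private objects behind a login."""

    def __init__(self, master_password, certificates, keys):
        self._password = master_password
        self._certificates = certificates
        self._keys = keys
        self._logged_in = False

    def read_certificates(self):
        # Public session: anyone can read certificates, no password needed.
        return list(self._certificates)

    def login(self, password):
        # This is all the "master password" dialogue is actually asking for.
        self._logged_in = (password == self._password)
        return self._logged_in

    def get_key(self, name):
        # Private objects are only reachable after logging in.
        if not self._logged_in:
            raise PermissionError("CKR_USER_NOT_LOGGED_IN")
        return self._keys[name]

device = SoftwareSecurityDevice("hunter2", ["my-cert"], {"signing": b"\x00\x01"})
device.read_certificates()   # fine without a password
device.login("hunter2")
device.get_key("signing")    # now allowed
```

All of which is a perfectly sensible design, and none of which the person staring at the dialogue has any way of knowing.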
So becoming habituated into that, making it an automatic process, is even easier with a fingerprint reader than with typing in your password. So it's actually far more vulnerable than straight passwords. Some phishing tips derived from that. Because of this lack of differentiation between high-value and low-value passwords, try phishing for a low-value site. Most people, OK, maybe if they go to a particular banking site they might be a bit more careful; if they're going to a blog site, they're not really going to care, because the password isn't worth anything. At the moment, phishing for banking sites is still so easy that it's not really worth doing this; on the other hand, if the banks ever get their act together, that's one thing you can try. And then, obviously, try the phished credentials at high-value sites: you get someone's Hotmail password and try it at Bank of America. Another interesting thing you can do, again based on this automatic-process behaviour, is to reject the first few passwords the user enters. They type in their password without even thinking about it, and they get back "invalid password". They've typed in their password, they weren't even aware they were doing it, they weren't aware of which password they typed in. OK, maybe I typed in the wrong password; I'll try my password for a different site. OK, I'll try a third password. So basically, you can get several passwords for the price of one. Another problem with human minds is that they're very bad at generating testable hypotheses, and in particular, they will try to confirm something rather than to prove it invalid. Humans exhibit something called confirmation bias. They go to a website, and instead of saying, I'm going to try these things to check whether it's a fake site, they say, I'm going to try these things to try and confirm that it is the real site.
A consequence of this is that people are more likely to accept an invalid but plausible conclusion than a valid but implausible one. Again from real usability testing, here's an example of how this confirmation-bias problem pops up. You go to a site; how do you check it? Well, you type in your username and password, and if the site accepts them, then obviously it knows your password, so you know it's the real thing. If security people look at this, it's absolutely appalling, but this was from real user testing. This is how users try to verify potential phishing sites. Humans are really, really good at rationalising away almost anything. Here's a really extreme case. There are some patients who have epileptic seizures so severe that the only way to treat them is to sever the connection between the two brain hemispheres. What psychologists did with those patients is they told one half of the brain to do something, in one particular experiment it was to get up and walk around the room, and then they asked the other half of the brain why they were doing that. Now, because the hemispheres were physically separated, one half of the brain literally had no idea what the other half was doing, and yet the patients always came up with some rationalisation, like, I wanted to stretch my legs, I wanted to get up for a drink, whatever. So that's an extreme example: no idea why it's actually happening, but a rationalisation is produced anyway. And here's an example of how that works in the case of phishing. You've got a bank site located in an unexpected place. Again, this is from user testing; these are actual, genuine user responses to, you know, why is this thing located in some really weird location. And all of them kind of make sense. You know, ecialyahu.com, OK, it's a subdirectory of Yahoo. Some site in Brazil? Well, they have a branch in Brazil, so obviously that's why I'm getting sent there.
Users get sent to websites that are simply IP addresses rather than proper URLs, and so on and so forth, and it's very easy to rationalise all of that. So the phishing tip from that: people basically really, really want to believe what they see. Just create a very good copy of the site, and it doesn't matter if it's hosted in Romania, they'll still believe it. And exploit the confirmation bias: make it very easy to "confirm" the site's authenticity using these tests, which only look stupid until you realise that this is how users' brains actually work. Another point is that financial institutions in the real world have invested a great deal in anti-counterfeiting technology. If you look at banknotes, you've got watermarks and things printed in see-through register and intaglio printing and a whole pile of other stuff, and it's all based on the fact that it's very, very difficult to replicate certain physical security artefacts. The result of that is that people assume that complexity means authenticity. So if you go to a website and it's got a whole pile of Flash animation and animated graphics and a really complicated layout, they assume that, just like a banknote or a cheque designed with anti-counterfeiting measures, all this complicated detail means it must be the real thing, because for someone to sit down and manually copy all the little details of this Flash animation across onto their own site would be extremely difficult. They just assume that the digital world follows physical copying rules. So exploit that. And again, this is from actual phishing tests and usability tests.
So if the site uses Flash and animated graphics and so on and so forth, copy that and feature it very prominently on the homepage. That's something users have actually said: they went to a phishing site that had an animated dancing bear or something on it, and they trusted it because it had this really complicated animated graphic that the phishers couldn't possibly have forged. One thing you have to be careful with, if you're just slurping down the entire website, is very literal copies. If there are things like dates on there, you can't just take a snapshot of the site — you may have to parse it and update the dates and other details. But as long as the site looks plausible it'll actually work, because the user's assumption is that no one would bother creating an entire fake site like this and copying every single feature. That's an actual phishing site, and this is the example I gave earlier of banks that put little padlock pictures on their home pages along with a message saying there's no SSL, but trust us. So that's the actual padlock, and the difference between the Bank of America site and the phishing site was this: on the Bank of America site you click on the padlock and there's a message saying we have no security, but trust us; on the phishing site you click on the padlock and nothing happens. These people just have no pride in their work — they couldn't even copy the bank site properly. Then there's something called the Simon Says problem, which hits browsers a lot. That's when you're expected to change your behaviour in the absence of a stimulus — not the presence, but the absence. In a web browser you've got this tiny little padlock, and when the padlock is absent you're supposed to treat the site as insecure. The problem is that the Hamming weight of an absent padlock is basically zero. Users simply don't notice it.
When Internet Explorer 6 Service Pack 2 came out, it introduced this tiny little blue ribbon which indicated, I think, that a pop-up had been blocked or something like that. Some folks did a usability test on it after Microsoft had released it, and found that not one single user actually noticed this security ribbon was there, and this has happened in a lot of other cases as well. There was a case where some folks were doing testing on a spreadsheet, and they had a little pop-up dialogue box that came up saying there's a $50 bill taped to the bottom of your seat — take it. No one noticed it, and no one took the $50 bill, even though this thing popped up in the middle of their using the spreadsheet. There's a whole science about this called inattentional blindness. It's only been studied in the last couple of years, and it's actually a really, really interesting read if you don't mind reading psychology texts. One of the best-known examples, which people here may have seen because it's been shown on TV in a number of programs: two psychologists, Simons and Chabris, did this in 1999. They taped some people playing basketball — a team in black and a team in white — and asked subjects to watch the two teams. Halfway through the game, a guy in a gorilla suit walked across, stood in the middle of the screen, and then walked off again. The gorilla was on screen for about nine seconds, and only 43% of people actually noticed that it had walked across in the middle of the game. The paper is called Gorillas in Our Midst; if you google for it you can find it.
And there have been a pile of similar studies that people have done on this. The science is called inattentional blindness: people focus on the particular target they're interested in, and basically weed out everything else. A common example: you hear of accidents where someone's driving down the road, they hit a cyclist, and they say, I didn't see the cyclist right in front of me. That's inattentional blindness — they've got this automatic process scanning for traffic signs, traffic lights, other cars and so on; what it's not scanning for is cyclists, and so they run into the cyclist without even seeing them. And again, this is what makes it possible for humans to function — this isn't a bug, it's actually required for humans to function. At the very lowest level, your senses are filtering out light and sound and so on. An example anyone who was around yesterday evening would have noticed: you could pick out the one conversation you were interested in because your brain was filtering out all the other noise in the background. So humans have basically learned to focus on what's important — things like flashing lights and wild animals and whatnot. In a browser, you've got this tiny little padlock, which is simply not something that evolution has ever conditioned us to notice as important, and so we don't notice it. So in terms of phishing: if you want to mount a phishing attack, don't worry about the security indicators — most users simply won't notice them, and those that do won't know what they signify. There are security toolbars which try to make this a bit more obvious, so you've got a big toolbar rather than a tiny padlock and an HTTPS. On the other hand, first of all, most of them aren't installed by default, so you've got to actually know about them and go out and find
them and install them as plugins, which the typical phishing victim is not going to know about. And in any case, there's been a study on this too: even with these toolbars and all the rest, about 39% of users were still fooled. And again, you've got US banks working very hard to train users to ignore all this stuff. So let's revisit this question of why users can't get security right. It's not actually that users are idiots — it's that security people are weirdos. Human conditioning, human evolution and so on has trained people to ignore padlocks, to ignore HTTPS and so on and so forth. Only security people, whose minds work very differently from the rest of the population, would actually stop and look at the padlock and check certificates. No normal person would ever use an interface that way. The reason browsers are built this way is that they're built by security people, and the coders writing this stuff assume that everybody else will use it the same way they do, not realising that they're some eight standard deviations off what anyone else would normally do. There was one study on PKI usability which found that the researchers who ran the study took about two and a half minutes to do some certificate-based operation. The typical users — and these weren't even your normal Joe Sixpack, these were people with PhDs in computer security, given a paint-by-numbers series of steps on what to do — took about two and a half hours to do the same thing. The security researchers who looked at this simply couldn't believe it would take highly skilled technical users, not even the man in the street, that long: we designed this, it's supposed to be simple to use, how can anyone possibly find it this hard? So another thing that CAs have been doing recently, or trying to do — it will probably appear
in the next major release of the browsers — is to introduce these things called high-assurance certificates. What the high assurance means is that you've got a high level of assurance that they're going to cost about five times as much as the existing certificates. The thing with CAs is that all they can do is issue certificates, and again, the problem is that most users don't even know what a CA is — geeks know about it, and that's about it. Even going beyond the average user, I don't know if there's anyone on Earth who actually knows all 40 to 50 CAs that are hard-coded into their web browser. Some of them have been in there for years and years, and whoever approved those CAs going in has long since moved on to other jobs. There are so many CAs in there that nobody actually knows what all of them are, and the most insignificant mainstream brand has more visibility and more presence than the most significant CA brand. In one user study, users were asked which brands they recognised, and the outcome was that more people recognised Visa as a trusted CA than VeriSign. The problem is that Visa isn't even a CA, and VeriSign is the world's largest CA. So for CAs, even if you know what they are, the actual brand recognition is close to zero. The phishing tip from that is kind of obvious: create a self-signed certificate, make it your own CA, call it Visa — people recognise Visa; they don't necessarily recognise VeriSign — and certify your site with it. So you've got an HTTPS site certified by the Visa CA; of course it's got to be the real thing. Another thing being added to browsers is phishing blacklists, and this is going to appear in the next revisions of both Internet Explorer and Firefox. If you go to Marcus Ranum's website — he's a really cool security person — he's got this list of the six dumbest ideas in computer security. Blacklists are number two on his list; he generalises this as
enumerating badness, which in fact is a special case of default allow, the number one dumbest idea. Side-stepping this is pretty trivial: just avoid being on the blacklist. The Anti-Phishing Working Group reports that the average phishing-site lifetime at the moment is about five days, and getting a site onto a blacklist is probably going to take a lot longer than that. Spammers are already using websites with six-hour lifetimes: they send out the spam, wait about six hours, and then shut down the site. The spam is there when people get into work in the morning and check their mail; six hours later the site's gone. The chances of a phishing blacklist responding within six hours of a site appearing — particularly if it appears in the early hours of the morning — are pretty low. Another way of getting round it, which phishing sites are already doing, is to run a reverse proxy via a botnet. Instead of one single site, you've got 10,000 owned PCs and a constantly changing array of IP addresses, and you can't blacklist all of those. So basically, blacklists are in there because doing something is better than doing nothing at all, but the actual effect is going to be close to zero. An argument I've heard about this — and the people pushing these phishing blacklists actually use it — is: OK, blacklists work for virus scanners, so why shouldn't they work for phishing? The problem is that a virus scanner has maybe 100,000 fixed files on disk; it just reads through all of those and finds the virus. Even then, the most popular scanners have an 80% miss rate, because what happens is people download the latest copy of Norton off the net, and if you're writing a virus you tune it so Norton doesn't detect it, and then you release your virus. Now, to block phishing, you've got, I don't know, a billion-odd internet-connected machines — I don't know what the actual count is —
they're constantly changing, moving around, and so on and so forth. The only way you could somehow blacklist everything out there is to monitor every single machine, all the time, to detect when it's been owned by some phishing guy. So really, this is never going to work. Basically: nothing to worry about; just make sure the site isn't around long enough to get blacklisted. If anyone's into World War history: in World War 2, Germany built a bunch of super-guns — the Gustav gun, and the Karl and Thor mortars — and these actually helped the enemy, because moving and firing these guns diverted resources away from the main attack. In the case of phishing, working on these blacklists is similarly diverting resources away from really combating phishing, and it's helping the bad guys, because instead of taking proper security measures you're fiddling with blacklists. So, a summary of the previous tips. Create your own CA for a well-known brand — people will recognise that more than any major CA — and get your phishing site certified using that CA, taking advantage of the fact that people trust a site if it's got a certificate. Indirect phishing at the moment isn't really necessary, because direct phishing is still too easy to do. Make the site as close as possible to the real thing — you've got confirmation bias working on your side. Leverage the watermark fallacy: copy Flash and fancy animated graphics in as much detail as you possibly can. US financial institutions are your best friend if you're a phisher: copy the US banking disclaimer, so that when people click on the padlock they get the actual Bank of America (or whatever) message saying, but trust us anyway. Don't worry about security indicators: OK, they're going to help some people, but enough people simply won't notice them that you'll still be successful. And yeah: short-lived sites, reverse proxies. The final thing is that you only need about a 1% success rate for it to work, whereas the defenders need about a 100% success rate, which at the moment they're not getting anywhere near. If you want these slides, they'll be available on my homepage, where I've also got an incredibly long usability tutorial. I'm actually going to be travelling for the next week, so they're not up yet — wait about a week and you can get the slides from my homepage, along with a long tutorial on potential countermeasures. I've talked about all the attacks here; the countermeasures are a lot more complicated and take a lot more time to talk about, but wait a week and I'll have the slides up. OK, any questions? No questions? OK, that's it then, thanks.

Ah, a question: why can't Congress pass a law making it necessary for US companies running secure sites to use valid certificates from a recognised list of CAs in order to be accessible by a browser?

I don't know. One of the problems is that you would be legally enforcing a monopoly — the fact that you said a recognised list of CAs. I don't know what would happen if the government tried to legally enforce a monopoly of a small number of CAs. But yeah, the answer is I don't know — software-industry lobbying — I really don't know the answer to that, sorry.

I just want to make a clarification: with American Express, their certificates are not invalid. Their home page is hosted by Akamai, which is who the cert is issued to.

But again, it further obfuscates the real problem; it makes it more difficult to navigate the terrain. So, appreciated.

This is a little off your area of focus, I guess, but have you run any sort of studies on the installation of adware and spyware — people agreeing to things that are actually embedded somewhere, maybe in a EULA, or somewhere on there, but they just click through to get through the installation? Have you run any studies related to that?

I don't know if it's been studied specifically, but the principles are exactly the same
as what you're getting in phishing. This message pops up wanting to install a program, they don't really care, the automatic processing kicks in — they've got the stimulus, a dialogue box with an OK button — they click to get rid of it, and it installs the spyware. So yeah, it's pretty much the same thing.

I've got a comment about the banking login. It's actually both better and worse than the way you painted it. If you look at the source on those bank login pages, a lot of the time the page for entering your username and password won't be SSL, but when you click OK, the actual form post is over SSL.

But that doesn't matter, because when you go to a website and enter your username and password, you want to know it's the real bank.

Oh yeah, absolutely — you need to authenticate the web page, so it doesn't really matter. I fully agree, and actually I work at a large online bank, and I fought that very battle and convinced our marketing department not to implement that kind of login page.

So yeah, one of the reasons for mentioning this is that hopefully, by pointing this out and pointing big signs at it, banks will fix that.

It's all about user conditioning, like you were saying — users become accustomed to entering their username and password for their financial transactions on a non-secure page.

Yeah. All right, thank you.

I'd like to know if there's any help in third-party hardware devices for the masses — boxes they would set in front of their firewalls that claim to be able to effectively filter phishers and key loggers coming in the door, so that would prevent that aspect of the attack.

Again, it's the blacklisting problem. In order to filter phishing in your router or whatever, you need a blacklist of sites that you don't allow in, and that runs into the blacklist problem: use a reverse proxy and you can't filter it any more.

Do you think that, with VeriSign becoming bigger — like acquiring GeoTrust — and growing, eventually something like that will become habitual
to users? Or do you think that's not going to be a solution in the future?

I don't think it'll become visible enough. If you look at the big credit-card companies and banks, they're spending — I don't know what they spend — hundreds of millions or billions of dollars on advertising. Every airport you walk into, you've got these huge posters: Visa, Mastercard, whatever. VeriSign simply can't compete with that.

I'm really interested in the web-of-trust, PGP type of stuff, and I'm trying to merge the two ideas, in a sense. I mean, at the moment we're looking at a central authority — all the trust is dictated from this one site. Is there any way you can envision this kind of merger? And I'm thinking also, contrary to the blacklist approach, where if we get a certificate from VeriSign we instantly trust the site: how about we build trust over time, so clients can see that people have signed this site over time and can verify timestamps over time — I'm thinking of a PageRank kind of algorithm. Do you see serious problems with this distributed web of trust, or what do you think about something like that?

It's subvertible. It would help a lot, but the problem with measures like that is that once you start building them in, they become a target for the bad guys, and they subvert them. You already get this on eBay, where some people open, like, a thousand bogus accounts and create bogus transactions and leave positive feedback for the one user, so they get a thousand positive feedbacks and people trust them. And yeah, a PageRank-type thing based on users voting on whether the site is good or not is subvertible in the same way. It's a good idea until the bad guys — sorry, the problem is you're adding a layer of security complexity, and you need someone to manage it and maintain it. The real problem is the user interface, not so much the nuts and bolts behind it. In theory it's a good idea, and if you can put a really, really good
user interface on it, it would be good. But all of these attacks are user-interface attacks, not attacks on the security mechanism. With SSL, the crypto is working exactly as intended; the problem is the user, not the crypto.

I've got a question more from the registry side of preventing phishing, because I work for a registry, in that particular section. What about a group looking for future fraud trends, future phishing trends, and cutting them off at the pass by making sure that these words and phrases are watched, and when such domains are purchased they're tracked? So at a lot of companies — VeriSign, GoDaddy, NameDomain.com — if you buy anything that says Visa, SSL, eBay, they're on you: it gets flagged, it goes to a department, they're checking it, making sure the credit cards are real, making sure all the information is real. If it's not — well, they've got a little thing in the disclaimer that says: I own this, not you; you're just leasing it; take it back. Now you don't have access to that domain name, making phishing that much harder. What about a group getting together to find these trends and cut them off at the pass?

That works to some extent — there's a whole pile of measures like that people are taking that help. The problem is that you can make yourself anyone you want — that's the honesty-box problem — you just create your own certificate, and you can bypass all of that.

OK, I guess that's it. Thanks.