I'm doing a speech on user interface security design for dummies. Who is that? I have to apologize. How many people in this room have at one point in their careers signed an NDA? Okay. When I started working on this kiosk package that I was planning on demoing a little bit of here, I too had to sign a nondisclosure agreement, and to present parts of that kiosk package, I asked for a waiver of parts of that nondisclosure agreement so that I could present things to show you. And so I got a message on my voicemail, as I was getting off the plane to come here, from the general counsel of the institution that I worked for. Incidentally, the NDA says that I cannot disclose who I work for, ironically. The message said that the waiver of my NDA had been rescinded and that I wasn't going to be able to tell you anything about the kiosk packages that I was supposed to be presenting here. So I've been up since Thursday at 9 o'clock in the morning. So if I'm a little bit sluggish, or if I mix up terms, or if I make absolutely no sense whatsoever, please forgive me. Thank you. All right. Okay. The primary sources for a lot of the conclusions that I'm drawing in this speech come from five years of help desk experience dealing with users of all intelligence and experience levels, and six months of dedicated standard usability testing of ideas that me and my fellow developers came up with for a shell-replacement kiosk development package. First question: who is the bigger threat to your IT department? Trust me, it gets worse. The elite haxor. He knows where you live. He knows header files. Yes, he is elite. He cried when attrition.org stopped updating their defacement mirror. And he has never used plain English when communicating with any of his friends, even over the telephone, believe it or not. And now we have Ms. I Think My Computer Hates Me. I told you it got worse. She's a frequent visitor to buddyicons.com. She has a system tray in Windows NT longer than her hair.
She actually thought Clippy was sexually harassing her the first time that he came up. You know, he's all cute. You know, he popped up, had that little wink, you know. She, ironically, was recently promoted from billing to help desk supervisor. This is my life. The fact of the matter is that they're both a threat. I was actually supposed to ask which one you thought was more of a threat, but screw it, they're both a threat. The fact of the matter is that all users are only human. Humans are fallible beings. Humans have reactions to stimuli, and regardless of how simple or how complex or how threatening that stimulus is, their reactions to it are often guided by instincts, limited information, and prejudices that they come to pretty quickly. And as humans, they're prone to follow behavior patterns that they're familiar with. Once they get into an established pattern, they're more apt to regress right back to it than to try something new or to grow. I mean, we're only human. We all do that. Poor interface design puts the user in a position to compromise systems indirectly, either by screwing up their security settings or by being deceived by someone who's trying to intrude into the system through the user. First I wanted to talk about, well, first, I have basically two big items that I wanted to hit on. Users are often betrayed, security-wise, by their own sense of paranoia or by their own comfort level and comfort zone. Probably the most disturbing single thing is pop-up events that happen such as this. In any kind of usability testing, we've always found that these kinds of pop-up events, regardless of how benign, regardless of how simple or how complex they are, are usually the single most jarring thing for a user. Even though it's part of daily life in virtually every interface, it's still jarring for users to have something just pop up right in their plane of vision. I mean, it's unnatural.
If you're just sitting there and someone pops up right in front of you, you're going to be scared. Humans, while they don't have the same reaction to pop-up windows because they're more used to them, are still, on a subconscious level, threatened and heightened into a slight state of alert by pop-up messages. Another common thing that we found was no-exit-opportunity events. When a page has poor code and your poor browser is trying to debug it, you have to go through screen after screen after screen after screen of these standard little pop-up messages. There are a lot of other types of no-exit-opportunity events, things such as bombardments. When you type one key wrong while you're entering the URL, boom, you're suddenly bombarded with 50 windows and things like that. The other thing that can be used to exploit users through their own paranoia is confusing security levels and inappropriate metaphors for security settings, clustering security that should not be clustered, and all of this causes frustration that leads to exploitable user paranoia through no direct fault of the user's own, even though more often than not the user isn't actually that stupid. Of course, this being DEF CON, I have to dis Windows, and I mean legitimately: the security options in all of the new Windows flavors are extraordinarily complex. They are far, far, far beyond most people's knowledge about system security. Even if you know general basic security paradigms, even if you're an educated user, a lot of the options presented in these menus can be very confusing and can be threatening. The other really big disadvantage to what Windows does is that it sets supposedly custom security levels: high, medium, low, and completely fucked. I'll take questions at the end if that's all right. It's a very, very poor security model, and more often than not, users have different needs. Users need to access different sites.
Users need to use the internet in many different ways, and they need to set up different security paradigms for what they're going to be doing. This is one way, just by confusion, that the user becomes paranoid, and it creates a weakness in the system that's user-based. Of course, the most obvious one is... I think I lost my place here, I'm sorry. Just going back to what I was saying: paranoid users will often interpret security information incorrectly and revert to insecure behavior, because they draw analogies onto network technologies from things in real life. One actually somewhat common one in the environment that I was working in was professors who were under the impression, well, under the somewhat correct impression, that digital cell phones are usually less prone to casual eavesdropping than party lines, than the PBXs and things in your office. If you want to make a private call, you're probably better off taking your cell phone outside than risking someone else in the house picking up the phone and casually eavesdropping, or having the line tapped, things like that; we've all heard those stories. Because digital cell phones are generally more secure than wired phones, these professors think that wireless networks, of course, have to be more secure than wired networks. I'm not kidding, people actually think this, and that's one way that user paranoia can be exploited: by drawing and reaffirming incorrect analogies such as that. Smart users, like I said, will also seek familiar escape routes out of visual or aural bombardments. When they're presented with a lot of pop-up windows like that, if you can design interfaces that exploit the user's own patterns and own tendencies to try to escape those things, you can, of course, plant trojans, you can penetrate security, you can get to things that you're not supposed to be getting to, and of course, you're going to get to them because the user's trying to escape.
They're not trying to keep their system secure. They're not trying to understand what's going on. They're just simply trying to get out. You can install a lot of things by presenting another bombardment and leaving nice little simple okay, okay, okay, okay buttons, and just install a whole host of stuff through visual bombardment like that. Paranoid users, ironically, in what we found in some of our usability testing, are prompted to shut off security warnings. Of course, being paranoid users, they fail to follow up on passive security indicators. They think that simply because there's nothing telling them that something's wrong, nothing is wrong. I mean, they remember that they shut some things off, but they still expect the computer to tell them if something's gravely wrong, and, of course, the computer's not going to tell them that there's anything gravely wrong, because the computer's a computer. You told it not to do something; odds are it's not going to do it. And so that's another way that a user's own paranoia can be used to betray them and compromise security. And also, paranoid users, like I said before, trust familiar visual cues. Another common way of exploiting things is just by saying, you know, install this free security update, or you need this security update, or you need X, Y, or Z. If you can present an interface that's familiar enough to them, that's another way that you can compromise user security. Of course, the flip side of that is exploiting the user through user comfort and sources of false positive information. One quick example of this is relying on the preponderance of visual cues. Security alerts like these are kind of, you know, half and half. More often than not, yeah, it's fine that, you know, Microsoft for some reason has decided not to list that issuer of security certificates.
But the way that the average user reads this is very similar to the way that the average user will read a newspaper or a magazine. The first thing that they'll see is that big icon that's on top of the big white space towards the left. That's the first thing that draws their eye in. That sets the user into a state of panic. You know, it's yellow, it's a warning. And then the average user will go down, and the iconography will register in the user's conscious mind before they read any text. They'll see, you know, okay, there's a smaller warning icon there. But oh wait, my eye is drifting down below. After that initial smaller warning, I see two green things. So everything must be okay. Even if the user does read the text, and even if the user is not confused by the text, just because of the way that the iconography is laid out, the user is more apt on a subconscious level to just say okay, especially with the line at the bottom saying do you want to proceed. Of course, the user is trying to do something. This is just another hindrance. Oh, it's trying to tell me something, oh, but okay, I guess it's okay, as they read down. And even though the default option on that window is no, more often than not, even with educated users, they're going to click yes. And like I said before, comfortable users, people who have a certain comfort zone about things, will follow patterns of usage that are routine or diversionary. You know, one of the most common ways that users can be exploited through their own sense of routine or familiarity is through friends-and-family viruses in email and things like that. You know, Melissa, ILOVEYOU, and stuff like that. And actually, there's one particularly nasty friends-and-family virus that actually employs no code. It's actually a hoax. It's basically a message that people forward to each other, just like, you know, Good Times and things like that.
Only this one directs the user to take action: to find a specific file. In this case, it was a file that was a utility that restores long file names in Windows. And the hoax used that system of trust networks that people have in forwards and in email. And it's, you know, an extraordinarily common behavior pattern on the net to exploit the more comfortable user. Ironically, this kind of behavior also works on the more paranoid user, because when you present a paranoid user with any kind of warning and at the same time present the user with a call to action, more often than not, the paranoid user is going to immediately follow that call to action, regardless of what it is, because that's what they think the best course of action is right there. It's just like, you know, when somebody says, oh my God, oh my God, he has a gun: what's your first course of action? Duck. Regardless of where the gun is, the first thing that you're told, the first thing that you've seen other people do, that you've seen in movies, is the first thing that you're going to do. The first thing that you're presented with in an email like that is, odds are, what you're going to do. And of course, another very common thing in help desks with lots of secretaries who like cute things is cute little Trojan applications. These are lovely. I mean, let's go back to Ms. I Think My Computer Hates Me. Okay, she has tons and tons and tons of these cute little applications. Occasionally one will have, you know, a nasty little Trojan that'll wipe out her computer and every computer on her subnet and all these things like that. And then of course we have to deal with it. I'm not bitter. So because I don't have the kiosk package to present to you, I'm just going to jump right into the five points that I believe can not only increase the security of the interface, but also increase the usability of the interface.
The first precept that I'm going to present here is actively using intelligent agents, things that are similar to IDSes, in the background: to predict user behavior, to record user behavior, to route around problems, and to automatically readjust security parameters based on what the user does and does not do. One of the first sub-points of this precept is avoiding intelligent agents in the foreground. You know, let's go back to pop-up menus and things like that. You know, users hate that. Honestly, even though I made a joke about this initially, at one help desk job that I was working at, there was actually a secretary who went all the way through the standard sexual harassment complaint procedure because of Clippy. Of course it ended at the help desk, it ended in a trouble ticket, and of course the nicest, most sympathetic guy who was working at the help desk had to calm her down, because she was so traumatized by this little paperclip that appeared in the lower right-hand corner of her screen and winked at her. Honestly, these people have nothing better to do. And of course the best thing that you can do is predict and adapt to suspicious behavior. I actually got clearance to tell you one of the things that we were going to implement in the kiosk system but never actually ended up implementing. The bombardment attack, like I was pointing out earlier, pop-up windows and things like that: what we were trying to do was map that behavior pattern. And before those windows got a chance to propagate all over the workspace, what we wanted our kiosk application to do was to automatically dock all of them at the bottom. And then after all of the pop-up windows had come to the surface and been docked at the bottom, what we wanted to do was write an intelligent agent that would figure out whether any of those boxes were actually needed to gain further access to the site.
And then of course give the user the option of either browsing through all of those pop-up windows to find what they wanted or, more often than not, just simply getting rid of them. And that's the kind of thing that we can do with intelligent agents now. I mean, we didn't end up implementing that, but we were a small development team. It's not at all, in any way, shape, or form, beyond the scope of any kind of professional development team to develop intelligent agents to predict that kind of common, jarring behavior that leads to insecurity for users on the internet, and route around it. The second point actually comes from a paper that was presented in a journal of the Association for Computing Machinery. The paper is called The Anti-Mac Interface. And one of the big points they made was: don't hand controls to the user when they're not in a position to be in control. It's OK to have the little five-year-old come into the cockpit and see everything like that. It's OK even for an older person who's into aviation to hold the controls of a plane for a second like that. But you don't want Joe Novice off the street flying your flight out of LA to wherever you're going. You want an experienced pilot who knows exactly what they're doing flying that plane. And it's the same sort of thing for users. You don't want to give them controls when, far and away, they're not qualified to handle the kind of responsibility that that control invests in them. The second point in that is: don't give the user any information they can't use. If you have an intelligent agent application system set up beforehand, what you really want to do is to try to send this information into some kind of log file, into a place where it can be examined. Send it to network administrators if you're running a series of kiosks, or at least bury the information in some kind of log where you don't give the user the opportunity to act on any of that information.
There they can look at it, they can see it, and if they know exactly what they're doing, they can fix and predict and adapt to that behavior. But you don't give them a dialogue box with the choice of yes, no, or cancel. This actually leads to another one of my funny little help desk stories. Our network admin changed all of our SSH keys on all of our login pool servers. And unfortunately, he forgot to send out the customary email the week before saying, oh, hi, the keys have changed, your SSH clients are going to say the keys have changed to this, just click yes and continue, and so on and so on. So this came up all over our department. We got a couple of calls about it. And when we were looking at the servers later, we found that a lot of users who we had told to use SSH because it was more secure would look at this and panic and think, oh my god, oh my god, SSH is insecure. And what would they do? They would open up Telnet. Like I said before, that's a familiar behavior pattern to them. Because we said SSH was more secure, it didn't necessarily register in the user's mind that Telnet was insecure. And Telnet is what they had used before SSH, so that's immediately what they went back to when they perceived SSH to be compromised. And of course it wasn't, but in any case. And the second-to-last precept of this is: in kiosk and office applications, the interface should not accept new software from the user. Of course, this is most often not possible in commercial applications. But in office applications, and in places where you can restrict things like that, where you do have a cohesive policy that says don't download cute little sheep, this is the better course of action. And you can have kiosks out there that know exactly what software they may need in given situations, and reprogram them occasionally to say, OK, new software has come out, you can download this, and not have to take up so much room on the kiosk if you don't have that room.
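As a rough illustration of the log-instead-of-dialog precept, here is a minimal sketch of a background agent that buries security events in a log by default and only flags the rare critical case for interruption. Every name here (SecurityEvent, quiet_handle, the severity levels) is invented for illustration, not taken from any real kiosk package:

```python
# Hypothetical sketch: route security events to a log an administrator
# can read later, instead of raising a yes/no/cancel dialog at the user.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("kiosk.security")

@dataclass
class SecurityEvent:
    source: str    # which component raised the event (e.g. "ssh")
    severity: str  # "info", "warning", or "critical"
    detail: str

def quiet_handle(event: SecurityEvent) -> str:
    """Bury the event in the log; interrupt only for critical events."""
    if event.severity == "critical":
        return "interrupt"  # the rare case that might justify a dialog
    log.info("%s: %s", event.source, event.detail)
    return "logged"         # everything else never reaches the user
```

Under this scheme a changed SSH host key would be logged and flagged for the admin rather than panicking the user into falling back to Telnet. The same logging-by-default stance pairs naturally with the kiosk restriction on accepting new software.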
But the trade-off to this is, and this is again an enabling-usability point: afford conveniences to the user that make them feel in control. Give them more powerful controls than they have now to manipulate the objects that they need to manipulate in their environment. But don't give them the tools to manipulate the environment itself. What makes you feel like you have more control over a car: a responsive stick, responsive steering, responsive gas and brake pedals, or the ability to reroute the oil into the gas if you need to? I mean, when you're using the car analogy, control of a car is all about how you use it to get the work you need done, done. And if it's responsive, and if it's active, and if it's using the agents that we're trying to describe and trying to develop, then it will actually become more usable and at the same time more secure. And the other thing is: please don't build your interfaces like brick houses. Make it as modular as you want, but be sure that the bricks don't fall out too easily. I'm sure that anybody who's written a lot of papers in Word has at one point or another accidentally dragged the menu bar out of where it's supposed to be. And as in another paper I quoted, there is absolutely no reason that most users would want this functionality. OK, maybe there are a few crazy free-handed Quark heads out there who think that that's cool, but far and away that's a quote-unquote feature that more often than not leads to a less usable operating system. I mean, most of the people in this room know, OK, you drag something out of it, you're going to drag it back and it'll be fine, but like I said, for the average user out there in the world, that'll happen and they'll think that they've broken something. I've actually gone on help desk calls where people said, I broke Word, it doesn't look the same, windows are popping up everywhere. And then they were incredibly amazed.
I mean, they thought we were all magicians, simply because we went over there, we clicked on it, we moved it back to where it's supposed to be, and that's where it popped back into place. Like I said, I'm not bitter. The third of the five points is: design interfaces that function far more like a workshop or a playground and less like an office. I so, so wish I could show you the kiosk that we were doing, but instead you're going to have to deal with lame examples. Avoid metaphors that don't translate well into real life. The magnifying glass is one metaphor that translates very well into real life. You know, people go into graphical applications, they see, oh, there's this magnifying glass here. I'm sure that it'll let me either zoom in or zoom out of whatever this picture is. You know, zooming out is a little bit more complicated of a feature, but that's the kind of functionality that translates directly into the real-life world. For the life of me, I still can't figure out why they called cookies cookies. I mean, I know why, but you know, like I said, you're working at a help desk, you have a paranoid user. Honestly, once any of these ladies downloaded a cute little sheep and got a virus, they automatically went back into the security settings and set everything to paranoid, and so every day when they wanted to log into any system, they were so happy, because they got to receive cookies all day. They were happy with that. I mean, users are dumb. You have to present better metaphors, or simply don't use metaphors at all, when you're designing things like that. Develop reactive interfaces that encourage play and that are receptive to new interface technologies. We're still stuck in the WIMP paradigm, you know: windows, icons, menus, pointer. Think about someone who has never, ever, ever used it.
It's counterintuitive, and the only reason it caught on was because there was a critical mass of people out there who could help other people, you know, understand the paradigm. And it's really not that hard of a paradigm to learn, but as far as interfaces go, it's not as robust as it could be. Let me see, where was I here. One of the things that a friend of mine and I were playing with in the CAVE where I go to school, but I can't tell you where that is. It's a little school in southeast Michigan. You might have heard of it. They have a really big football stadium. One of the things that we were playing with in the CAVE was trying to develop a three-dimensional file system, and we had picked up on some work that was being done at Berkeley and in Germany by people who were using the tree as a paradigm for file systems. And it's really not that different from the nested folders that we have in virtually every operating system today. The different thing about it was that you could assign a different attribute to every single file, and depending on what you needed to do and how you needed to organize your files, you could arrange things in a tree-and-branch structure. Every file that was similar, but part of a separate thread of a project, would be arranged that way in 3D space, and you could manipulate it. And the really cool thing that they actually ended up getting to work, which doesn't really have a practical application now, because we can't all walk around in CAVEs, was the ability to break a branch off of your tree and hand it to someone else. And based on those same attributes, and based on the attributes that the user on the other end, the user you were handing it to, had set up, they would accept it. They would put it on their tree, and automatically that branch would configure itself to the way that they had everything set up. Maybe they had the same project organized in a little bit different of a way.
Based on the attributes in that file, they could re-relate their whole three-dimensional file system architecture to be responsive to them and do what they wanted. And that's the whole point: there are amazing new interface technologies coming out that people can use, but that aren't being implemented into operating systems, because there's simply no way to put them into the WIMP paradigm. And like I said before, that leads to a lower degree of usability and, in many ways, a lower degree of security. If you give the user more time to play with things in the interface, they spend less time screwing around with security settings and sending me to their houses at three o'clock in the morning. I'm not bitter. The fourth one is: use more subtle cues to convey varying levels of security information. I know these are kind of bad examples, but things like color, contrast, and sound can be used to communicate security paradigms and security levels and things like that much, much, much more efficiently than normal. Am I almost out of time? No? OK, we can talk for the rest of the time. I can't talk for too much longer. I mean, I've been making up jokes all day. Avoid the use of buried small icons, pop-up menus, and extraneous messages to communicate threat-level and security-level information. Things like that we only look at when we're putting in our credit card numbers and things like that. It's in such a small place that the user is not going to be able to notice it. The average user may casually glance down there, but as they're progressing through various pages, they're not going to go back and reference that same point every single place.
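To make the color-cue idea concrete, here is a toy sketch of tying a hue to a security level, so the whole surface, rather than a tiny buried icon, carries the signal. The specific levels and hex values are made up for illustration:

```python
# Toy mapping from security level to an ambient background hue. The
# levels and colors are invented; the point is that the cue is
# unmissable rather than tucked into a corner of the status bar.
SECURITY_HUES = {
    0: "#ffffff",  # untrusted: plain white, no sensitive input here
    1: "#cce6ff",  # light blue: fine for casual, low-stakes browsing
    2: "#3366cc",  # deep blue: safe for credit cards, transcripts, etc.
}

def background_for(level: int) -> str:
    # Unknown levels fall back to the most paranoid rendering.
    return SECURITY_HUES.get(level, SECURITY_HUES[0])
```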
A better way to do it, even though we weren't really able to do it with the technology that we have now, was to communicate varying levels of security using a blue opaque background behind whatever they were doing, so that they knew, just simply by subtextual cue, that, okay, this interface is a light blue, and that means it's secure enough to do things that most people wouldn't bother to get information on. When the hue turned to a deeper and deeper opaque kind of blue, then you knew that it was secure enough to do even more things: entering credit card numbers, entering personal information, checking transcripts, things of that nature. I mean, we don't have all the answers, and the guidelines that I'm presenting here, I'm just throwing out there. I mean, a lot of people accuse user interface design of being a big old crapshoot, and they're not necessarily that wrong. You know, we invent these paradigms and we throw them out there for use, and what I'm trying to do is bridge some user interface paradigms into the realm of security, so that we can prevent Ms. I Think My Computer Hates Me from compromising all of our networks. I went through that. And the fifth one, and this, believe it or not, does not require that intelligent of an agent, and is actually such a simple thing that I can't believe it hasn't been implemented to a greater extent: filtering commands issued to the operating system based on user-initiated behavior. Just a couple of examples. The browser should not be able to just pop up an MS-DOS window in Windows. But it does, you know. I mean, maybe that's usability that we wanna have, but when you have a kiosk system where pretty much everything that you're doing is through a webpage like that, that's not something that you wanna give the user who just walks up access to. Even dumb filtering can prevent simple, simple things like that.
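Here is a sketch of what even dumb filtering could look like in a kiosk shell: the browser may only spawn programs on an explicit allowlist, so a stray page can never pop open a DOS window. The program names and the allowlist contents are illustrative assumptions, not from any real kiosk package:

```python
# Hypothetical kiosk-shell filter: only the browser may spawn anything,
# and only programs on a pre-approved allowlist. Names are illustrative.
KIOSK_ALLOWLIST = {"iexplore.exe", "acrord32.exe"}

def permit_spawn(parent: str, child: str) -> bool:
    """Return True only for browser-initiated, pre-approved launches."""
    if parent.lower() != "iexplore.exe":
        return False  # nothing but the browser gets to launch programs
    return child.lower() in KIOSK_ALLOWLIST
```

So a request like `permit_spawn("iexplore.exe", "command.com")` is refused, even though the same action on a stock desktop would quietly succeed.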
Really, out there, and I don't know off the top of my head what they all are, there are 12 proven ways to crash 90% of the kiosks out there. They range from as simple as unplugging the back of the machine to crashing IE. And when you crash IE, the TCP/IP stack falls and everything goes to hell. And then, you don't know it, but script kiddies in Canada think they own the box. But that's another story entirely. Yeah, I was in Toronto's Union Station. I saw a bunch of little kids, who I later recognized to be one of the 2600 crews near Toronto, who had simply unplugged the back of the box, plugged it back in, set Windows NT to VGA mode, boom, and then they had everything. It's an incredibly stupid setup, but anyway. And the other thing that you wanna look at, and this also can be done without a tremendous level of intelligent filtering, is determining whether or not commands being issued by things operating at the interface level are actually coming from the user or from a sub-process that the user may or may not have initiated on their own. We could stop a lot of the friends-and-family viruses, a lot of the really destructive Visual Basic Script, if the interface figured out, well, wait a second, the user simply opened up the email and opened up the attachment; the attachment is not supposed to start rewriting stuff on the hard drive, and automatically cut off that action. And one of the reasons that intelligent agents work so well is because they log a lot of the activities of users and then compare that against a database of pre-programmed user behaviors. That's why Clippy knows, when you type "Dear" at the beginning of a letter, that, oh, you're writing a letter. Or if you type "To whom it may concern," it knows you're typing a business letter, because those are common conventions that can be used by intelligent agents to figure out what you're doing.
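That provenance check, asking whether an action traces back to a deliberate user gesture or to a script the user merely opened, might be sketched like this. The dict-based event model is entirely my own invention for illustration:

```python
# Hedged sketch: an interface-level filter that blocks file writes whose
# chain of events began with an opened email attachment rather than a
# direct user gesture (click, keystroke, menu choice).
def allow_write(action: dict) -> bool:
    initiated_by_user = action.get("initiator") == "user"
    from_attachment = action.get("origin") == "email_attachment"
    # A script from an attachment rewriting the hard drive is exactly
    # the case we want cut off automatically.
    return initiated_by_user and not from_attachment
```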
There are common conventions in every interface in the world that intelligent agents can use to determine what you're doing and why you're doing it. And just some concluding thoughts. We're spending a lot of time developing these intelligent agents to keep the elite haxors out of our networks. A lot of this technology is being used and refined and is working to a certain degree. You know, it's imperfect. It's another realm of intelligent design. We're just on the cusp of another big jump in artificial intelligence. And just me as a user interface programmer, I'm saying we can use a lot of the things that they're learning and researching in IDS to make more secure interfaces, to bring intelligent agents up to the front. And like I said before, interfaces need to keep up with current input technologies. We've had 3D gloves and 3D-manipulation heads-up displays for a really long time, but they haven't been integrated into anything. Why? It's not because people don't want to integrate them. I mean, you know, Microsoft would love to sell more stuff to you. But it's simply so contrary to the paradigms that we've been using in interface design for so long that it's really hard to integrate that kind of technology in a usable way, in a way that will actually catch on and achieve critical mass. And that's why we need new interfaces, new interface paradigms. We need to start at the drawing board and figure out new ways to use these new input technologies in our everyday lives. And even simple interfaces, you know, WIMP-paradigm interfaces, even with complex agents, should be applicable everywhere a digital interface is desirable: everything from palm units to cell phones to massive desktop workstations and servers. Simple interface paradigms apply everywhere, and you can use the same visual and subtextual cues on a Palm Pilot as you use on sophisticated workstations and servers.
And that's the thing about paradigms. The WIMP paradigm is extremely portable. You do see it everywhere. You do see the same kind of menu classifications, and you do see the same kind of standard operating routes for users no matter what digital application you're using. By using simple interfaces with complex agents, not only are we enhancing usability but we're also enhancing security, which is becoming increasingly important. Has anybody studied the WAP protocol? How many people think it's secure? Do you see any hands? No, I don't see any hands, exactly. That's pretty much all I have. I just wanted to give a lot of thanks to these five people who helped me in a very difficult situation last night. We had a couple of productive conversations and were able to somewhat salvage this speech, I hope anyway. What? I was not drunk at the time of that conversation. The other guys were drunk at the time of that conversation. I'm glad they were. And so do I have time to keep on going? Okay, so we had a couple of questions, I think. Yes, sir. [Audience question, partly inaudible: when you try to go to Microsoft or anybody else, the help desk agents typically don't have enough knowledge about their own products.] Yes, I know, I worked with them for five years. Without even developing anything new, it's just helping out the current processes. The user was basically making the point that we don't have educated people working on help desks. We don't have educated people working on help lines, and that's true. Some of the dumbest computer people I ever met were working on the help desk.
I kid you not, I swear to God, I'm not bitter, I'm not bitter, but in every single help desk job I've had, I have never received an hour of official training. The only reason I knew all this is because I, like most of the rest of the people in this audience, went home and loved to break my own computer and my mom's and my dad's computers at the same time, and that's how we learned. But that's not how we should learn. I mean, in designing user interfaces, you should have room for play. You should have room to make mistakes. You should have things where mistakes are sometimes rewarded, where creative thinking is not necessarily detrimental to the security of the system or the usability of the system. And that's what I'm advocating. I'm advocating that with these five points, we can try to start over a little bit. And by using subtextual cues, and by using intelligent agents and things like that, and by starting from the ground up, by thinking, okay, we're at the beginning of user interface design, how would we do things better, knowing everything we know now from the WIMP paradigm? Question over here. [Audience question, partly inaudible: about error messages along the lines of "thanks a lot, you will die in five seconds, now call the help desk" — messages that read as the user's fault rather than the computer's fault.] Right, no, exactly. Secretaries are wonderful people. Every time they had some kind of kernel32.dll error, they would really, really helpfully write down every single memory register. They would bother scrolling down, write all this stuff down, take it down and record everything and expect us to know what the hell it was. I can't read memory registers. If I could read memory registers, I wouldn't have a job at a fucking help desk, okay? I'm not bitter.
Any other questions? You over here. I so wish I could talk about my kiosk application. God damn it. No, I mean, I hope I can say that we employed a very, very similar approach, based on real-world paradigms like that. It wasn't necessarily the same type of thing, but we were doing things like that. There are a couple of features in it that were really, really helpful. They weren't necessarily full intelligent agents, but they were resident. I think I can talk about this because I don't think this was my idea, but isn't that funny? You can't talk about your own ideas. Doesn't that suck? Anyway, but no, we had those kinds of things. We had a much softer interface. We had help available, you know, in a static place, and the actual icon would do things like that. One of the things that I do advocate in user interfaces now is using careful animation of standard icons that we see everywhere. There's standard iconography that you see in airports, that you see in bus terminals, that you see on highways and things like that, for hotels, for help, for police, for medical emergencies, for fire. And there are simple, simple methods of iconography that we can take from the real world in order to communicate subtle things like that. Like the whole thing about employing a color scheme to indicate security level, and then, for the users who want to know exactly what's going on, having a gradient bar at the bottom with specific information there to look at: okay, well, what does this shade of blue mean? And have it tell them exactly what that means. And that's the kind of successive disclosure that I think will be a part of future interfaces and that I do heavily advocate as a user interface designer. By the way, I'd like to thank everybody here for staying.
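The color-scheme idea here can be sketched like this. It's a hypothetical Python illustration: the shade names, levels, and explanation wording are all made up, standing in for whatever mapping a real kiosk would ship with.

```python
# Hypothetical sketch of the color-coded security cue described above.
# Each shade in the gradient bar maps to a security level plus a plain
# explanation that the interface discloses only when the user asks for
# it -- the successive-disclosure idea. Shade names, levels, and wording
# are illustrative, not from any real product.

SHADE_MEANINGS = {
    "pale_blue":   ("low",    "Public screen: no personal data is in use."),
    "medium_blue": ("medium", "Your session data is held locally only."),
    "dark_blue":   ("high",   "Data you enter here will be transmitted."),
}

def explain_shade(shade):
    """Return (level, explanation) for a shade; fall back to a safe
    default for shades the table doesn't know about."""
    return SHADE_MEANINGS.get(
        shade, ("unknown", "No information is available for this shade.")
    )

# The casual user just sees the color; the curious user clicks the bar.
level, text = explain_shade("dark_blue")
print(level, "-", text)
```

The design point is that the color alone carries the message for users who don't want detail, while the lookup gives exact wording to those who ask, so one cue serves both audiences without cluttering the screen.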
I know user interface design is probably the single least interesting topic at this year's DEFCON, but I just want to thank you for staying. Yeah. Oh, I don't know what the next talk is. Just so you know, it's not on your schedule, but White Knight's coming up with a good talk on video surveillance on the global scale, and he knows his shit real good. Right here. Same back channel sometime soon? I think so. I'm sorry. It's a bonus track. It's a bonus track. All of you guys got to stay. Those of you who actually ate a late lunch. I think you were next in the back there. Could you speak up? I can't hear you. There's a good paper out there that Zimmerman may have been a co-author on. I'm not positive about that. It's called "Why Johnny Can't Encrypt." No, I'm serious. That's what the paper is called. Type it into Google and you'll be able to find references for it. Look it up in your library. It's a very, very famous paper, and it figured in, I think, 40% of the research that we were doing; it was presented as a resource. And if you want to look into more about that, I mean, that's not something that I'm gonna touch with a 10-foot pole. But I think that there are ways of employing these paradigms that more creative people than I can use. I mean, user interface people are building upon paradigms and crushing paradigms and reaffirming paradigms every day. And I think that these five points are enough to build from. This is what my research and my experience have given me. I think I can take one more question, yeah. I hate attachments. No, I hate attachments. That was the bane of my existence in my help desk days. Do I have time for one more? Okay. I don't know why, but it's because most users tend to use what they do at their jobs.
And if we don't allow them... Yes, but this is why we have the network, and this is why we issue guest accounts to many, many people, so that they can go retrieve it for themselves instead of crushing our mail servers because this person has to send every dissertation that any of the students have ever written. [Audience comment, partly inaudible: from a producer's perspective, that's the way work gets done.] No, but that's a real-world paradigm that failed: mail attachments. Yeah, yeah, go ahead. Well, no, I mean, I'm serious. That's a real-world paradigm that we were very, very ill prepared for, the mail attachment. People love mail attachments, and I understand, because that's the way that they do things. Before, they'd put a dissertation in a FedEx package or in a USPS postal envelope and send it off in the mail. The same thing should work for email. Well, it should anyway, but it doesn't. It's a real-world paradigm that failed because we don't have the technology; we haven't written the protocols to support that kind of massive traffic. Right. Back to the floor. Right, I mean... Now we just went through... Well, I mean, I advocate SneakerNet. This one person was sending massive, massive files across campus, and it was still easier to send one of our help desk people with an Orb disk to get on their bike and ride up to our campus, or take a bus up to our campus. Regardless of how fast our network was, and we had a really fast network, it was always, always easier for them to ship things that way. SneakerNet is still a very, very viable thing. You know, just because we have 100-megabit networks and fiber and all this doesn't mean we have to use it. No, but that's what... Mm-hmm. We've taught them, the IT people, we've taught them how to do this, not the other way around. The IT people have given the users too much power.
I don't think it's necessarily the IT people's fault, but of course, as you know, I'm really, really biased. Are there any other questions? Any other lingering concerns? Yes, you sir. You smack them upside the head. No, but in all seriousness, that is why we need better interfaces. For people who just want to get their stuff done, we try to teach them, we try to give them information on how to get stuff done, but because interfaces are so intimidating on a subconscious level, there are a lot of people that we have to really bash into the pattern, into the kind of rut that we want them to go into, when all we want is for them to finish their jobs. Right, but the answer's not always the same to each question. I'll take this up outside. I'm taking time away from White Knight. Thank you very much. And if anybody has any jobs in San Francisco, I have...