All right, DEF CON 12. Let's please welcome Nick Mathewson, who is a lead developer on the Mixminion project, and who is also a co-developer on the Tor project, which was presented earlier here today. He lives in Boston, and he shall begin speaking now. Thanks for the intro and for the live mic. I appreciate you giving me a live mic, and thanks to everyone who came to listen. If you're in the back and you want to come up front, there's plenty of room to do so, but if you like it back there, then stay where you're at. So I'm going to talk about anonymity. Before I get too much into it, how many people have seen at least one other talk about anonymity stuff today? Excellent. Okay, this is good. So bear with me while I go a little speedy during the intro bits. My talk is about snake oil anonymity. This is not a new product I'm selling. This is a class of things that I'm suggesting that we, as intelligent people, ought to consider never, ever using if we care about anonymity. The term snake oil originally comes from the quack medicine peddlers of the American West. To this day, you can still find medical products that make extravagant claims about what they can do for you, where if you have a little bit of medical knowledge, you know that, say, aligning your DNA with the healthy flow of the Earth's electromagnetic spectrum is not actually going to provide any concrete health benefits. And people have applied this term to cryptography made by people who don't know any better: cryptography that makes no sense, that doesn't accord with actual research in the field. You usually get a whole bunch of gobbledygook. You get people who decide it would be fun to write a cipher, who do so in their off hours, and who come up with something they think is the bee's knees. They promote it on sci.crypt, and they're never heard from again. This, too, is snake oil.
But one area where we haven't got as much public awareness of what the good systems are, of what you should expect from good research, is anonymity. Somebody comes up to you and says, hey, I've got this new program. It's called Smoke Machine, and it's the ultimate in anonymity. It will keep you completely anonymous. You can do file sharing and whatever, forever. Well, do we have the machinery to evaluate this claim or not? I'm going to suggest that we do, and I'm going to try to impart it to you today. Now, a few notes about snake oil. The people doing these things are not bad people. They're not even dumb people. This stuff is usually made with the best intentions. And it's really easy: just as anyone can make a cipher that they themselves don't know how to break, anyone can design an anonymity system that they themselves cannot do traffic analysis against. And it's actually harder with anonymity, because even if you have a break for a cipher, you can show people what the break is and actually break some sample ciphertext, whereas the break for an anonymity system may require the resources of a major government to demonstrate that it is effective. So that's one of the problems. Now, you can roughly call snake oil whatever falls significantly below the high end, the state of the art, of the field. So first I'm going to talk a little bit about where exactly the state of the art is. What's the most that anyone currently knows how to give you? Then I'm going to move on to how much worse you can do, and some warning signs that the webpage you're looking at may not describe a very good system to entrust with your privacy. I'm going to close with how any protocol designers out there can avoid getting onto next year's version of this talk, or wherever I give this talk next. And I'll close with Q&A.
So where's the state of the art right now? First, the basic idea: what do we mean by anonymity? The best definition is that anonymity is being unlinkable within a given set of possible actors. This set is called the anonymity set. So for instance, if I send a message and you can't tell who sent it, but you can narrow it down to, say, all of the users of Mixmaster, then all of the users of Mixmaster are my anonymity set for that message. If, on the other hand, you can narrow it down to everyone who sent messages last week with Mixmaster, then I've got a slightly smaller anonymity set, and so on. Another property that often comes up besides anonymity is unlinkability, which is when you can't tell whether two different actions were performed by the same person or not. And related to anonymity is pseudonymity, where all of your actions are unlinkable to your actual identity, whatever an actual identity means in this day and age, but are linkable to one another. Like, you sign on as Batman whenever you fight crime. Anonymity is often confused with other things. In researching this talk, I found lots of projects that said anonymity when they actually meant something along the lines of steganography, as Rachel was talking about in her talk this afternoon; that's where you can't tell that a message is there at all. It's also often confused with non-retention. People ask, am I anonymous? And they'll get the answer back: well, we don't keep logs here. Okay, that works if you trust them not to keep logs and no one else is looking, but without cryptography it's about as reassuring as asking, is my data encrypted? and hearing, well, we don't read it. And people often use anonymous in a non-technical sense, meaning: I don't write my name on it. This is the sense in which, if I go into McDonald's, pay cash, and order a Big Mac, I am in some sense anonymous, unless someone actually cares and takes a picture of me. So, yeah, that's how the word is sometimes used.
I'm not going to use it that way today. So, how is anonymity achieved in practice, in the systems in common use? Basically, you try to have some sort of network where a bunch of people send messages in and a bunch of people get messages out, and the goal is to make it so you can't tell who is sending to whom. In the simplest case, you just want to forward messages. The way we achieve this is by having a whole bunch of servers, and having each particular Alice who wants to send an anonymous message choose a path of servers through the network and encrypt her message along the path, back to front, with the public key of each server. So it goes to her first server. The first server removes a layer of encryption and says, oh, here's a message for server two. It sends it off to server two, which removes another layer of encryption and realizes it's for server three. Server three removes the last layer and sends it to whichever of those Bobs Alice really wanted to talk to. Now, each server in Alice's selected chain only knows about the server immediately before it and the server immediately after it. So, to a first approximation, if any of the servers in Alice's path is honest, if any of them actually does unlink incoming from outgoing messages, then Alice gets some anonymity. These systems come in two major flavors. You heard about one if you were at Len's talk; you heard about the other if you were at Roger's. The first flavor is high latency mix nets. These are suitable for email. They introduce large message delays, which make it harder to correlate the timing of incoming and outgoing messages. But because they introduce such large delays, they're not suitable for web browsing, unless you're willing to tolerate a 30 to 90 minute delay between when you click the page and when you see whatever you want to see. On the low latency side, you're vulnerable to a larger range of attacks, but the system is more useful.
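A minimal sketch of that back-to-front layered wrapping, assuming a throwaway hash-based XOR "cipher" in place of real public-key encryption, and ignoring the per-hop routing headers a real mix net needs:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream from repeated hashing -- NOT real crypto; a real mix net
    # uses hybrid public-key encryption at every hop.
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def encrypt(key: bytes, msg: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(msg, keystream(key, len(msg))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

# Alice picks a path of servers and wraps her message back to front,
# so the last server's layer is innermost.
path = [b"server1-key", b"server2-key", b"server3-key"]
onion = b"message for Bob"
for key in reversed(path):
    onion = encrypt(key, onion)

# Each server peels exactly one layer and forwards the remainder;
# none of them sees both Alice and the plaintext destination.
for key in path:
    onion = decrypt(key, onion)

assert onion == b"message for Bob"
```

The point of the structure, rather than of this toy cipher, is that server one sees only Alice and server two, while server three sees only server two and Bob.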
So if you can tolerate the kinds of threats that can break low latency networks, then they're a much better bet in terms of usability for you. Now, we're going to be talking about security here, and as good security people, we know you don't just say, this is secure, this is not secure. You say, secure from what? What do I assume my attacker is going to do here? Do I assume they've got physical access, or do I assume they're on the network? So first of all, what might our attacker want to do? They might want to identify the originator of a particular message, or they might say, hey, here are two messages; can I link them to the same person? Or maybe they don't care who wrote stuff; they just want to shut up the troublemakers. And there are different attacks depending on the situation. Are there many targets that a given attacker cares about? Am I trying to eavesdrop on the world, or am I just trying to eavesdrop on one particular person I don't like? In that case, if I'm a well-funded attacker, I should probably just bug that person rather than going after the anonymity network. Am I out for probability or certainty? That is, am I willing to say, okay, I know with 99% chance you wrote that, or do I need proof? Am I interested in patterns or just incidents? (Somebody's giving me feedback. Oh, that's giving me feedback.) Am I interested in whether you regularly visit this website, or am I interested in whether, on this particular occasion, you visited this website? And finally, what rules of evidence apply to me? What rules of law apply to me? Am I willing to break the law to break the system? Now, what do we assume attackers can do? In all of the really good systems out there today, you assume an attacker who can participate in the system as a user themselves, who can eavesdrop on anywhere between a little bit and a lot of the network, who can compromise some of the servers on the network, and who can start up servers of their own.
This is all stuff that we're pretty sure there are people and organizations out there who can do. And there are probably some folks at this conference, although I'm sure all of you are totally white hat, who would have no problem breaking into any number of servers on any number of anonymity networks, should they so choose. And anyone can run servers of their own. Now, against these threats, how well does the state of the art do? What should you do in order to be as good as we currently know how to be? First off, there's a large category of systems that you only see in academia, that you only see in published papers. These are provably secure, and they tend to assume things that don't hold in practice: that no one ever enters or leaves the network, that no one ever downloads the software and starts running it after it's first released, or that if anyone stops sending traffic, the entire network goes down. These are really neat if you like mathematical proofs, and I don't mean to put them down at all, but you can't really use them for what most people want. High latency systems currently can resist very strong attackers for a while. That is to say, if you send a lot of messages to the same people over a long time, eventually you can get nailed, and I'll say how in a moment. How long is an area of research; I'll talk about that too. And low latency systems currently can resist attackers who only see one end of your connection, as we were discussing a bit before. Any system that promises you more than this today is either major groundbreaking work that ought to be published in peer-reviewed conferences and will make its authors heroes of anonymity research, or it is mistaken. So against low latency systems, what do you do? You watch both ends, and you win. You notice, okay, packets come in, packets go out, and because it's low latency, they come out soon after they go in, so you can actually do a correlation there.
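To make that watch-both-ends attack concrete, here's a small simulation of my own (with made-up numbers): an eavesdropper who sees packet timestamps entering the network from two candidate senders, and the stream exiting toward Bob, scores each sender by how often an exit follows one of their entries within the network's small latency window.

```python
import random

random.seed(7)

# Timestamps (seconds) at which each candidate sender's packets enter the network.
entry_times = {
    "alice": sorted(random.uniform(0, 60) for _ in range(30)),
    "carol": sorted(random.uniform(0, 60) for _ in range(30)),
}

# A low latency network only delays packets slightly, so the stream exiting
# toward Bob is really Alice's entry stream plus a little jitter.
exit_times = sorted(t + random.uniform(0.05, 0.25) for t in entry_times["alice"])

def score(entries, exits, window=0.3):
    """Fraction of a sender's entry packets followed by an exit within `window`."""
    return sum(
        any(0.0 <= out - t <= window for out in exits) for t in entries
    ) / len(entries)

scores = {user: score(times, exit_times) for user, times in entry_times.items()}
# Alice's score is exactly 1.0 here; Carol only matches by coincidence.
```

Thirty packets is already plenty: the true sender matches on every packet, while an uninvolved user matches only when background timing happens to line up.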
And you might not even need both ends, if you can recognize a certain volume of traffic: okay, this request, followed by something about this big coming back, followed by this many requests, followed by this much stuff back, is the CNN front page. So that's an attack that does work against low latency systems. Well, couldn't you just add padding to these systems to make them work a little better? That is, just send all the time, so no one can tell when you're sending and when you aren't. There are systems that have been proposed that would do this, but none of them has actually gotten built and used. Zero-Knowledge Systems, with their Freedom network, did it for a while, and then they realized that actually paying for the traffic for this padding was infeasible. And even if you could do it, it wouldn't help much, because active attackers who can introduce timing patterns into your traffic as it goes into the network can defeat these schemes: I add a little hiccup here, and oh, there's a little hiccup over there; I just figured out where that's going. Against high latency systems, the best currently known attack against a well-designed system is long-term statistical correlation, better known as an intersection attack. It only works on long-term patterns, and it takes a long time to succeed, and currently we're sort of holding out hope that we can make it take too long to be practical. Like, if you need a thousand years of traffic, and most people don't live for a thousand years, that's as good as secure. But recent research is trying to home in on this question of how long it would take. The method is pretty simple; any decent programmer could implement it, if you had a global eavesdropping network. The idea is that you notice: okay, when this particular Alice is sending, the following Bobs receive on average 0.0001 more messages over the following few hours.
Therefore, over time, you do some statistics, and you become more and more confident about who Alice talks to regularly. And then, of course, there are other attacks that are always going to work on any anonymity system you could build, because you can always go around the system with social engineering: by, like, sending some anonymous person a message saying, I agree with your political thoughts. You are a genius. Let's meet at the mall and talk more. I'll buy you a soda. Then there are stylometric techniques, where you try to compare people's writing styles, like how they got Ted Kaczynski. And there are other ways: a system is only as strong as its weakest link, and computer security is pretty darn weak these days. Then there's direct eavesdropping on targets. If there aren't that many targets in question and you're an intelligence agency, you can just bug people's houses. Then there are standard meatspace attacks, which I don't need to elaborate on, except to say that if anyone watching this, here or on video, is employed by a government agency that does meatspace attacks: please consider sending someone to try to seduce me before you send someone to try to break my kneecaps. So, that's about the best we can do. Any system that claims more than that is either groundbreaking or wrong, and there's a good chance it's wrong, especially if the designers haven't published about it. And there's also a good shot that they don't know too much about anonymity. And if you use a system that gives you less than this, why would you do that? You could get more. So, what are some other warning signs that we might be looking at a snake oil product? I'm going to be mentioning some particular systems here, so a few comments. I'm not doing that to say that the people who built them are wrong, stupid, bad, or at all incompetent. I mention these systems only because I want them to be better. I like anonymity. I don't care who actually provides it in the long run.
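The long-term intersection attack described a moment ago is easy to simulate (a sketch of my own, with invented numbers): compare each Bob's receive rate in rounds where Alice sends against rounds where she doesn't, and her regular correspondent eventually sticks out of the background noise.

```python
import random
from collections import Counter

random.seed(1)
bobs = [f"bob{i}" for i in range(50)]
alices_partner = "bob7"   # the secret the attacker wants to learn

recv_when_active = Counter()  # messages received while Alice was sending
recv_when_idle = Counter()    # messages received while she wasn't
rounds_active = rounds_idle = 0

for _ in range(5000):
    alice_sends = random.random() < 0.5
    # Background traffic: 20 other messages to uniformly random recipients.
    recipients = [random.choice(bobs) for _ in range(20)]
    if alice_sends:
        recipients.append(alices_partner)  # Alice's one message this round
        recv_when_active.update(recipients)
        rounds_active += 1
    else:
        recv_when_idle.update(recipients)
        rounds_idle += 1

# Per-round receive-rate difference: background noise averages out over
# many rounds, but Alice's partner keeps a lift of about +1 per active round.
lift = {
    b: recv_when_active[b] / rounds_active - recv_when_idle[b] / rounds_idle
    for b in bobs
}
suspect = max(lift, key=lift.get)
```

No single round reveals anything; the whole attack is in the averaging, which is exactly why it only works on long-term patterns.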
I just want it to exist, and I want to move the field forward, and I want to do that by making people expect more. So, one of the best warning signs that a system might not be well designed is that its authors can't be clear about their threat model: who's being protected from what threat, exactly? Lots of folks just say anonymous. Actually, a pretty common progression you'll see in the more hackerly projects is that first they say, we are anonymous; then someone points out the standard attacks; and then they put in a FAQ entry that says, we're pretty anonymous, but there are some attacks that work. But they never actually say what their threat model is. Are they resisting a global adversary or a non-global one? An active attacker or a passive one? Without this, you wind up with systems that have lots of unnecessary features, and you're never really sure whether the attacks they're neglecting are attacks you care about or not. Some actual quotes I've seen on websites that are fairly extravagant about threat models are right there. One is: now you can fully control what others on the network learn about you. Full stop, no exceptions. Or: your IP is hidden, so no network analysis tool can reveal it. Wow, that's pretty impressive. So, what are these amazing systems that promise to do better than the entire research community has done in 20 years of work? They are single hop commercial proxies, or as I like to call them, "just trust us" systems. You know: we provide the proxy, and everything is encrypted, and we strip your identifying information. You trust us, and we don't keep logs. We promise. So you're anonymous.
You know, we promise. And if you trust them, and if you trust that they aren't being eavesdropped on, and if you trust that they haven't been compromised, and you trust that no janitor in that building is going to start tapping them and doing timing correlations, then, yeah, it might work. Of course, with respect to timing attacks, these systems might actually be worse than going directly where you want to go, under some circumstances, against some adversaries. For instance, if I want to eavesdrop on the whole world, that's pretty expensive; but if I want to eavesdrop on all the traffic that goes through anonymizer.com, I have to install far fewer black boxes to do so, and I have to break into far fewer computers. Presumably these are quite secure computers, but, well, it's worrisome stuff. And Anonymizer is just the best of breed in this commercial category. If you do a Google search on anonymous proxy, you will find a crapload of systems that do not, in fact, strip identifying information, or any information, and some of them actually say: yeah, we keep logs, but not for very long, so don't worry, and we promise to destroy them. No, honest. So that's a dangerous category, and actually some academic types who should know better fell into this trap for a while. For instance: okay, we'll do the right thing, we're running an entire chain of servers, so it's not a single hop proxy. Except that it is, if all of the servers are run by the same students at the same German technical university, and they're actually kept in the same closet. That's not functionally different from a single hop proxy in terms of the threat model. This was the Java Anon Proxy. They have gotten better since; this is no longer their setup.
But when you have a situation like this, it's also a great legal target, because if the same people are running the whole network, you only need one judge to make a decision that goes against you, and all of a sudden the whole network has been subpoenaed. And the Java Anon Proxy people were in fact forced by a German court to backdoor their network for a while. So this is worrisome. Now that they have servers in more countries, things may be better, but I'm not going to promise anything. And if you search on the web, the largest category of information you'll find is even worse. On lots of hacker sites, things are even worse than just trust us. It's: just trust him. Like, hey, I found this proxy over there, and I don't know who runs it, but it sure seems to anonymize me. And if you use that to connect to Hotmail, I bet you'd be really anonymous. No, really. Well, how do you know he's not keeping logs? How do you know that, like, the RIAA didn't put up a bunch of proxies just to find people who like to use anonymous proxies to connect to their favorite file sharing networks? We should know by now that you can't just trust some random computer that somebody leaves lying around for unclear motivations, just because it seems to anonymize you when you use it to connect to yourself. So that's that. Another category you see a lot of, occasionally in academia and often from people who should know better, is lunatic legal theories. I have actually consulted with a real-life attorney on this, and she said that, yes, it would not be legally wrong to call them wacko, although there were a bunch of other adjectives I wasn't supposed to use, so take that as you will. One thing you hear a lot is the idea of plausible deniability, and weird, convoluted, mistaken ideas of what exactly reasonable doubt is. I'm not a lawyer, but my understanding is that reasonable doubt means there is a good chance you didn't do it, not:
there is any chance whatsoever you didn't do it. Like, if the chance that you didn't do it is the same as the chance that you're going to be struck by lightning 20 times next year, or that you're going to guess a 128-bit symmetric key on your first shot, then chances are you're going to jail if you've done something naughty. Other lunatic legal theories are ones where you say: aha, I'm going to put up three files, and if you put them together like this, they make the Constitution, and if you put them together like that, they make a Metallica MP3. Therefore, you can't censor the Metallica MP3, because then you'd be censoring the Constitution. Yeah, I wasn't able to explain that to any lawyer in any way that made sense. The system that briefly proposed this was an academic system called Tangler. It had other neat properties that make it not a complete waste of time, but this argument was not one of them. And if you think these arguments are reasonable, your opinion doesn't count unless you are a judge, and your opinion of what a judge is likely to do doesn't count unless you are a lawyer. But let's talk a little more about plausible deniability, because lots of systems have said: well, we provide plausible deniability, because everybody relays. So when some user on the network transmits some file over it, let's say they decide to transmit Don't Be Cruel on the network, and they accidentally transmit it to an attacker, they've got plausible deniability, because they might have relayed it from any number of people. Well, okay, but after that attacker has seen a request for Don't Be Cruel come in from the user, and then seen nine copies of Don't Be Cruel go through, and then a bunch of other tunes by the King, then Jailhouse Rock becomes increasingly appropriate, and deniability is no longer plausible.
You watch the network for long enough, and you can become increasingly sure that people are doing bad things. There are many sort of hackerly anonymity systems that have made this argument. I don't know which of them still do, because it's hard to tell which of their documentation is most current, but at various times MUTE, Freenet, and Invisible IRC have all made this argument, and so has a more academic system called Crowds. And I've done it myself, before I knew better, so there's no shame in it. But you don't get off so easy. Another big warning sign you should watch out for when you're looking at something you found on the web: incomplete or missing design documentation. When I was researching this, the largest category of hackerly systems I found were systems where I could not categorically say, this is insecure. And the reason why is that the only way to find out what their systems actually did was by reading their code, and I did not have time to read all of their code. And even if I had, I would have no way of knowing whether what their code did was what they actually intended it to do, and which parts of their code represented their system and their design, and which parts were momentary conveniences that they intended to replace at a later date. With no specification and no docs, you cannot, and in fact the developers themselves cannot, evaluate the security of their own system, if they're only communicating with one another by rumor and innuendo about how the system is supposed to work. And my favorite piece of design documentation from this whole survey consisted, in its entirety, of: design forthcoming. This was on a commercial system that invited me to buy a copy. No: you design it first. And this category of code now, design never, is one you see a lot. GNUnet was doing this for a while; now it isn't. I've also seen it with the bulk of various long-lived systems. And, you know, there's a reason for this to happen.
This is understandable. You come up with a design, and someone comes to you and tells you about an attack. Okay, so you come up with a fix for the attack. But did you amend your design document? Did you revise your specification, your byte-level specification, not a description like you might write on the back of a napkin, but a spec that's good enough that I could reimplement your system from scratch and be compatible? Did you reintegrate the fix for this attack or not? Because over time, once you have, like, 12 of these fixes, are they all compatible with one another? Can I guess which 12 posts in your mailing list archive constitute amendments to your specification, and which of the 100 others are just neat ideas that one of the developers posted once? A representative quote: oh yeah, the one on the website is broken, but we don't do that anymore. Okay, at that point you no longer have a written design, no one can analyze it, and what we can't analyze, we must assume to be insecure. And of course, there's a lot that goes unanalyzed; the developers of these systems don't spend most of their time trying to break them, they spend most of their time trying to build them, which is, I guess, reasonable, but you have to actually know what attack you're addressing with every feature you add, and you need to know whether or not the feature works. So if you see a neat defense, you know, we juggle your bits three times and sprinkle them with holy water and put them in a bag and take them out again, you should look for some analysis suggesting that this actually helps against an adversary who exists, and that it's not just being done for homeopathic reasons or something. You hear a lot of people say: oh, I bet that attack isn't practical. That would be really hard; nobody could do that. Okay, have you simulated the attack? Have they simulated the attack?
Usually not. Usually they just thought about it, and they couldn't figure out how to make it work, and if three guys who do this all day can't figure out how to make it work, then it must be impossible, right? Examples of this are the bad extensions to the cypherpunk, a.k.a. Type I, remailer protocol, as discussed a bit by Len Sassaman in his last talk. Basically the idea is: hey, let's make a better remailer. Well, what's better? Better is more features. Sure. Well, not actually, because the more features you have, if these are optional features, every single user now has to be a security expert and make their own security decisions. So even assuming every user makes really good decisions, you're still badly off, because the best decision for one user may not be the best decision for another. So all of a sudden, you're no longer behaving alike. And if you're no longer behaving alike, you no longer provide cover traffic for one another, and you have a larger and larger number of smaller and smaller anonymity sets. You're no longer just another cypherpunk remailer user. You're the person who uses this feature set to yes, this one set to no, this one set to maybe, who prefers this latency, and so on. Some of the features that got added: the original cypherpunk remailer didn't have any padding. All messages were different sizes. Something would go in about a meg big, and it would come out about a meg big. It went in about a K big, it came out about a K big. So it was easy to tell messages apart. So the guy who wrote Reliable decided to fix it. How? By adding padding. How much padding? User-configurable. This would be great if all of the users in the world configured the same size of padding. But we don't. So all of my messages are a megabyte, and all of your messages are 1K. Have we gained anything? Not really. Was this feature ever analyzed? Was analysis of it ever discussed? No.
It was first specified in the user manual, which was the entire design documentation. A similar one was Max-Count, which was supposed to close a hole in the original cypherpunk remailer. Basically, in order to make replies work in cypherpunk, you'd have something called a reply block, and if you had one, you could use it to send any number of messages to a person. The way this worked enabled another attack: if you took a message that was going into the network, you could make, like, 20 copies of it and send them all into the network, and some poor fool on the other side would get 20 identical messages coming out, and you would know who was talking to whom. So, okay, we'll fix that. We'll add an optional directive that says: here's the maximum number of times you're allowed to use this particular route. Okay, this makes matters worse, for a number of reasons. First off, it makes sense to use this for messages that are going forward, but not for replies. So now, because not everybody uses it, and not everybody has it, and not all the remailers support it, all the forward people who don't use it are the replayable people. Then there are the people who do use it, and they're in a separate little anonymity set all of their own. And then there are the reply messages, which now stand out even better than before. And the hole isn't closed at all: if anyone ever makes a reply path to themselves, the replay still works. So it was a feature designed to close a hole that turns out actually to be necessary for many users of the system. So it didn't really do much good. Then there's a feature called Remix-To, where no one has ever been able to actually prove that it helps security, although people have proved in practice that it helps denial of service attacks. Basically, you can use it to force servers to send your packet wherever you like.
So you can force everybody to send your packet, or many copies of it, to server X, and then to server X, and then to server X again, and totally nail server X. And Len talked a bit about Latent-Time, so I won't go too far into that. Basically, it was trying to fix a flaw of the cypherpunk design, and did it in a way such that, if it had been analyzed, if anyone had actually asked, does this help? If so, how? Do the attacks still work? If so, how well? they would have found out that it didn't help. One large sub-variety of unanalyzed thing you will encounter is what I like to call voodoo padding. Like: I'm going to send a bunch of dummy messages into the system, so you can't do timing on me anymore, and you can't do traffic analysis on me anymore. Okay, did you try doing traffic analysis against your padding scheme, or are you just really sure that it's going to work? The representative quote here is: we send a big pile of fake messages to thwart traffic analysis. That's from the Invisible IRC documentation. If anyone has a specification for the actual algorithm they use to generate fake messages, I'd like to know. There are many padding schemes that have been specified and are known not to work. There are zero that have actually been shown to work against a reasonable adversary in published research. If you're sitting on a padding scheme that actually does work, I'd love to know about it. But if somebody tells you they've got one, or that they do padding and it solves everything, with no further analysis, they're probably not great.
And the standard attack here is simple signal extraction, because adding padding to your real traffic is just like adding noise to a signal, and anyone who's ever done any audio or analog processing in general knows that signal extraction is not an unsolved problem. And breaking anonymity systems is even easier than standard signal extraction, because rather than trying to remove noise from an analog signal and get something close to the original signal, you just want tiny little specks of it, and any speck, you know, any Alice-to-Bob link for any single message, is probably good enough for a lot of attackers. Another sign that you're dealing with a system you shouldn't trust is that they've been working in a vacuum: like, if instead of saying "mix" or "relay", they say "bounce" or "shuffle", there's a good shot that they're ignoring most of the past research in the field. There have been published papers for a good 23 years now, you know, older than some people who work on this stuff, and there have been systems deployed for over a decade. None of us is as smart as a room full of 50 smart people, I think, and even if we are that smart, we're not going to re-derive, in the time it takes to design a typical system, the entirety of anonymity research as it's developed over 20 years. So if it looks to you like someone makes no citations in their design, if they don't compare their system to other designs, if they seem to think that this is a new field that they just discovered and aren't basing their work on past research, there's a good shot that they missed something. There's a good chance that they didn't know all of the attacks. There's a good chance that they haven't thought of the ramifications of all of the defenses that almost work but don't.
And there are other warning signs, you know, the usual suspects: if people make crypto claims that make no sense, there's a good chance you shouldn't trust them with your crypto or your anonymity. Like, I found one system, I think it was Invisible IRC, where they had independently discovered that electronic codebook (ECB) is not a good way to use a block cipher, and so they came up with something that at best is reinventing cipher block chaining (CBC), I think, but I can't tell from their description, because it's not a specification. And at worst it's a broken cipher mode that isn't specified well enough for me to analyze. But if you knew about crypto, you'd just say: use CBC. So crypto ignorance is a warning sign. Other warning signs are secret designs or source you can't look at; I shouldn't even need to talk about that. So what do we do as designers? What should people do to not be in this talk the next time I give it? First off, design and specify your system. Your system might be broken, but a system that is specified in rigorous enough detail to actually know what it does, well enough to say this attack will work or this attack won't work, is a cut above most of the systems that anyone has built or used. And just to be clear, by "specification" I mean something that's good enough that someone who doesn't know you could, going only from the specification, implement a compatible system. It also keeps you honest. On every system I've worked on that I've had to specify, in the process of writing the first few drafts of the specification I realized that there were huge areas of the system that I had earlier just been hand-waving to myself about.
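To show why ECB is the wrong way to use a block cipher, here's a sketch with a deliberately toy 4-round Feistel cipher (an illustration of the mode problem only, absolutely not a secure cipher): ECB encrypts each block independently, so two identical plaintext blocks produce two identical ciphertext blocks, while CBC chains each block to the previous ciphertext and hides the repetition.

```python
import hashlib

BLOCK = 16

def toy_block_cipher(block, key):
    """Toy 4-round Feistel network on 16-byte blocks.
    For demonstration only; NOT a secure cipher."""
    left, right = block[:8], block[8:]
    for rnd in range(4):
        f = hashlib.sha256(key + bytes([rnd]) + right).digest()[:8]
        left, right = right, bytes(a ^ b for a, b in zip(left, f))
    return left + right

def encrypt_ecb(plaintext, key):
    # Each block encrypted independently: identical plaintext blocks
    # yield identical ciphertext blocks, leaking structure.
    return b"".join(toy_block_cipher(plaintext[i:i + BLOCK], key)
                    for i in range(0, len(plaintext), BLOCK))

def encrypt_cbc(plaintext, key, iv):
    # Each plaintext block is XORed with the previous ciphertext block
    # before encryption, so repetition no longer shows through.
    out, prev = [], iv
    for i in range(0, len(plaintext), BLOCK):
        mixed = bytes(a ^ b for a, b in zip(plaintext[i:i + BLOCK], prev))
        prev = toy_block_cipher(mixed, key)
        out.append(prev)
    return b"".join(out)

key = b"not a real key"
msg = b"ATTACK AT DAWN!!" * 2  # two identical 16-byte blocks

ecb = encrypt_ecb(msg, key)
cbc = encrypt_cbc(msg, key, iv=b"\x00" * BLOCK)

print(ecb[:BLOCK] == ecb[BLOCK:])  # True: ECB leaks the repetition
print(cbc[:BLOCK] == cbc[BLOCK:])  # False: CBC hides it
```

That pattern leak is the whole point: an eavesdropper learns which parts of your traffic repeat without breaking the cipher at all, which is why "I'll just invent my own mode" is a red flag.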
For instance, oh gosh, in the earliest versions of Mixminion there were tagging attacks, where you could scramble a few bits of a message and notice scrambled bits coming out the other side. Or in the earliest drafts we forgot to actually consider the fact that once you get a message, you might want to know how long it is. And because we didn't at first have a spec that covered that in detail, you know, until we wrote one we didn't know what we hadn't thought of. It should also be an integrated document, because interactions between your components will matter. Often people will come up with a fix to one attack and a fix to another attack, and both fixes are great, but it just turns out you can't implement both of them in the same system. A lot of the earliest versions of Mixminion, while I was working on them, spent most of their time trying to solve two attacks at once, either of which we could solve individually. Have a clear goal. Whom exactly is your system trying to defend, from what attacker? What do you want, at the end of the day, to guarantee your users? If you can't tell them "such-and-such an attacker will have about this hard of a time breaking you, such-and-such an attacker will fail completely, and such-and-such an attacker will win immediately," then the best you can say is that you're sort of anonymous sometimes, and this is not useful information. Attack your own system. You should be your own worst attacker. Be really pessimistic about your own system. Assume you're broken until proven secure. Too many people, when they hear about an attack, come up with one reason that the original version of that attack won't work, fix that, and then say, okay, now I'm done fixing my system. Instead, ask: how could the adversary fix the attack? Assume your adversary is at least as smart as you are. How would I attack my system now? And if you can't show that an attack fails, your intuition is not a reliable guide.
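The message-length leak mentioned above has a standard (if bandwidth-hungry) mitigation: pad every message up to one fixed packet size so an observer sees identical lengths on the wire. Here's a minimal sketch; the 1024-byte packet size and the 4-byte length prefix are my own illustrative choices, not any real protocol's format.

```python
import os

PACKET_SIZE = 1024  # illustrative; a real design picks one fixed size for everyone

def pad_message(msg: bytes) -> bytes:
    """Prefix the real length, then pad with random bytes so every
    packet on the wire is exactly the same size."""
    if len(msg) > PACKET_SIZE - 4:
        raise ValueError("message too long for a single packet")
    return len(msg).to_bytes(4, "big") + msg + os.urandom(PACKET_SIZE - 4 - len(msg))

def unpad_message(packet: bytes) -> bytes:
    """Recover the original message from a fixed-size packet."""
    n = int.from_bytes(packet[:4], "big")
    return packet[4:4 + n]

short = pad_message(b"hi")
long_ = pad_message(b"x" * 900)
print(len(short) == len(long_) == PACKET_SIZE)  # an observer sees identical lengths
```

Of course, in a real system the length prefix and padding would live inside the encrypted payload, or the padding itself becomes one more thing to analyze.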
If you can't show that it fails, assume it succeeds. If you can't prove it's hard, assume it's easy. And last, you know, know the literature. This is not a field that came out of whole cloth. There have been papers since the early 1980s. There are lots of broken designs for you to look at, so you can really become an expert on how people have tried things before and how people have failed. There are lots of really cool ideas that never got built for one reason or another. And there are lots of attacks, so you don't have to think of them all yourself. And this is my last slide. There's an anonymity bibliography at that URL; that's a great place to start learning more about actual research. Shameless plugs for Tor and Mixminion, both of which I work on, and which I hope are not too guilty of the practices I've listed above, but hey. Don't trust these just because I gave the talk. Don't mistrust someone else just because I said they were bad. Use your heads, use the information I've given you, and try to do something with it. And now I'll do questions. If you're in the back, you should either shout really loud or come up here, because my hearing isn't what it used to be. Or suggest random topics people would like me to rant about a bit.