and J. Beal and Thor are in the next two slots. You can check your programs to see what they're talking about. So I'm Ian. You can call me Dr. Ian. I'm from Zero-Knowledge Systems in Montreal. I've come to DEF CON, I think this is my fourth year, to talk about whatever random shit I wanna talk about. So, today, what it says on the schedule is that I'm gonna talk about a paper on anonymous rendezvous, or privacy protection for Internet servers. And the idea here is to extend the realm of privacy beyond that of the consumer. There are a number of systems out there, including Freedom, Onion Routing, things like Freenet and others, that... aha, a pen. Okay, this talk will now go much better, because I can draw things. Right, systems which provide for client-side anonymity, which is something I'll describe in more detail, and we want to allow them to be used to provide for server-side anonymity. So instead of just being private when you use an internet service, you as the provider of a service can also remain private and keep your identity, your IP address, hidden. And I'll talk a little about why that's a useful thing to do. Now, it's really early in the morning. I mean, 11 a.m. is not supposed to be really early, but we all know what we were doing last night. I had to duck out of my own party early so I could get some sleep and come here this morning. So, I want this to be more of a kind of interactive session. So, if you have any questions at any point on pretty much any topic, just do this and I'll shine a green light in your eyes. And then speak really loudly to ask a question, or to make a comment or something. And if you ask a question I have no idea the answer to, I'll just bullshit and you won't know. What's that? So the folks at the back can see. Okay, well, I have to carry all my shit up here now. I can't turn the mic up. Oh, they can turn the mic up, woo-hoo. Oh, I can't actually, because I need this. Well, I guess that can be moved up too.
Okay, so while I'm moving this, I'll just... Okay, how's that? Excellent. Yeah, just toast the chair entirely. I'll stand. Okay, and the Sharpie. Okay, so, I'll just start by talking about what I'm supposed to talk about. And if you want me to talk about something else, because you think that's more interesting, just jump in. So one of the most useful aspects of the internet is the services on it, right? The internet would be kind of useless if you had this great network of machines all hooked up together, but they were all clients and there were no actual services out there, right? You could get some peer-to-peer stuff going, email would probably work, but the whole World Wide Web idea wouldn't have happened. So, what's that? So it did. So we have these great services and we want to be able to use them. And we have this problem that I'm sure you're all familiar with. Rewind your mental clock back maybe about 10 years, or even a little less, to before the World Wide Web really happened, back to when it was just a bunch of us .edu geeks on the system talking to other .edu geeks. Basically, the internet was not designed with the idea of adversaries in mind, right? The internet was designed with the idea that everyone on it is cooperative and friendly and it's okay to have protocols like rlogin, right? We've whacked those down. So lately, like since the mid-90s, it's become clearly evident that there are adversaries on the internet. How many here are adversaries? Woo! So we need new protocols that design security into the system. And those things have happened. We now have SSH, we have a lot of cryptography, and that's all great. Now what we're seeing is that, oh, security isn't all we really want. We want protocols to have good privacy properties as well. And what I mean by privacy properties is that a protocol might inherently reveal a lot of information about you.
For example, any time you use IP, there's a source address in every IP packet you send out. So that means basically, no matter what you want, if you want to speak IP or TCP/IP to another host, you've really got to tell them where on the internet you are, and that correlates highly to your identity. The first thing I'm gonna draw, I think, to give you a good idea of what's involved here, is something we call the nymity slider. And the idea of the nymity slider... wonder if I can keep this page from going away. Oh well, maybe not. The idea of the nymity slider is that it's a way to indicate how much information about your identity is revealed in any given transaction. So what I mean by that is, at the high end, we have what we call verinymity, which just means true name. A transaction would be put here if, when you do the transaction, you necessarily have to give out some piece of information that strongly correlates to your identity. It could be your name, could be your phone number, could be your credit card number, it could be your SSN: anything that basically is easy to take and look up and turn into a unique person. So an example of a verinymous transaction is going online and buying something with a credit card. You give your credit card number, and that credit card number encodes a whole bunch of things about your identity. All the way at the other end, there's what we call, I'll stop scribbling, unlinkable anonymity. And the idea of unlinkable anonymity, or complete anonymity, is that you can do the transaction and no information about your identity is revealed. It's not even the case that if you come back a week later and do another transaction with the same merchant, he can tell that you'd been there before, or that that transaction and the transaction of a week ago were by the same person. So a simple example of this is going into a store and paying for something with cash, okay?
Now there are all these security camera issues and things like this that might make the transaction not actually anonymous, but we'll ignore those and just focus on the properties of the transaction. So you come in wearing a paper bag over your head and you pay cash and you leave, and next week you come in with a different paper bag over your head and you pay cash again, and there's no way for the shopkeeper to tell, well, depending on how many customers routinely come in wearing paper bags, that these two transactions were done by the same person. So that's what we call unlinkable anonymity. As usual in any kind of slider-esque system, the interesting points aren't the endpoints. The endpoints are the extreme versions of whatever you're talking about, and most of the real-life examples are somewhere in the middle. So above unlinkable anonymity, we have what we call linkable anonymity, and the idea of linkable anonymity is that it's just the same, there's no information about your identity presented in the transaction, but the merchant, a week from now when you come in, can identify that you're the same person as you were last week. And an easy example of this is using one of those frequent-buyer cards. So you go to the shop, you have your paper bag and you pay with cash, and then you flash them your Coke card, and that has a unique serial number on it. Now, when you signed up for this card it might have asked you for your name and address, and you wrote down Mickey Mouse, 101 Storybook Lane or whatever, and you lie, right? And honestly, they don't care, right? They care not at all about what name you put on that card. The information they want is to be able to track your purchases. So they know how often you buy beer, right? And things like this. So, right, Coke cards, Safeway Club cards, all these things provide for linkable anonymity.
Above that we have a point called persistent pseudonymity, and I think this is one of the most interesting places to put a protocol. If you're going to design a protocol, this is an interesting place to design it. And what I mean by persistent pseudonymity is that the transaction does have an identifier for you in it, but that identifier is not linkable to your real-world meatspace identity. Moreover, you can have more than one, okay? So the big difference between pseudonymity and verinymity is basically that you only have one real name, right? If somehow that real name gets tarnished, like someone does identity theft on you or something like this, you're in a lot of trouble, right? Your credit report is pretty hard to re-clean if it's been altered. With persistent pseudonymity you can have multiple identities. And this also lets you do things like separate out different aspects of your life. If you post a resume to a job-search site, you can do it under one identity, and if you post a personal ad to an online dating site, you can do it under another identity, and search engines should not be able to correlate that to the resume, right? At least that's the goal. So one of the fundamental properties of this slider, which is what's important to this talk, is that it's actually more of a ratchet than a slider. It's easy to move up and hard to move down. And what I mean by that is, say I give you a protocol that inherently has a low level of nymity in it, okay? Take the paying-with-cash protocol. If for some reason the shopkeeper and I want to agree that maybe he'll give me a discount if I give him some hint about my identity or show him my buying patterns or something like this, then we can negotiate that, and it's very simple to add nymity: I just show him my identity, or I show him my club card or something like this, right?
So I can easily take a protocol that inherently has a low level of identity revealed in it and add identity: in addition to performing the protocol, you just show your identity. Very simple. On the other hand, today basically the only real way to buy something online is with a credit card. And a credit card, as I mentioned, is pretty much verinymous. How would you go about changing that protocol to buy something online without revealing your identity? This seems to be a much harder problem. You have to start going back and reworking how credit cards work, and maybe issue different credit cards that don't have your identity linked to them but still look like a valid credit card. And it's not anywhere nearly as simple as just showing your identity, which is how you move up. So the lesson to be taken away from the nymity slider is that when you're designing a new protocol, you should always design it with as low a nymity level as possible. Even lower than you think you need. As I mentioned, persistent pseudonymity is a good place to put things. I mentioned the difference from verinymity; the difference from anonymity is that with persistent pseudonymity you can gain what we call reputation capital, or in general just reputation. If people just post anonymous messages over and over again, and those messages aren't linkable together, so you don't know that they're the same person posting them, then that poster never gains any reputation. When another anonymous message comes in, you can't tell: is this person that clever person that's been posting these things, or is it this nutcase, right? You have no idea, and you have to take each message at its face value. But with persistent pseudonymity, you learn that this pseudonym posts things you agree with, you think he's clever and insightful, and this other pseudonym is like some random guy talking about how Venus should be moved to the orbit of Earth to... yeah, some of you get that.
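One way to picture the ratchet is in code. This is just a toy sketch of my own (the level names follow the talk, everything else is made up for illustration): adding nymity on top of a protocol is a one-line `max`, while there is simply no inverse operation to take it away.

```python
from enum import IntEnum

class Nymity(IntEnum):
    """Levels of the nymity slider, lowest (most private) first."""
    UNLINKABLE_ANONYMITY = 0     # cash, with a paper bag over your head
    LINKABLE_ANONYMITY = 1       # loyalty-card serial number, cookies
    PERSISTENT_PSEUDONYMITY = 2  # a reusable false name
    VERINYMITY = 3               # true name, credit card number, SSN

def add_nymity(protocol_level: Nymity, volunteered: Nymity) -> Nymity:
    """Moving *up* is easy: just volunteer more identifying
    information on top of the protocol's inherent level."""
    return max(protocol_level, volunteered)

# Moving *down* is the hard direction: there is no remove_nymity().
# You can't un-reveal a credit card number, which is why you design
# the protocol at the lowest level you can and add identity later.
```

So a cash purchase plus a flashed ID card lands at verinymity, but a credit-card protocol can never be talked back down to anonymity without redesigning the protocol itself.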
So yeah, it's a good question. What's the difference between persistent pseudonymity and linkable anonymity? Basically, the difference is that with persistent pseudonymity you're generally given an explicit identifier. So you have some false name that can generally be used where a regular name can be used. So in an email, the From line could contain a pseudonym. With linkable anonymity, the transactions are linkable together, so you could assign a pseudonym of the form "I am the same person who did this transaction on this particular date at this particular time," right? But it's not a name that generally would be used to build reputation, or be used in contexts where verinyms are normally used. So the advantage of having something like pseudonymity, just a little lower than verinymity, is that often you can use the same mechanisms that we use today that expect verinyms, but you just stick a pseudonym in instead. So if I'm filling out a form online, it's asking me for my name, my age, and stuff, and I can fill in incorrect but consistent information every time I go there, right? That would be a pseudonym. With linkable anonymity, they don't even ask for that, but they have some kind of way to track you, like with a cookie, and they just link together all your transactions themselves. So it's merely a difference in the way it's observed, and not so much in the fundamental mathematics of it. Okay, yeah, right, it's a good question; just repeating quickly, the question was: it's good from a technical standpoint, but governments certainly aren't going to want you to have multiple identities; they really hate it when you have multiple driver's licenses and passports and things like this. So is this really gonna happen? Um, the answer is it will happen first in places that aren't governments, right? So governments might insist you have a single national ID card, and some countries have that.
The US basically has that nowadays with the new requirements on driver's licenses and things like this. But it turns out that recently we've seen new legislation coming through that's very privacy-friendly in many jurisdictions. The most recent example is up in Canada, where I'm from: we just passed Bill C-6 into law, and that has a whole bunch of really privacy-friendly ramifications for consumers. And we have a Privacy Commissioner in Canada; it's a branch of the government whose job it is to protect individuals' privacy, right? You don't have that here in the US. Duh, what a shocker. So certainly you might believe governments will come around to the pseudonymity idea, maybe in more enlightened countries first. Yeah, Fran. Okay, that's a good question. The question was: what level of nymity do we believe the anonymous remailers and Hushmail provide? So as always, when you're looking at the level of nymity of something, you have to say: with respect to whom, right? When I'm doing a transaction with a shop, the shop might learn my identity, but the guy standing next to me in line might not, right? So you have to ask: who is your adversary, who gets to learn this information? So for example, with Hushmail: Hushmail certainly learns your identity, because you connect to them with TCP, right? So they learn your IP address. If they do their job right, then no one else should learn your identity. So that's what we generally call a trust-me system. Some company, namely Hushmail in this case, says: trust me, give me your personal data and I promise not to release it. And then there's usually a lot of fine print. Like, well, unless someone comes to us with a warrant or a subpoena or a rubber hose or a gun to our head or something like this, and then they will release it. So the better answer, when you're designing a system, if you want to be one of those providers, is not to have that information at all, which will be a theme when I actually get into the content of the paper.
And then people can come at you with all the rubber hoses they want, and you can't give them any information. So your best strategy is to make it extremely well known that you don't have this information, so that you're not inconvenienced by people showing up in the middle of the night with batches of rubber hoses. I mean, if you like people showing up in the middle of the night with batches of rubber hoses, well, you should move to San Francisco, yeah. Right, so, right. So the comment being made is that it's not just Hushmail which learns your identity. It's the people on the way, the backbones, the routers. Now, in one sense that's right, and in one sense that's not. What the people along the way learn is that you are a Hushmail user. They don't learn which one, because the communication between you and Hushmail is encrypted. So they never find out which Hushmail user you are. So, yeah, at this point I think it'll be useful to draw a little diagram of protection layers. So when Alice wants to send Bob a message over the internet, she can... okay, pay no attention to the sheet behind the curtain. She can protect it in a number of ways. The simplest thing is to do nothing. Alice just sends the message to Bob. Anyone on the internet basically can read that message and tell what Alice is sending to Bob. That's basically the way email works today. Above that, Alice can use encryption. And the goal of encryption is to protect the message data, the contents of the message. So the eavesdropper Eve will see that Alice is sending a message to Bob, but it'll be scrambled and she can't read it, okay? And this has basically been the state of technology for the last little while. We understand encryption. PGP is your friend, SSH is your friend. And we know how to protect the contents of our messages from interception. The next level up: let's say Alice and Bob are like CEOs of big firms, and suddenly they start exchanging encrypted information.
Eve notices this, right? Even though Eve can't read the messages, Eve can gain a lot of information, and potentially useful insider-trading information, just from the fact that these two CEOs have started regularly communicating. So now what we want to protect is not the data of the message, but the metadata. The metadata are the headers: things like the From address, the To address, the subject. Oh, rock, great. So for that layer, we use things called PETs, or Privacy Enhancing Technologies. And those are technologies whose goal is not to hide the data in the message, but the metadata. And it could be some or all, and it could be your choice as to what metadata it hides. Now let's say that Alice wants to hide something even stronger. Instead of just hiding the data and the metadata... so here, let's say the particular PET you're using is just hiding the destination. So if Eve the eavesdropper looks at the internet, what she sees is that Alice is sending a message, but it's scrambled and she can't read it, and even the destination address is scrambled and she can't read it, and it's just going to someone, but she doesn't know who. But now let's say that Alice is a human rights worker in a less-than-friendly country, and of course the telecommunications infrastructure is owned by the less-than-friendly country's government, and it would not be a happy thing if they found out Alice the human rights worker was sending a lot of encrypted email to unknown addresses. So what you really want to do is hide the entire existence of the message, and for that we use a technique known as steganography. And with steganography, not only are the data and the metadata hidden, but in fact the entire existence of the message is hidden from an attacker.
So there are all sorts of techniques to do this, like hiding messages in other innocuous messages: take your email message, and the number of spaces after every period encodes the bits of your other message, in an encrypted fashion. So these are the layers Alice can usefully use. I'm gonna talk primarily about privacy enhancing technologies. Encryption is done. We basically totally understand encryption, right? I mean, that's a pretty strong statement, but if you look at where we were in the '60s, basically the spooks knew lots and lots about encryption and the public research community knew basically nothing. Fast forward 40 years, and now we think we're basically pretty caught up to the spooks. We understand the fundamentals of encryption, how encryption works, public-key encryption, symmetric-key encryption. There are probably some things we're missing, but the tools we have are probably good enough for whatever we want to do. On the other hand, there are privacy enhancing technologies, and the field devoted to attacking privacy enhancing technologies is called traffic analysis. With traffic analysis, the state we are in today is pretty much the state we were in with crypto in the '60s, right? The spooks have been doing this forever, right? Traffic analysis is their big thing. Even if they can't read the encrypted messages, they still use the fact that a message was sent from here to here, and one from here to here, and one from here to here, and they work out: oh, this must be their headquarters and this is where their field generals are sitting, and they can do all sorts of clever things. And you can read a little bit about that in the public literature, but not a lot.
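That spaces-after-every-period trick really is only a few lines of code. Here's a toy sketch of my own (not from the talk): one bit hidden in the gap after each sentence, where one space means 0 and two spaces mean 1. A real system would encrypt the payload first so the extracted bits look like random noise rather than a message.

```python
def steg_encode(sentences, bits):
    """Hide one bit in the whitespace after each sentence:
    '. ' (one space) encodes 0, '.  ' (two spaces) encodes 1."""
    if len(bits) > len(sentences) - 1:
        raise ValueError("cover text too short for payload")
    out = []
    for i, s in enumerate(sentences[:-1]):
        gap = "  " if i < len(bits) and bits[i] else " "
        out.append(s + "." + gap)
    out.append(sentences[-1] + ".")
    return "".join(out)

def steg_decode(text):
    """Recover the hidden bits by counting spaces after each period."""
    bits = []
    i = text.find(".")
    while i != -1 and i < len(text) - 1:
        j = i + 1
        while j < len(text) and text[j] == " ":
            j += 1
        bits.append(1 if j - i - 1 >= 2 else 0)
        i = text.find(".", j)
    return bits
```

To Eve the cover text looks like ordinary, slightly sloppy typing; the payload rides entirely in formatting that most mail software preserves.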
And in the public literature, we basically know almost nothing about traffic analysis, and hopefully that will change in the same way that the state of knowledge of crypto changed over the last 40 years, and maybe it'll take another 40 years for us to know as much about traffic analysis as we know about block ciphers today. So this is where the current research is in privacy enhancing technologies: traffic analysis, both how to do traffic analysis and how to protect from traffic analysis. So we're gonna talk about PETs. So these internet services I mentioned earlier: some common internet services are chat rooms, online shops, mailing lists, or something more specialized like age verification services. So there are lots of... wow, we're gonna blow away here. So there are lots of things that are offered online that you might want to use, and you might want to use them privately, right? In a chat room, you might be in a spousal support group; in online shops you might want to buy medication; you might want to participate in certain subculture mailing lists; or you might want to use an age verification service to let you into a porn site. All these examples are particular cases where the customer might want privacy of his identity, right? He might want to control who gets to learn his identity when he's participating in BDSM mailing lists or something like this. So that's great, and the technologies to do that are in their infant stages right now, but they're out there, right? I mean, my own company has the Freedom system for doing things like this; there are other things like Freenet, there's Free Haven, Onion Routing. Basically there are a number of services coming up now to let you use internet services without necessarily revealing your identity.
But what if you're the operator of, say, a chat room that talks about drugs, or you're a shop that offers Nazi memorabilia online, or you have a mailing list of political dissidents that you run for a country like China, or you provide age verification services that might let you into websites that contain illegal content in some countries, right? Now, none of those examples is made up. All of those examples are real examples of things that within the last year, year and a half, have been attacked and shut down by various countries around the world. And one of the big problems is that these countries, which are so used to sovereignty, now have this problem where technically their constitutions are only local bylaws on the internet. What applies in one country doesn't necessarily apply in another. In France it's illegal to sell Nazi memorabilia. In the US it's not. Yet Yahoo had to be shut down, or the Nazi parts of Yahoo had to be shut down, because they were accessible to people in France, right? So I mean, the US, France, and China have all done things to shut down services of this form. Now, do we want to turn the internet into the lowest common denominator of what is acceptable? Right, on the Cypherpunks list, Tim May often goes off about websites like Women Without Veils, right? Clearly an illegal website in many Middle Eastern countries, but certainly not in the US, right? Should the entire internet be watered down to what's acceptable to everyone? Well, that would be AOL then. So you've already got one of those. So the rest of the internet needs to be a place where communication can happen. The point of the internet is communication. And maybe commerce, if you believe that. Well, maybe two years ago you would have believed that. So we want to be able to protect the operators of sites like this from persecution, right? Either by adversaries... I mean, we've seen a lot of right-wing groups attacking certain websites.
You see left-wing groups attacking certain websites as well. Many groups have other groups they don't like, and they flame each other and try to shut each other down, legally or extra-legally. Send big guys with guns to their machine rooms. So the goal is to take the kind of privacy that consumers have today, admittedly in its infant stages, and be able to apply it to servers. And the goal of this work is to be able to do that in a way that doesn't require the design of whole new technologies, but rather to take these client-side technologies that are in their infant stages and grow them into technologies which can provide for server-side anonymity. So, over the page. I'll talk about some different kinds of privacy enhancing technologies. You can take any privacy enhancing technology and classify it in a number of ways. One way is mutually revealing versus not mutually revealing. And what that means is: when Alice talks to Bob using a privacy enhancing technology, the idea is that Eve doesn't learn the identity of one or more of Alice and Bob. The question is: does Bob learn Alice's identity, and does Alice learn Bob's identity, or not, right? So some privacy enhancing technologies let Alice and Bob know each other's identity, or require that Alice and Bob learn each other's identity, and some don't. By our property of the nymity slider, you would say that the ones that are mutually revealing are higher up on the nymity slider than the ones which are not mutually revealing. And so by the lesson of the ratchet, you can see that if you were to design a system, you may as well design it to be not mutually revealing. And then if you want your protocol to be mutually revealing, you just reveal the name as part of the message, right? You just send a message; Bob receives it, the headers don't indicate who it's from, but at the bottom it says "signed, Alice," right? You just reveal the name in the message.
And the reason you design... I said you should design your protocols as low as possible on the nymity slider, even if you want to end up with a protocol that's higher, like in the pseudonymity range, is that it lets you change your mind. If for some reason, later on, after you've designed and fielded this protocol, you determine: oh, there are a few specific situations in which I don't want pseudonymity, but anonymity, complete anonymity, is fine, right? Then it's really simple. You just take the part of the protocol that explicitly added in the pseudonym and don't do that, but you don't have to redesign any infrastructure or anything like this. So it allows for good engineering. So likewise, when we design a privacy enhancing technology, we're going to want to do it at the not-mutually-revealing level. And then if we want a mutually revealing technology, we just show the name. Another way to divide things is whether it's a store-and-forward type of system or an interactive type of system. So for a long time, we've had things like anonymous remailers, where you send your message into this big cloud, it gets mixed up with all the other messages coming in, and eventually it goes out. And the way the anonymity is maintained is by collecting, say, five hours' worth of messages from all over, scrambling them up, and then sending the results out. But that introduces a five-hour delay in your transmission time, right? Now let's say you want to use that kind of method to do World Wide Web, right? So let's say you take your TCP SYN packet, send it in here, and after a five-hour delay it comes out, right? Suddenly your TCP three-way handshake is not fun anymore. So we can't use those same store-and-forward techniques for interactive services. So we had to come up with new techniques. And the new techniques are the kinds of things that Onion Routing, Freedom, and so on use. And a third way to divide things is: who does it protect?
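The batch-and-scramble idea behind those remailers fits in a few lines. A toy sketch of my own, under stated assumptions: messages accumulate in a pool, and nothing leaves until the batch is full, at which point the whole batch goes out in shuffled order. A real mix also pads and re-encrypts each message so inputs can't be matched to outputs by size or content, which this sketch omits.

```python
import random

class ToyMix:
    """Collect a batch of messages, then flush them in shuffled
    order, so an observer can't match inputs to outputs by timing.
    (A real remailer also re-encrypts and pads each message.)"""

    def __init__(self, batch_size, rng=None):
        self.batch_size = batch_size
        self.pool = []
        self.rng = rng or random.Random()

    def submit(self, sender, message):
        """Queue a message; return the flushed batch once full, else None."""
        self.pool.append((sender, message))
        if len(self.pool) < self.batch_size:
            return None
        batch, self.pool = self.pool, []
        self.rng.shuffle(batch)
        # Deliver only the messages: the sender ordering is destroyed.
        return [msg for _, msg in batch]
```

The latency is the whole point and the whole problem: the anonymity comes from waiting for the pool to fill, which is fine for email and hopeless for a TCP handshake.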
Does it protect the sender of a message or the recipient of a message? Or both? Now, arguably, any system designed for ongoing back-and-forth communication has to protect both, right? Because when I send you a message, let's say you're the one that wants to be anonymous, then you have to protect the recipient. But when you reply to me, you're still the one that wants to be anonymous, so now it has to protect the sender. So any system designed for ongoing communication really needs to be able to protect either the identity of the sender or the identity of the recipient. So what we say is that we don't divide it along those lines, but rather along protection of the client versus protection of the server. The difference between those is that with a system that protects the client, the client will connect to some well-known server with a public name. The client will be anonymous, but it will set up some short-lived way for the server, that one particular server only, to reply to the client. So that server can send messages back to the client without knowing who he is. But this is a short-lived thing; it's only for the duration of the conversation, and when the client gets bored, he goes away and tears it down. In contrast, when you're trying to protect the identity of a server, the reply mechanism, the way you send packets to the anonymous party, now has to be long-lived and addressable. You have to be able to figure out how to get these packets to this server whose pseudonym you know, but you don't know its real name, its real IP address, or anything like that. And that is a much harder problem, right? Freedom doesn't address it; Onion Routing doesn't really address it.
The idea of this system of rendezvous servers is to provide an easy piece of the puzzle you can snap in: if you have a system for protecting the privacy of a client, it will just magically turn it into one that protects the privacy of the server, okay? And I'll just briefly sketch the system, because it's pretty straightforward. I have a paper online, it's actually part of my thesis, that goes into great detail about all the little bits, and about how to make it robust against failures and malicious attacks and nodes going up and down, which are certainly large problems for distributed things. So the idea, in just a few minutes, which is basically all I've got left, is something we call a rendezvous server. And this rendezvous server can be operated by anyone, in much the same way that anonymous remailers are operated by anyone today. But even more so: the system is designed in such a way that these rendezvous servers can be going up and down really fast, right? They might very quickly appear, might disappear, and there's no requirement that the person that runs one today will still be running it a week from now. So remember, we have this system, which we'll picture by this cloud, by which various clients can connect through this cloud to various services on the internet, and it should be impossible, or at least hard, to correlate which client is talking to which server. That's the goal. And we'll assume we have one of those, which provides for protection of the client. Now we want Alice here to be providing an internet service. Bob here wants to use it. But Alice doesn't want to ever reveal her IP address. So what she does is make an anonymous connection to the rendezvous server. You got a note, okay. She makes an anonymous connection to the rendezvous server. This establishes a return channel by which packets can go that way. But only for the lifetime of this connection.
The rendezvous server then publishes an address on which it will accept packets and forward them to Alice. And how does it do that? So I mean, one of the big problems with anonymous services is that anyone seen to provide the service will be under pressure to shut it down, right? I mean, the whole reason it was anonymous in the first place was to prevent that kind of pressure. So you can't just host your site on GeoCities or something, right? Because your adversary will just go to GeoCities and say shut it down, and GeoCities has zero incentive not to comply. So you need there to be no central point of attack in this system. So great, Alice can connect to this rendezvous server. Why can't the rendezvous server just be shut down now? Well, the idea is that, as I said, there are lots of them. People bring them up and take them down all the time; they're transient, and you can use any of them on the internet. So that's great. There are now all these rendezvous servers. How do you find them? So now what you need is basically a name service lookup. Some kind of database lookup that isn't DNS. It doesn't have a hierarchical property. It doesn't have a central node. So how does Bob, say Alice has a service that she's advertising with spam or something and it's called the big bucks service, Bob wants to connect to the big bucks service and make money fast. How does Bob know where it is? The answer is, when Alice connects to the rendezvous server, she tells it that this is the connection for the big bucks service. The rendezvous server then publishes in some directory the fact that the big bucks service should currently point to that port. Or that IP address. What is this directory? Well, this directory has to be basically a way to look up where files live, where things are. It's distributed; it has no centralized point of failure. Can anyone think of something like that? There's Gnutella. That's a great example. 
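From Bob's point of view, the directory's job is just a service-tag-to-address lookup. Here's a minimal in-memory stand-in for it; in the real design this would be a distributed system like Gnutella with no central node, and the `publish`/`lookup` interface is an assumption for illustration:

```python
class ToyDirectory:
    """Maps a service tag to the rendezvous points currently advertising it.

    A real deployment has no such central object; many directory nodes
    would each hold (and cache) some of these entries.
    """

    def __init__(self):
        self._entries = {}

    def publish(self, service: str, host: str, port: int) -> None:
        # Called by a rendezvous server once Alice registers with it.
        self._entries.setdefault(service, []).append((host, port))

    def lookup(self, service: str) -> list:
        # Called by Bob; may return several candidates, some already stale.
        return list(self._entries.get(service, []))
```

Note that `lookup` can legitimately return entries for rendezvous servers that have since vanished; the soft-state discussion later in the talk explains why that's fine.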
So the answer to how we solve this problem is: we don't, and we let other people do it for us. There's a maxim in computer science that says all problems in computer science can be solved with an additional layer of indirection. So that's what we do. We just treat Gnutella itself as a distributed file system, and this rendezvous server just publishes to Gnutella that, oh yeah, I've got this file, rendezvous-server-dash-big-bucks, and the contents of that file are the IP address of this. If you're clever, you can say the file is rendezvous-dash-big-bucks-dash-and-the-IP-address, and you let someone do a wildcard search to look it up. And then there's no actual file to be transferred. You just look up the name. You can also put a digital signature in that name if you want to protect from attack, which is really fun. So anyway, then Bob just has to use his regular Gnutella client to look up where the big bucks service is, and it'll tell him it's on some port, and he connects to that port, and then the rendezvous server, all it does is shuffle packets back and forth between Alice and Bob. It's a totally trivial piece of software. It's tiny, and it should be on SourceForge as soon as someone gets around to actually writing it. So that's the basic way that Alice and Bob communicate. Now, there are all sorts of more details on, okay, how do you handle it if Bob is connected to Alice and has an ongoing conversation and this rendezvous server goes away, right? How do you deal with that? It turns out you can deal with that as well, and some of the solutions require Bob to have special software on his machine and some don't, and of course the ones that don't are better. And this simple piece of the puzzle is all that's needed to turn a network which provides for client-side anonymity into one which provides for server-side anonymity, and in a very strong sense, right? 
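The packet-shuffling piece really is tiny. A minimal sketch of the relay loop, assuming ordinary TCP sockets on both sides (one socket being Alice's anonymized return channel, the other Bob's connection):

```python
import socket
import threading


def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src closes, then close dst's send side."""
    try:
        while True:
            data = src.recv(4096)
            if not data:                     # EOF from src
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)     # propagate the close downstream
        except OSError:
            pass


def shuffle(alice: socket.socket, bob: socket.socket) -> None:
    """The rendezvous server's whole job: relay packets both ways between
    the anonymous return channel (alice) and the client connection (bob)."""
    to_bob = threading.Thread(target=pipe, args=(alice, bob))
    to_alice = threading.Thread(target=pipe, args=(bob, alice))
    to_bob.start()
    to_alice.start()
    to_bob.join()
    to_alice.join()
```

Each direction is one thread copying bytes until EOF; when either party closes its write half, the relay propagates the close to the other side.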
If you assume that anything Alice connects to through the cloud can't tell the identity of Alice, then no one further out than that, certainly the rendezvous server, certainly Bob, can tell the identity of Alice either, and we use this lookup in some global distributed database like Gnutella to solve our name service problem. Seems I'm running low on time, so I'll take a few quick questions. Okay, so the question is: distributed services like Gnutella generally have a longer refresh time than the rate at which these rendezvous servers will be going up and down. That's possible, and it turns out not to matter much. And the reason is that one property a lot of these services have, FreeNet in particular, is that they do their distribution by caching, right? So originally there's one source for the data, and as someone requests it through a number of hops, each of those hops caches it. So it's okay in this system to have stale data, which is what's useful. When a new rendezvous server comes up, you want that fact to be known right away, and most of these systems have the property that new files are visible right away. The problem with these systems is that old files remain visible for a long time, but that's not a big deal. If I want to find a rendezvous server that's advertising big bucks, I just look for them. I'll find five of them; three of them might be gone by now, but I'll try them, I'll fail, and I'll go on to the next one. So it's what we call in systems "soft state" as opposed to "hard state". Soft state is state that is refreshed often; it can be dropped, it can be replicated, it can be cached, and basically it just acts as a hint. The running of the system doesn't depend on that state being correct all the time, and systems built on soft state are much easier to maintain. 
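The "try them, fail, go on to the next one" behavior is just a loop on the client side. In this sketch, `candidates` is the list of (host, port) pairs the directory lookup returned, and `dial` stands in for whatever connect routine the client uses; both parameters are assumptions for illustration:

```python
def connect_via_any(candidates, dial):
    """Soft state in action: the directory's answers are only hints, so a
    stale entry just costs one failed attempt before we move on.

    candidates -- (host, port) pairs from the directory lookup
    dial       -- connect routine, e.g. a plain TCP connect (hypothetical)
    """
    for host, port in candidates:
        try:
            return dial(host, port)   # first live rendezvous server wins
        except OSError:
            continue                  # stale entry: that server is gone
    raise OSError("no live rendezvous server found")
```

Because failed attempts are cheap and expected, nothing in the system has to keep the directory's entries accurate at all times.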
Yeah, the web address of my thesis: you can go to my homepage at Berkeley, it's www.cs.berkeley.edu/~iang, and you'll be able to find obvious links to it there. Yeah, right, so the question is, I mentioned briefly using a digital signature to prevent people from falsely claiming to be the big bucks service. So the answer is, whenever Alice advertises her big bucks service, she puts in a public key. So no longer can an attacker do an attack of the form "well, Alice's big bucks service is really over here" and redirect people away. They might send out their own spam saying "I have Alice-Prime's big bucks service", which you can't tell the difference from, but the answer to that is that it's just a marketing problem, and it's also solvable by reputation capital, right? When Bob tells his friend Dave about this great big bucks service he's been using, he should forward him not just the name "big bucks", which is easy to forge, but also the public key for it, and whenever the tag gets distributed, the public key gets distributed along with it. So I think I need to wind things down here. Take a quick question. For serving anonymously, so right now, not a lot. I mean, there was Mojo, which actually didn't do a lot of anonymity. It should; we can fix that. We can use this to fix that. But as I said, this is not a lot of code, right? It's just basically a name server and packet shuffling. Someone with actual copious spare time, as opposed to my copious spare time, might very well want to just take the paper, implement the thing, stick it on SourceForge, and then let people run with it. I mean, it's a really simple thing to do. Okay, so I think I've got to get off the stage because we have other people coming up and I have this note.
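One cheap way to realize the "forward the public key along with the tag" idea is a self-certifying name: the tag embeds a fingerprint of the key, so forging the name means forging the key. Note this sketch substitutes a plain hash fingerprint for the full digital signature mentioned in the talk, and the `name:fingerprint` format is my assumption:

```python
import hashlib


def make_tag(name: str, pubkey: bytes) -> str:
    """Bind the human-readable name to Alice's public key.

    Whoever passes "bigbucks" along passes the key's fingerprint with it,
    so Alice-Prime can't silently substitute her own key.
    """
    fingerprint = hashlib.sha256(pubkey).hexdigest()[:16]
    return f"{name}:{fingerprint}"


def key_matches_tag(tag: str, claimed_pubkey: bytes) -> bool:
    # Bob checks the key in an advert against the tag Dave forwarded him.
    name, fingerprint = tag.rsplit(":", 1)
    return hashlib.sha256(claimed_pubkey).hexdigest()[:16] == fingerprint
```

With the key pinned this way, the rendezvous server's address record can then be signed under that key, which is what stops the redirection attack; the remaining "Alice-Prime spam" problem is, as the talk says, a reputation problem, not a cryptographic one.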