The problem that I think we're all seeing, and certainly that we've been growing increasingly concerned about for some time, is that it's not enough anymore that you can participate in a peer-to-peer network and nobody can tell what you are doing. Increasingly it's the case in this country and elsewhere that simply the act of participating in a peer-to-peer network is itself a pretty risky thing to do. As a result, we believe that future anonymous peer-to-peer networks will need to limit whom your peer connects to: a group of trusted friends, people you have decided you don't mind knowing that you're part of this peer-to-peer network. The big question is, is it possible, and if it's possible, will it be useful? Just to quickly clarify the term peer-to-peer, because if you ask 10 people what peer-to-peer means you'll get 10 different answers. From our point of view, a peer-to-peer network is designed to help people find information. Specifically, it's designed to help people find information that is widely distributed across a large number of computers which are connected through some network, in practice typically the internet. Users of the system want to find information; they need to know where the information is so that they can go out and retrieve it. There are different approaches to this. The first peer-to-peer network in many ways was a piece of software called Napster, and its approach was pretty simple. Every peer in the network simply reported to a central server, or eventually one of several central servers, what data was being stored on that computer. So if you wanted to find some data you'd go to the central server and say, you know, where's this thing? And it would give you a list of computers where you could find it, and you would then connect to those computers directly and retrieve it.
Subsequent to that, when Napster and similar systems were being shut down, because of course a centralized system is very easy to shut down, people started working on what we call semi-centralized peer-to-peer networks. It's essentially the same idea, but instead of one centrally administered index of data, you have a large number of decentralized indexes. So when you connect to the network you're allocated one or more of these indexes and you use that pretty much like a mini Napster; that is probably the archetypal example of that approach. Other peer-to-peer networks strive to be completely decentralized, such that in essence each peer in the network does pretty much the same job and has loosely the same degree of responsibility as every other peer in the network, and Freenet is one of several examples of peer-to-peer networks like that. Another way that you can classify peer-to-peer networks, and it's particularly useful for the purposes of our talk, is into light and dark. Examples of light peer-to-peer networks, or you could also call them promiscuous peer-to-peer networks, include Gnutella, Freenet, and distributed hash tables: indeed, the vast majority of peer-to-peer networks that you've probably heard of. In essence, a light peer-to-peer network is defined by the fact that your computer will willingly connect to strangers, which means that strangers, people you don't know or trust, can discover that you are running the software. The advantage of this is that if you do it right, it can be globally scalable, which essentially means that whether you have 100,000 users or 100 million users, you can still find information pretty efficiently. The disadvantage is that it's vulnerable to harvesting. If, for example, the Chinese government wanted to get a nice convenient list of all of the people in China that were running Freenet, they could do that. It wouldn't necessarily be easy, but it would certainly be possible.
When we first came up with the idea, this wasn't really on our radar, but over the past few years we've realized that it's not only an increasing threat in countries like China, but also in this country. I think most people here are probably familiar with recent court decisions, which have further rolled back the freedom of people to share information in the ways that they might want to. The alternative to this is dark, or you could also call them friend-to-friend, peer-to-peer networks. These networks are characterized by the fact that peers only establish a direct line of communication with other peers that their user trusts. So the user is saying, I don't mind that these people know that I'm part of this network. An example of this is a piece of software called Waste. The advantage of this, as I said, is that only your friends know that you're part of the network. The disadvantage is that until now this type of network really doesn't scale. Typically you have isolated pockets of perhaps five to ten people, and while some people have found that to be a useful thing, clearly there's a big difference between that and being part of a global network with millions of users. Shifting gears a little bit: completely decentralized networks like Freenet rely on something called the small world phenomenon in order to find information in this completely decentralized way. I'm sure most of you are familiar with the small world concept, or you may be familiar with the game of Kevin Bacon, where you try to get from any actor to Kevin Bacon just by going through movies they've been in together.
So a guy called Stanley Milgram in the 1960s did a very interesting sociological experiment where he wrote the names of a number of people, actually in Cambridge, Massachusetts, on some letters, and gave them to random people spread throughout the United States, with the instructions that they were to try to get these letters to the intended recipients, but only by giving them to someone they personally knew, who would themselves give it to someone else they personally knew. And very interestingly, he found that when those letters arrived, they had passed through perhaps just five or six people. So that's five or six people in a country of 270-million-odd people. Clearly a very scalable way to get something from where it is to where you want it to be, and interestingly, also completely decentralized. These people are not relying on any central phone directory in order to figure out where they should route this information. They're doing it purely on the basis of their friends. So what characterizes a small world network? Well, typically in small world networks, what you find is that short paths exist between any two people: you can get from any one person to any other person in a short number of steps, even if there are millions of people in the network. The challenge is that these short paths aren't necessarily easy to find. In the case of Milgram's experiment, people were able to use information that they had about their friends, in effect, as signposts as to which of their friends they should send the letter to next. They knew that if one of their friends lived near Massachusetts, that friend would be a good choice to forward the letter to. If, having reached that person, that person knew somebody who lived on the same street, or perhaps went to the same university, as the intended recipient, they could send it on.
So in social networks, people are able to use their knowledge of their friends as signposts for finding these short paths. The reason this works is that people have a concept of similarity between other people. You know that if two guys live on the same street, they're likely to be more similar to each other, and therefore more likely to be connected to each other, than someone living out in Zimbabwe. So the algorithm is simple: once you've got this concept of similarity, you simply route to the person that you know of that is closest to where you want to be, at each step. We call this greedy routing. Freenet and distributed hash tables rely on this principle to find data in a scalable, decentralized way. So Ian started by talking about these peer-to-peer networks, and essentially the problem is the dark peer-to-peer networks that we want to address, and then he talked about small worlds and social networks forming small worlds. So what we want to talk about here today, and what we're interested in, is how we can apply this knowledge about small worlds and about social networks to these dark peer-to-peer networks and try to make them more useful. And essentially this starts with the realization that the darknet is just a social network. This darknet, the peer-to-peer network that's formed when people connect only to people they trust, will be just this big encrypted network of people's relationships, just like the social networks that sociologists work with. So the environment that we're trying to deal with in our peer-to-peer networks is just this social network that Milgram's experiment took place in, and that we want to find a way to deal with. And we're interested, of course, as always when we're trying to search, in being able to find a way from one point to another in the network.
We have this large network, with connections going all over the place, and we need to find a way from one point to another. Well, we know that in a social network, and the darknet is a social network, people can find a way to get from one place to another. Milgram's experiment showed that people can route in social networks, and they can do it well. It took only five or six steps to go from one side of this country to the other, to a complete stranger. And people can do that. So if people can do it, then so should computers. I mean, computers are much smarter than people, right? So to try to make computers do this somehow, we need some sort of mathematical understanding of what's going on in a small world network. And this mathematical understanding is something rather recent, but a Cornell professor named Jon Kleinberg provided a model about five years ago that explains how these networks must look in order to be navigable. In order for it to be possible, as Ian said, to find signposts in the network that will take you where you want to go, it needs to have a certain configuration. And that configuration is this: the possibility of finding routes efficiently from one point to another rests on the fact that you have people who are similar to one another and, as Ian said, more likely to be connected, and then you have a certain proportion of connections of different lengths with respect to some sort of position. In the simplest model, position can just be where people live. You say that people are more likely to be connected if they live close to one another, and as we get further away, the probability that people know each other, that they are connected, needs to decrease. And when this happens, that's when you can find an easy way to route in the network. So for our talk, we're actually going to allow people's positions to just be on a ring.
And that doesn't mean that we think that people exist on a ring, but this is the very simplest model. We allow each person to just get a position on the ring, and then we see people knowing each other as these chords in the circle that skip over the people in between. So here I have two examples. We've placed these positions in a ring in the mathematical, idealized model, and we say that there are two green people who know each other and two red people who know each other. But as you can see, the red one covers more of the circle; it's a longer link, and therefore it needs to be less common. According to Kleinberg's model, because this one covers twice as many steps along the edge here, it needs to be half as common as the green one. And if this is the case, if we can put people into this basic model with a circle, and we have connections that agree with Kleinberg's model, then every once in a while we run into a long connection. Somebody happens to know somebody who lives way on the other side of the world, but that's rather rare. More commonly, people know lots of people who live close to them, who are close to them, who are similar to them. If this is the case and you have this model, then we can do the greedy routing that Ian described, which just says: if I'm going to some destination, I always just step to the neighbor who's closest to my destination. And if I can do that, if the configuration of these friends is correct, then the greedy routing can be mathematically proved to perform in something on the order of log squared n steps. And when something is log, that's good. If it's not log, it's bad. But we have log here, so that's going to help us. That means that we have an efficient way of finding paths from one place to another, which is exactly what we want if we want to be able to search and route in our dark peer-to-peer network. So this is an example, a simulated example, of how such a network would look.
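Just to make that concrete, here is a minimal sketch (not our actual simulator; the function name and parameters are mine) of building such a ring in Python, where the chance of a chord spanning distance d falls off as 1/d, so a link twice as long is half as common, as Kleinberg's model asks:

```python
import random

def kleinberg_ring(n, links_per_node=3, seed=0):
    """Build a ring of n positions where the probability of a chord
    spanning ring distance d is proportional to 1/d: double the
    distance, half as common.  Friendships are mutual, so some nodes
    end up with more than links_per_node neighbours."""
    rng = random.Random(seed)
    distances = list(range(1, n // 2 + 1))
    weights = [1.0 / d for d in distances]   # the 1/d fall-off
    neighbours = {i: set() for i in range(n)}
    for i in range(n):
        while len(neighbours[i]) < links_per_node:
            d = rng.choices(distances, weights)[0]
            j = (i + rng.choice([-d, d])) % n  # a friend d steps away
            if j != i:
                neighbours[i].add(j)
                neighbours[j].add(i)           # friendship is mutual
    return neighbours
```

Drawing the actual distance each time from the 1/d-weighted list is what produces the picture on the slide: lots of short local chords, the occasional one that spans half the circle.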
Now I've imagined, again, that I've just placed people's positions on a circle like this. All around, there are people placed on the circle. We have two red people who are circled at the ends. And if you want to do a greedy route from one to the other, you simply follow these chords, which represent the friendships, and you try to get from one place to another by always choosing the friend who is closest to the destination. And so a route would look something like that. In the beginning, in the very first step, we can take a very long step, because that takes us a lot closer to our destination. And as we get closer, it becomes harder and harder to find steps that take us a long way. But eventually you do reach the destination, and you do so in a bounded number of steps. But this is greedy routing on a circle. I placed everyone on a circle in this imaginary world, where it is extremely easy to say whether you're closer to somebody than somebody else, because you have a circle and you can say how many steps it is to them. But if we're trying to do greedy routing in a real social network, in the kind of social network that our darknet is going to form, we don't have an easy way of saying whether one person is closer to another. We end up having to answer questions like this: is Alice closer to Harry than Bob is? I'm trying to route in a social network. I'm this person in the Milgram experiment with the letter, and the letter is going to Harry. And I have to ask myself, should I send it to Alice or Bob? The letter has to go to Harry; I have to make a decision. Well, as a person, I make some sort of judgment. And in Milgram's experiment, people had to make this judgment based on some idea of closeness between people. The most basic form would be: where do they live? If Alice happens to be Harry's neighbor, then she is probably closer; there's a greater chance that they know each other than that Bob would know Harry.
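The greedy walk on the circle described a moment ago, before we lose the easy notion of closeness, can be sketched like this (names are mine; this assumes node identifiers are their positions on the ring, and it simply gives up after a hop limit rather than detecting the loops a real implementation would have to handle):

```python
def ring_distance(a, b, n):
    """Number of steps between two positions on a ring of n slots."""
    return min((a - b) % n, (b - a) % n)

def greedy_route(neighbours, start, target, n, max_hops=100):
    """At every hop, hand the query to whichever friend is closest to
    the destination.  Returns the path taken, or None if the hop
    limit runs out before the target is reached."""
    path = [start]
    current = start
    for _ in range(max_hops):
        if current == target:
            return path
        # the signpost step: pick the friend nearest the target
        current = min(neighbours[current],
                      key=lambda f: ring_distance(f, target, n))
        path.append(current)
    return None
```

On a network with the right 1/d mix of chords, this takes on the order of log squared n hops: big jumps first, smaller and smaller ones as you home in, exactly the shape of the route on the slide.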
But you could also go on, for example, to what their occupations are. People who are all hackers are more likely to have met at DEF CON than somebody who's an ice dancer or something. So you could go on: jobs, interests. As people, we make these kinds of judgments all the time, and it's not that hard for us to say who would be closer to one person than another. But now what we want to do is this: we have this darknet, which is a social network, we want to find a way to route, and we want the computer to do it. And in practice, we can't really ask the computer to make these decisions based on where people live, what they do, etc. Actually, this isn't as insane as it may sound, because people have actually tried it: you can try to put into the computer as much personal information about people as possible, and then you do some sort of closeness algorithm based on that personal information, trying to match it up so that you can make these routing decisions. But the fact is, what this has shown is that even if you try to use a large number of criteria, computers are not good at making these sorts of value judgments. And even more than that, what we're trying to build are anonymous file sharing networks, and they will not be a big hit if the first thing we do is ask everyone to put in all their personal information so that we can route. It just doesn't go over well with our target audience. So we have to find a different way to answer these questions about Alice, Harry, and Bob, about who is closer to whom. And the thing is, and this is what's extremely important, the information is there in the network. We can let the network itself tell us who is closer to whom. We don't need to know anything about the people except the network itself. We don't need to know where they live. We don't need to know what they do. All we need to have is this network, and we work based on that. So how would we do this?
Well, we have this model of Jon Kleinberg that tells us what the world should look like for it to be possible to greedy route efficiently. What the world should look like is that there should be few long connections and many short ones. Most friends that you have should be close to you, whereas once in a while you should have a long connection out to somebody far away. And then, knowing this and having this graph, all we need to do is basically give people positions in such a way that this becomes true. We just say: forget about the world, forget about where you live, forget about what you do, I don't care about any of that. We're going to make believe; we're going to make positions for you based on the network, and make it so that the properties that we are looking for are fulfilled. So you can say that, in a way, rather than taking people's real positions in this sort of social space and trying to route on those, we reverse engineer the positions of the people based on how the social network looks. And that is what allows us to give people new positions that we can then use to route. And once we have these positions, we can do the simple greedy routing of always trying to go to the closest point, and we get what we want. And the method to do this is actually surprisingly simple. There's some deep mathematical magic involved, but it actually turns out to be extremely simple. When people join the network, they simply choose a position, that would be a position on the circle, and they do so in a random fashion. Everyone just says, okay, run a random number generator, give yourself a position. And when you do that, of course, the model is not going to hold, because two people who could be very close in some sense in the social network could have chosen positions at opposite ends, since it's completely random. In fact, some of them will choose positions at opposite ends.
But then we have an algorithm where people get together and swap their positions with one another, so as to minimize the product of the distances to their friends. And this is extremely important, and it has to do with the math. They get together, they contact each other over the network, and they say: I have this position, you have that position, maybe we should just switch and we'll both be better off. And this is an example of that sort of switch. These two people have chosen their positions randomly, and so have their friends. You have a green person who's got three green friends, marked by these chords in the circle, and you have a red person who's got three red friends. But since they chose their positions randomly, they happen to have chosen them on the opposite side of the network from their friends. And this is not what we're looking for. We're looking for something that fulfills this model: people who have lots of friends on one side should be on that side. Well, what if red and green were to swap positions with one another? Then we get something that looks like this. And this is much more in line with what we are looking for in our model. We have one or two long connections, and we need some long connections, but mostly we have these short local connections, so that people are close to the people that they are connected to. And so this is essentially the algorithm as it goes. I mean, there's some math involved and some formulas and stuff that I can give to the interested people, but essentially people choose positions randomly, and then they switch when it looks like switching will make the network more like what we're looking for. And one thing to note is that this switching step is essential. We couldn't say, for instance, let's just have everyone choose a position that's close to their neighbors'. Because what happens if you do that is that everyone ends up choosing the same position in the end.
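As a sketch of that switching rule (the names and the purely deterministic accept rule are mine; the real scheme involves some of the probabilistic mathematical magic we're glossing over here), two nodes compare the product of ring distances to their friends before and after exchanging positions, and swap only if the product shrinks:

```python
def ring_dist(a, b, n):
    """Number of steps between two positions on a ring of n slots."""
    return min((a - b) % n, (b - a) % n)

def maybe_swap(positions, neighbours, u, v, n):
    """One switching step: u and v exchange positions if doing so
    lowers the product of the distances to their respective friends."""
    def cost(node, pos):
        product = 1
        for friend in neighbours[node]:
            # max(1, ...) keeps a friend sitting on the very same
            # position from zeroing out the whole product
            product *= max(1, ring_dist(pos, positions[friend], n))
        return product
    before = cost(u, positions[u]) * cost(v, positions[v])
    after = cost(u, positions[v]) * cost(v, positions[u])
    if after < before:
        positions[u], positions[v] = positions[v], positions[u]
        return True
    return False
```

Repeating this between pairs of nodes is what gradually pulls most chords short while leaving the occasional long one, as in the red/green example on the slide.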
Once you've done it enough times, everyone's like, oh, I want to be closer to my neighbors, and they want to be closer, and they just converge on the same point. You get a bunch of people who are all saying that they're at the same point, and I promise you, you can't route based on that. So you need this random choice and then the switching back and forth. And one problem with this technique, of course, is that these identities that we use to route are going to be changing as people switch. That's something that has to be dealt with at the application level, because people do not have a permanent identity. But there are ways around this. As long as we have a way of finding a path from one place to another, we can deal with the fact that people's identities may change. Okay, so now for the simulations. There's always a necessity to show some simulated data to show that this can actually work. And so what we have is three different modes here, which are essentially two controls, one bad and one good, and one where we have applied our algorithm. The first control is the random route. If we start with a network where we have no positions whatsoever, and we're just looking for a way of finding data in this network, with no positions, no knowledge about anything, we can't do any form of greedy routing. What we can do is stumble around randomly and hope that we find our destination. So this is sort of the control of where you start out: the random network, the random search, that's all you can do. And the other control is going to be the completely idealized situation where we have created a network that agrees with this model according to some position scheme, and we have this good network which is known to perform in log squared time. This is where everything is good, but that's not the situation that we actually have.
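That random-search control can be sketched like this (a simplification under my own names; the hop budget of roughly log squared n is the same budget the greedy router gets in the plots that follow):

```python
import math
import random

def random_search(neighbours, start, target, n, seed=0):
    """The control case: no positions, no signposts, just wander from
    friend to friend and hope to bump into the target within a budget
    of about log^2(n) hops (~120 hops for n = 2000)."""
    rng = random.Random(seed)
    budget = int(math.log2(n) ** 2)
    current = start
    for hops in range(budget):
        if current == target:
            return hops   # the number of hops it took
        # no idea who is "closer", so pick any friend at random
        current = rng.choice(sorted(neighbours[current]))
    return None           # budget exhausted without finding the target
```

As the simulations show, this stumbling works tolerably on small networks, and on networks with a few very high-degree people, but it falls apart as the network grows.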
The situation that we actually have is one where we have a network just like this good one, but we don't know anything about the positions, and we have to use our algorithm to try to restore the positions. That's what I have as the third data set: the restored data. That will be our algorithm applied. So the first thing I'm going to show is the success rate within log squared n steps, where n is the network size. The network size runs logarithmically along the bottom, up to about 256,000, I think, for this data set. You can see that the random one falls off very fast. Random routing simply cannot perform in this number of steps; it fails to find data as the network grows. Whereas the good line stays at the top: it always succeeds. The math tells us that it must, so anything else would only be a bug in my simulator. And we can see that restoring, applying our algorithm, doing these switches, actually brings the success rate all the way up from the red line almost to the green line, and only at the very largest size do we see some significant deviation. But even at that level, we're talking about a 96% success rate for the restored network. The other thing one's interested in, of course, is the number of steps that it takes on average, and we can see that even with the falling success rate, the number of steps for the successful queries in the random walk increases dramatically as the size of the network grows. The good ones stay low, just like the math tells us, and the restored one brings us almost all the way down to an acceptable number of steps. But of course, that's simulated data, and we started with the very model that we're applying, which is always a little suspect: I'm basically giving myself an idealized situation. So a more interesting question is, how do we do if we have real world stuff?
Well, to get some real world stuff, rather than asking for permission, we simply borrowed some. We went and grabbed 2,200 people from Orkut.com, which is the social networking site that Google started, I think, about a year and a half ago. Everyone ran there because it was a fad or something and put in all their friends, and then no one's done anything with it as far as I know; it's rather useless, but it does provide very good data on social networks and how they look. So we grabbed some data from this, and we spidered a set of people, starting with our friend Ian here, and taking people so as to create a rather dense social network. This was done rather randomly, but as it happened, and I guess it's not too surprising, maybe because we started with Ian, most of the people were programmers, techies, people in the technology community, and mostly Americans; very few non-Americans in the set. I would imagine that some of the people who are in this set are actually here today, not just Ian. And just as a note, we got no Brazilians. If you have logged into Orkut recently, it's been overrun with Brazilians, because apparently in Brazil it's useful for something, so it's like 70% or 80% Brazilians there. But that social network is apparently rather disconnected from this one, so our set contains programmers and Americans; there weren't a lot of these Brazilians who do whatever they do on Orkut. And just as a note, because when people talk about small worlds they often talk about degree distributions: we did have a degree distribution here. When you talk about degree in a network, you mean the number of people that somebody knows. The most popular person in the set was Orkut Büyükkökten himself, the guy who founded the site: he had 289 friends in the set. So out of 2,000, he knew a hell of a lot of those
people. Whereas most people had only around 70 neighbors there, as you can see. So this is the sort of power-law degree distribution that can affect the network. So, dealing with this data set, we now want to apply the switching algorithm and hope that it will do something better than just a random search. And a random search actually performs rather well on this data. The reason a random search performs well is this degree distribution: because you keep running into people who know so many people, it's easy to run into the person that you're looking for. With power laws, it tends to perform rather well. So within log squared n steps, and you should all be able to do that in your head, but log squared of 2,000 is about 120-something, right? So within about 120 steps, the success rate was 72%, and the successful queries took about 43 steps on average. For randomly searching a network of 2,000, that's rather good. So the question, of course, is: if we run the switching and try to apply this to a real social network, can we get somewhere with this greedy routing by assigning identities? And the answer is yes. With our algorithm, we can search with a 97% success rate in this data, and the mean number of steps is only about 7.7. So we are rather happy with that result, because it really shows that we can massacre the random search by performing this ordering, and it actually looks like the social network lives up to the model and what we were looking for. And one thing one can see is that this degree distribution, the fact that there are people who know many people, can affect the results. So if we clip the degree distribution, meaning I don't allow anybody to have more than 40 friends, I just throw away everybody's friends beyond 40, we can look at what was actually going on. And in the random search, you can see that, yes, in this case the success rate falls dramatically, because we don't have these people who know tons of people. But in
the case of our algorithm, it adds a couple of steps, but the success rate is still good, and it's still dramatically better than the random search. So we can say that our algorithm can take advantage of this degree distribution, but does not depend on it. And it seems like it actually does work: starting with the social network, you can reverse engineer positions so as to make it possible for a computer to do what people do when they're trying to pass a letter along, based purely on the network itself. We're going to go to the demo. To demonstrate this, we're going to show routing between some people in this data set. One thing to start with is starting from Ian himself, who was of course the start of our set, and going to, for instance, Sergey Brin, who I think we all know. Ian wants to know how many steps it is to Sergey Brin, because he wants to borrow some money or something. So it will actually search the set, and you can see that it skips from friend to friend and finds a path from Ian to Sergey Brin in four steps. And what needs to be noted about this is that when it does the search, it does it in a decentralized fashion. It never looks at anything at any step except who your friends are: it decides which of Ian's friends was closest to Sergey Brin, which of Sean Parker's friends was closest to Sergey Brin, and then when it gets to James Joachim, it finds that he knows Sergey Brin, so the search is completed. Another example: we could take person number 1708 in the set, and I hope I can read my own handwriting here, which is Fjodor, who I think is at the conference somewhere, and we could go from him to somebody else who's in the set. And that one also succeeds, and you can see how we are honing in on these positions in the circle, always choosing the friend of Fjodor's who was closest, and then the closest of that person's friends, and it actually has managed to assign these positions in a way that it can find this path. So one other interesting thing, just while we have this tool
that we discovered, is that there are certain people in this data set, well, they are real people, but it's kind of an abuse of the system. For example, you get celebrities or politicians, one example being John Kerry: someone signed up as John Kerry, and then people connect to that person, I guess to show that they're going to vote for him or something. But the interesting thing about people like John Kerry in this data set is that they don't fit into the small world model. If you've just got random people connecting to you, then it doesn't fulfill the criterion for a small world, whereby if you know somebody and you know somebody else, then they're more likely to know each other. So let's say we try to route from Sergey Brin to the fake John Kerry. The results are kind of interesting: what you actually find is that this will keep going for 200 steps and never actually succeed in finding John Kerry. And again, the reason for that is very simply that this John Kerry user does not fit into the social network model that we assume, because he doesn't actually know the people that are connected to him, and they don't know him either. So that's kind of the theory, and I hope we've demonstrated, through simulation, including a simulation on real world data, that this idea can actually work. So, just real quick, I should talk about some of the practical concerns. We're actually implementing this right now; we're about three weeks into coding, and we hope to have a prototype of it in the next couple of weeks. Hang on, this has gone mad... okay. So the theory works; what are the concerns in practice? Well, preventing malicious behavior, obviously. Freenet is the type of peer-to-peer network that expects to be attacked, and therefore it's essential that one person cannot easily take down the network or otherwise make themselves a nuisance. It's essential that it's easy to use. This is a problem that Freenet has certainly experienced, but going right back through the past two decades of security software, including for
example, PGP, people have always found that when you create a really cool piece of software that does something really cool in terms of anonymity, privacy, or other kinds of crypto, people just don't use it, or at least it doesn't achieve wide adoption, because it's just too difficult to use. With this next iteration of Freenet we really want to tackle that head-on, so we need to think about how we actually make this thing usable. Additionally, how is data going to be stored in this architecture? And perhaps it can do other things besides just storing and retrieving data, as Freenet currently does.

So, preventing malicious behavior: what are the threats? Well, somebody could potentially select an identity to attract certain types of data, or otherwise act maliciously. They could potentially, through similar means, manipulate the identities of other nodes for some nefarious purpose.

How do we ensure ease of use? Well, one requirement is that peers, or at least most peers, will need to be always on. That's not so much of an issue now that broadband is better deployed and being deployed so quickly. How do we introduce peers? Can we do it by email, kind of a PGP-style thing, where you have a block of data that you can cut and paste into Freenet and that can facilitate introduction? Maybe you can do it by phone, verbally phoning up your friend and saying, hey, let's do this, here's the code number to allow us to connect to each other securely. Or maybe you've got a lower threshold and you're willing to trust some third party to negotiate this introduction. What about NATs and firewalls? One of the big problems that peer-to-peer has experienced over the past few years is that with peer-to-peer it's essential that people can make incoming connections to your peer, and NATs and firewalls make that a lot more difficult. Fortunately, people have developed techniques to get around that, but that adds challenges in terms of peer introduction. So we can use
UDP hole punching, kind of an increasingly standardized way to achieve that, used by Dijjer, another one of my projects, and by Skype, but that would require a third party for the negotiation.

So, in conclusion, real quick so we have time for a few questions: we believe that this is not only possible, we're going to do it, and we are doing it. There's still a lot of work to do on the theory; in other words, can other models work better? In particular, right now, when nodes choose each other to decide whether swapping would be advantageous, they do so at random. That's kind of nice for some of the mathematics that we haven't really discussed for lack of time, but it may mean that it doesn't converge to a good network as quickly as it could, so we need to explore a perhaps more directed search for peers to swap with. And, like everything, it needs to be tested on more data; we've learned the hard way that practice is more difficult than theory, particularly with Freenet. Security issues are clearly critical; that's one of our major focuses. And deployment of the network: you know, we could come up with the coolest thing in the world that's ultra-secure, but if it's not easy to use, we'll get ten users and it won't be useful to anyone. So if you're interested in participating and helping Freenet, if you don't know, it's an open-source project where we try to be very open to people who are willing and able to make a contribution. If this interests you, our website is freenetproject.org, and Oscar insists that I show you this slide. So I guess we have about five minutes for some questions, if we have any. Yes, here at the front.

So the question was: what is the incentive for people to acquire a lot of neighbors? Well, the incentive is actually quite simple: the more neighbors you have, the better connected to the network you are, and therefore you'll be able to find and retrieve data more quickly. If, let's say, you've only got one connection to the network, chances are
they're on an ADSL broadband line with perhaps 10 or 20 kilobytes per second upstream. If you want to download something and you're relying on just that one person, that's going to place a serious limit on the speed with which you can do it. So, yes, it is an extremely important question that needs to be addressed, because you really do need people to have a lot of friends; if everyone just connects to one person, you get a network that's one long string, and you're not going to get anywhere. So you're right, and that has to do with what we said about ease of use: how we deploy this, how we get people to install it, and what they do right afterwards is really going to affect whether it can work or not. So it's something that one needs to be very careful with, and this is the insane part about trying to write these networks that rely on emergent effects: some of it is out of our hands. It's not like deploying normal software, where we just write software that works; we actually need people to use it in the right way, and I don't think we've gotten there yet.

Well, yeah, basically you would just disconnect from them at that point. I mean, you can shut down relationships just as quickly as you can start them up. And there are lots of these trust issues; for instance, people who are less paranoid might want to allow friend-of-a-friend connections so as to speed the network up, and stuff like that.

So, if I can kind of address your core point, because no one else can hear you: I think your core question was that we make the assumption that shorter paths are almost always better, and in this we've ignored the fact that some people are on modems, some people are on fast ADSL connections, et cetera. That's a good point. We haven't discussed it, but we have actually considered it. The approach that we're considering in order to address that issue is, in effect, that you split up the friends that you're
connected to into several groups. Let's say it's two groups for simplicity: you have your fast friends and your slow friends, the top 50% who are faster and the bottom 50% who are slower. And you don't slot them into fixed categories; you can adapt that as time goes on. Then when you route, you want to try to find the data through your fast friends first: when you initiate a request, you send it to your faster friends, and if that fails to find it, then you resort to your slower friends. I'm sorry I don't have time to give a more in-depth explanation.

Well, that is one of the dangers: people trying to manipulate the network with fraudulent swap requests. There are a variety of ways you can mitigate it. One example, well, I don't have time to go into it, but we have thought about a technique that's a variation on secure group random-number generation, where you basically get someone to cryptographically commit to a certain rate. I can't really explain it in the time available, but we'll talk later. And I think we're just about done; one more question.

Well, I mean, that is in effect up to users' self-interest: it is in users' self-interest to trust the people that they're connecting to, and as such our hypothesis is that they will tend to form a social network. It so happens that the criteria for a structured network to work here are actually pretty robust: it doesn't demand that you have exactly the 1/distance distribution that Kleinberg suggested; it's actually very, very robust. And beyond that, addressing the issue you're raising, the John Kerry problem, in many ways that's kind of a usability issue: steering people towards a certain model of usage of the software, just in terms of how it interacts with users. I'm afraid we're going to have to end it there. Thank you; we'll talk about that.
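The decentralized greedy search shown in the demo can be sketched in a few lines. This is a minimal sketch, not the Freenet implementation: the network, names, and positions are invented for illustration; the only assumptions taken from the talk are that each node knows its friends' positions on a circle, forwards the query to whichever friend is closest to the target, and gives up after a hop limit (the 200-step cutoff mentioned for the fake John Kerry, whose connections don't fit the small-world model).

```python
# Sketch of decentralized greedy routing on circle positions.
# At each step the search consults ONLY the current node's own
# friend list, exactly as described in the demo.

def circle_dist(a, b):
    """Distance between two positions on the unit circle [0, 1)."""
    d = abs(a - b)
    return min(d, 1.0 - d)

def greedy_route(friends, pos, start, target, max_hops=200):
    """Route from start to target using only local knowledge.

    friends: node -> set of neighbour nodes
    pos:     node -> position on the unit circle
    Returns the path as a list, or None if max_hops is exhausted
    (as happens for a node that doesn't fit the small-world model).
    """
    path = [start]
    current = start
    for _ in range(max_hops):
        if current == target:
            return path
        if target in friends[current]:
            path.append(target)
            return path
        # Forward to the friend whose position is closest to the target.
        current = min(friends[current],
                      key=lambda f: circle_dist(pos[f], pos[target]))
        path.append(current)
    return None

# Tiny illustrative network (names and positions are hypothetical):
friends = {
    "ian": {"sean", "oscar"},
    "sean": {"ian", "james"},
    "oscar": {"ian"},
    "james": {"sean", "sergey"},
    "sergey": {"james"},
}
pos = {"ian": 0.05, "oscar": 0.95, "sean": 0.5, "james": 0.6, "sergey": 0.7}
print(greedy_route(friends, pos, "ian", "sergey"))
# prints ['ian', 'sean', 'james', 'sergey']
```

Note that without a hop limit a plain greedy search like this can loop forever when the position assignment is poor, which is precisely why the demo's failed search ran to 200 steps rather than terminating.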
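The fast-friends/slow-friends idea from the Q&A can also be sketched. Everything here is an illustrative guess at one way to realize it, not the project's design: throughput numbers, the 50/50 split, and the fetch callback are all made up; the talk only specifies ranking friends by speed, adapting the split over time, and falling back to the slower tier when the fast tier fails.

```python
# Sketch of two-tier request routing: rank friends by measured
# throughput, try the faster half first, then fall back to the
# slower half. Numbers and names are hypothetical.

def split_tiers(throughput):
    """throughput: friend -> measured KB/s.
    Returns (fast, slow): the faster half and the slower half.
    Re-running this as measurements change adapts the tiers over time."""
    ranked = sorted(throughput, key=throughput.get, reverse=True)
    half = len(ranked) // 2
    return ranked[:half], ranked[half:]

def request(key, throughput, fetch):
    """Try fast friends first; resort to slow friends only on failure."""
    fast, slow = split_tiers(throughput)
    for friend in fast + slow:
        result = fetch(friend, key)
        if result is not None:
            return result
    return None

throughput = {"alice": 80, "bob": 15, "carol": 45, "dave": 5}
# Hypothetical fetch: only dave, a slow friend, holds the data.
store = {"dave": {"key1": "data"}}
fetch = lambda friend, key: store.get(friend, {}).get(key)
print(request("key1", throughput, fetch))
# prints data  (found only after falling back to the slow tier)
```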
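Finally, the third-party negotiation step mentioned for UDP hole punching can be sketched as a rendezvous exchange. This is a toy model of the bookkeeping only, under assumptions not stated in the talk: two friends share an introduction code (say, the "code number" exchanged by phone), each registers with a rendezvous party that has observed their public address and port, and each then learns the other's endpoint. The class name, the code format, and the example addresses are all invented; the actual packet exchange that opens the NAT mappings is only described in comments.

```python
# Toy rendezvous bookkeeping for UDP hole punching: the rendezvous
# party records each peer's public (ip, port) under a shared
# introduction code and hands it to the other peer.

class Rendezvous:
    def __init__(self):
        self.waiting = {}  # introduction code -> (peer_id, public endpoint)

    def register(self, code, peer_id, endpoint):
        """Register a peer under a shared introduction code.

        Returns the other peer's public endpoint once both sides have
        registered, else None. (A real server would also notify the
        peer that registered first.)"""
        if code in self.waiting and self.waiting[code][0] != peer_id:
            other_id, other_ep = self.waiting.pop(code)
            return other_ep
        self.waiting[code] = (peer_id, endpoint)
        return None

rv = Rendezvous()
# Alice registers first; nothing to return yet.
assert rv.register("1234", "alice", ("203.0.113.5", 40001)) is None
# Bob registers with the same code and learns Alice's endpoint.
print(rv.register("1234", "bob", ("198.51.100.7", 40002)))
# prints ('203.0.113.5', 40001)
# Both peers would now send UDP datagrams to each other's endpoint:
# the outbound packets create NAT mappings that let the inbound
# packets through, after which the peers talk directly.
```

The design point from the talk stands out here: even though the resulting connection is direct, some third party is needed for this one negotiation step, which is a trade-off against the pure friend-to-friend introduction methods (email blob, phone call) discussed earlier.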