OK, so my name is Ian Clarke. I'm the founder and project coordinator of the FreeNet project. FreeNet is a piece of open source software that's been around for about four years now. FreeNet first gained some notoriety back in the Napster days because of its potential for distributing copyrighted information in a way that nobody would really be able to enforce copyright law on. That wasn't actually the purpose of the software, but hey-ho.

So what I'm going to talk about here is, firstly, just a really quick rundown on what FreeNet is, as an introduction. And then I'm going to talk a bit about something I think makes FreeNet somewhat unique among the increasing number of projects in the distributed, emergent, peer-to-peer architecture space, which is that we first deployed it about four years ago, and it's been relatively widely deployed. We've learned a lot of hard lessons from that. We've made mistakes. I'm going to talk about some of the mistakes we've made and some of the solutions we've found to those mistakes. Hopefully this experience will prove useful to other people considering projects which also explore the distributed, decentralized, emergent area of the design space.

So, a quick background on what FreeNet is. Our goal is to permit publication and retrieval of information across an entirely distributed peer-to-peer network, which provides practical anonymity for both the publisher of the information and the consumer of that information. From a user's point of view, FreeNet looks a bit like the World Wide Web, or at least that's one way you can use it. Unlike the World Wide Web, of course, when you publish a website, you don't publish it to an identifiable web server. Rather, it's distributed around the network in a somewhat haphazard way, which makes it very difficult to identify who the original publisher was. And secondly, when someone looks at that website, it's also very difficult to identify who they are.

Whenever you're trying to build a system which is designed to make censorship more difficult, one of the biggest mistakes you could make is to say, OK, we'll just have one big central server, and everyone will trust us. And hey, we're trustworthy, so why wouldn't they? We quickly realized that that approach just isn't going to work, firstly because I'm very susceptible to torture and would quite happily compromise someone's anonymity, and I'm also quite susceptible to being bribed. So we realized that we'd need to make the architecture completely decentralized, such that even its own creators would have a very hard time compromising users' anonymity if, for some reason, they were motivated to do so.

The architecture was inspired by my background: I studied artificial intelligence and computer science for four years, and one of the areas that really interested me during that time was that of emergent architectures. We see examples of emergent architectures throughout nature. If you look around, you'll probably see an ant colony outside your hotel room, or if you're unlucky, inside your hotel room here somewhere. One of the fascinating things about, for example, an ant colony or a flock of birds is that these individually quite simple creatures, or creatures following quite simple rules, can exhibit very sophisticated behavior when you put a whole bunch of them together and allow them to interact. I was very interested in trying to find a practical application for such an emergent architecture.
And I believe I found such an application in trying to build FreeNet to achieve these goals.

Obviously, another important criterion of any architecture like this is that it must be scalable. Since FreeNet relies on emergent principles and heuristics, you can't really make guarantees about its scalability characteristics. However, can everyone hear me, by the way? It's quite noisy out there. Anyway: you can do average-case experimental analysis, and we've determined that it pretty much has logarithmic scalability, which is somewhat in line with our concept of how it works, because it approximates a binary search tree in terms of how it works. You could look at it as a binary search tree on drugs, extremely confused, but relying on somewhat the same principle to find information efficiently.

And it's robust. It'd be pointless to build a system like this, particularly when it's open source, if someone could just get the source code, modify it in certain ways, reintroduce their cancerous peer back into the network, and have a significant destructive effect. So one of the other things we try to achieve is to make the network tolerant both to faulty nodes and to malicious, destructive behavior. And we've tested this quite thoroughly, because we tend to do a pretty good job ourselves of creating nodes which, not deliberately, have a pretty destructive effect on the rest of the network.

History, very quickly. I first started thinking about this around 1997. In September 1998, while a fourth-year student at Edinburgh University in Scotland, I started work on the design, which I finished in July 1999, along with a very basic simulation which more or less demonstrated that this thing was going to scale, was fault tolerant, and was worth investigating further. I then invited volunteers to come and help me make this thing a reality. We decided to implement it in Java, for reasons of portability and because it happened to be the language I knew at the time. In March 2000, we released version 0.1 of the FreeNet software. Since then, we've had over 2 million downloads of the software, and the current status is that we're preparing for the 0.6 release.

So what am I going to talk about here? I'm basically going to give a snapshot of what the development process with a piece of software like this is like. When you're trying out ideas, you implement some code one day, and the following day a bunch of people are actually running it in the network. Sometimes it screws things up. Sometimes it helps. How do you figure out what's going on? If it screws things up, how do you know what screwed it up? And there are really a lot of competing goals that we have to deal with. We have to keep our users moderately happy, but sometimes we just need to do things to learn what happens if we try something. So sometimes we piss off our users, sometimes we piss off developers. It's a balancing act.

Firstly, I'm going to talk about something called Next Generation Routing. About a year ago, we started to think, well, how are we going to improve the FreeNet routing algorithm? I'm not, in this talk, describing the previous routing algorithm, so it might be a bit difficult to contrast them. But basically, the previous routing algorithm assumed a flat internet. It assumed that any two IP addresses were equal.
It made no distinction between an IP address that was just down the street and an IP address which was on the far side of the planet at the end of a 28.8K modem. We wanted to allow FreeNet to take that kind of thing into account when it's deciding how to find information. In doing so, we obviously wanted to retain the spirit of the decentralized, emergent architecture, something I think this achieves. The basic principle was that nodes in the FreeNet network, which are designed to learn how to route information more effectively with time, should make better use of the information that is available to them.

The other thing I'm going to talk about is rate limiting. FreeNet broke in June 2003. It essentially stopped working during a surprisingly short period of time, during which we hadn't really done anything. This was annoying, because when something like that happens in this kind of architecture, it's impossible to apply the classic reductionist approach to debugging. Normally, when you want to debug something, you essentially start a process of elimination: you simplify the problem, you simplify what it's doing, and eventually you narrow it down to the specific thing that's broken. And generally speaking, except in some pretty heinous cases, given sufficient time, you will be successful in debugging the problem.

That does not work with a large-scale emergent architecture. If you notice that this ant colony is suddenly just not working anymore, and the ants are wandering around and not collecting food, and not doing what they should do, it's actually pretty difficult to take an individual ant, analyze its behavior, and figure out why that local behavior is having this nasty global effect. And maybe it's something that's nothing to do with the ants themselves; maybe it's because the temperature just got warmer, or whatever. So it's very difficult to debug these things when they break, and it took us about six months to debug this.

The problem turned out to be chronic overloading of the network, and it turned out to be somewhat analogous to an ant colony suddenly stopping working because the temperature increases. What caused the problem was that people started to use FreeNet in a different way, and they started to do pretty anti-social things, like pumping more requests into the network than the network could handle. The network was not equipped to deal with that, but it had not been an issue previously. So we wanted to find an elegant solution to this problem. That's the second thing I'm going to talk about.

So, Next Generation Routing. Just going back a little bit: very basically, what a FreeNet node does is it receives a request for a specific key, a piece of information identified by a 160-bit hash, a 160-bit number, and it tries to get that key and return the data to the person that asked for it. The easiest way for a FreeNet node to do that is if it happens to be caching the information locally, in which case it will do it immediately. Failing that, the FreeNet node has to decide which of the other FreeNet nodes it is aware of it should forward that request to: which other peer is most likely to be able to respond to that request or, if not, to forward it to someone else who can. This is quite similar to the way people often try to find information.
If you're looking for information on a specific thing and you don't know it already, you will think, well, who do I know that is most likely to have that information? Even if they don't have it, they're more likely to know someone who does, and so on. And that's actually a very effective way to find information, both when people do it and when FreeNet does it, according to the simulations we've done and the data that we've recorded on the live network.

The question is, how does a FreeNet node make that decision? And that's the major change that occurs with Next Generation Routing. The old routing algorithm made a very simple decision. It basically said, well, I'm looking for key x; I'm gonna forward that request to whichever other node responded positively for a key that is close to x. So it's very simple: it's using its experience from previous routing to make future routing decisions. That actually worked quite nicely, but we wanted to make it a lot smarter.

So what Next Generation Routing essentially does now is collect a shitload of data about the performance of other nodes in the network. What proportion of requests that I sent to this node for this key succeeded? If they succeeded, how long did it take? If they failed, how long did that take? If they succeeded, how quickly was the data transferred once it succeeded? And a bunch of other similar statistics. That information is then combined. When we receive a new request, we use all of this previously collected information to come up with, for each peer, an estimate of how long we think it will take to find the data. And we take into account the possibility that it might not find the data, in which case the time is the time it took to fail plus the time for someone else to try to find the data. So we can come up with an estimate for every peer in the routing table, then we simply choose the peer with the lowest estimate, and that's where we forward the request.

Okay, so the next issue we need to tackle is, well, how do you internally represent this information? To simplify it a little bit, let's say we're just focusing on how long the request took to succeed, assuming it succeeded. Well, as I said previously, every request in FreeNet is a request for a 160-bit number, a very, very, very large number. And what we wanted was for a FreeNet node to be able to say, well, a request for key x, when I sent it to this guy, took this long; a request for key y, when I sent it to this guy, took this long; and then infer, through interpolation, how long a request for key z is going to take.

Here's how we did that. We had an earlier version of this which didn't work very well, and this is kind of a simpler version. Essentially what you're looking at here is a FreeNet node's internal representation, for a given peer, of how long a request for a given key sent to that peer is going to take. You can see here the estimate is represented by a green line. You'll see also that the key space is split up into these vertical bars. Within each vertical bar, we maintain a running average of the samples for the keys in that part of the key space which we sent to that peer. So let me give you a specific example. Let's say we send a request to this peer for this key here, along this vertical here, and it takes this long. Well, we plot a point there, and we integrate that into the running average that we're maintaining for that sector.
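[Editor's note: to make that structure concrete, here is a minimal sketch of one such per-peer estimator, in Java since that's what FreeNet is implemented in. All names are invented for illustration and this is not the actual FreeNet code; it only models the simplified time-to-succeed case described above, uses a fixed number of sectors, and leaves out the adaptive sector shrinking described next.]

```java
import java.math.BigInteger;
import java.util.Map;

// Hypothetical illustration only: one estimator per peer, keeping a decaying
// running average of observed response times in each sector of the key space.
class KeyTimeEstimator {
    private static final int SECTORS = 16;                        // the "vertical bars"
    private static final BigInteger KEYSPACE = BigInteger.ONE.shiftLeft(160);

    private final double[] avgMillis = new double[SECTORS];       // running average per sector
    private final boolean[] seen = new boolean[SECTORS];
    private final double decay;  // higher: adapts faster, but forgets old samples faster

    KeyTimeEstimator(double decay) { this.decay = decay; }

    private int sectorOf(BigInteger key) {
        // Which vertical bar of the 160-bit key space does this key fall into?
        return key.multiply(BigInteger.valueOf(SECTORS)).divide(KEYSPACE).intValue();
    }

    /** Record how long a successful request for this key actually took with this peer. */
    void report(BigInteger key, double millis) {
        int s = sectorOf(key);
        if (!seen[s]) { avgMillis[s] = millis; seen[s] = true; }
        else avgMillis[s] = decay * millis + (1 - decay) * avgMillis[s];
    }

    /** Estimate how long a request for this key would take if sent to this peer. */
    double estimate(BigInteger key) {
        int s = sectorOf(key);
        if (seen[s]) return avgMillis[s];
        // No samples in this sector yet: fall back to the average over the sectors we have seen.
        double sum = 0; int n = 0;
        for (int i = 0; i < SECTORS; i++) if (seen[i]) { sum += avgMillis[i]; n++; }
        return n == 0 ? Double.MAX_VALUE : sum / n;
    }

    /** Routing decision: given one estimator per peer, pick the peer with the lowest estimate. */
    static String pickPeer(Map<String, KeyTimeEstimator> peers, BigInteger key) {
        String best = null;
        double bestTime = Double.MAX_VALUE;
        for (Map.Entry<String, KeyTimeEstimator> e : peers.entrySet()) {
            double t = e.getValue().estimate(key);
            if (t < bestTime) { bestTime = t; best = e.getKey(); }
        }
        return best;
    }
}
```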
We then shrink that sector slightly, so we bring its edges closer together. The reason we shrink sectors is this: let's say this node, as you can see, is pretty good at responding to requests in this area. We're assuming that this node is kind of specialized in that area, and other nodes are gonna figure out that this guy is good at serving requests in this area, and therefore they're gonna send it more requests. Now, since this peer is a specialist in that area, we want to be able to maintain more detailed information about what kind of request times we're seeing within this area. So we shrink the sector size so that we can represent the information with greater granularity in areas where that node is getting a lot of requests. Hope that made sense.

So that's the basic idea. There are some, well, not problems, but challenges with this. Firstly, I apologize for the writing being a little bit off; blame Linux, I know why it's like that. But we don't know how much data, I can't even read it here, hang on. Okay: when we send a request, we don't know how big the data coming back in response to that request is going to be. So what we have to do is essentially normalize our estimates to the average size of data in our data store. We're essentially normalizing out the variations in the length of data that we might receive for the request.

Another big problem is, okay: if a node forwards a request for data that is in the network and the node fails to find it, well, then we should punish that node by increasing its estimate for that key. But if a node forwards a request for some data that is not in the network, then it's not actually that node's fault that it didn't find it. And we needed to statistically account for that when we're coming up with these estimates, and I won't bore you right now with how exactly we do that. Okay, and then the third point.

Okay, so does it work? Basic answer: we don't know yet. We're still testing it. We've been testing it for a while, and there have been some problems. But to give you a feel, each of these horizontal lines represents one peer, one FreeNet peer, in a small network of, one, two, three... eight peers. Each narrow vertical line indicates that that peer is caching a piece of information at that area of key space. Now, what we did was initialize these peers such that each peer is caching a certain section of the data. This is obviously artificial; this would not happen in a real network, but we're kind of testing things out. So these completely dark areas are areas that were pre-initialized with keys, so you'd expect them to be dark. What's interesting is that around these dark areas you see that nodes have accumulated data that is similar to the data that they are caching. The reason they've done that is that other peers have figured out what their specialization is, and they are sending requests to those peers which they think, based on their existing knowledge of those peers' specializations, they'll be able to answer. So what this demonstrates, in a very simple toy way, is that Next Generation Routing is actually allowing nodes to learn true things about the expertise of other nodes in the FreeNet network and route to those nodes accordingly.

What are the problems? It's a complete bitch to debug.
We've been debugging this for a year, and roughly once a month we find a bug which we know would have totally precluded any chance of it working whatsoever. That's slightly impeded our progress, as you can imagine. We've actually made a lot of progress very recently by setting up small test networks, and we've kind of been kicking ourselves for not doing something similar earlier, or rather, other people have been kicking me for not listening to them. And we've made a lot of progress because of that. So hopefully we're nearing the end of that rather tedious cycle.

We're not convinced that nodes are learning quickly enough what the specializations of their neighbors are. This is a problem when nodes don't have that long a lifespan. If you spend two days gaining detailed expertise as to the specialization of one of your neighbors, and then that guy just disappears because he wants to run Quake or whatever, and the JVM and Quake don't quite coexist to the extent that they need to, then you've just wasted a lot of time. We've tried to alleviate this problem by allowing nodes to share information about each other's specializations, in a way that incorporates some precautions against the obvious ways that that could be abused. So we're making progress there. It's something we're actively working on right now.

The other big problem we've had, which I'm about to talk about next, is this overloading problem, and it's pretty difficult to debug one thing when another thing is completely preventing the network from working at all. We've had problems, but we've soldiered on, and I think we're seeing the light at the end of the tunnel now.

So, rate limiting, the other problem. The network stopped working in July 2003. We noticed that over 90% of the messages being sent between nodes were messages saying: don't send me any more messages, I'm overloaded. This was essentially a consequence of too many requests being pumped into the network, more than the network could conceivably handle, and that effectively led to a catastrophic collapse of the network. This really confused us, because it happened suddenly, as catastrophic collapses tend to do, which made us think, did someone press a button or something? But in fact it was just one of these weird things that can happen with this type of network: a sudden symptom of a gradual problem, which can be very confusing.

The underlying problem here is that FreeNet nodes are kind of like dogs. You keep throwing the stick and the dog just keeps going to fetch it, and if you're throwing two more sticks than the dog can handle, the dog isn't gonna kind of go, you know, fuck you, I'm gonna find a new owner. The dog will diligently try to answer every request and end up getting very confused. Essentially, the solution to this problem was to make FreeNet more cat-like. Effectively, well, wait, do I say this here? Okay, so basically, in other words, we needed to balance FreeNet's short-term goal of being a good citizen and diligently answering every request against a longer-term motivation, or imperative, to not get so overloaded that you can't answer any requests. In other words, it's better to turn some people away than to fail to respond to everyone, which is what was happening.

How do you do this? Well, this is essentially a load balancing issue, and we thought, well, wouldn't it be nice to try to, well, actually, no, we didn't.
We spent about six months screwing around with a bunch of ideas that didn't work, and then we thought, you know, okay, we should come up with a solution from first principles here, because the kind of alchemy solutions just weren't working. So we went back to basics. What is load? What is load balancing? What is the goal? What does a good load balancing algorithm achieve?

Well, firstly, what is load? We define load, and we think it's a pretty reasonable definition, to be the percentage of the total capacity of our least available resource that we are currently using. So if your least available resource is bandwidth, your load is whatever percentage of your available bandwidth you're using; if your least available resource is CPU, the same; et cetera, et cetera.

What does the perfect load balancing algorithm do? Well, it basically ensures that we do not exceed 100% load, according to our definition of load. In some cases 100% load is a very, very bad thing, but according to our definition, that's kind of a desirable situation. So a good load balancing algorithm will ensure that if the network is constrained by load, in other words, if it's receiving more requests than it can handle, then all nodes should be using close to 100% of their least available resource, because that means that you haven't got some nodes sitting semi-idle while other nodes are struggling.

So how do you do that? Well, you, I cannot read this. Okay: what you need to do is add a competing imperative to prevent overloading, by limiting the short-term imperative to respond to requests. How do you do that? Well, it's pretty simple. Each node estimates the total number of incoming requests that it can handle. For example, if our current load, which we can calculate quite easily, is 50% and we're handling 4,000 requests per hour, we can estimate that if we were handling 8,000 requests an hour we'd be at 100% load, and that's our desired situation. From that, we now know what we think our total capacity for requests is, based on the information we're looking at right now. This is essentially a quota: our total quota of requests that downstream nodes can send us. So what we then do is break that quota up, and we assign a little bit of quota to each of the nodes that want to send us requests, such that we know we're not gonna get more requests than we can handle, more requests than would create 100% load.

Lastly, and I'm gonna explain this, we need to employ something we call dynamic reallocation of quota. Why do we wanna do that? Well, the naive way to achieve what I just described would be to say, okay, our quota is 8,000 requests, we know about 80 peers, therefore we're gonna allocate a quota of 100 requests to each of those peers. That's not a good idea, because most peers are not going to use 100% of their quota, so we're gonna end up being underloaded. So what we wanna do is reallocate quota that has been allocated, but which is not being used, to other nodes that will use it. Also, if a node has been under-using its quota and its quota has been reallocated, but now it needs it back, we need a way for that node to get the quota back. And that's what our rate-limiting algorithm achieves, we hope.

So, a very simple equation to achieve this. It's the only equation here. Basically, what we're doing here is saying your quota is whatever wasn't used by other nodes previously, plus what you used last time.
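[Editor's note: for illustration, here is a rough sketch in Java of the arithmetic as described above. The names, and the even split of the unused quota among peers, are invented for this sketch; the actual formula is the one on the slide and in the FreeNet papers.]

```java
// Hypothetical sketch of the rate-limiting arithmetic described in the talk.
class RateLimitSketch {

    /** Load: the fraction of our scarcest (least available) resource currently in use. */
    static double load(double[] used, double[] capacity) {
        double worst = 0;
        for (int i = 0; i < used.length; i++)
            worst = Math.max(worst, used[i] / capacity[i]);
        return worst;
    }

    /** e.g. 4,000 requests/hour at 50% load suggests a capacity of roughly 8,000 requests/hour. */
    static double requestCapacity(double requestsPerHour, double currentLoad) {
        return requestsPerHour / currentLoad;
    }

    /**
     * Reallocated quota for one peer: what it used last period plus a share of what went
     * unused, but never less than an equal split of the total capacity.
     * (Dividing the unused pool evenly among peers is a guess made for this sketch.)
     */
    static double quotaFor(double usedLastPeriod, double totalUnused,
                           double totalCapacity, int numPeers) {
        double equalShare = totalCapacity / numPeers;
        return Math.max(equalShare, usedLastPeriod + totalUnused / numPeers);
    }
}
```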
So we're essentially saying your quota is what you used last time, plus what other nodes weren't using. The reason we have a max in there is that we then say your quota is at least what your quota would be had there been an equal distribution of quota. I'm not sure to what extent I should go into explaining the rationale behind that, but I'll try a little bit. The basic idea is that a peer which is under-using quota is going to get less than the quota it should have. Actually, I'm not even gonna bother explaining that. Read the paper, because it's just not gonna work verbally, even with the help of this wonderfully illustrated slide.

So, does it work? Yes, actually, amazingly well. This graph, which I realize is kind of difficult to see, shows each resource as a different colored line. The green line, which is difficult to see, is incoming bytes per second. The blue line is outgoing bytes per second. The orange line is incoming requests. The red line is outgoing requests. You'll notice something different about the blue line, which is outgoing bandwidth: it's actually kind of flat. Most people who've done work in peer-to-peer will tell you that typically the most constrained resource is going to be your upstream bandwidth, due to the annoying fact that most broadband connections can download at approximately 10 times the speed that they can upload. Our rate limiting algorithm has figured this out. It's basically realized that outgoing bandwidth is the critical resource, the resource that we're bumping up against, and by limiting the number of requests we're getting, it can actually maintain that resource such that it does not exceed its limit. I think we're assuming that the allocated bandwidth is about 10.2 kilobytes a second, and you can see that this resource pretty much never exceeds that. Sometimes it drops below that, and that will typically mean that another resource has just momentarily become the critical resource.

So it works, and we thought it was pretty cool that it works, because it's essentially behaving like a thermostat. By limiting the incoming requests, it's able to control an indirect result of the incoming requests, much as a thermostat controls the temperature in a room, except most thermostats operate on a kind of I'm-on-or-I'm-off basis, whereas this can actually be better refined, so it can say heat up this much, or cool down this much. That was kind of cool.

But it's not perfect. It has very successfully solved the stated problem that we thought we had, that individual nodes were getting overloaded. Unfortunately, it isn't fully solving the problem of the network as a whole being overloaded. We think the reason for that is that there's no tit-for-tat strategy: a node which is exhibiting bad behavior by pumping too many requests into the network is not punished for doing so by this algorithm. Yes, if it's using a lot of requests and other nodes aren't, it'll be permitted to do that, but when other nodes need those requests back, it basically goes back to the same status. It's back in the same boat that everyone else is in, even though it's actually placed a lot more load on the network than everyone else.
So we're thinking about ways that we can, not so much punish, but say that if you're using more requests than you deserve, then you essentially build up a debt to the network, which you're forced to repay when other nodes, which previously were good citizens, need to make more requests.

Another problem is that it's seriously inhibiting Next Generation Routing's ability to route to the peer that Next Generation Routing thinks is going to produce the fastest response. Why does it do that? Well, essentially, rate limiting works in practice by putting nodes on a timeout bench. A node, when you send it a request, will come back and say, okay, you need to wait five, 10, 30 seconds before sending me your next request. That node, for the next five, 10, or 30 seconds, is not considered by the Next Generation Routing algorithm, and what can happen in practice is that Next Generation Routing is getting its fifth, 10th, 15th, sometimes even 20th preference when it sends a request. That's okay to a degree, because it discourages Next Generation Routing from sending all its requests to the same place, but in excess it's kind of preventing Next Generation Routing from doing what it's supposed to be doing. We think these are basically all symptoms of the same problem.

So, conclusions. I'd actually hoped to, it didn't display one or two graphs that I wanted to display, so if you can just give me a second, I just wanna show you, oh no, actually I'm confused, it did, so cool. So, real quick, some conclusions, and then hopefully some people may have one or two questions; we've got about 10 minutes for those. If not, I'll warble on some more. So, oops, bugger, sorry about this.

Okay, conclusions. Next Generation Routing has not yet adequately demonstrated its effectiveness, but the results to date are promising, and it's not all Next Generation Routing's fault. Rate limiting achieved its stated goal, but it seems that its stated goal is not sufficient to fully address the overloading problem. Lastly, and this is kind of a higher-level comment about the style of development process we have, which some would say is extremely haphazard, and which gives some computer scientists a heart attack: it's really a style of approach that comes out of the AI background, the AI way of building emergent architectures, neural networks, connectionist systems. Rather than sitting around with a whiteboard and theorizing for weeks or months, we come up with an idea, we try it, we see if it works, we tweak it if it doesn't, and we throw it away if we can't get it to work. It's essentially an extreme embodiment of the release-early, release-often notion, and it pisses some people off. I think one of our FAQs is: why don't you do integration testing? The problem with a network like this is that deployment is the integration test. There is no integration test other than deployment for this type of architecture. So it gives some people a headache, but we're pretty confident that it's the most appropriate development model for what we're trying to do.

And with that, I'll see if anyone has any questions. One over here, very good question. So, yeah, I'll repeat the question. The question was, in terms of the NGR algorithm learning too slowly, aren't there knobs we can tweak to speed up the degree to which it learns? The answer is yes, but we have two competing goals here.
Essentially, think about a running average algorithm; NGR basically consists of a bunch of running average algorithms. Now, let's say you're talking about a simple decaying running average: you have a decay factor. The higher the decay factor, the more quickly it will adapt to the information it's seeing, but the more quickly it will forget older information, and that can basically make it too fickle. We have been tweaking these things, and when we tweak them we're essentially dealing with those two competing problems: if we go too far in one direction, it learns too slowly, and if we go too far in the other direction, it ends up jumping all around the place and just doesn't learn. We started out with simple decaying running averages, where you have a half-life, and we've moved on to experiment with a wide variety of different running average algorithms, where we have a window and we take an average across that window, or we take averages of averages, and all sorts of different things. So essentially we're doing exactly what you're suggesting we do, but unfortunately it's not that simple; they're competing problems that we have to deal with.

Any question there? Okay, so the question was that in the rate limiting algorithm, we treat all nodes as equally deserving of quota, and that is absolutely correct. And that actually gets to the heart of these problems that I was talking about, which is, well, there's no distinction between client and server in FreeNet, so we don't assign labels to nodes. Everything a node knows about another node is stuff it learned from personal experience, just because you can't really trust another node: if you say, well, what are you good at, it'll come back, if it's malicious, and say, I'm good at absolutely everything, send me all your requests and I'll do nasty things with them. The problem is, or we think the problem is, that we do need to distinguish between different nodes. We do need to say that a node that has hardly sent us any requests, that has hardly used any of the quota we gave it, does deserve a higher priority than a node which has been using all available quota, and all the other nodes' quota, and has been flooding us with requests. And so part of what we're really working on, literally as we speak, is how do we achieve that? How do we incorporate that bias in favor of better-behaving nodes into the algorithm?

Any other questions? I think this gentleman here was first. Essentially, the latter is incorporated into the former, so obviously the further a request has to go, the longer the request is gonna take. If the node to which you routed the request is not very good at routing requests, or not very good at routing that type of request, then it's not gonna route it very well, and the chances are the request is gonna take longer. So we don't explicitly measure how many hops the request goes through, but that's kind of taken into account, because we're measuring how long it takes in terms of time. Does that answer your question?

Another question here. Well, that is, yeah, in an ideal world, yes, that would be good, but unfortunately, because of the way FreeNet uses indirection to achieve its anonymity, there isn't really any way that a node can do that, because a node doesn't know where the request originated; if it did, it would compromise FreeNet's anonymity.
However, what we're hoping is that the rate limiting algorithm will have a kind of trickle-back effect, much as backward error propagation in a neural network. It'll have a trickle-back effect such that the node that is actually flooding the network will, not instantly but slowly, find that all of the other nodes it's trying to send requests to figure out: look, you're being a pain in the ass, and we're gonna limit your requests. So we don't do it directly, but our hypothesis is that through this trickle-back effect it will occur.

Any other questions? This gentleman here? Pass it up, okay. Okay, so the question is a very good question. We upgrade FreeNet very, very frequently. In fact, we produce a new snapshot daily, and a lot of people actually keep pretty up to date with that. One of the problems with that is that in order to upgrade you have to take down your node, and it'll forget stuff and it'll get disconnected from other nodes, and that's actually kind of a pain in the ass, both for you and for the rest of the network. The question was, would there be any way that we could allow the peer to be upgraded without taking it down? The answer, unfortunately, is that Java makes that very difficult. It would probably be pretty difficult irrespective of what programming language we were using, except perhaps, you know, Scheme or something. So, unfortunately, the answer is no, and I think it's unlikely the answer will ever be yes to that, I'm afraid.

There was another question there. I think I've got about a minute left. Okay, the question was, to what extent do different nodes share information about routing to different peers? The basic answer is that when you find out about a new peer, and there are a couple of ways that you can discover new peers in the network, associated with the address of that peer will be information that has been collected by another node about that peer. So, in effect, another peer will say: you wanna find out about a new node? I know this guy, Joe; here's what I've learned about him. And you can use that as a basis for your own statistical information about Joe. The threat there, obviously, is that if I send a request and the request goes to Joe, then Joe can say, oh, well, yeah, I'm really, really great and you should send all your requests to me. And the way we kind of thwart that problem is that if you're forwarding a piece of information