So welcome everybody to my presentation. I'm Tillmann. I work for a company called CrowdStrike, which is an American startup that deals with targeted attacks. But today I'm going to talk about something else, one of my favorite topics, one of my hobbies, which is peer-to-peer botnets. Peer-to-peer botnets are interesting because they're designed to be resilient against attacks, right? And I'm usually trying to attack botnets and have fun with them. So, let's see. There's an agenda. Okay, let's start with a quick introduction to peer-to-peer botnets. I guess most people in the room here are familiar with peer-to-peer networks in general: networks like BitTorrent or eDonkey, file sharing networks and others. Usually the purpose is to build a decentralized infrastructure that's self-reorganizing, so if parts of the infrastructure go offline, it recovers itself, and so on. People build peer-to-peer networks because they want to get rid of any central components, so the infrastructure cannot be taken down so easily. When you analyze a peer-to-peer network of some sort, you want to understand the protocol first. That's not too much of a problem for the popular file sharing networks, because they're well documented. But peer-to-peer botnets usually use proprietary protocols that you have to reverse engineer and understand first. So you have to look at the samples, do the reverse engineering, and so on. If you do that for several peer-to-peer networks, you will at some point see that there are different approaches. One is based on gossiping. Think about it: you have all these different nodes that are interconnected somehow, and you want to propagate some information in this peer-to-peer network. You can do that by what we call gossiping: each peer gossips information to its neighbors, basically forwards information to all its neighbors, and these do the same, and so on. But if you think about it, that's not very efficient, because several peers will receive the same information several times, so you fill up the network with more traffic than you actually need; the little sketch below illustrates that. More advanced peer-to-peer networks use what people call an overlay network. You have addressing on top of the general addressing methods like IP: every peer has an ID or some sort of address, and then there is a routing method so you can address specific peers. If you want to send information to a specific peer and you know its address, you can route it through the peer-to-peer network. An example for that is eDonkey: you have a distributed hash table on top of the IP network, every peer has a hash, which is at the same time its ID, its address, and then you can look up data in the hash table and so on. But I'm not going into detail about that.
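To make the gossiping point concrete, here is a toy sketch of flooding with duplicate suppression. It is not modeled on any particular botnet, and the message ID scheme is made up:

```python
import uuid

class GossipPeer:
    """Toy gossip node: forwards every new message to all neighbors."""
    def __init__(self, name):
        self.name = name
        self.neighbors = []   # other GossipPeer objects
        self.seen = set()     # message IDs already processed
        self.deliveries = 0   # how often any message reached us

    def receive(self, msg_id, payload):
        self.deliveries += 1
        if msg_id in self.seen:      # duplicate: drop, don't re-flood
            return
        self.seen.add(msg_id)
        for n in self.neighbors:     # naive flooding to all neighbors
            n.receive(msg_id, payload)

# Build a small fully meshed network and flood one message through it.
peers = [GossipPeer(f"peer{i}") for i in range(5)]
for p in peers:
    p.neighbors = [q for q in peers if q is not p]

peers[0].receive(uuid.uuid4().hex, b"update")
for p in peers:
    print(p.name, "deliveries:", p.deliveries)
```

Every peer ends up receiving the same message once per neighbor; that redundancy is exactly what overlay routing avoids.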
One important thing when we talk about peer-to-peer networks is bootstrapping. Bootstrapping is the process of establishing connectivity with the peer-to-peer network when a new peer comes online, and that's a very important aspect, because, if you think about it, you want to get rid of any central entities in your peer-to-peer network. So it might not be a good idea to have a seed server that all peers contact to request an initial peer list. That would be a central component, and you don't want to have that. So what people do is deliver a seed list of other peers together with the node itself, for example with the executable that's then executed on the node's system. But what happens if these peers go offline for some reason, or if they're not online because the computers have been switched off or something? Then you need a fallback method, and that's where it's getting interesting. If you look at the box on the right-hand side, the third entry is Conficker, a very famous, or infamous, piece of malware that was active in 2009 and the following years, and still is very active. Conficker used random scanning: it scanned the internet for other peers randomly. And of course there's no way to block that, because there is no information that the bot relies on when it's first started. It just starts scanning the internet until it finds other peers, then it can learn more peers from that one, and so on, recursively, to establish connectivity with the network. A minimal sketch of this two-stage bootstrapping logic follows below.
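Here is what that logic could look like; the port, the handshake, and the addresses are placeholders for illustration, not any real protocol:

```python
import random
import socket

# Seed list shipped with the binary (example addresses, made up).
SEED_PEERS = [("203.0.113.10", 16464), ("198.51.100.7", 16464)]

def try_peer(addr, timeout=2.0):
    """Return True if something answers our placeholder hello on that address."""
    try:
        with socket.create_connection(addr, timeout=timeout) as s:
            s.sendall(b"HELLO")           # placeholder handshake
            return bool(s.recv(1))
    except OSError:
        return False

def bootstrap():
    # 1. Try the hard-coded seed list first.
    for addr in SEED_PEERS:
        if try_peer(addr):
            return addr
    # 2. Fallback: Conficker-style random scanning of the IPv4 space
    #    (runs until it finds a responsive peer).
    while True:
        ip = ".".join(str(random.randint(1, 254)) for _ in range(4))
        if try_peer((ip, 16464), timeout=0.3):
            return (ip, 16464)
```

The point of the fallback is that there is nothing for a defender to block: no domain, no fixed server, just the scanning behavior itself.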
Speaking of that box: that's my own private history of peer-to-peer botnets I analyzed. I started in 2008 with the Storm Worm, which used the eDonkey network, or protocol, together with some other people, some of whom are here in this room. There are earlier peer-to-peer botnets that are known. I think Nugache was active in 2007, and maybe there were some others, but Nugache from 2007 is the earliest I know personally. Then there was Waledac, which people believe is a successor of the Storm Worm. Storm caught a lot of attention by researchers; lots of security people tried to investigate it and understand the protocol. Some even designed attacks on the peer-to-peer network to knock it offline, to take the nodes offline. So apparently the people behind it decided to abandon it at some point and create a new botnet, and that was called Waledac, and it did not rely on any existing peer-to-peer infrastructure. No eDonkey anymore; instead they implemented their own proprietary protocol. Maybe I shouldn't say it was very similar to eDonkey, but the overall concept behind the botnet had similar structures and design characteristics, so that's why people said it's probably a successor of Storm. Then, I already mentioned Conficker. Conficker was interesting because it started out as a bot whose command and control infrastructure was entirely centralized. Many of you have probably heard about the DGA, the domain generation algorithm, that it included: it generated pseudo-random domain names all the time, then tried to resolve these, contact that host, and ask for updates, basically. Later on, with version C, the third version, these people switched to a peer-to-peer protocol as a fallback command and control channel, because there was some effort to block access to the generated domains, so they needed something else, otherwise they would lose their eight-million-node botnet. So that was Conficker, and then in 2010 the Kelihos era started. Kelihos is also known as Hlux; I think Hlux is the other most well-known name. That again is believed to be a successor of Waledac, and that is because Waledac was taken down by some people and myself with a peer-to-peer poisoning attack, and I will talk about that a little bit more in a minute. So that botnet was taken away from them, so again they created a new one, and that was called Kelihos A. That's actually interesting, because if you look at the list, Kelihos A was attacked as well, with success. So they created Kelihos B, a successor, and tried to fix some stuff. That was taken down as well, and again they created Kelihos C, a third version. We attacked that as well. It wasn't too successful; it somewhat survived, because we didn't manage to own all the peers. And just recently they changed something in the protocol and added public/private key encryption to it. That doesn't make sense at all: you might want to encrypt your traffic, but you can do that with symmetric encryption. It doesn't make sense to do public/private key stuff, because the peers have to generate their own keys and exchange keys and so on, and anybody can do that, right? You can still infiltrate the botnet by just doing the same, so it doesn't make sense. Anyhow. Then in 2011 there was the Miner botnet, and I will show you some protocol examples for that. A really stupid piece of malware, written in .NET if I'm not mistaken, and the protocol was HTTP based, a plaintext protocol, and they made several mistakes, so it was trivial to take down. Okay, and the remaining two, ZeroAccess and P2P Zeus, are somewhat interesting because they're still around and they're really successful. They're some of the biggest and most prevalent botnets around these days, and they're mostly used for dropping other malware on the infected systems. Especially ZeroAccess: it's basically a platform used to deploy other malware, like clickbots and so on. ZeroAccess is actually split into, I think, seven or eight separate botnets. I don't know why; maybe they have some affiliate program or something. They also distinguish between 64- and 32-bit systems, because they want to be able to inject DLLs into other processes, and for that it might make sense to maintain two separate infrastructures. Okay, going back to my slide here: obviously people build peer-to-peer botnets because they have the same goals as other people who build peer-to-peer networks. They want to create an infrastructure that is resilient against takeover or takedown attempts. That's the goal, and that's why they're getting somewhat popular. I'm sure there are other peer-to-peer botnets out there that are not on my list. I'm aware of a few, but I haven't looked into them, so I'm not going to talk about them. Interestingly, for all botnets that you've seen on the previous list, the architecture is not purely peer-to-peer. It's a hybrid architecture, which is what you see here. The thing at the bottom is the actual peer-to-peer network, and the dashed lines represent a peer being in the peer list of another peer. But when the bots want to receive commands for, I don't know, sending out spam or something like that, they still reach out to central components. The boxes you see in the middle, can you see that? Yeah. The boxes in the middle are proxy servers. So they usually have another layer in between, burner systems, so to speak. If some of the proxy servers get taken down, they can easily replace them without losing their command and control infrastructure. And then there is a command and control server on top, which is the actual back end. Okay.
There might actually be multiple layers in between the peer-to-peer network and the C2, but unless you get access to one of the proxy servers, you don't see what's behind it. We're fairly certain that in most cases these are proxy servers, because, for example, they speak HTTP and respond with an nginx banner. You can't be certain, but it's most likely a proxy. Okay. Let's take a look at some protocol examples, so you get an idea what these people create and come up with. This is the already mentioned Miner bot. As I've said, that was a really trivial and also stupid protocol. It was HTTP based, and all the bots implemented their own tiny HTTP server. It wasn't a full-blown HTTP server, just a very rudimentary one that was backed by the file system. So if you issued a GET request with this search parameter set to, say, the IP list file name, that file would be looked up in the respective directory and then delivered to the requesting host. And really, if there were other files in that directory on the file system, you could request them as well with this method, which was probably not intended. Anyhow. You can see the response here. In that case, I think the nginx Server header is fake; they just copied that from somewhere and send it with the responses. At the bottom you can see the actual payload: a list of other peers, a list of IP addresses. Miner always responds with the entire peer list that it has, all peers that it knows about. And that's stupid, because this can be huge, and it also makes it easy for us to enumerate the bots and understand how many infected machines there are and so on, if you want to attack it, for example. And this is only the start: you can see it's 11K in size, and this is by far not the largest response we've seen. You can try to recreate this peer-to-peer graph, because it's basically a graph, right? Nodes that know about other nodes and talk to other nodes and so on. You can try to recreate that by crawling peers, and we will talk more about crawling, I mean, that's the topic of the talk. If you request a peer list from one peer, you can recreate these links in the graph, then take the IP addresses from the response you got back and do the same for those, and so on, and then plot pretty pictures like this one here. I think that's about 37K nodes, which is only a subset of the Miner botnet at that time. It takes ages to render this picture, so we only did that for a subset of the nodes we found. You can see that other peer-to-peer protocols are somewhat similar. This is ZeroAccess version one; there are two versions out there, and this is the earlier one. Again, it's a proprietary protocol that they implemented, and they define, I think, six different message types. One is a getL, which means get list, get the peer list from another peer, and the retL is the returned peer list message. This is what you get when you reverse engineer the message format, decode it, and parse it. It's not plain text. I think ZeroAccess version one had a four-byte key that it hashed with MD5 and then used that MD5 hash as an RC4 key to decrypt its messages. But it was always the same key, so it was basically symmetric encryption with a static key.
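A static key like that is easy to replicate. Here is a minimal sketch of the decryption scheme as just described, with RC4 implemented by hand; the four-byte constant below is a placeholder, not the botnet's real key:

```python
import hashlib

def rc4(key: bytes, data: bytes) -> bytes:
    """Plain RC4; encryption and decryption are the same operation."""
    S = list(range(256))
    j = 0
    for i in range(256):                        # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                           # keystream generation + XOR
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

STATIC_KEY = b"\x01\x02\x03\x04"                # placeholder four-byte constant
RC4_KEY = hashlib.md5(STATIC_KEY).digest()      # MD5 of the static key is the RC4 key

def decrypt_message(ciphertext: bytes) -> bytes:
    return rc4(RC4_KEY, ciphertext)
```

Once you know the constant, every message on the wire is an open book, which is why this counts as obfuscation rather than encryption.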
And the other version, version two, just used XOR with another key. So if you undo the encryption, you end up with something like this. You can see here, in the case of ZeroAccess version one, the peer list has 256 entries. It always returns up to 256 entries, but since the botnet is large enough, every peer has more than 256 entries at any time, so whenever you ask a peer for its peer list, you will most likely get 256 entries. And you can see there's some order there. The first number is a timestamp, or a time delta, so to speak, because the botnet favors peers that have recently been active. And that makes sense: you don't want to keep peers from the stone age in your peer list that might be offline already, or that reboot from time to time and get a new IP address, so the entry becomes invalid. You might want to favor peers that have recently come online or that you have recently talked to. That's why they sort these peer lists by the time delta and then return the 256 most recent ones. They changed this protocol a little bit in version two. So this is ZeroAccess version two, and you can see there are, again, these two message types. I've already mentioned that the encryption is slightly different, but for the most part the protocol is very similar. There is a getL and a retL. Again you have the timestamps and you have the IP addresses, but they figured that they don't need to send back 256 IP addresses, that's way too much; it's sufficient to respond with only 16 IP addresses, which makes the message smaller, so less overall communication in the botnet. And the reason is that ZeroAccess version two is really huge. We've crawled some of the botnets and they count like 3.7 million, I think that was the count we got, 3.7 million infected machines. If you have 3.7 million machines talking to each other, that's a lot of traffic, so you might want to reduce the message size. So that's what they did. But if you take a look at the IP addresses, you might notice that the last octet looks a little bit strange: it's always very high. And that is because they do some deduplication. You don't want multiple entries with the same IP address in your peer list, obviously, because if you allow that, it's trivial for other people to poison your peer list, inject one entry multiple times, and overwrite all the legitimate ones, and then you're not connected to the peer-to-peer botnet anymore. So that's why they do deduplication. And in order to do that, they sort the IP addresses and then go over the sorted list, and if two consecutive entries have the same IP address, they kick one out. But because IP addresses are, at least on PCs, stored in little-endian byte order, and they sort them as integers, you end up with these IP addresses with a high last octet in the response. What's interesting is that they do that, but they don't filter out invalid IP addresses. So when you crawl the botnet, you come across IP addresses like 255.255.255.255, all bits set, which obviously is an invalid IP address, but it regularly shows up in these lists, because when you sort the list in decreasing order, it's the topmost entry and it's always included.
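You can reproduce that sorting artifact in a few lines; a sketch with example addresses:

```python
import socket
import struct

peers = ["10.1.2.3", "10.1.2.3", "192.0.2.77", "255.255.255.255", "198.51.100.9"]

def le_value(ip: str) -> int:
    # Reinterpret the 4 address bytes as a little-endian integer,
    # which is how they sit in memory on an x86 machine.
    return struct.unpack("<I", socket.inet_aton(ip))[0]

# Sort descending by the little-endian value, then drop consecutive duplicates.
peers.sort(key=le_value, reverse=True)
deduped = [ip for i, ip in enumerate(peers) if i == 0 or ip != peers[i - 1]]
print(deduped)
# ['255.255.255.255', '192.0.2.77', '198.51.100.9', '10.1.2.3']
```

Because the last octet ends up in the most significant byte of the little-endian integer, sorting in decreasing order puts addresses with a high last octet, and 255.255.255.255 in particular, at the top of the list.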
And they have some other garbage in there; for some reason they don't filter out these entries, which is interesting. Okay, let's talk about crawling. Crawling is nothing else but recursively enumerating peers. You start with one peer, you request its peer list, you take a look at the response and do the same for all the returned addresses, and so on, until you want to go offline, or I don't know. That's all that crawling is (a minimal crawl loop is sketched below), but you really want to think about how you crawl, and one important thing is crawling speed. Ideally we would be able to take a snapshot of the current peer-to-peer graph and then enumerate the peers in that snapshot, but that's not possible. First off, because you have to do it actively: you have to send out requests and process the responses, and that takes time. And while you're doing that, the structure of the graph might be changing. Peers might go offline, new peers might come online, so you will never get that snapshot. But to come closest to it, you want to be as quick as possible. And when you do that, you have to think about things like unresponsive peers. If somebody sends you back an IP address that's offline, how do you deal with that? Do you want to keep it in the list and try again later? You don't know why it's unresponsive: you might lose packets, the network might be overwhelmed with your traffic because you're trying to be as fast as possible, or there is some hiccup on the internet. So you might want to keep it in the list and try again later, but you can see it's getting a little bit more complex. What you see in the top right corner is the result of us crawling P2P Zeus, which is also known as Gameover, by the way. The red graph shows you the number of IP addresses that we learned; we call them known peers. But most of them are not actually reachable, although the protocol is pretty robust and doesn't include any invalid IP addresses. So if you count only the peers that you can talk to, you end up with the green line, and you can see it's way less. And if you see these little dips in the red line, that is because for P2P Zeus we chose a strategy where we cleaned up the list of known peers from time to time. We said, okay, these have been unresponsive for too long now, let's kick them out to keep the list small, because otherwise you have an endlessly growing list. But what you can also see is that the green line converges very quickly, and that means you have probably reached the number you are able to crawl. And that gives you some size estimation. Okay?
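The crawl loop mentioned above boils down to a work queue plus a visited set. Here is a minimal, protocol-agnostic sketch; the request function and the retry policy are whatever you plug in:

```python
from collections import deque

def crawl(seed_peers, request_peer_list, max_retries=3):
    """Generic crawl loop. request_peer_list(addr) must return a list of
    peer addresses, or None if the peer did not answer (timeout, loss, ...)."""
    todo = deque((p, 0) for p in seed_peers)
    known = set(seed_peers)   # every address we ever learned
    active = set()            # addresses that actually answered

    while todo:
        addr, failures = todo.popleft()
        peers = request_peer_list(addr)        # protocol-specific, e.g. a getL message
        if peers is None:
            if failures + 1 < max_retries:     # might just be packet loss: retry later
                todo.append((addr, failures + 1))
            continue
        active.add(addr)
        for p in peers:
            if p not in known:                 # only schedule addresses we haven't seen
                known.add(p)
                todo.append((p, 0))
    return known, active
```

The known/active split is exactly the red line versus the green line on the slide.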
Okay, there's some fancy animation here. You might wonder why anybody wants to crawl peer-to-peer botnets at all. I mean, it's interesting to play with them, to understand the protocol and reimplement it and so on, and then play with the botnet and maybe snoop on what they're doing. But we usually have other goals. Reconnaissance is usually the foremost thing. Why do you want to learn about the peer-to-peer botnet and the infected machines? I've already mentioned size estimation. If you talk to the press, they really like high numbers. If you tell them ZeroAccess is 10 million infected machines large, they will love that, but next time you have to tell them the botnet is 15 million infected machines large or something. So, size estimation is one thing. But you have to be aware that you can't crawl bots behind NAT gateways. You can't directly talk to them, you can't reach them from the internet, but they're still part of the peer-to-peer botnet; they are like leaf nodes in this graph. So it's not trivial: if you do what we did for P2P Zeus and you end up with this green line, a number of machines that you can talk to, you have to extrapolate from that number to estimate the real population. Infection tracking is something that people do who want to remediate or kill these botnets. They want to learn about infected machines and then report the IP addresses to, let's say, ISPs, who pass the information on to their customers, who hopefully clean up the machines so the botnet dies off. But I have never really seen that being successful. Geographic distribution: you can do geolocation lookups and then plot them on a map, like what we did here, and I want to mention Mark Schloesser and some other guys who created the code we based this on. This is actually a live thing: we send in a live feed of the crawling results, and that displays these nice little red dots. Okay, but what we're usually after is attacking peer-to-peer botnets. For example, if you know all the nodes, you might want to try and send them commands yourself, if you have also understood the command and control protocol. There are sometimes interesting commands, like uninstall commands, that you could send to all the bots you've identified; and the ones you can talk to are the backbone of the whole graph, so to speak. You can also send requests for more information about the infected machines, for example about the operating system version or other stuff, so that's usually interesting as well. But you can also manipulate the peer-to-peer infrastructure itself. Think about it: if you can generate your own peer lists and propagate them in the peer-to-peer network, you can create edges, you can kill other edges by replacing them, and so on; tamper with that infrastructure. We will talk more about that in a little bit. Ideally, you might be able to sinkhole the whole thing by replacing all the legitimate entries in the peer lists with your own ones, and by that have all peers talking to your own machines, which means that nobody else has control over them anymore. A tiny conceptual sketch of such a poisoned peer list follows below.
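Conceptually, a poisoned peer list is nothing more than a peer-list response in which every entry points at a machine you control. A sketch with an invented wire format, for illustration only; a real attack has to reproduce the botnet's exact message format, encryption, and propagation rules:

```python
import socket
import struct

SINKHOLE = ("192.0.2.1", 8000)   # example address under our control

def poisoned_peer_list(n_entries: int) -> bytes:
    """Build a peer-list payload in which every entry is the sinkhole.

    The entry layout here (4-byte IP + 2-byte little-endian port) is made
    up for illustration.
    """
    entry = socket.inet_aton(SINKHOLE[0]) + struct.pack("<H", SINKHOLE[1])
    return entry * n_entries
```

This, by the way, is exactly why deduplication of peer-list entries, as in ZeroAccess, is a sensible defense from the botmaster's point of view.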
If you think about crawling strategies, you might ask yourself: do I do a depth-first search or a breadth-first search? But it doesn't really matter, at least that's what we think, because first off, it's not a tree, it's a graph, so you can hardly distinguish the two strategies anyway. And it's dynamic, it's changing all the time, so it doesn't really matter which nodes you start with and which nodes you continue with; at some point you'll reach the biggest part of the reachable machines either way. If you track the infected machines, you need to be able to distinguish: have I seen that IP address before? Do I want to include it in my list, or is it a new one? And if you rely on IP addresses only, that's a bit of a problem, because, as I've already mentioned, there is a lot of IP churn. IP addresses change after 24 hours, and if you happen to contact a peer, its IP address changes, and you contact it again, you count it twice. You want to avoid that, otherwise you get skewed numbers. Some peer-to-peer protocols are nice and implement unique IDs, especially the ones that implement overlay networks, because you need the IDs for routing. And if you have that, well... [At this point the talk is briefly interrupted for the traditional DEF CON first-time speaker shot.] Okay, let's finish this before the stuff kicks in. We already said that you're done with crawling when this curve converges, because you don't learn about any new peers anymore, and if there are still some changes, then they're due to churn. So what you see here is an analysis of the convergence for the crawls. I hope you can read that; I realize it's rather small. On the left-hand side you see curves similar to the one we had on the previous slide: the actual number of machines that we identified. The scale depends on the size of the botnet, of course. The upper curves are ZeroAccess, which, as I already mentioned, is pretty large, so you get way more hits. The ones at the bottom are from a botnet called Sality, which I haven't looked into myself, but one of my friends has, and he provided these numbers. So depending on the size of the botnet the scale is different, but the shape is more or less the same: all of them converge toward a straight line, and then you know you're more or less done. You can also take a look at the population increase on the right-hand side, which basically correlates with the other graphs.
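In code, "the curve converges" can be approximated by watching the rate of newly learned peers. A sketch; the window and threshold are arbitrary tuning knobs, not values from any of our crawls:

```python
def converged(new_peers_per_round, window=10, threshold=0.01):
    """Heuristic stop criterion for a crawl.

    new_peers_per_round: list with the number of previously unseen peers
    discovered in each crawl round, oldest first.
    Returns True once the recent rounds contribute almost nothing new.
    """
    if len(new_peers_per_round) < window:
        return False
    recent = sum(new_peers_per_round[-window:])
    total = sum(new_peers_per_round) or 1
    return recent / total < threshold   # nearly all peers were found long ago
```

Whatever still trickles in after that point is mostly churn rather than genuinely new population.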
Yeah, by the way, I did mention that I'm going to release some code after this presentation. We figured that whenever we want to crawl a P2P botnet, we end up writing the same code. So after some time we said, okay, let's build some basic code that we can add the protocol implementation to: do the generic part right once, and then add the changing stuff on top. And I'm going to release that as open source later on. So, how do you distinguish peers? I already talked about that: you have IP addresses versus IDs, in the case where you have IDs. In the case where you haven't, you can still derive some conclusions from other cases where IDs are available. And what you see here, I'm cheating a little bit, because these graphs are not generated by crawling. This botnet is actually Kelihos C, the last version, which was attacked earlier this year. These numbers are not generated by crawling the botnet; in this case we did node injection. We propagated a special peer list entry in the P2P network, and it became very prominent, and then all the other peers reach out to that machine, even the ones that are not directly reachable, because at some point the entries propagate to the bots behind NAT and gateways as well. So this gives you way more accurate numbers, and that allows us to compare the IP address count with the ID count. What you see here: green is the total number of bots, so that's the total number of unique IDs, and blue is the number of IP addresses. You can see that the blue line keeps going up even though we have almost all unique IDs already; the slope is much flatter for the green line. And that is actually very similar everywhere: the ratio between the two after, say, 24 or 48 hours is almost the same for all botnets we've taken a look at. We have a paper out on that where you can take a look at all the numbers, but I'm not going to cover that here. So you can see, after 24 hours, that's where the two lines cross: even if you don't have unique IDs, you can say, I take a look at the IP addresses I can collect in 24 hours, and that gives me pretty accurate numbers. I already mentioned speed. Speed is important; you want to be as fast as possible, but being fast is not easy. If the protocol is UDP based, it's a little bit easier, because you don't have to worry about session establishment and so on, and timeouts. Actually, I didn't get to finish the UDP code. Most of these botnets use UDP for a reason, the overhead is less, but I didn't get to finish the crawler template code for UDP, so that's left as an exercise for you all, or you wait until I'm done with it and check it into the repo. But UDP is way simpler. Usually people have two threads, one that sends out messages and one that consumes incoming messages, and many bots work that way, actually most of the UDP ones we have seen. But if you do that, you have to worry about synchronization: you have a peer list that you lock when you want to send out stuff or select a peer that you want to send data to, and when you receive data you also probably want to lock the peer list, so you have to synchronize the two. So we usually use non-blocking I/O in a main loop with just a single thread, because it's faster. When you're talking TCP, it's a little bit more difficult. You have to establish TCP connections and you have to worry about timeouts, because you don't want to get DoSed yourself. If you don't worry about all these things and you crawl the network, the bots might create half-open connections and not respond to you at all, or keep established connections open forever, and then you're running out of file descriptors and your crawling doesn't work anymore. So you probably want a limited set of file descriptors, or sessions, that you're able to handle. What the code does that I'm going to share publicly is, it allocates a fixed number of slots for sessions, and that's the number of simultaneous sessions the code can handle. When it wants to contact a new peer, it takes the next free slot from that array, and by that you make sure that your crawler doesn't get DoSed. I talked about timeouts already. A rough sketch of this pattern follows below.
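Here is what that pattern could look like in Python: non-blocking sockets, a single event loop, a fixed slot budget, and per-session deadlines. This is a sketch under those assumptions, not the released code, and the GETL request is a placeholder for whatever the protocol actually needs:

```python
import selectors
import socket
import time

MAX_SLOTS = 512   # cap on simultaneous sessions, so we never run out of fds
TIMEOUT = 5.0     # per-session deadline, so half-open connections can't pin a slot

def crawl_loop(targets, handle_response):
    sel = selectors.DefaultSelector()
    deadlines = {}            # socket -> (deadline, peer address)
    targets = iter(targets)

    def close(s):
        sel.unregister(s)
        s.close()
        del deadlines[s]

    while True:
        # Fill free slots with fresh non-blocking connection attempts.
        while len(deadlines) < MAX_SLOTS:
            addr = next(targets, None)
            if addr is None:
                break
            s = socket.socket()
            s.setblocking(False)
            s.connect_ex(addr)                      # returns immediately
            sel.register(s, selectors.EVENT_WRITE)  # writable == connected
            deadlines[s] = (time.monotonic() + TIMEOUT, addr)
        if not deadlines:
            break                                   # nothing in flight, nothing left
        for key, events in sel.select(timeout=1.0):
            s = key.fileobj
            _, addr = deadlines[s]
            try:
                if events & selectors.EVENT_WRITE:
                    s.send(b"GETL")                 # placeholder request message
                    sel.modify(s, selectors.EVENT_READ)
                else:
                    handle_response(addr, s.recv(4096))
                    close(s)
            except OSError:                         # refused, reset, etc.
                close(s)
        # Reap sessions that blew their deadline, freeing their slots.
        now = time.monotonic()
        for s in [s for s, (dl, _) in deadlines.items() if dl < now]:
            close(s)
    sel.close()
```

The slot cap is the whole trick: a peer that never answers costs you one slot for TIMEOUT seconds, not a file descriptor forever.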
Another thing: if you talk to a peer, you can definitely say that it's live, that it exists. The question is how long you want to keep it in your peer list flagged as active, because, as I've said previously, you want to distinguish between IP addresses or peers that you have merely encountered and the ones that you can actually talk to, that are live. But if you talked to a peer and it was live, for how long do you consider it live? For 24 hours, or only 3 minutes? Or do you periodically re-contact it, and if it doesn't respond anymore, say it's not live anymore? These are parameters that are really, really important. It might not sound like it, but they are, and they have to fit the specific botnet that you're crawling to get accurate numbers. Also, packet loss: especially when you're talking UDP, you can only send out so many UDP packets per unit of time, and if you fill up your own line, your own pipe, with UDP packets, you will have packet loss at some point, and then you get funny results. Either you get a bigger line, more bandwidth, or you want a parameter that allows you to slow down the whole crawling process a little bit. So, Prowler is the name of the tool that we're going to release today. As I said, it just implements the crawling framework, so to speak, and you have to add the protocol implementation yourself. It provides you with some stub functions that get called, and that's where you have to implement the protocol; I'll show a rough sketch of that idea below. So if you want to check it out, you can do that, too. As I said, it's only TCP for now. You can see what it looks like at the bottom of the slide. You can even see that it distinguishes between known peers and active peers, and if you take a look at the last two lines, you can see that the number of active peers goes down from 719 to 717. That is because after some time some peers don't respond anymore, so they're not considered active anymore and get flagged as inactive. In that case we were crawling Kelihos C, and that was in February. The peer list I started off with only contained two entries, you see that on the right-hand side, and Kelihos always shares 250 entries when you request another peer's list. That is why it immediately goes up to 250 known peers: it contacts one peer, it learns 250 entries, so it knows 250 other ones immediately, and then it continues from there. But if you take a look at the two graphs, again, the green line is active peers that it can talk to, and the red line is peers that we have seen in peer lists. You can see that the green line converges really quickly, somewhere in the range of, what is that, 700? That's in line with the numbers below, and that is because Kelihos also favors more recent peers. They have this backbone of what they call router nodes, and there are never more than roughly 700 of those, so we'll never be able to talk to more than about 700 peers at a time. You can also see these, I don't know, steps, or whatever we want to call them, in the red curve. That is because when new peers come online, they propagate in the peer-to-peer network and become active at some point; they immediately get propagated to all peers that are online, and that's what causes this effect. So, I'm almost done here. This is the Git repository where you can check out the code. As I've said, I will hopefully add a UDP version soon. And I mean, I've checked in that version like one hour before the talk, so there might be some bugs in there, but if you tell me that there's something buggy, I will fix it, or you can fix it yourself and send me a patch.
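To illustrate the stub idea mentioned above, here is a hypothetical shape for such a plug-in interface. To be clear: these names are made up and this is not Prowler's actual API, and the parser's 6-byte wire format is invented for the example:

```python
import socket
import struct

class ProtocolStubs:
    """Callbacks a crawler framework could invoke per peer; illustrative only."""

    def build_request(self, peer) -> bytes:
        """Serialize the botnet-specific peer-list request for this peer."""
        raise NotImplementedError

    def parse_response(self, data: bytes) -> list:
        """Decode a response and return the peer addresses it contains."""
        raise NotImplementedError

class ExampleStubs(ProtocolStubs):
    def build_request(self, peer):
        return b"GETL"                       # placeholder message

    def parse_response(self, data):
        # Invented layout: 6 bytes per entry, 4-byte IP + 2-byte LE port.
        entries = []
        for off in range(0, len(data) - 5, 6):
            ip = socket.inet_ntoa(data[off:off + 4])
            port = struct.unpack("<H", data[off + 4:off + 6])[0]
            entries.append((ip, port))
        return entries
```

The framework keeps the generic machinery (slots, timeouts, peer bookkeeping), and per botnet you only swap in a request builder and a response parser.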
But I also want to talk about the alternative that we already touched on briefly, which is node injection. As I've said, by crawling you will never be able to reach the peers that are behind gateways, network address translation, and so on. So, as an alternative, you can actively participate in the peer-to-peer network and propagate your own IP addresses, and then at some point, depending on the popularity of your node, the other peers will reach out to you, and you can track them or send them commands. Yeah, and that's actually a comparison here between tracking based on sensor injection and crawling. The top two lines, again this is P2P Zeus, so we have unique IDs and IP addresses, and we distinguish between the numbers for unique IDs and IP addresses; of course the number of IP addresses is much higher. The top two lines are achieved through sensor injection, and the other lines are what we achieve through crawling. The bottom lines are the active IP addresses, the active peers that we can talk to, so you see it's much less than the peers that show up in the peer lists. Okay, that's basically my presentation. I want to give a shout-out to some people here, because they're awesome, they work with me, and they deserve credit for it. And that's it. I think we have a few more minutes left, maybe three or so, so if you have any questions, you can ask me now, or hunt me down at the bar later on. Thank you.