I am Paul Vixie from Internet Systems Consortium. You would usually have heard of us because of BIND, but we also do some security work, of which this is an example. I'm going to turn it over to my partner Andrew, who's going to present the first part of the talk.

For those of you that maybe saw me earlier today on Meet the Fed, my name is Andrew Fried, retired special agent with Treasury. Closer, louder. Is this better? OK, I don't want to get too overpowering. So what we actually have here is an FBI/NASA OIG operation that required some outside assistance. Is there anybody here who has not heard of the Rove Digital DNS Changer? Anybody here who has heard of it? OK, good. So essentially, what happened? This was a very unique situation. The FBI wanted to do a takedown of a very large botnet, but the takedown would have taken internet access away from the people who were infected, and they were concerned that they would break the internet if they just took everything down. So they started a group of people called the DCWG, the DNS Changer Working Group. Our goal was to identify the people who were infected and provide notification and remediation, while at the same time doing something very cool, which is what Paul did: we usurped their botnet, replacing its infrastructure with our own DNS infrastructure, so that we could continue to provide DNS resolution from the time the FBI took the original infrastructure down through to the time the victims were, quote unquote, remediated. That's really what we're going to talk about today.

How many of you have actually run a sinkhole on the internet? Yes? No? OK. Essentially, a sinkhole is a box that's grabbing data from the internet while you try to make sense of all that data. And for those of you who are familiar with DNS, DNS is a UDP protocol, which is easily spoofed, and that played into some problems we ran into. Essentially what ISC did is we set up a sinkhole that also did DNS resolution. What we're going to discuss is how we ran that sinkhole, the problems we ran into with it, and the problems we had trying to fulfill the remediation obligation.

So Trend Micro gets first blood on this. They found this first in 2007 and reported it through the usual law enforcement channels. A company called Rove Digital, headquartered in Estonia, had bought some time, some plug-in slots, on the Alureon botnet. But they really were pretty tame. If you're buying time on a botnet that's got hundreds of thousands of victims, there are all sorts of things you might do in terms of keyloggers and credential theft and so forth. In this case, they just wanted to get into the advertising business. They wanted to sell banner ads, but they were going to get their customers illegally. So they changed the DNS resolution path on a couple hundred thousand computers, I guess the 600,000 that we found at the start; we don't know how many it was in 2007. Instead of using the ISP name server or OpenDNS or Google DNS or whatever had been there before, the infected computers, or perhaps the upstream customer premises equipment, your DSL modem, your cable modem, whatever that is, would use the Rove Digital name servers for all of their DNS resolution.
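Since the infection's visible symptom was just a changed resolver setting, a victim check boils down to one question: are this machine's configured name servers inside the rogue netblocks? Here is a minimal sketch of that check in Python; the netblock list is a placeholder to be filled in from the FBI's published DNSChanger advisory, and the single entry shown is from memory, so verify it before relying on it.

    import ipaddress

    # Placeholder list: substitute the full set of rogue resolver ranges
    # from the FBI's published DNSChanger advisory before relying on this.
    ROGUE_NETS = [ipaddress.ip_network(n) for n in (
        "85.255.112.0/20",   # one published range, quoted from memory; verify
    )]

    def is_rogue(resolver_ip: str) -> bool:
        addr = ipaddress.ip_address(resolver_ip)
        return any(addr in net for net in ROGUE_NETS)

    # On a Unix machine, the configured resolvers live in /etc/resolv.conf.
    with open("/etc/resolv.conf") as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                print(parts[1], "ROGUE" if is_rogue(parts[1]) else "ok")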
And again, this was very tame, because they could have intercepted all kinds of cool things. Whenever you visited Gmail or Facebook or your bank, they could have stolen your credentials. They did not do this. All they did was intercept things like ad.doubleclick.net and other fairly well-known domain names that are used by real ad sellers when they are delivering content to their real customers, and they would give back the address of their own ad server instead. That gave them the ability to charge for ads, with a legitimate and large customer base and a lot of eyeballs; I guess $26 million was found on their books fairly early on. Now, one thing it did do, by the way, is it prevented most of the victims from being able to get antivirus updates. They actually intercepted the calls to the antivirus products and basically null-routed them, so from that point forward the victims weren't able to do any updates. That meant a very significant percentage of the people infected by Rove Digital, while that infection was not in and of itself so bad, got infected by lots of other things, because their antivirus products were not kept up to date.

So Andrew has mentioned the DNS Changer Working Group. This is a completely ad hoc thing. Don't get the idea that somebody ran down to the courthouse and incorporated a 501(c)(3) to do this. This was just a bunch of guys, and this is the domain name we picked for ourselves. We had a couple of law enforcement agencies represented, FBI very heavily, also the NASA OIG, some international folks; the antivirus community, not just Trend Micro but everybody else that had customers infected with this; ISPs, very important in this case, because they were the ones that were going to get all the phone calls if we turned it off wrong; of course the victims of this malware had no idea that they weren't using the ISP name servers, so if it went dark they weren't going to be calling Rove Digital, they'd be calling Comcast or whoever; security researchers, me, Andrew here, David Dagon, various people that you've seen on this stage before who normally get pulled into this kind of thing; various public interest nonprofit organizations, for example my company, Internet Systems Consortium; and some universities who either had the money to do some analysis or were just interested in helping, because it was going to look good on somebody's PhD thesis.

So generally, when an operation like this happens, there's kind of a protocol that takes place: law enforcement takes charge of it, they investigate, identify the people who were involved, they arrest and prosecute. In this particular case they had to go that step further and do the remediation, and that's what made this a very unique operation. Now, as many of you know, when you do a sinkhole or you capture data on the internet, what you're going to see is victim IP information, but there's no way for us to determine that John Doe or Aunt Jane is the one that had that IP address. So we had to rely on working with large remediation organizations such as Team Cymru and Shadowserver, providing them information that they in turn could relay back into their communities, and we also tried to notify a lot of the ISPs, because the ISPs are able to take an IP address with a date and time, resolve that back to one of their customers, and then send notifications out.
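Getting victim records to the right operator starts with mapping each address to the network that originates it. As a sketch of the kind of lookup involved, here is a query against Team Cymru's public IP-to-ASN whois service over the standard whois protocol on port 43; the address shown is a documentation placeholder, and this is our illustration rather than the working group's actual tooling.

    import socket

    def ip_to_asn(ip: str) -> str:
        # Team Cymru's whois service maps an IP to its origin ASN and AS name.
        # The " -v" prefix asks for the verbose, column-labeled output.
        with socket.create_connection(("whois.cymru.com", 43), timeout=10) as s:
            s.sendall(f" -v {ip}\r\n".encode())
            chunks = []
            while True:
                data = s.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode(errors="replace")

    print(ip_to_asn("192.0.2.1"))  # documentation address; substitute a victim IP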
So it's important to note we are not in the business of reinventing wheels. There are maybe 50 to 100,000 companies in the world that you could refer to as internet service providers, and they know what IP address space they have, and if they put it into whois correctly after all the mergers and acquisitions, then we can find out too. So we could have said, we have all these log files with the victim IPs, let's figure out how to send notifications. But we didn't do that. That is a well-solved problem. Shadowserver and Team Cymru would be two examples of nonprofit entities, and Arbor also has a public benefit side to their operations. Pretty much, if you can get the victim syslog information to those people, they will do a very good job of the outreach it takes to make sure that every ISP who has victims in their network gets it. So we did not try to take on that part; we just asked them to join the working group.

So the operation was scheduled to take place on November 8th. There was a multinational investigation, and on the evening of November 8th arrests were made in Estonia, coordinated with takedowns we did primarily in New York and Chicago. I flew up to Chicago to handle the takedown there, Paul was in New York, and we had some other people dealing with the people and equipment in Estonia. They were able to arrest six of the seven people that they had warrants for. I believe one of them, Andrey Taame, is still at large.

So you may have heard of these people before. EstDomains was part of this criminal empire, and there was a widely publicized Brian Krebs outing and takedown effort a couple of years ago. The EST in EstDomains refers to Estonia, and at the time, before the takedown, before the arrests, before Brian Krebs outed them, they were one of the top ten most profitable companies in Estonia, and nobody, well, nobody in the business community, knew that they were actually a criminal empire in disguise. And here's the example. I can't translate the Estonian, but if you look at number one on the list, you'll see that they were taken quite seriously in the Estonian business community.

So the other interesting thing we had to do was we essentially usurped their address space and redirected it to our own servers, and that's how we were able to put our own DNS servers in the space where they had been answering. Usurped is such a strong word. So, wearing one of my other hats, I am a member of the board of trustees of ARIN, the address allocator for the United States, Canada, and most of the Caribbean islands. And I can tell you that we, ARIN, care very much that address space not be pirated. If you're aware of cases where that's happening, send us email; if you don't know anybody else at ARIN, send me email, and we will look into it. So we didn't really usurp it. What we did is we got a court order that seized the equipment, including the router on which these IP addresses were configured, and that court order designated Internet Systems Consortium, which is my day job, as the custodian, the keeper essentially. It's not unlike if they had had children living with them when they got arrested: those children would have been put into foster care.
So their router came to ISC for foster care, and according to ARIN rules, if you are transferring assets that use IP addresses, then it's okay for the IP addresses to be transferred with the business and with the equipment that uses them. That's how mergers and acquisitions work, for example. So yes, we did seize the equipment that used the IP address space; we did not seize the space itself. The space came more or less along for the ride.

These are some pictures that we received from the operation in Estonia. This was, I don't know if you can see it clearly, one of the police vans that pulled up to start doing the seizure and processing in Estonia itself. These are some of the agents removing some of the equipment. And once again, all three of these locations were hit with the search warrants at exactly the same time.

So I was the hands on the keyboard in New York City, where most of the action took place, and yes, I arrived the day before, did some installation, got the new servers working. We all went out to dinner, and the deal was that we had to wait for the arrests to occur in Estonia before we could go intrusive on their router and actually reconfigure anything, because otherwise it was possible that they, sitting there in Estonia, might see on their monitoring system that all of a sudden their equipment was out of their hands. There was a substantial time difference with Estonia, and it really did take until three o'clock in the morning Eastern time. At 3 a.m. we finally got the thumbs up: okay, we've arrested the bad guys, you may take their equipment. At which time I broke into the router, and I want it known that it had been, at that time, 17 years since the last time I had typed commands at a router. So it took a couple of hours to move the first block of address space, and then a couple more hours to move the rest. We started with the New York space, and during all that time Andrew was cooling his heels in Chicago, because he couldn't unplug any of that stuff while it was still serving victims. After we got done in New York, we moved Chicago, and once we got Chicago moved, I gave Andrew the okay, the all clear, at which point I was due back in Washington, DC, so I ran for the train, and the FBI had the unenviable task of actually loading all that equipment into a truck. Except the router and my servers; everything else went.

So this is what it looked like in Chicago that night. It was kind of a misty, rainy, drizzly night, which is typical Chicago weather, and when we got into the colocation facility, which was a massive building, this is what the systems actually looked like. These were some of the Rove Digital systems that we ended up seizing. So this is what bad systems look like: they look just like good systems in a large building. Hard to tell. All of these wires were kind of in our way, and having been an IRS person who was well versed in doing seizures, we decided to get most of these things out of our way. I came well prepared, but we didn't get a chance to use any of those things. That's what a mitigated rack looks like. When we left Chicago, they had a nice clean rack, none of these wires in our way, and the FBI had a bunch of people there who packed the stuff up and carted it off. I think my total time on site was about 28 straight hours.
I want to note for the record that to maintain the chain of custody, the agents who did the seizures in Chicago had to drive a truck to New York City so that they could swear this really was the same equipment they had seized; they couldn't use a shipping company or an airplane or anything like that.

So the name servers we put in were completely brain-dead simple; there was no wizardry at all. It was FreeBSD on a couple of 1U Intel boxes, multi-core this and that, running BIND, and we did put in a little bit of software that isn't in most name servers. It's called nmsgtool, and it's part of the Security Information Exchange work that we've been doing at ISC in recent years. You can think of it as a kind of background super-tcpdump that makes well-named files containing packets. So we grabbed all the queries, but importantly, we did not grab the query names. The judge was rather clear on that point: it was all right with him that we record the victim IP address and UDP port number, whether they were requesting recursive name service, the so-called RD bit, and even the timestamp, but not what they were looking for as far as the actual domain. That means that if any of these victims were committing some other kind of crime, child abuse, something like that, where you could tell by the use of a domain name that a crime had been committed, we would never be aware of it. That would be, as they said in Ghostbusters, crossing the streams. So there was no privacy violated at all. All we knew was that somebody was infected, not what they were doing. And we also never made the data directly available to the FBI or any other government entity, because they didn't want to have it. There are all kinds of laws about what they're not allowed to have, and they erred on the side of not having any of this. What we gave them was statistics: this is how many hits we had per day, that kind of thing, not who the victims were or how many times they used our DNS server that day. The actual victim details were only made available to ISPs and the so-called super-remediators like Team Cymru, Shadowserver, and Arbor, the people I've already mentioned.
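As an illustration of what that court-constrained logging amounts to, here is a minimal sketch, ours, not nmsgtool itself, that parses only the fixed 12-byte DNS header of each UDP query and records the client address, port, RD bit, and timestamp, deliberately never decoding the question section where the domain name lives.

    import socket, struct, time

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 5353))  # a real deployment binds the seized IPs on port 53

    while True:
        packet, (client_ip, client_port) = sock.recvfrom(4096)
        if len(packet) < 12:
            continue  # too short to be a DNS message
        # DNS header starts with a 16-bit ID, then a 16-bit flags field;
        # the RD (recursion desired) flag is bit 8 of that flags field.
        _msg_id, flags = struct.unpack("!HH", packet[:4])
        rd = (flags >> 8) & 0x1
        # Log only what the court order allowed: who, when, which port, RD bit.
        # The question section (the name being looked up) is never parsed.
        print(f"{time.time():.0f} {client_ip} {client_port} rd={rd}")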
So we had an interesting thing happen. The FBI had been collecting some data so that when the takedown happened, they could send initial notices out to some of the larger ISPs, advising them that addresses within their assigned space were infected. Unfortunately, and this is one of the few lessons learned that we had, they sent out data that was like three months old, which was absolutely useless, and when they sent it out, they sent it with a paper letter, not email, actual old-fashioned snail mail, with login information for the individual companies or ISPs to retrieve the data. But in their infinite wisdom, they assigned something like a 32-character password full of uppercase L's, O's, zeros, and ones. Nobody could make sense of this stuff, and if they spent hours trying every permutation, they finally ended up getting data that was unactionable anyway.

So when Andrew describes the data as being useless, I want to go a little bit further. The taps they did before I got there were of a trunk circuit that contained many VLANs, and their other consultant didn't bother to tell them that. So if you ever hear, or read in the history books, that there were four million victims, and then you hear us say there were 600,000 victims, that's because there were some other IP addresses on other VLANs in the first tap-and-trace they did that we didn't see, because we were properly configured.

The other thing we noticed: our initial interest was in getting victims notified so they could do some end-user remediation. What we found out was that most of the ISPs had one goal, and that was to not have the customer end up calling their help desk. That was going to cost them money and effort, and they didn't want it to happen. That actually ended up causing this whole operation to get extended from what was initially going to be four months to an eight-month period. Now, during those eight months, some ISPs were very aggressive: they set up walled gardens, they did live notifications, they did all sorts of good stuff. We had some that did just plain email notifications, and some ISPs did absolutely nothing.

So, clarifying one minor point: I don't want to give you the idea that a bunch of large companies who stand to lose a lot of money can just swing a big hammer and make the federal government change what it was otherwise going to do, unless it involves oil leases. The FBI takes no official recognition of whether an ISP is going to lose money or whether a lot of customers are going to call in and perhaps overwhelm call centers. What they do care deeply about is that there not be a headline that says "FBI breaks internet" for what they thought at that time was four million people. What they were trying to do was keep their name out of the news.

So the first court order expired in March, and it turns out that trying to do something like this in four months, where two of those months are the dead zone of the American calendar when everybody has gone for Thanksgiving and Christmas, was not practical. A lot of companies have what they call a change freeze, where they can't make any new configuration changes after Thanksgiving, and we were obviously asking them to make some changes in order to properly remediate these victims, and they just said, no, sorry, we can't start that until January 15th. I want to ask that somebody get the door closed behind me. So we went to the judge. We said, sorry, four months wasn't enough, could you give us an extension please, and what the judge said was: okay, I understand your problem, I'll give you the extension you're asking for, but do not come back again. That turned out to be a useful statement, because whenever somebody would ask, is there going to be another extension, do I really have to clean up all my victims by July 9th, I got to say, well, I don't think there's going to be another extension, so you'd better get it done.

So there were two additional surprises for us. We later learned that some of the malware was changing DNS settings on home gateways, like Linksys routers, and what that did was cause reports of IPs being infected when in fact the victims checked their systems and they weren't infected. Or they were Macintoshes. Or they were Macintoshes, which obviously weren't really affected.
We also discovered, in the initial stage when we first did the takedown, that there were no tools available to clean this up. The only way people could remediate was to literally low-level format the machine and rebuild it. That was the other reason this took so long. After about three or four months, some tools became available that made that a lot easier.

So with the stats we had some problems, and this is one of the other lessons learned. Anytime you're pulling data off the internet, you end up with an IP address. But that IP address could be a single host, it could be a firewall with 10,000 hosts behind it, or it could be one of 20 different IP addresses that one single machine gets in a single day, because machines keep moving and getting dynamically assigned addresses. So trying to deal with the numbers was a little challenging when people would ask us how many systems were infected. All we could say was: this is how many different IP addresses we saw hitting us each day. You need to fix the wall here.

The other problem that creeps in when you are trying to estimate a population size is that our sinkhole was UDP-based, not TCP-based. With TCP there has to be a three-way handshake, where both sides have to be confident in the other end's IP address before communication can take place. With UDP that does not happen. There's no state, there's no handshake, there's no nothing. You have one packet that goes out, you have one packet that comes back, and it's very fast for that reason: much less time spent waiting for packets to go back and forth. But you also have much less, actually you have no confidence at all in what the other guy's IP address is. If you're a server and you receive what looks like a query, you just answer it. You have no idea whether that address was forged at the remote end, or maybe even refers to a part of the network that isn't live. So on June 16th, the statistics we were gathering got particularly bad. All of a sudden it looked as though something like two million different victims were all checking in, and actually, no, it was just one DDoS using randomized source IP addresses.

So I get to mention here that there's a DNS firewall capability in BIND now. It's called RPZ, Response Policy Zone. If you'd like to do, in your DNS recursive server, the type of firewalling you've been doing at the IP layer, or maybe at the email layer, you now can. And we used that feature in these name servers in order to answer differently depending on what the query was. So the real DNS-OK sites that Andrew will describe in a moment had an IP address that was in the normal DNS and on the normal web; it's just where you would go, unless you were one of my customers, one of these victims. If you came to my server, there would be a firewall rule that said: ah, you're asking about this name, I'm going to lie to you now, I'm going to give you the IP address I want you, as a victim, to have, which is different from the truth. So in the earlier days, with Conficker, we did an eye chart: we were able to set up very simple websites that people could click on and get a visual indicator of whether or not they were infected. We wanted to replicate that same simplicity for the average population, and we decided to do it using the BIND RPZ feature.
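For a sense of what such a policy looks like, here is a minimal BIND RPZ sketch, with a placeholder policy zone name and documentation addresses; the actual DCWG configuration was of course different.

    // named.conf fragment: attach a response policy zone to the resolver.
    options {
        response-policy { zone "rpz.example"; };
    };
    zone "rpz.example" {
        type master;
        file "rpz.example.db";
    };

    ; rpz.example.db: owner names under this zone select queries to rewrite.
    $TTL 60
    @               IN SOA  localhost. root.localhost. ( 1 3600 600 86400 60 )
                    IN NS   localhost.
    ; Clients of this resolver asking for dns-ok.us get the "red page"
    ; address (192.0.2.2 is a documentation placeholder) instead of the
    ; real one. Every other name resolves normally.
    dns-ok.us       IN A    192.0.2.2
    www.dns-ok.us   IN A    192.0.2.2

Only clients of the sinkholed resolvers ever receive the rewritten answer; everyone else resolves the name normally, and that is the whole trick behind the red and green pages.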
Essentially, we had a number of systems set up throughout the world, and if you went to that website, normally you would see a page that looks something like this. This is the page at dns-ok.us, and in normal DNS it would resolve to a specific IP and you would see this screen. If, however, you were infected and your queries went to the Rove Digital address space, you would retrieve a different IP address and end up with this page, which says you're infected. Now, this is just some really easy DNS wizardry. We did nothing on the systems: there was no JavaScript, no dynamic JPEGs, just really simple, straight HTML. Yet we still had somebody reach out to us and say that they weren't infected until they went to our website, and that we infected them. So, you know, you never, never...

Before we get off this point, I want to say this is an extremely low-grade, unreliable method, and I was embarrassed that we were doing it, but it was all we had. I don't know if you can read this from the back of the room, but what it says is: your computer is using the DNS Changer name servers and is therefore probably infected. Which is to say, if you have a Macintosh and could not possibly have been infected, you might still be behind some home gateway that got reprogrammed when the Alureon virus was passing through that house. So we're not guaranteeing that you're infected just because you're seeing this page; you're probably infected. The same works the other way: we have to say you're probably not infected on the green page. It appears that you're looking up IP addresses correctly; we don't actually know that, because for all we know you are using some kind of DNS proxy that is correct only for certain names, or subject to local exclusion lists of your own, and so forth. So both the green and the red page were extremely unreliable, but they helped us get our name in the papers, so I think we'll call it a success.

Yeah, one of the problems we ran into is that a number of the ISPs decided to do internal redirection, which means that when they saw any queries going out to the address space identified as the Rove Digital name servers, they redirected them internally. So if somebody was infected and tried to do a DNS query, the ISP would redirect them to a good DNS server, and they would falsely end up with the green page. That was a problem we couldn't get around, unfortunately.

So, operating the sinkhole: we ran it for 244 days. As data came in, because of the volume, I did some deduplication, because there's no reason to log the fact that a system hit us 3,000 times in one day. Overall, from the beginning to the end of the operation, that dedupe ratio was somewhere around 350 to one, which means for every 350 times we saw a system, we only had to log it once. Total deduped data points, if you want to see that number with all the commas, came to about 1.2 billion records that we ended up having to produce reports from. Inside the United States, from start to end, we saw about 6.7 million unique IP addresses, and worldwide about 42 million IP addresses.
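A minimal sketch of that kind of per-day deduplication, assuming input records shaped like the capture log described earlier (timestamp, client IP, port, RD bit); the dedupe ratio falls out as total hits divided by logged records.

    import sys
    from datetime import datetime, timezone

    seen_today: set[str] = set()   # client IPs already logged for the current day
    current_day = None
    hits = logged = 0

    for line in sys.stdin:         # e.g. "1319081943 192.0.2.7 53211 rd=1"
        ts, ip, port, rd = line.split()
        day = datetime.fromtimestamp(int(ts), tz=timezone.utc).date()
        if day != current_day:     # new day: every IP becomes loggable again
            current_day, seen_today = day, set()
        hits += 1
        if ip not in seen_today:   # first sighting of this IP today: keep it
            seen_today.add(ip)
            logged += 1
            print(line, end="")

    if logged:
        print(f"dedupe ratio ~ {hits / logged:.0f}:1", file=sys.stderr)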
With the efforts we made, it wasn't quite as effective as we had hoped, but we actually did see some remediation. When this thing started, we were at about 600,000 systems, and towards the end we were probably at about 220 to 230,000. But once again, the caveat is that the ISPs had plenty of time to do internal redirection, and we may have been seeing the fact that the ISPs were preventing their victims from actually hitting us.

I want to mention that the importance of getting your name in the papers is to get a lot of people to visit your dns-ok.us server, or .de server, or whatever; there were a whole bunch of DNS-OK servers. And every time a newspaper story came out referring to this effort and to those websites, terrible things would happen to all of our web servers, because they weren't necessarily provisioned as well as, say, the ones serving up the Olympics in London right now. But we had no other outreach channel. So calling reporters and saying, by the way, the world is on fire and you should set your hair on fire and run down the street, now, is how we got the word out. And you can see a reduction in the infections, small though it was, after each news story.

So we're going to get to these lessons learned; I guess we have 15 of them here. None of them is "you should budget for press releases," because we pretty much feel that every day is a slow news day and we can always get some reporter to go along with us when we need to get the word out.

With all of this data, we actually had to make quarterly reports back to the US magistrate who had authorized the search warrant. And you have to be precise, because at some point someone could challenge you, saying you did your job wrong. We constantly had to include a disclaimer that we had certain statistics which may or may not be accurate, because external factors could prevent us from having the correct numbers. The example is the eye chart we just mentioned: if somebody went to it and their ISP's internal redirection was sending them to the right address, but they were in fact infected, we wouldn't have the correct number. So by giving advance notice to the ISPs and having them do internal redirection before some of the initial stats were taken, it kind of borked the eye chart and also some of the initial stats.

As I mentioned with the FBI: if you're going to send out lots of notices, don't send them with broken links and long passwords that nobody can decipher. Once again, the passwords they sent out were 30-odd characters long, and not something you could cut and paste, because it was on paper. I got one of those letters, and I could never get the password to work.

Do you want to talk about one of the biggest concerns we had from the very beginning? Our operation had a finite period of time, but we knew that there were going to be hundreds of thousands of systems that would remain infected. What happens to that address space? It becomes poisoned space. Yeah, so this is a small problem for the ARIN region and a larger problem for Europe, for the world, for the whole internet. This is maybe, collectively, a /17 worth of space. That's not very much in terms of internet growth, but it's going to seem like more and more, right? We're out of IPv4 space. You've all been told that you have to switch to v6, and you're all making plans for how you're either going to switch to v6, or buy IPv4 addresses on the black market, or some combination thereof. This address space is going to become more and more valuable as we get further into the post-apocalyptic days when you can't get new v4 space, and we're stranding it. It's scorched earth at this point.
So many ISPs have redirected it that if we tried to reuse this space, it wouldn't work. Nobody could actually start a business using this space, other than bad guys, who will almost certainly set up pirate radio stations on this frequency, so to speak, and attempt to lure some DNS traffic toward themselves. We need a better plan, as a human society, for how we are going to deal with this kind of scorched earth.

So the other problem we ran into is that once we started creating the data, we had a lot of people hitting us up with requests: how many systems or IPs have you seen from this ASN, how many from this netblock, how many from our country code? That became probably a 10-to-15-hour-a-week job for me for the duration of the operation. So if you're going to get one of these operations going, and it's going to be in the public interest, you need a plan for having somebody paid to dedicate quite a bit of time to ingesting the data, processing reports, and getting them disseminated correctly.

Yeah, so all of this communication work I've described, where you're trying to get reporters to help you, or you're just trying to put out reasonable web content, and we had a project webpage, dcwg.org: it requires somebody to have, as their day job, keeping it up to date, making it accurate, doing something with it every day. We made any number of mistakes in this project, but one of them was to not fund that, not to say: by the way, FBI, in addition to running these name servers, we really think you need an outreach function, and you need to contract for that separately. That's probably the most important thing I can think of, because what we had was 15 or so of the brightest volunteers I've ever worked with, but they're busy. They've got day jobs, and this isn't it. It's not their job to keep the website up to date. And so, fairly often, the website didn't say what it needed to say, and then we had to go fix that.

So in addition to the general reporting, we also had to do some analytics, and that took additional time. Running stats for trending and things like that was additional time that somebody basically had to volunteer, and that somebody, I think, was me. That was Andrew. We learned that it takes something like 100 to 1,000 times more time to fix a problem than it takes to create it. A perfect example, for those of you who know Conficker: we still track well over a million addresses per day that are still infected with Conficker. That's been like four years now, still running.

So we mentioned this before: if you're doing something with UDP, whether it's running a normal DNS server or running one of these sinkholes, you have to cope with spoofed-source forgery. A lot of your queries are going to be coming from places that don't exist, or from places that did not actually send the query you received. And you have to be able to somehow discount those. Not answering them would be one thing, and we did some of that, but you also have to keep them out of your statistics.
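Since a UDP source address can't be verified after the fact, any such discounting is heuristic. One simple possibility, our sketch rather than anything the working group is known to have used: count an address as a probable victim only if it recurs across several distinct days, since the random sources in a spoofed flood rarely repeat consistently.

    import sys
    from collections import defaultdict
    from datetime import datetime, timezone

    MIN_DAYS = 3                   # heuristic threshold; tune against known-good data
    days_seen: dict[str, set] = defaultdict(set)

    for line in sys.stdin:         # deduplicated records: "ts ip port rd=..."
        ts, ip, _port, _rd = line.split()
        day = datetime.fromtimestamp(int(ts), tz=timezone.utc).date()
        days_seen[ip].add(day)

    # Addresses seen on only one or two days may be spoofed flood residue;
    # recurring addresses behave like genuinely infected, re-querying hosts.
    probable_victims = [ip for ip, days in days_seen.items() if len(days) >= MIN_DAYS]
    print(f"{len(probable_victims)} probable victims "
          f"out of {len(days_seen)} distinct source addresses")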
We had any number of people call us and say: where did these two million different new victims come from on the 16th of June? Well, they weren't there; they were phantoms. And that was because of automated reporting, which we also found was a problem. We noticed the problem with the UDP attack probably mid-morning, but by that time the data had already been ingested by other companies, and automated reports had gone out. How are we doing for time?

Well, on that point: a number of ISPs were doing automated reporting to their customers. And when that blip of two million fake source IP addresses showed up and we reported it, they also reported it. It was automated all the way down the chain, and they ended up sending remediation notifications to people who had never been infected but whose IP addresses had just randomly shown up in the DDoS. So I think unreviewed automation was a mistake.

We talked about the publication of news articles. We got crushed on two different occasions. One day, the DNS Changer Working Group site was taken down by something like 70 or 80 million queries almost simultaneously. The DNS-OK site got something like 60 million within about a four-hour period. This was running on a VMware virtual machine; bandwidth wasn't the problem, it was basically Apache that couldn't handle it. I do want to shout out to CloudFlare, who, upon receiving an urgent call from us, set up their web CDN for the DCWG web server at no charge, in about five minutes, and they saved our ass.

So the other problem with running these public services is that you're going to have to deal with stupid people who are going to say "you infected me." It's a problem because it takes time, and when you have all volunteers, that time can be pretty valuable. And yet, as I think about a paid call center with a script, I'm not totally sure we could solve the problem that way either. Basically, if you're going to notify large numbers of people, the usual fraction of them is going to want to call somebody; you need to know who that's going to be. We also had problems with information published by the press that was materially wrong. Then we had other people, bloggers, who really went out in left field and said the FBI was monitoring everything they were doing, and that generated more emails asking, why are you doing this to us? All of these things were things we ended up having to take into consideration from a time standpoint.

We're running out of time ourselves here. So: we had partial data coming in, because we wanted to strip off privacy-sensitive data. Well, that stripping of data made it very difficult, almost impossible for the most part, for us to defend against bad data. In fact, on some of the monitoring we did of bandwidth flows, when we suspected bad data was coming in and did some subsequent checking, our solution was basically to redact that data entirely, because there was really no way to cleanse it. So that kind of bit us in the butt a little bit. And that's basically what we had to talk about. We have a little bit of time for questions. Anybody have any questions? Yes sir. Louder please.

The question is: would DNSSEC have made a difference? I've been working on trying to get DNSSEC defined and deployed for 16, 17 years now, and it's hard for me to believe it will ever be deployed well enough to solve this problem. But there is a theoretical possibility: if not just your ISP name server, where we currently expect DNSSEC to work, but every laptop, every smartphone, everything all the way down the line were running what we would call a stub validator, so that your own ISP was incapable of lying to you, then yes. I don't expect that to become a ubiquitous mode of operation, because it would make the internet very fragile; it would make it seem like things didn't work very well, or that everything took longer than it should.
So, as currently proposed, where your ISP uses DNSSEC to keep people from lying to it, but then tells you whatever it wants you to hear, it would not have solved this problem. You, sir.

The question was whether consideration was given to the fact that the infected machines posed a risk to themselves and others because of the subsequent infections. The strange thing is, this was probably the easiest botnet to deal with. All we had to do was shut down the DNS name servers and it was basically out of business. Unfortunately, that wasn't something we could just do, and the ISPs are the ones that really have to govern how they deal with their customers; it was up to the ISPs to determine whether they notified them. Some ISPs actually did drop them, some walled-gardened them, some sinkholed them. But it was a decision each ISP had to make with its own customer.

Let me follow up a little bit. Conficker is another example of malware whose first act upon getting into your machine is to turn off your ability to fetch antivirus updates. That turned out to be the signature we used for the Conficker eye chart websites: if you couldn't fetch the logo from Microsoft Update, then you were probably infected, that kind of thing. I believe this is a huge problem, because the idea that we're going to patch things fast enough to keep bad software from hurting us is absurd on its face, but it is at least something, and it is the only thing we're doing, and the idea that somebody can infect you and turn that off, I think, makes the world materially less safe. It may be that we need to rethink patching, and rethink the daily regime of reaching out to see if there are new patches and whatnot, but yes, a lot of consideration was given there.

Anybody else? Okay, well, thank you very much, and I hope the rest of your evening goes well.