My name is Rob Stuckey, I'm a security consultant from Phoenix, Arizona. I work mostly doing pen testing, incident response, and architecture. I've been coming to DEF CON since 1996 and I'm a big fan of altitude photography. Also, this weekend is my 11th anniversary, and I want to thank my amazing and understanding wife, Linda, who didn't realize that marrying me at DEF CON meant that I'd be in Vegas every year on our anniversary. And I'm very sorry about that. So I'm going to present on a variety of topics related to the research that I've been working on over the past few years. The common theme that you'll see is that if you aren't monitoring your DNS traffic and understanding how things behave when everything is normal, you're probably not going to notice when bad things happen. I've been abusing DNS for a few years now, and it's always been one of my favorite attack vectors to play with. You can spend a fortune hardening your perimeter, but if I can get one of your devices to trust me, it's game over. And no vulnerability scanner on the market is going to find the kinds of misconfigurations that will always lead to me getting that foot in the door. So these are the various topics that I'll be covering today: first, my adventures in DNS bitsquatting; some fun with misunderstood endpoint behavior; a bit about people who don't register their domains; and some malicious domain hijinks. So Artem Dinaburg spoke at Black Hat and DEF CON in 2011 about something he called bitsquatting. For anyone who isn't familiar with his research, if you're interested in DNS mayhem, I suggest you download the video of his talk. When I read the summary of the talk published before Black Hat, I was excited to start digging into how I could abuse it.
So I'm not going to rehash his talk, but I'll quickly cover some of the relevant details: I'll explain what it is and why it happens, and then I'll show some examples of things that I've seen and what kind of risk it presents. Sometimes a single bit in memory will encounter an error that causes it to flip its state. A one will flip to a zero, or a zero will flip to a one, and without ECC memory, this will go uncorrected. And every once in a while, that bit will get flipped in the middle of an interesting string of data. Artem explored the phenomenon where single bit errors in the right part of memory at just the right moment would cause a client to query a completely legal, valid, and yet incorrect name. His talk revolved more around explaining the likelihood of this happening and its causes; I picked up where he left off and began exploring all the ways that I could abuse this for malicious purposes. So DNS bitsquatting is anticipating the ways that these errors will mangle DNS names, registering those domains, and then answering those misdirected requests. In memory, the domain name google.com that just got passed to your resolver looks like this. Let's say a high-energy proton strikes one of the individual bits of memory here, flipping its state, so instead of being a one, it's now a zero. Your browser just asked DNS to resolve google.com, and it'll happily go get the answer for you. DNSSEC isn't going to help you here, and in fact there's very little that could have prevented this from happening in a completely invisible way. Other than ECC memory, of course, though outside of high-end servers, ECC really isn't very common, and even if you had ECC, there are plenty of places where this could have occurred where it's never used, like on your NIC or in the DRAM cache on your hard drive.
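To get a feel for the mangling, here's a rough sketch (my own illustration, not the script from the talk) that enumerates every single-bit-flip of a domain that is still a syntactically valid name. For simplicity it only keeps flips that leave the .com TLD intact:

```python
import string

VALID = set(string.ascii_lowercase + string.digits + "-")

def bitflip_variants(domain):
    """Enumerate single-bit-flips of `domain` that are still valid DNS
    names. Flips that land in the dot or the TLD are discarded, as are
    case flips (DNS is case-insensitive, so those resolve identically)."""
    variants = set()
    raw = domain.encode("ascii")
    for i in range(len(raw)):
        for bit in range(8):
            flipped = raw[:i] + bytes([raw[i] ^ (1 << bit)]) + raw[i + 1:]
            try:
                candidate = flipped.decode("ascii").lower()
            except UnicodeDecodeError:
                continue  # flipped byte is no longer ASCII
            label, dot, tld = candidate.partition(".")
            if dot != "." or tld != "com":
                continue
            # DNS labels: letters, digits, hyphen; no leading/trailing hyphen
            if (label and all(c in VALID for c in label)
                    and not label.startswith("-") and not label.endswith("-")
                    and candidate != domain):
                variants.add(candidate)
    return sorted(variants)

print(bitflip_variants("gstatic.com"))
```

Running this against gstatic.com yields candidates like grtatic.com, where a single bit in the "s" has flipped.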
So Dinaburg's talk covers the causes of these errors in a lot more detail, but what's important is that memory errors happen, sometimes they corrupt memory, and sometimes that affects DNS. Common causes are heat, electrical problems, radioactive... it's after five o'clock somewhere. Wait, that's here. You know the drill. They're new. We need somebody from the audience. Is there somebody here today who it's their 11th anniversary? Anybody? Oh, all right. Is this your first time on the DEF CON stage? Awesome. I'd like to introduce you to 2,500 of my closest friends. Wait, what's your name? Linda. Everybody, this is Linda. Linda, this is everybody. Thank you guys. Why don't you two get a room? Where was I? Okay, so where was I? Common causes of these memory errors: heat, electrical problems, radioactive contamination, and cosmic rays. I found the most interesting to be the heat and the cosmic rays. So you're probably thinking that this is a pretty rare phenomenon, and you'd be right. If you're thinking that this hasn't already affected you in a serious way, I still have more slides. With heat being a major factor in these memory errors, I've observed that smartphones are especially vulnerable to this because of the extreme operating conditions that they're exposed to. Most other devices have adequate cooling to prevent this, but sometimes people intentionally run their devices too hot. Not long ago, Google released a lot of information about the way that they run their secret data centers. One of the interesting things that I learned was that in an effort to save energy, they run their data centers a bit hotter than most of us would find reasonable. Typical data centers operate from 60 to 70 degrees, but Google advises companies to approach 80 degrees. Google themselves run some of their data centers, like the one in Belgium, as high as 95 degrees. Intel and Microsoft say that servers do just fine at higher temperatures.
And Dell will warranty their servers to run in environments up to 115 degrees. I think that this might just be a bad idea. So with heat being a major factor in memory stability, and Google operating their data centers at fiery temperatures, is there anything that we can do to take advantage of this? It turns out that there is. When I began exploring domains that might be especially vulnerable to this, I wanted to find the most commonly queried names to increase the likelihood that I'd encounter a bit error. Now, I've been collecting the DNS logs for large companies in my database, so I queried it to find the most common domains our clients were asking for. Removing local names and names queried by third-party software, I found the most commonly queried name was gstatic.com. That's one of the domains that Google serves static content from, like CSS, images, JavaScript, and XML files. So I wrote a script to enumerate all of the valid bit-flipped possibilities for gstatic.com. Of the 34, five were registered for legitimate purposes and 29 were available. So I bought them all. I immediately got my first hit. Here, somebody was doing a Google image search, and the content returned to them was corrupted somewhere along the way, and their browser asked me to serve one of the images. In the request, I see their source address, I see the mangled name they queried, I see the resource they were trying to retrieve, I see the page that was referencing that content, and I see what kind of client it was. But going back to the referencing page, you see an interesting artifact: it contains the original query. The requests kept pouring in: more sources, more mangled names, more image requests, and more referrers containing the original query. So far I've collected over 50,000 unique queries. This was the most common. So sometimes this happens at just the right moment to screw with one of your queries. Big deal. The odds of that happening are so small, do we really care? Maybe.
But sometimes this happens at just the right moment, when something's about to be stored permanently on a disk, and that's a bit more interesting. I don't know what kind of odds we're looking at there, but I bet it's a lot more likely to happen in a 95-degree data center. So by now my logs are full of a lot of noise. I'm getting so many requests a day that I can't review them all by hand, so I'm writing scripts trying to find patterns, and this is the biggest one that jumped out at me. I was getting a lot of requests for the exact same image with the exact same mangled domain name. All the requests were coming from phones, and I was getting requests every few seconds. These phones were trying to go to the Google mobile site, and they were asking me to serve that tiny Google logo. I had found a single web server out of the entire Google cloud that was serving content with a permanently mangled domain pointing to the logo. Coincidentally, I happened to have an image at that exact path on my server, and those clients were fetching it instead. So where they meant to fetch that little logo, they got this one. For two years. Hundreds of thousands of requests came to me for that logo, and instead of what Google had intended, this is what they saw. Then one day they all stopped. Google pushed out a content change to their mobile sites, and it was overwritten. So now I'm finding more patterns in the logs, and I started working on trying to figure out what they all were. Another one of them turned out to be very regular, but this time it was clearly a naturally occurring bit flip in memory instead of something stored. I was receiving requests like these at a rate of about one an hour. They didn't look familiar, and the user agent string said the client was Google Feedfetcher. The requests were all coming from devices in the same network. All of them were Feedfetcher, and all the requests were for XML files.
So I did some poking around, and I found that Feedfetcher is the mechanism that Google uses for grabbing updated content for iGoogle, and I found that those source IP addresses were in Belgium. So those requests were Google's own servers fetching updated content for the various widgets that make up the iGoogle personalized home page. Each widget is an XML file that defines its content, and Google was asking me to serve that content to their presentation servers. So I wondered, would Google serve content to their users that they accidentally received from me? It turns out that they will. So I grabbed the XML files that Google was asking me for, and I picked them apart. There are two sections: a header describing the module, and a CDATA section that's packed with the HTML, CSS, and JavaScript that make up the widget. So I just modified the link to the background image, changing it from gstatic.com to grtatic.com, and left everything else alone. I put the XML files in the path Feedfetcher would pick them up from, and waited. Almost immediately, Feedfetcher asked me for one of the XML files, and once it did, I immediately started receiving requests for that background image. So I removed the XML files I had modified and waited for the requests to stop. And they didn't. For 35 days straight, the same 61 devices continued to ask me for that background image every day. What's also interesting is that every single one of those devices was a client of Virgin Media in the U.K. So when one of Google's servers grabbed that XML file, Google served it to 61 people. And over the past year, 500 unique Feedfetcher source IPs have asked me to serve those modules over 15,000 times. So I could have touched quite a few of their users with something a little more malicious than a modified background image. Here's some more fun with Google. If you haven't heard of Postini, they were recently bought by Google, and they do anti-spam, email security, email archiving, and so on.
The way that their service works is you modify your DNS to point your MX record to their domain. The MX you point to will be your domain, then some four characters, then psmtp.com. One of the interesting things here is that the domain is so short that you can easily register all of the possible bit-flipped versions. The other interesting thing is that so many companies point their MX records to a single domain, and nobody thought that was a bad idea. So I registered just three possible bit flips for that name, and prmtp.com seemed to be the busiest. These are the queries that I received in a single month for a single bit flip of the psmtp.com domain. And these. And these. And these. If you use Postini, your mail has probably come to me at some point. So I don't think that anybody can say that Google doesn't take security seriously, but if anybody had considered what kinds of problems could result from heat-induced memory errors, they might have compensated for these kinds of things. Don't let a single short domain be such a factor in so much of your business, like Postini. If your domain is popular, you're probably already buying the typos; you might want to consider the bit flips as well. I highly recommend that people use response policy zones internally to correct mangled queries for their own domains. If you own gstatic.com and you operate a 95-degree data center, you probably want to make sure that every possible way that domain could be mangled doesn't allow a client to attempt to reach it external to your network. By the way, of all the domains that I explored, the only company to actually preregister all of theirs was Yahoo. So in this next section, I'm going to demonstrate some behavior that many people don't seem to understand entirely. In all honesty, Microsoft has done a very poor job of documenting it, especially as it changes. And some of that behavior really is just inconsistent.
I'm going to start by making sure that everyone understands how we've been told to expect devices to behave when querying DNS. Then I'll explain how that behavior becomes somewhat more unpredictable when we use search paths. Then I'll talk about some of the more misunderstood behavior, and then I'll bring it all together to demonstrate how dangerous this can be. So you type www.google.com in your address bar. Your computer sends a query to your local DNS server, and the job of finding and returning an answer now belongs to that guy. It asks a root, which refers it to a .com server, which refers it to the authority for google.com, where it finally gets an answer, which it sends to you. This is the normal behavior that you'd expect: your device had a question, which it sent to your local DNS server to do all the hard work of getting an answer. In reality, all of this only happens after some very important steps. Your device was trying to find an answer for www.google.com, but the process we just saw is only what would have happened if you had typed www.google.com with a trailing period, explicitly telling it that this was a fully qualified domain, meaning that its relation to the root of DNS is defined. Many people believe that a fully qualified domain would end at the com, which is incorrect. Without a trailing dot, everything is just assumed, which is where we get into trouble, and have fun. So whether you typed in www.google.com, just google.com, just www, or www.google.com with a trailing dot, these will all result in completely different behaviors. Much of this is configurable, but almost never is. So let's explore what actually happens in these situations. There are multiple features that influence decision making on the part of the client before it ever decides to send a DNS query. Two of these features are suffix search paths and DNS devolution.
Both of these have numerous configurable options that influence their behavior, and they act very differently between every version of Windows and service pack. So this is how most people would use suffix search paths. If your company name is Foo and you own foo.com, and your Active Directory name is ad.foo.com, you might put ad.foo.com and foo.com in your suffix search path and push that to your clients, either as part of the system build or in group policy. If one of your clients tried to resolve the short name www, the default XP behavior would be to first append ad.foo.com, then foo.com, and then send a NetBIOS query. If your device was querying www.phx, the default behavior would be to first query www.phx itself because it has two labels, then append the suffix search path, and then, because the entire name is less than 15 bytes, it will again attempt NetBIOS. Everything after XP SP3 will try to resolve www.phx with DNS, then with NetBIOS, and then stop. No suffix search paths are appended. And of course, since most people make design decisions based upon what was at one point expected behavior, this broke things, and this is when people start playing with all the registry and group policy settings to fix these compatibility issues. Then Microsoft broke DNS devolution. From before XP until Microsoft changed the behavior, in the absence of a DNS search path, Windows would use DNS devolution to find an answer. So without a search path, if the client was querying www, Windows appends the only domain it knows: its connection-specific domain, which it got from DHCP or from being a member of Active Directory. It first queries www.phx.ad.foo.com, then it begins walking up the domain, removing labels. It queries www.ad.foo.com, it removes another label, and finally queries www.foo.com and stops. It assumes your organizational boundary is foo.com and that you wouldn't want to query www.com.
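That devolution walk can be sketched roughly like this; the function and its parameters are my own illustration, not anything Windows actually exposes:

```python
def devolution_candidates(hostname, connection_domain, min_labels=2):
    """Sketch of classic Windows DNS devolution: qualify a single-label
    hostname with the connection-specific domain, then keep stripping the
    leftmost label until fewer than `min_labels` labels would remain."""
    candidates = []
    labels = connection_domain.split(".")
    while len(labels) >= min_labels:
        candidates.append(hostname + "." + ".".join(labels))
        labels = labels[1:]  # walk one label up toward the root
    return candidates

# Original two-level behavior:
print(devolution_candidates("www", "phx.ad.foo.com"))
# Hotfixed three-level behavior stops one step earlier:
print(devolution_candidates("www", "phx.ad.foo.com", min_labels=3))
```

With min_labels=2 the candidates are www.phx.ad.foo.com, www.ad.foo.com, and www.foo.com; raising the limit to 3 stops the walk at www.ad.foo.com, which is exactly the kind of change that breaks designs built on the old behavior.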
Microsoft changed this behavior because obviously not all organizational boundaries are two levels, and if your top-level domain was co.uk, with a connection-specific domain of ad.foo.co.uk, the default behavior would have included querying www.co.uk, which would have fallen outside of your organizational boundaries. So the fix, introduced by a random security hotfix, was to change the number of levels that must remain in the name from two to three. Devolution in this case stops at foo.co.uk, which would have been desired here. But in our example of foo.com, the new behavior introduced by that random hotfix is to stop at ad.foo.com, and this broke hundreds of thousands of clients at companies whose designs depended on that original behavior. So what did these companies do? Did they change their infrastructure design to fit the behavior of a random hotfix? No. They changed the behavior back to the way it was. This is the decision tree that Microsoft uses before making a DNS query. I know it looks simple, and you're wondering how anybody could ever misunderstand this, but there have been dozens of updates to this behavior from version to version, and they've documented it once. If one of these branches changes and breaks the way that you're using DNS, you'd have to push out modified settings to restore that original behavior. Then you have the problem where new updates change the way that the settings you applied behave. But maybe next time it doesn't break anything; it just changes the behavior enough to do something you wouldn't expect. So either through registry changes or group policy, companies push out changes to restore the original behavior, and some of them don't get it right. Or the behavior of those modifications is changed by another hotfix. They push out changes to restore that lost behavior because they want clients to look in foo.com again. But what happens in many cases is they don't just restore the two-level devolution.
They remove the original two-level limitation that existed in the first place. Thank you, Microsoft. Starting in Windows 7, though, it defaults to three levels: we'll let you change it to two, but we'll not allow it to devolve to one level. So that means it's fixed, right? No. What about BYOD, mobile devices, and all those previously broken Windows XP configurations? So I set out to test how many broken configurations might still be out there. I registered some dot-com domains that I thought might be commonly used in enterprise environments. The first domain is the short name that Office Communicator will query when looking for its SIP server. The next two are short names I found by Googling for the name of the registry key that holds the web proxy. So I pointed these names to my server and I waited to see if clients would contact me. And they did. After registering the sipinternal domain, I began receiving requests from Office Communicator clients. This example was from a DHL asset, and there are thousands of random devices all over the world trying to register with me. I've only done a little playing with it, but it certainly looks like there are a few attacks that a malicious SIP server could perform against a Communicator client, but that's going to be my next talk. For proxy-phoenix, there were some random endpoints from IBM and HP that began asking me to be their proxy. It turns out they both share a client in Phoenix that pushes proxy-phoenix as a short name to their clients. I thought that was interesting, but set-proxy.com turned out to be very interesting. The first hits that I received for it were from thousands of Windows clients attempting to download a proxy PAC file. Investigating that source IP, I found that it was registered to Arthur Andersen, the failed accounting firm that went down with Enron. Accenture was spun off from their consultancy group, which makes more sense of the next part.
This traffic was from Accenture, and it looks like their mobile device policy sucks. They're pushing a configuration that references the location of their proxy PAC file by short name, and there are thousands of iPhones and iPads appending a .com to that short name and asking me for their PAC files. And even though they're pushing a proxy configuration that the clients are clearly not getting, those clients are still allowed to go out directly to the internet to ask me for it. Looking closer, I see that not only am I getting requests from Accenture, but the employees they have on site at their customers are contacting me as well. So Accenture, through poor DNS configuration, had not only opened themselves up to having every one of their devices hijacked, but introduced a foothold into all of their customers' networks. I had devices contacting me directly from IBM, GE, HP, Dow, Nokia, Merck, and Medco, all asking me to be their proxy. I would have expected a little better from Accenture, but not much. The lesson here is, watch your DNS traffic: just because Windows behaved one way doesn't mean it's going to behave the same way after the second Tuesday of the month. You need to understand what normal traffic looks like before you make changes, so you have something to compare it to. This one's my favorite. So I've demonstrated some fairly unique ways that DNS can be dangerous. Bitsquatting is not a huge threat, but it is an interesting one to play with. Unexpected behavior introduced by Microsoft patches combined with unique configuration is somewhat understandable. But one of the worst things that I've seen companies do from a DNS perspective is 100% self-inflicted and careless. What I'm about to demonstrate, I've seen companies do repeatedly over the years. Sometimes it's an accident; sometimes it's just willful disregard. But it always invites an attacker to completely own every piece of infrastructure you have.
And those invitations even get left on forums all over the internet, waiting for someone to accept. So the problem I'm talking about is when companies use domains that they don't own. Those domains get put into suffix search paths, pushed to all of their clients, and inevitably posted all over the internet when somebody needs technical support. So I set out to explore how many companies I could easily find that were doing this, and it wasn't hard. I started with a little help from Google. Searching for the name of the registry key that stores the search list, or for the output of an ipconfig, will return thousands of hits from bleepingcomputer.com and other technical support forums that encourage users to post information about their workstation configuration to help troubleshoot. So I scraped Google for the results of these searches, created a unique list of all the domains, and then began registering them. So I stumbled upon this ipconfig output, and the name rsquanta.com made it into my list of domains to register. Immediately after registering that domain, I began receiving a flood of requests for it from thousands of devices. I had no idea who this company was, but I was eager to find out. It turns out this company is called Quanta Computer. They're a massive Taiwan-based manufacturer of electronic hardware. They have 60,000 employees worldwide, and they build hardware for Amazon, Apple, Cisco, Dell, LG, HP, IBM, Rackspace, Cloudflare, and others. They designed and built the OLPC $100 laptop, and they worked with Facebook to design and build their new servers. So from their devices, I was receiving queries for proxy auto-detection, various proxy names, SMS, WSUS, mail servers, and file transfer servers. There are dozens of ways that I could silently hijack these devices, inject exploits, steal credentials, or intercept file transfers. When they're asking me to help them locate resources, a man-in-the-middle attack would be trivial.
Then I noticed the source of all the requests I was getting, and that's what really blew me away. I was seeing regular traffic from their assets sourced from their customers' networks. These queries prove there are Quanta assets onsite at Cisco, Apple, 3M, and Dell. So they must have employees onsite helping to design that hardware. And these are just some of the customers that I know they have employees located onsite at, because those employees query me for everything they're trying to access. Those assets could be a foothold into these companies. And what are the odds those devices have sensitive intellectual property? They're pretty good. There's plenty of passive information that I can gather from their traffic as well. I've enumerated every device name on their networks. I can see traffic from companies that they don't publicly do business with, possibly indicating a new contract. I can see traffic suddenly stop from a company, indicating they've lost a contract. I can track wherever they go. I can see when they're traveling. I can see that there must be an open Wi-Fi at a dry cleaners near one of their offices, because hundreds of their devices have come from there from time to time. And I can even see that they have people in town for Black Hat and DEF CON. We'll talk later, guys. So this kind of mistake is serious. Don't let this be you. Verify your internal configurations. Monitor the internet for details of your internal configurations being leaked; places like Pastebin and BleepingComputer are notorious. Monitor your DNS logs to verify that your clients, and the clients of your onsite vendors, are querying what you expect. If you collect your DNS logs, look for the most commonly queried domain per individual device. You can easily identify every one of your corporate assets, given that each device's most commonly queried domains are going to be whatever is in its suffix search path.
The domains in your onsite vendors' search lists are going to be their most commonly queried domains. So a couple of years ago, I started buying expired domains that were previously used as command and control servers. It's been a lot of fun. First, I wanted to understand how much residual infection was still out there. But second, I wanted to understand what kinds of devices were still infected, and where. Finding lists of blacklisted domains is easy, so I had thousands to choose from, and with a 99-cent domain sale at my registrar, I started buying a few. This was the first one that I got: MicrosoftWinnerSecurity.com. Now, this was the C2 server for a SpyEye variant. This bot hooks various functions to grab URIs and POST data, and then just sends them to this domain as an unencrypted POST. I don't have to do anything. I send it a 404 every time it talks to me, but it just wants to keep sending me stolen credentials. This bot reports its unique ID, the process name, the function that was hooked, and the payload. Now, bots like this are a bit different from others, as many of them will phone home and check in to their command and control servers, but they only deliver payloads if they're instructed to. This one blindly posts the payload whenever it's collected. I've now registered dozens of domains just like this one, with thousands of devices still infected. And even though these botnets were taken down by various groups, they just allowed the domains to expire. So what's the point in investing all that effort into taking down a botnet when you just allow the domain to expire and the original botnet master can go re-register it? From my first six dollars, I had 23,000 devices reporting in to my server. These domains were all taken from published blacklists. So why are so many companies allowing clients to contact domains that have long been considered malicious?
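Checking for this is cheap. Here's a minimal sketch, assuming your logs can be reduced to (client, domain) pairs; the log format and domain names are illustrative assumptions. It flags both blacklisted domains and domains queried by only a single client on the network, which is exactly the kind of outlier worth a manual look:

```python
from collections import defaultdict

def flag_suspicious(log_rows, blacklist):
    """Reduce (client, domain) DNS query-log rows to two review lists:
    domains on a known blacklist, and domains queried by exactly one
    client. Adapt the row parsing to whatever your resolver emits."""
    clients_per_domain = defaultdict(set)
    for client, domain in log_rows:
        clients_per_domain[domain.lower()].add(client)
    blacklisted = {d for d in clients_per_domain if d in blacklist}
    singletons = {d for d, c in clients_per_domain.items() if len(c) == 1}
    return blacklisted, singletons

rows = [
    ("10.0.0.1", "evil-c2.example"),     # hypothetical blacklisted C2
    ("10.0.0.1", "google.com"),
    ("10.0.0.2", "google.com"),
    ("10.0.0.3", "odd-domain.example"),  # only one client ever asks for this
]
hits, lonely = flag_suspicious(rows, {"evil-c2.example"})
```

Run daily over the previous day's logs, the two result sets stay small enough to review by hand.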
We can all do a better job of controlling one of the easiest mechanisms we have to prevent this. So from mining my DNS logs, I stumbled upon a domain that looked really unusual. One client out of 82,000 devices on my network was querying it regularly, and when I investigated it, I found that it had been expired for six months. This domain was not on any blacklist, and there wasn't a single hit on Google. But it just looked too suspicious. So I bought it. And the clients started flooding in. This was just a basic form grabber. It was sending its payload unencrypted as a POST, and there were 10,000 infected devices all over the world. Some of them were in very highly secure areas. I found that one of the infected devices was at a wastewater treatment facility in Phoenix, so I reached out to somebody there so I could get a sample of this. It was difficult to identify the infection, but once they did and sent me a sample, I uploaded it to VirusTotal to see what it was. And out of 42 AV scanners, not a single one had a signature for it. And this malware was two years old at this point. I thought it was strange that nobody would have the signature, but what was even more strange was, why had it been abandoned? It was still completely invisible to AV, and it had some really high-value devices infected. And here's one of the ironic examples: this is a QA engineer at Symantec. I hope he's not here. Here I can see him working in their web-based ticketing system, and in this case, he's closing out a ticket about a security issue with their SIM product. I think it's somewhat sad that a piece of malware that's two years old is still on an antivirus company's PC, assigned to an engineer who's working on a security defect in a security product. And before submitting the sample to all the AV companies, there were quite a few infected devices that would be considered high value. There were devices belonging to court clerks, devices at the U.S.
House of Representatives, money transfer offices, newspapers, even a teller at the federal employees' credit union in Langley. This was definitely the best 99 cents I've ever spent. So a frightening thing to consider is that there are likely millions of infected devices trying to reach command and control servers that no longer resolve. If you use OpenDNS or the local DNS of almost any large ISP, they'll give infected devices an answer for anything they're trying to resolve. It doesn't matter if it doesn't exist. They spoof a reply for anything that doesn't resolve and point you to the IP address of a default landing page, because they want to serve you ads. The logs on the server that hosts that page contain untold amounts of stolen data. Do you think they're doing anything to protect that? They're trying to monetize misdirected traffic with ads, and they're inadvertently stealing private data. You'd expect that if a piece of malware tried to resolve a command and control server that had been taken down, it wouldn't go anywhere. Sorry. OpenDNS and most ISPs are a little bit more helpful: they'll give you an answer for anything that doesn't resolve, whether it exists or not, and then that malware happily posts its stolen payload to that server. So I've now registered quite a few domains that aren't on any blacklists. When I manage to get my hands on a sample, I commonly find that it's undetected by most of the major AV vendors. All of them I found by mining logs that people give to me. And I hope everybody considers mining their own logs. Some of the easier things that you can look for: domains being queried for the first time. Any domains that are new should have their registration dates logged. Log the name of the person who registered it, the country that they're in, and the authoritative name servers. Look for domains that resolve to 127.0.0.1; they're probably botnets that have recently been taken down.
Look for domains only being queried by one client on your network. It's pretty easy to find things that stand out as unusual, and you could generate a list every day that's small enough to manually investigate. And here are some resources that you should check out if you want to get started with DNS intelligence. Bro supports logging DNS queries; I highly recommend that. The DNS anomaly detection script is a simple thing that will look at your top domains from yesterday and compare them to the top domains today. When something changes, like the next Microsoft patch that gets rolled out, you'll see a big change the day after. So that's a good one. Passive DNS allows you to capture queries and answers from a PCAP file or an interface. And response policy zones are a great feature in BIND that allow you a fine-grained approach to blocking and redirecting queries for specific domains, based upon the answers or the authoritative servers for the domain. And when used with DNS sinkholes, you can impersonate the remote server so you can log exactly what your clients were trying to send. And these are some white papers that you might find inspiring. Start mining your DNS traffic. You're going to find much more than you expect. And thank you for your time. Please feel free to contact me. Thank you.