So, hi, everybody. Good day. I'm Matt Ryanczak, ARIN's network operations manager, and I'm going to talk about the IPv6 implementation at ARIN: basically how we did it, when we did it, the problems we encountered, the successes we had, things like that. I'm really curious how many people were in John's presentation an hour ago. Okay, it looks like a lot of you were. He covered a lot of IPv6 history and things like that, so I'm not going to get into that. I'm going to assume that you know what IPv6 is and what it does, and that you probably need to look into getting it soon. With that in mind, I'm going to pretty much skip this slide. One thing that's worth mentioning, though, is that IPv6 is still in development. There have been several updates in the last 10 years that are worth knowing and reading about; most of them relate to security or to routing table bloat and growth. You can go back and look at this slide later in the conference materials, and it'll give you some good reading material.

I also want to mention my little anecdote for this presentation: what happened to IPv5? John hinted at this during his presentation, but there's really a bigger bit of trivia here. It's not just IPv5; there were a bunch of different versions of IP proposed while the next generation after IPv4 was being developed. IPv5 was a streaming protocol. It was developed, I think, by BBN originally, but Apple, IBM, Sun, a bunch of people looked at it in the 90s. This was before IPv4 had things like QoS and the other features that made multimedia really possible, so a streaming protocol was thought of as the answer to doing any sort of real-time multimedia, whether voice or video or whatever. Eventually, because of QoS and other features in v4, and especially hardware that really didn't suck, it fell by the wayside. But it hints at a bigger thing: there was also an IPv7, an IPv8, an IPv9.
There were actually two different versions of IPv9: two are referenced here, and one of them never had an RFC, because it was developed in China and never really saw the light of day. I know that it had 256-bit addresses, so it had absolutely huge headers. The argument was that IPv6 wasn't big enough. If you heard John's presentation, he made the golf-ball-versus-the-sun comparison: there are a lot of v6 addresses, and the danger of running out is pretty minimal, at least during our lifetimes. So maybe that's some other generation's problem.

Here's the timeline for implementing IPv6 at ARIN. We started really thinking about it in 2002. V6 was pretty much finalized, and I guess deployed, in 1999; the RFC came out in December of 1998, and I think the IAB ratified it in the spring of 1999. So in a way we were a little bit behind. We looked at things; we wanted to see if this was something we could do. I know that RIPE NCC, the European counterpart to ARIN, had already done this, and I think APNIC had done it, and we really wanted to get on the bandwagon and do it ourselves. So we started planning for it. As you can see, we've deployed, well, five or six IPv6 networks during my time at ARIN, and these days we've standardized on dual stack: we do v6 everywhere. I'll explain that as we go through this timeline.

The first network we set up was a little standalone stub network based on a Sprint T1 circuit. We used a Linux box with a Sangoma T1 card. We actually had Cisco routers that could do v6; Ciscos have been doing v6 for a long time now. I'm not completely sure I remember the rationale for choosing a Linux router, other than that it was kind of cool. I remember that we had to hack the Sangoma T1 drivers to get them to work with v6, and Sangoma was actually pretty thankful for that.
We gave them the code back, and I think they used it for a while. It really helped us with troubleshooting, and that was kind of serendipity. We didn't intend for that to be the case, but having tcpdump and other tools like that directly on a router was really, really helpful at a time when things were breaking a lot.

We used an OpenBSD firewall. I can't speak highly enough of OpenBSD in this kind of situation. Its support for v6, even back in 2002 and 2003, was exceptional: full stateful filtering for v6, the ability to combine v4 and v6 rules together. It's really, really good, and we use it to this day. I don't know if you're familiar with OpenBSD as a firewall, but with CARP and all that stuff, it just rocks, and it supports v6 really well. Another feature that's really nice when implementing a v6 network, or any sort of experimental network like this, is its ability to log dropped packets directly to pcap files, so you can look at them in tcpdump basically in real time. This also proved invaluable as we tuned our rule set, trying to figure out what the right thing to do was, because we were really worried about security. That was one of the reasons this was a segregated network. We didn't do dual stack; we were really afraid of stack-smashing attacks, the ping of death. I don't know if people remember the mid-90s, around '95 or '96. There was the infamous ping of death, where one packet could take down, well, pretty much the entire internet for a while there. It affected everything from routers to Unix boxes to Windows boxes, just about everything. We were really worried about that, so we kept this network completely separate. We deployed our website on it; however, we did not have a quad-A (AAAA) record for www.arin.net at this time. We used v6.arin.net. We also had a DNS server and an FTP server.
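To make the firewall discussion concrete, here is a minimal pf.conf sketch of the kind of ruleset described above. The interface name and prefix are hypothetical, and this uses modern pf syntax; it is an illustration, not ARIN's actual configuration:

```pf
# Hypothetical external interface and service prefix
ext_if = "em0"
v6_net = "2001:db8:100::/48"

# Default deny, logging drops so they can be read with tcpdump in real time
block log all

# Allow our v6-facing services in, statefully
pass in on $ext_if inet6 proto tcp to $v6_net port { 80, 53, 21 } keep state
pass in on $ext_if inet6 proto udp to $v6_net port 53 keep state

# Never block the ICMPv6 messages that path MTU discovery depends on
pass inet6 proto icmp6 icmp6-type { toobig, unreach, timex, paramprob }
```

Logged drops land on the pflog interface and can be watched live with `tcpdump -n -ttt -i pflog0`, which is the real-time inspection trick mentioned above.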
ARIN has a big FTP site that hosts a lot of the ARIN zones and a lot of statistical information about the IP addresses that we allocate. You can go there anonymously and look around; there's all sorts of interesting historical information on there. In a way, it reminds me of ftp.uu.net from way back when. It has tons of stuff on there, and it's interesting to go explore.

As for issues we had with this network: we had a lot of path MTU issues. What that means, most of the time, is that somewhere there's a tunnel on the network, and Sprint certainly had tunnels at this time. Even though our T1 to them was native IPv6, the upstream Sprint infrastructure didn't support IPv6 at all, so they set up tunnels to get around that. This caused us problems, because you'd have these sudden drops in MTU. V6 has a mechanism that is supposed to detect this, path MTU discovery; however, it didn't always work for us at the time. I think that's partly because our Linux router didn't handle it well, and on the OpenBSD side we had firewall rules that dropped the wrong kinds of ICMP packets. So fragmentation would break, and packets would just fall on the floor.

We also had a lot of routing issues. We saw everything: Sprint would drop routes here and there, but frankly, they usually weren't the problem. We'd see entire countries fall off the network. Somebody would be doing maintenance or whatever, and all of a sudden Finland would be gone. You wouldn't see them anymore; they'd come back maybe the next day or the day after. You'd see the same thing with a lot of corporations. They'd just go away, and some of them would be gone for weeks. That's because usually nobody noticed. Nobody was using this stuff, so it didn't really matter. Over the years, service has gotten a lot better. We actually just retired the Sprint circuit early this year. It served us really, really well, but things have changed.
We're able to get v6 on a lot more circuits now, and it just didn't make sense to keep using it. But that was our first foray.

In 2004, we basically did the Sprint circuit all over again. The reason we did this is, well, Vint Cerf offered it to us; that's probably the easiest explanation. We have a data center at Equinix out in Ashburn, Virginia, and Vint Cerf, who is somewhat affiliated with ARIN, was right down the road at the time. He was working for MCI, which was part of WorldCom. He had an experimental IPv6 network, he knew that we were running v6, and he was really interested in getting us to use his network and basically help test it; it would give them traffic they could look at. This was very similar to the Sprint network. The biggest difference is that we used a Cisco 2800 router this time; it just made more sense. In fact, we later went back and replaced the Linux box with a Cisco router as well, after the Linux box died. Pretty much everything else was the same, though. We stuck with OpenBSD, which continued to work really well for us. Same website, same DNS, same FTP, still a segregated network. We were really worried about security issues, about people compromising v6 or bringing down hosts and causing problems on our v4 network, so we kept everything completely separate. We still had a lot of path MTU discovery issues, and a lot of routing issues; nothing really changed there. That's really about all there is to say about that network, until we got to here.

I mentioned earlier that we were in Equinix. This was really the WorldCom network. What happened is that WorldCom went belly up; I think people remember that. We had this kind of zombie T1. I think we got billed for it for a while, but eventually the bills stopped coming and it got lost in the ether. We wanted to keep the service up, and we kind of used it as a DR site for IPv6. Around the same time, Equinix started Equi6IX, which was an experimental IPv6 exchange.
We talked to some people at ARIN meetings and NANOGs and things like that, and figured out we could join it; it was free at the time for all comers, since it was in beta. All we had to do was pay for a cross-connect, and we could get transit. There's an organization called OCCAID. OCCAID was kind of an IPv6 evangelism outfit; I think it's still around today, and they'll provide transit to just about any network, either over tunnels or natively if you can get to them. I think you have to be an ISP or an enterprise, and there's an agreement you have to sign, but it's a pretty good deal. It's a lot like HE.net today, or gogo6, or SixXS, and some of these other tunnel brokers; they'll do very similar things for you. However, OCCAID will offer native transit if you can reach them on an exchange somewhere, which is pretty cool.

We stuck with the same Cisco router, I think it was a 2811, and the OpenBSD firewall, same website. The biggest difference now was that, since we felt we had good v6 connectivity for the first time, we added a quad-A record for www.arin.net. This proved to be kind of a neat thing. It meant that people finally noticed when they could no longer get to ARIN, and started emailing us. That led to opportunities to help people figure out why their v6 was broken. You have to understand that we don't make money from our website; we aren't Google or Facebook or places like that. Having a quad-A associated with our main website really was an alarm bell for people. The people who come to our website are usually ISPs or other networks, and when they notice that things are broken, they do what they normally do with us: they email hostmaster@arin.net and say, hey, your website is down. We say, no, it's not; it looks like your v6 is broken. That would lead into this whole conversation between my department and their ops department, and we'd figure out what was wrong between us and them. Sometimes it was them, sometimes it was us.
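Publishing that quad-A alongside the existing A record means a dual-stack client sees both address families the moment it resolves the name, which is exactly why broken v6 suddenly became visible. A quick sketch of the client's view (using `localhost` here so it resolves without the network; a dual-stacked site behaves the same way):

```python
import socket

def address_families(host, port=80):
    """Return the set of address families the resolver offers for a host."""
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    return {family.name for family, *_rest in infos}

# A dual-stacked name yields both AF_INET (A record) and AF_INET6 (AAAA).
# Which one a client tries first is governed by the address selection
# rules of RFC 6724, which generally prefer IPv6 when it is available.
print(address_families("localhost"))
```

That preference for IPv6 is why a host with a broken tunnel hangs on a dual-stacked site instead of quietly falling back to v4, at least in that era before happy-eyeballs-style fallback.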
Usually it was somewhere in between, and we could get a filter lifted, or figure out where there was a tunnel or a routing issue, and get all of that fixed. By having a quad-A for www.arin.net, we were able to actually make service better for a lot of people, and I think educate them, which was really, really cool. I think it was the first time that my department really got involved in helping customers troubleshoot their networks.

This was also the first time that we started to play with dual stack. Because we had a 100-megabit ethernet connection into the v6 internet, we were really interested in getting away from the segregated network. It made pushes really hard; it made everything really hard. We had to have these weird bastion hosts that had both v4 and v6, where we could stage content and then push it over later. We wanted to get away from that, and we really wanted some hosts with access to this great 100-megabit v6 circuit. So we started to dual stack, very slowly. That slow rollout ended with v6 on our entire backbone after about a year. I think by the end of 2007 we had v6 across our entire backbone, and in 2008 we finally rolled v6 out to our client network, where we had hosts on the network: Windows XP hosts, Linux boxes, Macs. Basically, because of this network and how well it worked, we got to the point where we were deploying v6 everywhere we could in a dual-stack configuration.

And that led to these networks, which were dual stack by default. There are two separate networks here: one powered by NTT, the other powered by Tiscali at the time, which is now Tinet. These were ARIN's next-generation public-facing services networks, basically the home for all of our public-facing stuff.
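Dual-stacking the hosts themselves, as described above, often comes down to whether each service can listen on both address families. On many platforms a single socket can do it. A minimal sketch, with the caveat that `IPV6_V6ONLY=0` is the common default on Linux but deliberately not on OpenBSD, which keeps the two stacks separate:

```python
import socket

def dual_stack_listener(port):
    """Create one TCP socket that accepts both IPv6 and IPv4 clients."""
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    # 0 = also accept IPv4 connections on this IPv6 socket
    s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    s.bind(("::", port))   # the v6 wildcard address
    s.listen(5)
    return s
```

IPv4 peers then show up as v4-mapped addresses like `::ffff:192.0.2.1`, which is worth remembering when writing ACLs and parsing logs on a dual-stacked box.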
So the DNS, all of the /8 in-addr.arpa zones, the arin.net zones, the Whois clusters, the internet routing registry, and eventually a lot more: list mail servers, web servers, just about everything you can get to over the public internet, we were placing on these networks. As I mentioned, they were dual stack by default. They were completely standalone networks, basically stub networks, and we bought all new gear for them. We used what were fairly new at the time, Cisco ASR routers, I think ASR 1004s, all gigabit. They worked really, really well; we had no problems with those. No firewalls at all. We had load balancers in front of these services. We used Foundry boxes at the time and, to be honest, had some problems with them. Their v6 support was beta, and for those Foundry models, v6 support will be in beta forever: they are not going to release a production version of that software. I think we helped them take it as far as it could go. If you look at their newer gear, it supports v6 far better than these boxes do. But we were able to get things to work. While we missed a lot of advanced v6 features, and there's certainly not parity between what you could do over v4 on those Foundries and what you could do over v6, we got the support we needed, and things worked pretty well. These networks are still around today. They've seen some upgrades, and we're getting ready to replace the Foundry boxes with newer gear that has better v6 support, so we can do things like GSLB over v6 and some other stuff. We're working closely with the vendor, which is Brocade now; they bought Foundry, and we're working with them on their code to get all of this working. It's actually been a really good relationship. They really listen to us, which is kind of surprising, but also cool.

Other networks that we've set up over the years are meeting networks. ARIN has two meetings a year; I think John mentioned that earlier.
The meetings discuss policy, and starting around 2005 we were able to take advantage of the good v6 connectivity we had developed at ARIN to set up our own tunnel broker, basically. No matter what network provider we had in a hotel (usually we get a sponsor to come in and give us a DS3 or whatever for our meeting networks), we could provide our own v6. We'd set up a router on the edge, tunnel back to the ARIN offices, and get v6 to our members, so we could say: all of our networks are v6-enabled, try it out, let us know if you can get it to work. This led to us doing tutorials where we'd spend half a day with people at our meetings, helping them configure IPv6 on their laptops and get it tested, whether on Windows XP or Vista or Linux or Mac or whatever. We'd set up DHCPv6 servers, or we'd use RA, or we'd use SLAAC, try out these various things and get them working. It was a really good exercise both for ARIN's staff and for the members who attended.

This evolved over the years into a testbed for all sorts of transition technologies. We've tested NAT-PT, and IVI, which is kind of a variation on NAT-PT developed in China; I think it's basically dead as well. It's worth noting that NAT-PT itself is dead in the water; nobody's really using it anymore. Things like NAT66 and NAT64 are in the works at the IETF. This goes back to the IETF finally starting to take the transition seriously, which is fairly new. We've also tested a lot of the CGN technologies and DS-Lite.
You know, John mentioned Comcast earlier and the work they're doing, deploying v6 in several different modes, and one of the things they're doing is called DS-Lite. DS-Lite basically deploys RFC 1918 v4 space on the customer's internal network, then uses v6 transport from the CPE up to another big NAT box in the provider's network that strips off the v6 and converts back to v4 to get people to the v4 internet. We've tested that at our meetings, and it's worth noting that you can test it yourself. The implementation was developed by ISC, and it's completely open source; you can go to the website and download it. There's an image for OpenWrt, or you can deploy it on any Linux box, and it works really, really well, surprisingly so, at least on a small scale. We'll see how it works in large-scale rollouts; Comcast is finding that out right now, I believe.

So how much v6 traffic do we see? We don't see a whole lot. I don't know if you can see the red on these graphs, but there is some red on there, I promise. We get a lot of traffic. ARIN is a tiny company, fifty-something employees, but we get a lot of traffic: many hundreds of megabits of DNS traffic, for instance. So in a way, half a percent of DNS traffic over v6 is actually really good. Whois? I think that statistic is a little bit off; it's gone up a little, so I think by the end of the year we'll see probably about half a percent of Whois traffic over IPv6. Web traffic is kind of interesting. We get over half a percent of web traffic in general, which is high for most sites; if you look at other sites out there, they're a little bit less. And if you count ARIN itself into these statistics, we get almost 10 percent of our traffic over v6, because internally so many of our hosts are dual stacked that all of our push scripts and everything else go over IPv6. All of our monitoring for web content and so on happens over v6 now.
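The DS-Lite transport described above is, at the packet level, just IPv4 carried inside IPv6: the v6 header's next-header field is 4, meaning an encapsulated IPv4 packet. A byte-level sketch of that softwire framing, using made-up documentation addresses for the CPE and the provider-side NAT box:

```python
import ipaddress
import struct

def encapsulate_4in6(inner_v4_packet, src6, dst6):
    """Wrap an IPv4 packet in an IPv6 header (next-header 4, IPv4-in-IPv6)."""
    ver_tc_flow = 6 << 28          # version 6, traffic class 0, flow label 0
    payload_len = len(inner_v4_packet)
    next_header = 4                # 4 = encapsulated IPv4, as DS-Lite uses
    hop_limit = 64
    header = struct.pack(
        "!IHBB16s16s",
        ver_tc_flow, payload_len, next_header, hop_limit,
        ipaddress.IPv6Address(src6).packed,
        ipaddress.IPv6Address(dst6).packed,
    )
    return header + inner_v4_packet

# Hypothetical softwire between a CPE and the provider-side NAT:
frame = encapsulate_4in6(b"\x45" + b"\x00" * 19, "2001:db8::2", "2001:db8::1")
```

The provider-side box just strips the 40-byte v6 header and NATs the inner v4 packet, which is why the mechanism is so simple to deploy on commodity hardware.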
So we actually generate quite a lot of v6 traffic internally, and I think that helps speak to how well it generally works. It does just work.

So that's kind of the end of that story. I want to talk a little bit about security. This is DEF CON, and I think this is something everybody here likes to think about. We learned a lot about security, and I think it's worth noting how security changes in v6, what's different. Many things are the same; John hinted at this in his presentation as well. TCP is still TCP, UDP is still UDP. You still have a source, you still have a destination, you have things like QoS and all these other mechanisms. That's all the same: if you understand it in v4, you understand it in v6. But there are a lot of differences as well, and a lot of unknowns. V6 stacks really aren't all that well tested. Nobody really knows whether there are pings of death out there, and that's worth noting; it's something you need to be aware of. It's just new territory. Built-in features of v6 that promote security can also promote insecurity if they're used for evil, and we'll talk about that a little bit.

Also, multiple protocols mean multiple policies. I don't know how many people in this room were around in, say, 1993 to 1995, when TCP/IP was really taking off, not just on the internet but also on corporate networks. At that time I was working on Novell networks, so we had IPX, and in one particular place I worked there was a nationwide IPX network. I could see printers in Alaska. It was basically all done using RIP, and it was a big, well, it was a big mess, actually. They also had IP on their network at that time, and the access control lists, and just the policy around routing and everything else, were an absolute disaster. You could easily have this same problem with v6 and v4 if you aren't careful.
I think the policy side of this is probably the most important thing. If you want to have a secure network, you have to make sure that any decision you make about what happens in v4 also happens in v6 in a similar way, at least wherever possible. More protocols, more problems.

IPv4 and IPv6 are not the same, and IPv4 features don't necessarily have IPv6 equivalents. For instance, IPv6 doesn't have ARP. This is a huge thing. People are used to securing their networks with layer 2 / layer 3 ARP tricks, where MAC addresses are bound to ports and things like that. You can't necessarily do that in v6 the same way, because v6 uses ICMPv6 neighbor discovery instead. You have similar concepts there; ICMPv6 is very multicast-based, and you still have MAC addresses on hosts and so on. However, finding a switch that supports filtering those ICMPv6 packets in that way right now is very difficult. D-Link makes some stuff that works, believe it or not. Cisco makes some stuff that works, but you have to have the right IOS image and things like that. It's just not very easy to do, so it's something to be aware of. One thing you can do right now, if you need that level of security, is look at 802.1X, which is basically protocol-agnostic and works at the ethernet layer. You can simulate a lot of that same access control using 802.1X, which at least helps you right now. I expect that within a year or so you'll see just about every switch Cisco makes, everything Juniper makes, and, given that D-Link has switches out now, even Linksys and other people, start to make managed switches that support some of this v6 filtering functionality.

It's also worth noting that ICMPv6 is critical to v6 functionality. Fragmentation in v6 is drastically different than in v4, and ICMPv6 plays a really big role in that.
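There is actually a standard answer to which ICMPv6 messages a firewall must pass: RFC 4890 spells it out. A small sketch encoding the core of that guidance, which a ruleset generator or audit script might use:

```python
# Per RFC 4890, these ICMPv6 error messages must not be dropped by a
# transit firewall, or path MTU discovery and error reporting break.
MUST_NOT_DROP = {
    1: "Destination Unreachable",
    2: "Packet Too Big",        # filtering this one silently kills PMTUD
    3: "Time Exceeded",
    4: "Parameter Problem",
}

# Neighbor discovery also runs over ICMPv6, but those messages are
# link-local and should never be forwarded off-link in the first place.
LINK_LOCAL_ONLY = {
    133: "Router Solicitation",
    134: "Router Advertisement",
    135: "Neighbor Solicitation",
    136: "Neighbor Advertisement",
    137: "Redirect",
}

def transit_filter_ok(icmp6_type):
    """True if a transit firewall may drop this ICMPv6 type by policy."""
    return icmp6_type not in MUST_NOT_DROP
```

Echo request and reply (types 128 and 129) fall in the "allowed but droppable by policy" category, which is why pinging over v6 sometimes works through firewalls that otherwise look closed.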
If you filter ICMPv6, you're going to break your v6, and this is basically how it works. In v4, fragmentation happens at routers, or really anywhere a v4 packet is passed along; usually that's at a router. In v6, that's not the case. You send a packet out, and somewhere along the transit path a router says, this packet is too big, I can't pass it on. It sends an ICMPv6 packet back to the source host. The source host then fragments the packet itself and sends the fragments on. This is a huge difference. If you filter those ICMPv6 packets, the packets just fall on the floor: your host never knows its packet was too big and didn't get forwarded, and your connection will literally just die right there.

Another big difference is DHCP. John mentioned earlier that autoconfiguration was really designed for IPv6: router advertisements and all that autoconfiguration machinery were thought through for v6 and then back-ported to v4 in one way, shape, or form. When v6 came out, it had no DHCPv6; it just had router advertisements. DHCPv6 was added later to provide functionality that router advertisements just didn't cover. For instance, with RA you can't get name servers, at least not by default, and that's a big problem. You can't get network time servers and things like that. This has changed a little bit; there have been updates to RA that allow some of this functionality, but a lot of people still want DHCPv6, and it functions very differently. It's multicast-based and works nothing like DHCP in the v4 world, and that's worth noting.

Hardware and software support is less than ideal. This is just a fact, and I don't know that it's going to change overnight; it's changing slowly. I mentioned earlier that we've worked with Brocade, one vendor that's starting to introduce really good v6 support in their load balancers.
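The source-side fragmentation step described above can be sketched in a few lines: given the path MTU reported in a Packet Too Big message, the sender splits its payload into chunks that each fit under that MTU once the 40-byte IPv6 header and 8-byte Fragment extension header are added, and every fragment but the last must carry a multiple of 8 bytes. A simplified model of that calculation:

```python
def fragment_payload(payload, path_mtu):
    """Split a payload the way a v6 source host does after receiving a
    Packet Too Big message; returns the per-fragment data chunks."""
    # Room left after the IPv6 header (40 bytes) and the Fragment
    # extension header (8 bytes), rounded down to a multiple of 8,
    # as the fragment offset field requires.
    room = ((path_mtu - 40 - 8) // 8) * 8
    if room <= 0:
        raise ValueError("path MTU below the IPv6 minimum of 1280")
    return [payload[i:i + room] for i in range(0, len(payload), room)]

chunks = fragment_payload(b"x" * 3000, 1280)
```

Note that only the original sender ever runs this logic; a v6 router that cannot forward a packet reports back rather than fragmenting, which is exactly why the ICMPv6 report must be allowed through.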
But things like firewalls: finding good v6 support in firewalls outside of the open source world is difficult. I think Check Point and some of the big vendors have v6 support, but I don't think they have support that will inspect all of the extension headers and some of the other things yet. That's going to take a while to happen. The good news is that basic support is there now, I think, pretty much across the board, so you can do basic routing policy. There is some worry about people taking advantage of the optional extension headers in v6 to fool firewalls and do other things, because the firewall won't know exactly where to look in the header offsets to figure out whether a packet is bad or not. So probably the best thing to do is have really strict rules that drop all that anomalous traffic on the floor, then look through your logs and sort it out later. I've mentioned switches and load balancers previously; they certainly lack support, but it's getting better all the time. One area that is actually really good is routers. I think if you have a Cisco or a Juniper, it supports v6, and there's almost complete feature parity between the two, which is really, really great.

So, security through obscurity. What does this mean? It means that in some ways we don't really know much about v6 security yet, and this is something people need to be aware of. IPv6 has been in many OSes for a long time, 10 years or more. Even Microsoft has had v6 since the Windows 95 and 98 days; they released something in 1998, I think before the protocol was even in final form, that would give you basic v6 support on Windows 98, believe it or not. The KAME and WIDE projects provided IPv6 support for the BSDs and Linux basically while the RFCs were being written; they've had it almost since the beginning of time. And OS X has had it since it came out, because it was based on FreeBSD. But these stacks aren't very well tested. How many people in this room have run IPv6 on their laptops?
Some people have, but it looks like the majority of you haven't, and that's generally the state of things. A lot of this stuff just hasn't been tested. Applications haven't been well tested. It's not just a question of whether you can even put an IPv6 address into an application's input fields. Is it capable of logging a v6 address? If you have any sort of logging configuration, does it support v6 well? Apache is a great example of an application that works really, really well, but there are a lot of applications that just don't. Which leads to stack smashing, buffer overflows; who knows what we'll find with v6? I think there are a lot of unknowns out there in applications where people tacked on v6 support real quick without thinking about the consequences. That's all going to be exposed, and it's something we're going to have to deal with over the next few years. Good network monitoring, good log analysis, things like that, are areas where you can try to catch this kind of stuff. There are just many unknowns in v6.

The good news, though, is that exploits aren't well known either. I'm sure there are probably people at this conference who have taken v6 apart, read the RFCs, looked at all these stacks, and have some zero-days in their pocket right now, waiting to wield against some site somewhere. But I think those people are in the minority. The other good thing is that security practice today is a lot better than it was back in 1995 and '96. Back then, people didn't really think about security; there wasn't a lot of malicious activity on the internet the way there is today. When the v6 stacks for a lot of these hosts were designed, they were designed knowing that people were going to try to smash them, try to get into them. We have good practices for auditing code and things like that now that we just didn't have before.
So I think things are built a lot better now, and that will help us. It's also difficult to scan v6 networks, at least from outside the network. Nmap and tools like that don't work as well over v6. There are some multicast tricks you can use to discover v6 networks; some of those can be used remotely, but for a lot of them you have to be local, and basically 802.1X and other local network tools to secure your ethernet are good ways to mitigate those kinds of attacks. It can be hard to guess addresses; it can also be easy to guess addresses, depending on the addressing format being used. We'll talk about addressing formats in a minute, because that can be confusing in itself. But generally v6 addresses are just complicated, and it's not nearly as easy to sequentially walk through a v6 network to try to discover hosts that way. That can be helpful. In a lot of ways, everybody is starting over again. Everybody is going to have to explore this and figure out what the right thing to do is: how to exploit it, but also how to protect it. The good news is that you can't exploit something until you know how to exploit it, so at least right now I think the good guys have a little bit of a lead.

So, security features built into IPv6. These are actually really, really cool and very, very useful. You have the ability to set up encrypted sessions between hosts, cross-platform, and it just works. Right now it's mostly using pre-shared keys and things like that; however, opportunistic encryption, which you can do in v4, is coming to v6. It's really just a matter of people writing the software. This allows you to have on-the-fly VPNs, on-the-fly encryption between hosts, and authentication of hosts to each other. It all just works; it's really neat. You can use this to enhance your routing security.
So you can have routers running BGP between each other using AH to make sure there's no spoofing going on between those links, which can help shore up your BGP sessions and other routing connections. It can also provide security below the application layer. You can take your typical three-tier application, for instance, and use AH on the back end to make sure all of the hosts know who they're talking to; they're authenticating all those sessions. It's a lot like running SSL or something like that on your back end, but you're doing it down at the IP stack, and it should be lower overhead. I think eventually you'll start to see hardware that accelerates this stuff, just like you see some TCP and IPv4 header acceleration in hardware today. You'll start to see IPsec acceleration for v6, because this is just really, really handy. It gives you an additional layer of security that you don't really have today. Technically, I guess you could do this in v4, but nobody does, because it's just too hard. I think having it ubiquitously available is going to make it much more usable.

Of course, this also means this stuff can be used for evil. Encrypted packets can make deep packet inspection impossible, or at least difficult; it depends on who's doing the encrypting and whether you have the keys. Potentially, your packets are completely opaque. This can be used for a lot of evil. Think botnets, and command and control where you're completely blind to what's inside the packets. How do you know whether they're doing bad things or not?

AH is hard to configure. AH is the authentication header: basically, this is what allows two hosts to cryptographically authenticate each other. They know that they are who they say they are, which can help prevent spoofing and replay attacks and things like that. Right now, it's really hard to configure and maintain.
I think in some cases, it's not even cross-platform compatible. It will be eventually, though. I mean, it's in the spec and there's only really one way to do this. And eventually, you'll see interfaces like LDAP- and DNS-based configuration for this stuff so that it becomes naturally easy to turn on. I believe that soon you'll see it integrated in things like Active Directory, so it basically just works, which is pretty cool. But right now, it is difficult to use. IPv6 has a lot of potential to be used for backdoors, Trojans, command and control, things like that, because of these security features. As I mentioned, packets can just be opaque, and if you have a botnet that uses AH, for instance, to authenticate all of its botnet nodes as well as the command and control servers, you make it a lot harder for the researchers that study that stuff to break into them, but also to take them over and mitigate them or turn them off. That gets really, really difficult. I mentioned NAT here, and it's not that NAT doesn't exist in v6, but technically, right now, there really is no NAT. There's no NAT66, and this has one big security problem that I can think of right now, other than the fact that it's NAT and that causes its own security issues. And that's PCI. The PCI spec right now stipulates that you must have NAT on your network. If you're taking credit cards or any of that kind of payment stuff online, you must have NAT. You have no NAT in v6 right now. There are some RFCs out there for NAT66 and NAT64 and all this other stuff, but I don't believe that any of that stuff exists on networks today, at least not in a production capacity. So if you have v6 only, you can't be PCI compliant, and I think they're going to have to update their spec to address that. That's a real problem. Also, IPv6 addresses are really complicated. 
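To illustrate that complexity with Python's stdlib `ipaddress` module: one address has many legal spellings, which is exactly why naive regexes and string comparisons on logs or ACLs get it wrong (the address here is from the documentation range):

```python
import ipaddress

# The same address in three of its many legal textual forms:
variants = [
    "2001:db8::2:1",                            # zero run compressed
    "2001:0db8:0000:0000:0000:0000:0002:0001",  # fully exploded
    "2001:DB8:0:0:0:0:2:1",                     # mixed case, partial zeros
]
parsed = {ipaddress.IPv6Address(v) for v in variants}
assert len(parsed) == 1   # identical to the stack, different to a naive regex

addr = ipaddress.IPv6Address("2001:db8::2:1")
print(addr.compressed)    # 2001:db8::2:1
print(addr.exploded)      # 2001:0db8:0000:0000:0000:0000:0002:0001
```

Normalizing through a parser like this, rather than matching raw strings, sidesteps most of the typo and comparison problems.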
They're long, they're confusing, they have letters, they have numbers, they have colons. If you have bunches of zeros, you can chop them off, or you don't have to. It depends on, you know, what your preference is. Writing regexes for those is hard. Just reading them as a human is hard. You know, even typing them in is hard. It's going to lead to errors and confusion, and just, you know, as a sysadmin, working with them is more difficult, which is going to lead to security problems because people are going to type them in wrong. And this is back to dual stack. I mentioned dual stack earlier and how it can cause problems from basically a complexity standpoint, and this slide talks about that a little bit. If you have multiple stacks, you have multiple targets; you have, basically, multiple problems there. You have a single host that has a v4 address and at least one v6 address, maybe more. All of these need to be in the right access control lists. They need to be in the right firewall rules, all of that. They need to be monitored properly. That's really difficult to maintain across one protocol. It's even harder across two. And, you know, so maintaining policy for routing or ACLs, all of that is very difficult. Applications lack feature parity. So you have an application that does, you know, function X over IPv4, but it doesn't do it over v6. So what do you do? And that can be as simple as listening on a TCP port. How do you fix that? So you have this application; I have a box, it's dual stack, but the application that's running is only v4. How do you fix that? You know, there's 6tunnel, there's proxies, there's things like that, but then you lose logging. So how do you audit these connections? How do you know when somebody is abusing the service, and things like that? It's a real problem. You know, you have really the same problem with appliances. 
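For the simplest parity gap, an application that only listens on a v4 TCP port, one common fix when you can touch the code is a single dual-stack listening socket. A minimal Python sketch (the port is just an example, and whether `IPV6_V6ONLY` defaults off is OS-dependent, so it's set explicitly):

```python
import socket

# One listening socket that accepts both v4 and v6 clients: bind an
# AF_INET6 socket with IPV6_V6ONLY cleared, so v4 peers show up as
# v4-mapped addresses (::ffff:a.b.c.d).
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("::", 0))   # "::" covers both families; port 0 = any free port
srv.listen(5)
print("listening on", srv.getsockname()[:2])
srv.close()
```

Note this only helps code you control; for closed binaries you're back to 6tunnel-style relays or proxies.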
I mentioned the load balancers and firewalls and things like that. It's a problem where we really have to wait this out. We have to wait for vendors to come in and help us fix this stuff. So what did we learn at ARIN when implementing this stuff? We learned a whole lot, a lot more than is really listed on these slides, a lot more than we could probably talk about. We can get into it when we do questions and things like that, but here's a few things. Tunnels are less desirable than native, and that really just has to do with MTU. There are ways to work around MTU problems. For instance, for a while on the Sprint circuit that we originally implemented, we just set our web servers to an MTU of 1280. That's the minimum MTU size you can have on v6, so path MTU discovery and all those problems just go away, but then you're kind of sending all these little packets around. It's not very efficient, certainly less than ideal. And that was to mitigate issues with tunnels. If you can avoid tunnels, I would say do it. If you have to use them, then go for it. Especially right now, there are companies out there that will give you free IPv6 transit over tunnels, and if you're in an exchange, you can get a 100 megabit tunneled connection to some really good v6 providers basically for free. Some of these providers will give you /48s, so you can set up and route your entire network. You may not get an SLA for that, but it's great for your lab and things like that. So for experimenting, use a tunnel. When you really want to deploy it on your network, try to find a provider that does it native. At least in our area (ARIN is located outside Washington, DC), things have really changed in the last three or four years. It used to be that I could barely find a provider that knew what IPv6 was. 
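Going back to that MTU 1280 workaround for a second, the cost is easy to quantify: clamping from 1500 down to the v6 minimum shrinks every packet's payload, so you push more packets (and more per-packet overhead) for the same data. A quick sketch, assuming a plain 40-byte v6 header and minimal 20-byte TCP header:

```python
IPV6_HEADER = 40   # fixed IPv6 header, no extension headers
TCP_HEADER = 20    # minimal TCP header, no options

for mtu in (1280, 1500):
    payload = mtu - IPV6_HEADER - TCP_HEADER
    pkts_per_mb = 1_000_000 // payload + 1
    print(f"MTU {mtu}: {payload} payload bytes/packet, "
          f"~{pkts_per_mb} packets per MB ({payload / mtu:.1%} efficient)")
```

The byte efficiency loss is small; the real cost is the extra packets per second your servers and routers have to handle.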
And with providers now, even little ones (and the little ones, to me, are kind of the remarkable ones), I can call them oftentimes, and not only do they know what IPv6 is, but they want to sell me a circuit with no hassle. It's harder to get that out of Verizon or AT&T or somebody like that. Oftentimes I can say, oh, well, I'm with ARIN, and can we talk at NANOG or the member meeting, and I'll kind of go in the back door and short-circuit that. I think you guys can do the same thing. You just have to make a whole lot of phone calls. It's a lot harder for you to find somebody there that will listen to you, unfortunately. But that's changing. I really do suggest that if the big providers won't help you, go to some of the smaller ones. They may be a few hops away from the core of the internet, but to get a native v6 circuit, it may be worth it. Continue to route v4 over your primary circuit and use another circuit for v6. That works really well, and I know people in the DC area, anyway, that are doing that through some of the smaller ISPs. Routing is not as reliable. This has gotten a whole lot better, though, in recent years. When we started doing this, the v6 internet was like the internet of 1989 or something. I mean, it was really, really horrible. Now I would say it's like the internet of maybe 1999 or 2000. You still have routing issues, but they're really few and far between, and I think a lot of the routing issues are actually related to growth more than people not knowing that their circuit is down, or the router breaking, or them running experiments during the middle of the day and things like that. It's usually because they're replacing equipment, or they're having capacity problems on their old junky v6 router and they're upgrading it. So yeah, things have seen a great deal of improvement there. Dual stack really isn't so bad. You know, we feared this so much, and we haven't had any problems with it yet. 
Though, you also saw our graphs, and we aren't really getting any traffic yet either. So, we'll see what happens with that. You know, I have a great deal of confidence that our firewall rules and things like that are good and we're only letting through the traffic that we need to let through. So, you know, if we have an exploit there, it's really, in some ways, just as likely as it would be over v4, because it's probably an application issue anyway. Proxies are really good for transition. They're bad for logging, but they're great for transition. We used 6tunnel to proxy, you know, all sorts of non-HTTP services especially. You know, things like WHOIS and the routing registry. Before we replaced some of these services with v6-native versions, we used 6tunnel and some other home-built tools to make these TCP connections just work. We did lose logging, but we were able to get the services to work. If it's a web-based service, Apache makes a great v6 proxy, and you get full logging and all sorts of stuff with that. It works really, really well. I can't say enough about how well it works. Squid also works, though we don't have as much experience with it at ARIN, but it's worth mentioning because I know that it works really, really well. I mentioned that native support is better. DHCPv6 is not very well supported. Our client network at ARIN is dual stack. It's v6 enabled, it's got v4. We have DHCP on v4. We use RA on v6, and the biggest reason for that is that most clients don't support DHCPv6 yet. If you have OS X (you know, I have a Mac, a lot of people at ARIN have Macs) or you have a Linux box, it won't work with DHCPv6 out of the box. You have to go do stuff to it. The only platform that I know of where you can just install and go is Windows Vista or Windows 7, and right now we don't run any of that at ARIN. We'll probably have a few hosts that run it here by the end of the year. 
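On the Apache point: a hypothetical reverse-proxy fragment along these lines is roughly all it takes to put a v6 front end on a v4-only web service while keeping full access logs. The addresses and names here are documentation-range examples, not ARIN's actual config, and it assumes mod_proxy and mod_proxy_http are loaded:

```apache
# v6 listener in front of a v4-only backend web service (example values)
Listen [2001:db8::80]:80
<VirtualHost [2001:db8::80]:80>
    ServerName www.example.net
    # Forward everything to the legacy v4-only service
    ProxyPass        / http://192.0.2.10/
    ProxyPassReverse / http://192.0.2.10/
    # Unlike a raw TCP relay like 6tunnel, you keep full request logs here
    CustomLog logs/v6-proxy_log combined
</VirtualHost>
```

The same shape works for Squid in reverse-proxy (accelerator) mode; the win over a plain TCP relay is the HTTP-level logging.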
We actually have a lot of hosts that run Windows XP, and they are v6 enabled, and the way we do that is we use DHCP on v4 to assign them name servers, and they get v6 transit. It just works. But I should back up a little bit. Windows XP doesn't support DNS over IPv6. It'll do, you know, ping, it'll do TCP, UDP, all that stuff, but you cannot get it to do a DNS lookup over IPv6, which is seriously broken. However, if you're in a dual stack environment, it actually works just fine. So if you have this huge network of Windows XP boxes, you can actually dual stack them right now if they have v4 name servers that they can talk to. And that was a real boon for us, because that meant that, you know, certain departments at ARIN we didn't really have to do anything to, other than go to their boxes and turn on v6. It's very convenient. So I used to think Microsoft was stupid for that, but I actually think they were pretty smart. I mean, there was some technical issue that prevented them from doing it, but they did the best they could, and it worked really well. Bugging vendors does work. Brocade and Foundry are great examples of this, but gosh, Cisco, all sorts of people have been really good to us, and they've listened to us and given us the support that we need. I really do think that it works. It's a lot like bugging ISPs: if you bug them long enough, they'll start to listen. Every RFP that we put out for equipment, bandwidth, whatever, lists IPv6 as a requirement. We will not do business with you if you do not support IPv6, and that has gotten vendors to support v6, because they want to be able to put ARIN's name on their whatever it is, their website, their t-shirt, whatever, and say that we are a customer of theirs, and in order for them to do that, they have to do v6. It actually does work in our case, and I think that if you have enough buying power in your organization, it'll work for you too. 
You tell them you're going to go somewhere else: I'm going to use this transit provider, I'm going to use this vendor, because they have better v6 support. They're going to listen to that. Their sales guys are going to cry when they lose that commission that they're used to getting from you. Security. Dual stack makes policy more complex; there's just no way around that. I mean, you have multiple protocols, and policy is going to be more complicated. The trick is to make sure that there's parity in your policies wherever possible. Where there's not parity, it should be well documented. A good example of that is ICMP: make sure that you document why ICMP is allowed in v6 in certain cases when it's not in v4, so that future generations will understand. IPv6 security features, as I mentioned, are a double-edged sword. They can provide a whole lot of security, but they can also be used against you if your network is compromised. I fully expect to see Trojans and other things that go into networks via web browsers start using IPv6, especially when you consider that Windows Vista and Windows 7 have v6 turned on by default, and they'll even use Teredo and other mechanisms to just tunnel it right out of your network. It's one thing to detect a v6 connection that's encrypted and native, where you can actually see it on the wire. Try doing that with Teredo, when it's inside a bunch of UDP packets over v4. It gets a whole lot more complicated. V6 is relatively untested. We talked about this before. There will probably be some exploits; it almost seems inevitable. Maybe not, but hopefully they'll be minor, and it'll be a DoS kind of condition rather than uid 0 sorts of exploits. But I mean, I think that'll happen. Somebody's going to have some bug in their code, and it will eventually get unearthed. This is a whole new world for hackers to explore, whether they're white hats or black hats. I actually think that's kind of cool. No matter what your thing is, 
if you're into compromising machines or into securing them, this is a whole new playground to play in. It's effectively an entirely new internet, and I mean, what's more cool than that, really, than something brand new? And we're right on the cusp of it happening. So, understanding ICMPv6 is an absolute must. You will break things; I just can't say this enough. Fragmentation is very different in v6, and that's related to ICMP. Multicast is an attack vector. It's a discovery vector. It's also turned off by most ISPs right now, much like it is in v4. Finding a multicast-enabled provider, or getting your provider to even turn on multicast for you, is difficult, and it's the same thing in v6. The biggest difference with v6 is that multicast is used for neighbor discovery and other things, so it's very easy to write tools that, well, basically expose an entire network to you. Or you can use it for real evil, for DoS attacks, basically, by using spoofed packets and other things. There are ways to mitigate this. You know, using IPsec and AH helps. Having good filters on your network helps, not allowing packets that are spoofed to get on it. Basically, a lot of the common sense approaches that you use for v4 now also apply to v6. And one last thing: read RFC 4942. This RFC describes security considerations when implementing IPv6. It's got a lot of really good information in there, especially from kind of an IPv4 context. It talks about the differences between v4 and v6 and things that you can do to help secure your v6 network when you're implementing it. And that's it.
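One footnote on the Teredo point from a couple of slides back: while spotting Teredo traffic inside UDP on the wire is hard, recognizing Teredo addresses themselves is easy. They live under 2001::/32 and embed the v4 server and (obfuscated) client addresses, which Python's stdlib `ipaddress` module can unpack; the address below is the example from the module's own documentation:

```python
import ipaddress

# Teredo addresses embed the v4 server address and an XOR-obfuscated
# v4 client address; .teredo decodes both, or is None for non-Teredo.
addr = ipaddress.IPv6Address("2001:0:4136:e378:8000:63bf:3fff:fdd2")
if addr.teredo:
    server, client = addr.teredo
    print("Teredo server:", server)   # 65.54.227.120
    print("mapped client:", client)   # 192.0.2.45

assert ipaddress.IPv6Address("2001:db8::1").teredo is None
```

A check like this in your log analysis is a cheap way to flag hosts that are tunneling v6 out of your network when you didn't intend them to.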