Okay, I think everyone is here, or is going to be, so we'll get started. It's a great pleasure to be here in Warsaw. Lovely time last night; I've had less fun at weddings. We're here to talk about our fault-aware Global Server Load Balancer that we do in DNS, which we use for our CDN. Just a few words about who the target audience is for this talk. If you have an organization with more than one point of presence, it's for you. If you or your application would benefit from geographic segmentation of traffic, or if you could use the ability to deploy simple failover, active/passive, or multi-node load balancing of your traffic, it's for you. Do you want your server monitoring to automatically update your DNS? You probably do.

GDNSD is the subject of this talk. We put in a port as our contribution, basically wrote this talk proposal on the train back from BSDCan last May, and did the port up over the summer. GDNSD was originally built by a programmer working at Logitech, and he got them to open-source it, which was nice. They used it to manage directing users to nearby driver download mirrors, and it works nicely.

Okay, a little bit about us. We're the ScaleEngine guys; we're a CDN. I'm Allan, Allan Jude. I've been a FreeBSD admin for quite a while. I built all the architecture stuff at ScaleEngine, including our CDN and the video streaming network. Before that I was a professor at Mohawk College teaching computer science and IT, and I host a podcast called TechSnap, which is the Systems, Network, and Administration Podcast, so if you're into that it might be interesting. And I've been a BSD server admin for a long time. I did the Varnish implementation for the Toronto Star newspaper in those exciting times in 2009, and I've taught a lot as well in Canada. I guess this is the story of how we came to use this GSLB.

So, an overview of what we're going to talk about. We'll give you an introduction to what we do, talk about some of the challenges we've had with growth as people have started using us to carry their traffic, discuss what a global server load balancer is, and examine some of the currently available solutions, both proprietary and non-proprietary. Of course we prefer the open source solution; that's why we're here. Then we'll get into the actual GDNSD implementation, and Allan will walk you through an example. Combining the port with this talk and the slides should get you set up pretty nicely. We're going to look at response policy, in other words, what the DNS server says when a client does a lookup. Then advanced response policies with GeoIP, making it geographically aware; use cases and examples; agent monitoring; adding capacity, in other words, on-demand capacity based on what's happening in DNS; the EDNS client subnet implementation and the challenges therein; and workarounds for what you have to do if you're not on the whitelist.

Okay. What is ScaleEngine? It's a global CDN. Well, it is now. We do a lot of video streaming, we do a lot of HTTP objects, and we do some application hosting. We are entirely powered by FreeBSD and have been for many years. We do a number of things. We have edge-side caching, which is a Varnish implementation, and a CDN for global caching of anything over HTTP. The VSN is video streaming: live streaming for events and on-demand streaming to the desktop using RTMP, and to mobile devices, lots of iPads and iPhones and Androids and so on, mostly with HLS using the Cupertino chunking.
We also have an origin web cluster, OWC, which is PHP and MySQL mostly. And of course we have the GSLB that powers it all. You want to talk about that? Yeah.

At the moment we have about 70 servers spread across 25 different data centers in nine different countries. Balancing load between those and dealing with the failures that happen started to become much more of an administrative headache, and we looked at ways to automate it. In aggregate we can push about 50 gigabits per second to the internet between those different hosts, all of which run FreeBSD 9 and are managed with Puppet; thank you to Edward Tan for introducing us to that. We also make extensive use of jails with ezjail, to make it easier to deploy the same containerized config for some of our applications on those different hosts and to move things between hosts when required.

Well, let's do a little bit of stats. The biggest one is that in September we uploaded about 200 terabytes to the internet. At peak that was about 18 gigabits per second. Our Varnish serves about five, or this month it'll be about six, billion requests, and at peak that's as many as 5,000 requests per second, spread across the different geo zones. As everyone knows, Varnish can do that with its eyes closed on one box; the challenge is actually locating it around the world. So yeah, we push a fair amount of bits.

In terms of video we have very spiky, event-driven traffic, where in the space of five minutes things go from 100 megabits to 10 gigabits. Video is a low-request-rate, extremely high-bandwidth situation, sometimes one to one and a half megabits to each client, and the sessions are long-lived. So that's where we see our peak network load. In comparison, HTTP is smooth; it tends to follow when people are awake. Our largest CDN HTTP object customer runs lots and lots of news sites, and, you know, people read the news. People read the news when they're at work, so the graph goes up when they're at work, and that's how it goes. There's a graph of it coming up. Yeah, we'll have a graph of that.

So we started a CDN kind of out of necessity. This was a few years ago, and we were just doing hosting, and one of our hosting customers quadrupled in size, so, yay. But we were on a fixed commit at an expensive North American hosting provider; we had a very modest 10 megabit commit and we could burst to 100, and we were going to max it out with this suddenly large customer. So we had to get creative. We have limitations; that's fine. You use that to get better or you die. You can get servers from cheaper providers, and we did that. They had cheaper network, in some cases better network. And we just created a subdomain: instant CDN. Your image content is offloaded, and we were able to push out enough to avoid expensive overage charges on premium transit.

A little bit about HTTP and subdomains. Modern browsers will download in parallel if you serve content from additional domains, and that's good. Normally a browser will only connect to the same site two or three times to download objects, and any objects after that get queued. But if you make all the CSS come from one subdomain, the JavaScript from another domain, and images from a couple of other domains or even subdomains, it will start loading more of them at once.
And since most people have a broadband connection now, and it's more the latency that's limiting the loading speed rather than the transfer rate, we can load more of the page at once and be finished sooner. So this will improve speed if the user is reasonably close geographically to the server, and conversely it will not improve speed, it will make it worse, if the user is far away from the server. You also get the ability to strip cookies, which can speed up the user experience, and it lets us cache popular content. It's kind of on demand, and that's the beauty of Varnish.

Here's your typical HTTP graph: people are all awake in the middle of the day and looking at news, and they go to sleep in the middle of the night and nobody reads the news. It's interesting to watch those graphs across the geographically distributed servers, because the hump in the graph is offset depending on where the server is.

So the challenge is that just creating a couple of CNAMEs for subdomains is suboptimal, and it quickly becomes cumbersome to manage on a large scale. Users don't benefit from any geographic awareness with just straight round-robin DNS. We also found that some resolvers were sorting the results, so we couldn't really balance by just handing out addresses anyway; an IP that starts with a 1 always got more traffic. So we were benefiting by offloading network, yes, but the user experience was not as good as we wanted.

Here's just part of the growing pains: your business is starting to get used, and you need to sort some things out. External providers are cheaper for bandwidth, and it's sold in large amounts, which is what we need. They also sold us transfer per terabyte rather than 95th percentile, which meant that for our bursty type of traffic it was much easier to get what we wanted for a reasonable price. You want to talk about... And the other advantage of going with a bunch of separate external providers was that we could get different locations, instead of having to have colocation and try to manage that at all the different points on the globe. Right, so now we have two racks in our data center near our offices, but we have, sorry, 25 other data centers around the globe where we have servers. That means we cover more of the globe with a server near users, but we also have a lot of different transit providers that way, so there's one that's topologically close to you on the internet, or there's peering near you, so you also get a better and more diverse connection.

Our early struggles with this: manually managing DNS was suboptimal, and we used BIND views, which inflated the size of the zone a lot. I basically took a GeoIP database, summarized the subnets into larger blocks, made access lists out of those, and then defined them as different views. So I had views for different regions, like East Coast, West Coast, Europe, Asia, and so on, and each of those had to have a separate zone file in BIND with different records. When you went to this website, if you fell into this view you were sent to this IP, and if you fell into that view you went to a different IP. But, you know, when one of those servers went down you had to go and edit the zone files, and it was cumbersome to manage all the time and it often took too long. We thought about doing anycast, but our core expertise is with server applications, and we also had providers that were frankly reluctant to do that with us.
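For reference, the BIND views setup described a moment ago looked roughly like this. It's a minimal sketch with hypothetical subnets, zone names and file names; the real ACLs were hundreds of summarized GeoIP blocks each:

```
// named.conf fragment -- hypothetical subnets and zone/file names
acl "europe"  { 5.0.0.0/8; 31.0.0.0/8; /* ...many more summarized GeoIP blocks... */ };
acl "na-east" { 23.0.0.0/8; 24.0.0.0/8; };

view "europe" {
    match-clients { "europe"; };
    zone "cdn.example.com" { type master; file "cdn.example.com.eu"; };
};

view "na-east" {
    match-clients { "na-east"; };
    zone "cdn.example.com" { type master; file "cdn.example.com.na-east"; };
};

// anyone who matched nothing above gets the default records
view "default" {
    match-clients { any; };
    zone "cdn.example.com" { type master; file "cdn.example.com.default"; };
};
```

Every view needs its own copy of the zone file, and every zone needs the whole block repeated per view, which is where the "cumbersome" comes from.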
Our basic technique initially was to add IPs more than once into a record in DNS, and that just compounds problems, because you run into the DNS response size limit. The original DNS RFC stated that if a response is larger than 512 bytes you return it over TCP, not UDP. In 1999, EDNS0 was introduced, which allowed DNS responses of up to 4096 bytes over UDP, with fragmentation. The catch is that a server will only return a larger response over UDP if the client requests it, so that clients that don't support it aren't sent a response they wouldn't understand. The problem is that some older firewalls have rules that say: if the DNS response is more than 512 bytes, block it. The client behind that firewall doesn't know it's there, and so requests that the response be sent with EDNS0. It asks for a larger response, but then some intermediary firewall blocks the response on the way back. Cisco ASAs. So the client wouldn't get the response it asked for, whereas if it hadn't set the flag we would have sent the response in a way that would have gotten through the firewall. Because the client was requesting a feature that was being blocked somewhere in front of it, when our DNS responses got too big, when we were at something like 20 servers in North America, all of a sudden some clients, mostly at corporations, since news sites are mostly read by people at work, couldn't resolve the domain. We couldn't figure out why at first, and then we found out about these evil firewalls. Telling them to install pfSense didn't work.

So we started getting a lot of requests for video streaming, and we started playing around with it; we get a lot of those requests from Europe. Video is an entirely different scaling problem: it becomes link capacity. That is the issue. 100 megabit servers are not good enough anymore. Yeah, you have to have gigabit uplinks. You need providers with lots of bandwidth, and hey, that's Europe. I was surprised bandwidth is even cheaper in Europe. Some providers have better transit than others to North America. We learned your scaling is completely unpredictable with video compared to HTTP. You don't have those nice smooth graphs; there's no trend. All of a sudden there's something, and all of a sudden it's gone.

The other technical issue is that you cannot have contention for the wire with video. If you do, it breaks the experience for everyone. With HTTP, if you have demand for bandwidth greater than your 100 megabits or whatever, then everybody just gets the file they're downloading a bit slower. But with video, if you're using most of your gigabit, everybody's getting the video at one and a half megabits or whatever and they're happy, and then you add a couple more users and all of a sudden you push over the limit of how much you can push through that line. Now all 1,000 people that are trying to watch have their live stream stuttering and breaking. TCP tries to be helpful and just breaks everything, and even UDP would have the same problem. So as soon as you hit that limit, you're not just slowing everybody down a bit, and you're not just having a problem for the new people connecting; you basically ruin the experience for everyone connected to that server as soon as you try to exceed the link capacity.
Okay, so for example, news, sports and so on: one day there was an airport emergency in Iceland, and the whole country went and visited the state broadcaster, which would carry the video for them. Yeah, we carry all the foreign video for RUV.is, and an airplane had broken landing gear or something, and it had to circle the airport for four hours and burn off fuel, and apparently there were something like 10,000 people trying to watch the live stream of this happening. It was just an unprecedented scaling challenge for us.

So we needed to factor geography into the load balancing, but it was very important not to send viewers to overloaded servers. And this is why we started doing it out of DNS. The first load balancing solution we tried was based on the number of viewers on each server. We knew how many people were watching videos on each server and would just send the next viewer to the least loaded one. But not everybody's watching the same video; some videos are lower bit rates, and some of them are only audio. That meant that just because a server had fewer viewers didn't mean it had more available bandwidth. So we had to work out something more intelligent than that. Yeah, so we have to measure network and application health before we send someone to a server, and this lets us do that. Here's your spiky video graph, and here's your very spiky video graph; it's very sharp, it just goes from nothing to 800-plus megabits out of nowhere.

Okay, so what's a global server load balancer? It handles the direction of traffic to different nodes with a focus on geolocation, and depending on the vendor there's a bunch of hand-waving and clichés about high availability and optimal response time. In our case, our global server load balancer routes traffic to edge servers near the requestor, to provide lower latency and to spread load between a number of data centers, which is important for us, as well as automatically diverting traffic from down servers, whether down means you're maxed out, you're at your 800 megabits so we're not sending anybody more to you, or the hard drive gave out.

Yeah, so we first looked at what was out there. There are a bunch of different commercial solutions, like your F5s and Barracudas and so on. The Barracuda, though, required that we have a Barracuda device at each location, and when you make a configuration change there's no way to have it replicated; you have to manually load the new configuration on each of the devices. That wasn't very helpful, and it meant that it really couldn't be automated. Also, the response policies we could come up with were quite limited: we could decide by region or GeoIP, but that was about it. And there was no auto-updating. The monitoring was fairly limited too; a lot of the time it was just "is this port open" rather than actually checking the health of the application. It's really quite limited in terms of what you can do; GDNSD just blows that particular proprietary solution away. And it's expensive.

Okay, so we looked at what was available in open source. Originally we tried BIND with views, basically summarizing the subnets into a smaller set, because otherwise it would have eaten all of the RAM on the machine. As it was, it was pretty heavy to have all these huge ACLs with different CIDR subnets, and it also made it take about 60 seconds to reload the BIND config.
It also required a separate BIND config for each different zone, and as we kept adding more zones it just became even more cumbersome to manage. The worst part was that it broke master-slave replication, because when you do an AXFR the slave only gets the copy of the zone for whatever region it happens to fall in, whichever view it matched. It wouldn't get all of the views, so then we had to come up with our own way to push the zones out and update serial numbers and so on.

We looked at PowerDNS, which is basically something like BIND except the storage backend is usually something like MySQL. But its GeoIP backend doesn't read the MaxMind files that we have; it took some other format, like a DNS blocklist. Then we looked at anycast, but it had limited flexibility. It wasn't application aware, unless we ran something like OpenBGPD on each machine and had some helper script to withdraw the route if our app wasn't healthy. And only a third of our providers were willing to establish a BGP session with each of our servers. So I did a little chart here showing the different options. The other thing was that EDNS client subnet is not supported in BIND, at least when I looked; I don't know, it might be better now. PowerDNS didn't really support it yet either. GDNSD is basically what we found when we went looking for something that supported EDNS client subnet.

So what the global server load balancer does is this: we have a situation where our US viewers are being directed to a US server, and likewise the UK, Germany and France or whatever. The load balancer is monitoring each of those servers and checking its health, and when it detects that the French server all of a sudden doesn't have a connection to the internet anymore, or is down, or whatever, then the French viewers are redirected to the next closest server, or wherever the business rules we've applied through the config response policy say. So we send them to Germany.

But then in a different situation, we have our cluster of servers in the UK and they're reaching their load limit; they can't handle any more viewers without degrading the performance for everyone. Rather than just sending the viewers to the next closest server, which, if we're having a lot of load in one area, is likely covering the whole region anyway, basically knocking that over, and then the second closest server, knocking that over, and so on, because, you know, it's DNS so there is that little caching delay, and our load spikes really, really quickly, as soon as we detect a trend that we're going to overload the UK we fall back and redirect traffic to the entire region, and spread the traffic around all the servers in the region so that we can handle a bigger spike. In the example config, which we will get to, you'll see how we define the different zones and how to fail, and the stages of failing up instead of over.

So GDNSD basically came to the rescue. It's an authoritative-only DNS server; it doesn't have all the features, but it has most of what you need. Importantly, it supports EDNS client subnet. That's a draft specification an engineer at Google came up with, and the basic idea is that when you do a DNS lookup, the authoritative server doesn't see the client's IP address; it only sees the request coming from the recursive DNS server. When that was at most people's ISP that worked pretty well, because the ISP's DNS server will generally be located near the customers.
But people started moving to things like OpenDNS and Google's Public DNS. The problem was that geolocation points all of Google's IPs to the Googleplex in California, even if that resolver happens to be in the Netherlands. So people would notice things like: iTunes uses a CDN called Akamai, and if you weren't in California you would get lower speeds, especially in Europe, because if you were using Google DNS, Akamai saw the request come from Google, so they thought you were in California and sent all the traffic that came through Google to the California node.

EDNS client subnet is an extension where, as part of the DNS request, the recursive server passes along the /24 of the client. Not the whole IP address, for privacy reasons, but it tells you what subnet the client is in, so you can geolocate based on that rather than on the recursive DNS server's IP, because normally you wouldn't know what the client IP is in DNS.

GDNSD reads the MaxMind GeoIP database directly through the C API, so you can use GeoLite City or whatever, or you can buy the more expensive one that's more accurate. It has monitoring plugins that can actually monitor your servers, and importantly that includes flap detection. We'll get into what the rules are like, but it allows you to make sure that a server isn't going up and down constantly, because if it is, there's probably something wrong and you want to back off from that server, not keep flip-flopping on it. And it has a number of different response plugins, plus an API for you to write your own.

With simple failover you basically just have a primary and a secondary defined for any response record, and it returns the primary unless it fails, in which case it returns the secondary. With multi failover you give it a pool of IP addresses and it spreads the load between those, and removes any of those IPs whose monitoring says they're down. With weighted, you assign each of the servers in a pool a weight, which allows you to do unfair load balancing; if one of your servers has more capacity than the others, you want it to get more of the load. And geographic multi failover is the same thing, except it uses the GeoIP database as well.

It reads basically standard BIND zones, so you don't have to rewrite your zone, you just copy it in, but you get a couple of new record types. Importantly, it also let us limit the number of addresses returned in a response. It's just a little macro you put in your zone file that says, for this set of records, never return more than four IPs in the response, and that let us make sure our responses were under that 512-byte limit. And there wasn't a port, so we made one.

The two new record types you get are DYNA and DYNC, which are basically dynamic A records and dynamic CNAMEs. Being able to do dynamic CNAMEs is actually pretty cool. It also supports the include macro and the record size limits.

So when you're defining a response policy, there are a bunch of things you can consider for the failover and load balancing. By default you would just have what's called binary node health, just checking if it's up or down; it has an HTTP status monitor that checks for a 200 response, but that's about it. Then we developed more advanced node health checking that actually checks other factors to decide if the server should be a candidate for being returned in a response.
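Before we get to those policies, here is what the zone-file side of all this looks like. This is a minimal sketch with hypothetical names: the part after the `!` is the name of a plugin resource defined in the gdnsd config, and, if we recall the directive correctly, $ADDR_LIMIT_V4 is the macro mentioned above that caps how many A records go into one response:

```
; cdn.example.com -- hypothetical zone in ordinary BIND-style syntax
$TTL 300
@        SOA  ns1.example.com. hostmaster.example.com. ( 2014010101 7200 1800 604800 300 )
@        NS   ns1.example.com.

; never put more than 4 A records into a single response (keeps us under 512 bytes)
$ADDR_LIMIT_V4 4

; dynamic A record: the answer is chosen at query time by the multifo resource "pubwww"
www      300  DYNA  multifo!pubwww

; dynamic A record chosen geographically by the geoip resource "video"
stream   300  DYNA  geoip!video

; dynamic CNAME: hands back a per-region hostname instead of an address
cdn      300  DYNC  geoip!cdn-by-region

; ordinary static records keep working as-is
origin   300  A     192.0.2.10
```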
Beyond that basic health check, the response policy then factors in the location, which nodes to use at what time, and how many nodes to return in each message: do we want to return just one, or a round-robin set of two or eight or whatever.

So here's the basic config, probably a little too small to read, but you basically define a service and a resource, like public WWW. Then you define your monitoring types and basically a list of IP addresses, and it will check the service on each one of those. You can even define multiple services, like here, and only if they're all up is the IP a candidate. Then it will return those three IP addresses, and it can also do IPv6. You can specify separate monitoring; for example, if your DNS server doesn't support IPv6, or the machine it's on doesn't have a v6 address, but some of your services have v6, then you can't monitor the v6 part, so you can make a pseudo-service that's always up or whatever. Another interesting thing here is the up threshold. It's basically a failsafe in case something's wrong with the monitoring and it determines that all of your servers are down: rather than not returning anything, you set a threshold and say, in this case, if less than 70% of our servers are up, fall back to just returning all the servers, because the assumption is that something is wrong with the monitoring rather than all of your servers being down at once.

There are a number of things you run into when you're using GeoIP. The first one is that the data is not always accurate. For example, the IP address of our colocation shows up in New York rather than Toronto. So sometimes the country isn't even right, let alone the city or state level. The same thing was obviously happening with Google's IPs: they use anycast for the address that you interface with, 8.8.8.8 or whatever, but the actual requests come from their regular unicast IPs, and those are spread all over the world, yet all of them geolocate back to the Googleplex. And as I said, the source IP that your DNS server sees is that of the recursive resolver, not the client, so you're geolocating the DNS server they're using, not the user. There's going to be an error in there of however far the user is from their DNS server, especially if they're using something like a corporate VPN, and that's not good. The other thing is that, in addition to loading the GeoIP database, GDNSD lets you define a list of overrides. So you can say, for example, that our specific subnet of internal IPs always goes to a specific server, even if geolocation would send it somewhere else.

When you're using GeoIP there are two different methods. The first is the automatic one: you define a list of your data centers, and for each one you define the latitude and longitude of its location. It will geolocate the user, find the closest data center, and return that. And you can do overrides; we can say our subnet always goes to Toronto. You see here we have a comma-separated list of data centers: it'll try the first one, and only if all the servers in it are down will it fall over to the second one, and so on, and that list can be as long as you want. In the more advanced configuration, we define a list of our data centers and then we do a map from the GeoIP database's actual geolocation data. Yeah, the latitude and longitude is the automatic one, and that's why we have this more advanced one where you can be more specific, right? You can have specific subnets overridden to a certain location, or you can do other things.
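Roughly what that automatic method looks like in the gdnsd config. The datacenter names, coordinates, paths and subnets here are made up, and the syntax is from our notes on the 1.x config format, so treat it as a sketch rather than something to paste in:

```
# gdnsd config fragment -- geoip map using automatic distance calculation
plugins => {
  geoip => {
    maps => {
      auto_map => {
        geoip_db => /usr/local/share/GeoIP/GeoLiteCity.dat
        datacenters => [ toronto, amsterdam ]
        # the plugin geolocates the client and picks the nearest of these coordinates
        auto_dc_coords => {
          toronto   => [ 43.65, -79.38 ]
          amsterdam => [ 52.37, 4.89 ]
        }
        # manual overrides: these subnets always get the listed datacenter list,
        # regardless of what the GeoIP database says (e.g. our own office network)
        nets => {
          192.0.2.0/24 => [ toronto ]
        }
      }
    }
  }
}
```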
But mostly we're just worried about not serving users in Europe from North America, and users in North America from Europe, because why go transatlantic when you don't have to? That cuts 80 milliseconds off, and that's a big difference. With the advanced map you can even drill down to the city level, if your GeoIP data is good enough, and in addition you can also do subnets if you have specific places that are having a problem. So in this case we say: for Africa, use the European data centers; for Asia, use the Asian data center, but we don't have very many of those, so fall over to the backup pool. For Europe we have a default pool of all our servers in Europe, but we have specific data centers in Germany, France, the Netherlands and Great Britain. And you see for each of those, in the Netherlands we prefer to serve from the Netherlands, but if those servers are down or unreachable, then we use the regular pool.

So that's the map you define, and then you do your resources. You say which map to use, you set up the services that you want to monitor, and then we populate those data centers we created, like North America, and we say: server in Seattle, its IP; Los Angeles; Phoenix; Dallas; and so on. And then Europe. The other advantage we get here is that, unlike in BIND, where we had the problem that in a round robin you can't have the same IP twice because it just turns it into a hash, here we can have the same IP twice to double up the load. To accomplish that in BIND we basically had to put two different IPs on the machine. And we have a naming convention here that factors in a little bit later.

Then for your monitoring: we're just using the http_status module, but there are a few others and you can write your own. We set the vhost and the path and so on, but importantly we have the thresholds here. We only consider a server up if the last 10 checks in a row have all returned the up status; if one of those checks somewhere in the middle returned a down status, the server is still not up. If the server is in the danger state, which happens as soon as there's one failure but before there are enough failures to be down, then it takes five good checks in a row to get back to the up state. So if a server is being monitored and it has one failure, it drops from up to danger; once it has five good checks again it's back to up. Once a server has had two downs in the monitoring period, it moves from the danger state to down, and then we pull it out of the pool and stop returning it to users, so that they're not going to a server that's either too busy or not even up. The nice thing about this is that it's something that is pushed out from the actual location of where the application is: this is all happening in DNS and you're directing clients, so really you're anticipating your application health as this happens.

One thing we haven't mentioned is that you actually get statistics output from GDNSD. Sorry, we don't have an example of it, but that's also useful. There's a little web server built into GDNSD that lets you see the status of everything, and it also has a CSV output, which we make use of. We've also created a couple of custom monitoring modules. They're not part of GDNSD, they're just on our side. For a video server, you hit a certain HTTP URL and it checks how much bandwidth the server is putting out right now, and once that's over a threshold our HTTP provider returns an error 500 instead of a 200 OK, and that marks the server as down.
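Pulling the last few slides together, here is a condensed sketch of the advanced map, the resources and the monitoring thresholds we just described. All names, addresses and paths are hypothetical, the syntax is our recollection of the gdnsd 1.x format, and the real config is considerably longer:

```
# monitoring: 10 good checks in a row to be "up", 5 to recover from "danger", 2 failures to go "down"
service_types => {
  web_check => {
    plugin => http_status
    port => 80
    vhost => cdn.example.com
    url_path => /health    # hypothetical path for our custom check; it returns 500 when the box is too busy
    interval => 10
    timeout => 3
    up_thresh => 10
    ok_thresh => 5
    down_thresh => 2
  }
}

plugins => {
  geoip => {
    maps => {
      region_map => {
        geoip_db => /usr/local/share/GeoIP/GeoLiteCity.dat
        datacenters => [ na, eu, nl, de, asia, backup ]
        # continent -> country hierarchy; each value is an ordered failover list of datacenters
        map => {
          NA => [ na, eu ]
          AF => [ eu, na ]
          AS => [ asia, backup ]
          EU => {
            default => [ eu, na ]
            NL => [ nl, eu, na ]
            DE => [ de, eu, na ]
          }
        }
      }
    }
    resources => {
      video => {
        map => region_map
        service_types => web_check
        dcmap => {
          # a "datacenter" can be a nested pool, a list, or a single address;
          # repeating an address is how we double up the load on a bigger box
          na => {
            plugin => multifo
            sea => 192.0.2.11    # Seattle
            lax => 192.0.2.12    # Los Angeles
            dal => 192.0.2.13    # Dallas
          }
          eu => [ 198.51.100.11, 198.51.100.12 ]
          nl => 198.51.100.11
          de => 198.51.100.12
          asia => 203.0.113.11
          backup => 203.0.113.12
        }
      }
    }
  }
}
```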
So when a server is too busy it gets marked as down, and we direct the traffic elsewhere. We do the same thing for HTTP, except we check disk I/O, because after a cold reboot our Varnish cache is hitting the disk so hard that it starts causing latency in responses.

All of that worked until we ran into the situation where we just don't have enough servers for the amount of bandwidth we need. We can fail over to a different region, but we kind of wanted to avoid that. So I came up with the HATE algorithm: High Availability Through EC2. We hate to use Amazon, because they actually charge more per gigabyte than we do, so using them costs us money. We really hate to do that, but we hate being down even more. So we wrote a little capacity manager script, roughly as sketched below, that polls the CSV stats interface of the DNS server and checks the health of all of our servers, and because of our naming convention we can tell that, look, 80 percent of our servers in Europe are marked as down because of load, so we need to get some more capacity in Europe right away. It calls the Amazon EC2 API for Europe, spins up some FreeBSD EC2 instances, and they come up and start taking over. Basically we have elastic IPs, which are static IPs that can move between virtual machines, and they're set up as the very last data center on the list for each region. Normally they're all down, but once we start the instances and boot them up, they start coming up and the load goes to Amazon. Then once our servers are back up, you know, the event is over or whatever, we poll our HTTP provider on each of those servers and check how many viewers are still using Amazon. Once that falls below a threshold of about 10 or so, once the servers are quiet again, we can kill off those Amazon instances to save money.

So in conclusion, we found that, especially with HTTP, distance adds latency, and that hurts our object delivery. HTTP has a couple of round trips, and usually the objects are so small that it's not transfer time that's killing us, it's just the setup of the sockets and things like that. We also found that video performance lags if we have TCP retransmits, because Flash and HLS especially are over TCP: they pause if there's a missed packet and have to wait for it, so avoiding that as much as possible makes a big difference. And proximity to the user, we already said that. Proximity is less important for video, because once your connection has been alive for 10 seconds it's usually settled down a bit, but the closer to the user we are, the more reliable the packet transmission is, and that means better quality video. And yes, we found with cache warming that once we had millions of objects in our Varnish, a cold restart of Varnish hits the disk really hard, so we back the traffic off when the disk starts to get busy and then ease it back in, a couple of times, until the cache is hot.

Just to wrap it up, because we are almost done: this is a solution that fits in the wonderful world of limitations. You're good at running FreeBSD servers, so you create a solution using what you're good at, and so it's an application-layer solution. We find it means that having lots of network transit providers is something we get to leverage, turning it into an advantage instead of a limitation. If we lose transit we can drop something out, and a lot of the time something goes down because of poor transit, so we get to automatically recover. We get to distribute.
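Going back to the capacity manager for a moment: it really is just a small script along these lines. Everything here, the URL, the CSV parsing, the thresholds, the instance IDs and the boto3 calls, is our illustration rather than the actual code; the real hooks are gdnsd's CSV stats output and the naming convention that encodes the region in each monitored service:

```python
#!/usr/bin/env python3
"""Rough sketch of the capacity manager: poll gdnsd's CSV stats page and, when too
many servers in a region are marked down, start standby EC2 instances there.
All names, paths and thresholds are hypothetical."""

import csv
import io
import time
import urllib.request

import boto3  # assumes AWS credentials are configured in the environment

STATS_URL = "http://127.0.0.1:3506/csv"      # gdnsd's built-in stats listener (port/path may differ)
REGION_PREFIX = "eu-"                         # our naming convention puts the region in the service name
DOWN_RATIO_TRIGGER = 0.8                      # bring up cloud capacity when 80% of a region is down
STANDBY_INSTANCES = ["i-0123456789abcdef0"]   # pre-built FreeBSD instances with elastic IPs attached


def region_down_ratio(region_prefix):
    """Fraction of monitored services in the region that gdnsd currently reports as DOWN."""
    with urllib.request.urlopen(STATS_URL) as resp:
        rows = list(csv.reader(io.StringIO(resp.read().decode("utf-8"))))
    monitored = [row for row in rows if row and row[0].startswith(region_prefix)]
    if not monitored:
        return 0.0
    down = [row for row in monitored if "DOWN" in row]
    return len(down) / len(monitored)


def main():
    ec2 = boto3.client("ec2", region_name="eu-west-1")
    while True:
        if region_down_ratio(REGION_PREFIX) >= DOWN_RATIO_TRIGGER:
            # The elastic IPs on these instances are already listed as the last-resort
            # datacenter for the region, so once they boot and pass monitoring,
            # gdnsd starts handing them out and the overflow traffic goes to Amazon.
            ec2.start_instances(InstanceIds=STANDBY_INSTANCES)
        time.sleep(60)


if __name__ == "__main__":
    main()
```

The teardown side works the other way around: once the HTTP provider on the cloud nodes reports fewer than about ten remaining viewers, the instances get stopped again.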
We have this complete global view of services, and we get to stage the distribution. We have load stages, especially for live video, because we're doing an origin-edge type setup: we have an origin server, and we feed one copy of the stream to each edge server. When we only have, say, 100 viewers watching a stream, it's not advantageous to spread those out over 70 servers, because that means we're taking 70 copies of the origin stream. So certain customers are on an elastic pool that uses a smaller number of servers, and then if the load goes high it kicks over to using the rest of the network, so that we have some level of efficiency under lower load as well. We use that list of data centers to say: use the small pool, then once that's full use the medium pool, and then switch to the full pool. Okay. And I think if there is a question, we could take a question or two.

You may want to have a look at the thing called BGPDNS. I did that about 10 years ago. You can find it through a simple Google search, and it uses BGP information for the proximity of the client instead of doing this geolocation stuff. Too bad you didn't find that before.

Maybe I missed a small detail, but I don't understand why you need to manually configure Google with GeoIP. Can't you just look at the EDNS information and always use that? Right, I did forget to say that Google doesn't send the EDNS client subnet information unless you're on their whitelist of servers that support it. They didn't want to risk breaking the internet. So unless you're on Google's very, very small whitelist of authoritative servers, they won't send the client subnet to you, so you don't have that information. So what we do is: Google put out a post with a list of all of their DNS servers and their actual locations, and we do an override of the GeoIP information for that list of Google servers, and for the list of OpenDNS servers.

Did you check what it takes to get onto that whitelist? Yeah, it's not that easy. They're like, email us at afasterinternet.com, but then they don't really get back to you.

And what are the TTLs you're sending out to the clients? Usually our TTL is about five minutes, because we want really fast failover. We have something like eight of these GDNSD servers, so we're not really worried about the DNS load. We'd rather have a high DNS load than a longer TTL where we serve up a dead IP for 10 minutes or something. We need it to be short because, you know, our load for video can spike by 10 gigabits in five minutes, so we need to be able to reposition that traffic very, very quickly. So it's not fun when certain ISPs impose a minimum TTL. Any other questions?

Do you want to list those ISPs? I don't have a list; if somebody does, I would like to know, so I could shoot them.

Does the stats web page also show statistics of the queries, or only of the balancing? Can you pull that up really quick? GDNSD... I don't remember what port it's on; it's a weird port. We're going to pull up the stats page really quick so you can see. But any other questions? Someone over here had a question.

So here's the stats page. You can see how many responses we had with no errors, how many we refused because they were silly, people looking up stuff that doesn't exist, and dropped packets. You see most of our queries, 13.6 million out of 17.8 million, support EDNS, which is the packets of up to 4096 bytes.
But we don't know how many of those are behind a firewall that's actually going to block it if we try to use it. Then we see the UDP versus TCP requests: you see only 120 TCP requests out of 17.8 million requests since the server rebooted 3.6 days ago. And then you see here's our monitoring. Certain servers are up, certain servers are down, and there's also an orange danger state, but mostly we're up right now. The danger state is a server that has had a failure, but not enough to actually be considered down, because sometimes you just have a small hiccup or whatever; we use a very short timeout, so sometimes that's all it is. Thank you very much.