[The opening of the recording is unintelligible.] I want users to not know that they're talking to multiple clusters. I want our users that are accessing our applications, whether that's internal applications on your intranet or something over the internet, to be completely ignorant of where it's running, and I want to make it as seamless an experience as possible, including stuff like failover, including stuff like how we deal with faults, and how we deal with stateful services. And we're going to go through it little by little. This talk actually came from a chat with someone who was learning Kubernetes. They were going, "I've got a website. It's just a little website, just one, and it's just some static pages. And I want to know, how do I let people access it?" That wasn't their voice, by the way. They've got a very lovely voice, not like mine anyway. I hate hearing my voice. I've got a website. OK, you've got a website, and you want to expose it? Yeah. Have you got anyone that wants to use it? Yeah, I've got one person. It was them. They wanted to use their own website. That's fine. And it was going to be public. So, OK, don't jump too far ahead: I've got to expose it over the internet. Brilliant. Have you got a domain? Yeah, I've got a domain. It's called example.com, because I can't tell you what it really was. And what address do you want? I'm going to put it at mysite.example.com. Oh, brilliant. OK, so what do I need to do now? Well, you've given them an address. And how do people find out where to send their data, their packets? Where do they send that stuff? How do they know? Through the magic of DNS. And I never really realised how exciting DNS was. But it is. It really is. There was an old talk, which some of you may remember if you've got grey hair like me, or not much hair like me, by a guy called Kris Buytaert. I always cite it, because you know when you're troubleshooting stuff, DNS is always the freaking problem.
I don't know if you know that phrase, but I've said it a lot in my life. It's a great phrase. But DNS also has some amazing abilities that we can use. So learning about DNS, something that is so old, from the annals of time, learning how to use it for your benefit rather than thinking about creating something new, is really where we're going to go. And we're going to use some real key principles of DNS through some of the projects when we get towards the end. But anyway, DNS. So this is their IP address. A client looks up the address book of the internet, the address book of networking, in DNS. There you go, that's the address. It knows where to send stuff. Cool. It's got that. User hits the website. Yes! This person hit their own website and they were very happy. The end. No. But then it got popular, because they told their family about it. And so they had more users, and then they were worried about scale. I mean, they had four users. They needed to worry about scale. So they deployed Kubernetes. Of course they did. It's just one website, but everyone needs Kubernetes, right? Of course you don't all need Kubernetes. Don't always think that. Anyway. And so they deployed more pods. Brilliant! But I can only route stuff to one of these. I've got one IP address. So I need to put a load balancer in front of it. So for my four-person website I need a load balancer. And through that I've changed the IP address. If you notice, subtle difference: 10.0.0.1 to 10.0.10.1. Clever. And so we haven't changed the DNS address. We've just changed where the packets are going. And this is the cool thing about DNS for a user. You know all this, and it seems really simple, but I like to appreciate the really basic things. It's really important to me to understand these underpinnings of how the whole internet works, of how all networking works. We take them for granted, and yet they're really important. So spend some time appreciating them. Anyway. Please. Yeah. So they've got a load balancer, and they haven't done anything different. They've just done that, and the user doesn't know anything different. For the user it's totally seamless. Cool. But it got so popular. And also, the reason they deployed four pods with their static website on was that they wanted redundancy as well. It wasn't just load. They wanted, if one died, for people to still be able to access their super... it was something like Fluffy Cats, I think, that kind of website. And so they put two load balancers in front of it. Brilliant. And so we've got two IP addresses. So now what happens? I've got one name, two IP addresses. OK, what does that mean for a client? How do they know where to send things? Well, they can use either, right? Yeah. They can use either, because the response back from DNS is saying both of these are valid. You can use either of these. All right, I don't mind. It's cool. And then they went global. They still only had four users, but, you know, two of them were in Europe and two of them were in North America. That fits with the story I'm going to tell; very, very clever what I've done there. And so they duplicated everything, right? They put up another load balancer, they brought up another Kubernetes cluster, they deployed their pods. They basically duplicated everything. So now they have four IP addresses. Whoa, OK. So I've got no idea where anyone's going, because they can go to any of these.
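Just to make that concrete: expressed as an external-dns DNSEndpoint (a CRD that comes up again later in the talk), the record set for that one name might look roughly like this. The second EU address and both US addresses are made up for illustration:

```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: mysite
spec:
  endpoints:
    - dnsName: mysite.example.com   # the one name users ever see
      recordType: A
      recordTTL: 60
      targets:                      # every one of these is a valid answer
        - 10.0.10.1                 # EU load balancer 1
        - 10.0.10.2                 # EU load balancer 2 (illustrative)
        - 10.0.20.1                 # US load balancer 1 (illustrative)
        - 10.0.20.2                 # US load balancer 2 (illustrative)
```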
They're all totally valid for a user to send stuff to. Right. Brilliant. Let's do that. If I can remember what window I'm going to. Oh, yeah. So I prefer live demos, but I've had to record this because Wi-Fi is always terrible at these things. So, yeah, that's me typing. I'm just doing curl against the endpoint; I won't type it out every time. I hope you can read that. Is it big enough at the back? You've got good eyes. I wouldn't be able to read it from up close; my eyes are old now. Anyway, so we've got this address, right? And you can see it says Envoy in there, because I'm using Envoy Gateway. So I'm not actually using Ingress behind the scenes. I'm using the Gateway API, which you've heard loads about this week, right? I'm sure. And you should. It's a brilliant project, so do investigate it. It doesn't make any real difference to what we're going to do here; I'm just saying, look at the Gateway API as a replacement for Ingress (there's a sketch of that setup at the end of this bit). It still gives you ingress. But anyway. So I'm just going to send 10 requests, and look what happens. Boom. Just as we thought. What I've done is, again, super clever: the stuff I've deployed in Europe says "I'm in Europe", and so on. Anyway, so you can see round robin. I've got four addresses, and the client is going to be sending to any one of those addresses. Round robin doesn't necessarily mean a strict rotation; each address has an equal chance over time. That's why you can see the order isn't strict. Now, I don't know if you noticed. I should have said before. Oh no, we'll see it in a minute. Oh, back to the slides. I'm very proud of that, by the way. Failure. Failure. So what happens then? I've got these four IP addresses and I've got one address to go to. God, better hurry up. I talk too much. Right. So what happens if I lose one of my sites? What's going to happen? Oh, by the way, I should have said this at the beginning: what I would do in your position, if I was sat there, is pick holes in everything I'm doing. Right? So I hope you are, and I hope you're coming up with different ways of doing the same thing, because this is just one way to do stuff. There are a million different ways. Anyway, we've lost the USA. The USA is gone. So we jump back to the demo. And what happens in the case of failure? What we'll do is scale down our EU deployment. So actually, I've done the opposite of the slides: I've destroyed Europe, not America. I'm British. We like to destroy Europe. I don't. I don't. Anyway, bad joke. So we're going to hit the same endpoint, and you know what's going to happen, right? You can predict it. Oh, look, some requests go through the USA, which I didn't destroy. They run through. And the other ones don't: we get 503s, gateway timeouts, whatever it is. Oh, pause there. Oh, no, it's scaling back up again. By the way, this is in real time. This is the awesomeness of Kubernetes. I know you all know it, but when you're spinning stuff up like this, that wasn't sped up or edited. That actually was real time. When we get later on to the more fun stuff, it's even more fun. I don't know, I can't explain it; I get excited by this stuff. Right, back to the slides. So we said we take out one of our zones. We don't have a clever way of handling it, right? Our users don't have access now to the services that were running there.
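Here's that sketch: roughly what the Envoy Gateway setup behind this demo could look like under the Gateway API. The GatewayClass name ("eg", as in the Envoy Gateway quickstarts), the Service name, and the ports are assumptions:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: eg
spec:
  gatewayClassName: eg        # Envoy Gateway's class (assumed name)
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: mysite
spec:
  parentRefs:
    - name: eg                # attach the route to the Gateway above
  hostnames:
    - mysite.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /          # everything goes to the static site
      backendRefs:
        - name: mysite        # assumed Service for the static pages
          port: 80
```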
And of course, there are things we could do. We've got multiple load balancers. We could scale horizontally. We've got more redundancy at our ingress level, and we've got more pods there anyway. But just bear with me; I'm trying to prove a very simple point. And then the other question is about stateful workloads. There are reasons, requirements, legislation, for why some of our workloads can only run in certain regions. There may be European laws saying data can only be stored in European data centres. But I still want that data to be accessible from America; I just can't store it there. So what does that mean? What does that mean with traditional ingress? So now what I've said is: in Europe, I've deployed static and stateful. This has gone way beyond this person's website now, by the way. So forget them; this is now mine. So what happens in this case? What I want to do, again a bit contrived, is hit this at /stateful. There are other things you could do, of course. You could set up a DNS entry to point to these new services. But that doesn't fit with my story. Anyway, we're going to hit it under /stateful. Oh, well, let's jump before we go there. Oh, I'm giving it away. So what happens when we hit this address? We hit a stateful workload that's only running in one cluster. You know what's going to happen. So you can see I've got /stateful at the end of that. And, of course, when the request goes to the place where it's deployed, I get "I'm stateful"; when it goes to the place where it's not, I get a 404. Easy, just what you expected. And so what can I do? We can introduce Istio. So I've gone through the really basic stuff, just simple ingress. It goes out nice and easy; we all get that. And now we can introduce Istio. That's quite a leap sometimes, but we're going to go there. Don't be too scared of Istio. Who's familiar with Istio? OK, a fair number. It can be a bit painful: the learning curve can be a bit painful, and sometimes deployment and troubleshooting can be a bit painful. But what it enables, the capabilities it gives you, are pretty amazing. A service mesh. I'm just using Istio here, just because; there's no particular reason. Right, so with Istio, what we've done is deploy the same pods. We've got the service. You know how Istio has these sidecars, these Envoy sidecars. We're not going to go through the architecture of Istio. Istio is the control plane for these Envoy sidecars; the data plane is the Envoy sidecar in every pod that has it injected. Or ambient mesh, again. I'm going to gloss over that because I don't know much about it, and we're going to stick with the sidecar one.
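As an aside, sidecar injection is usually switched on per namespace. A minimal sketch, with the namespace name assumed:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo                    # hypothetical namespace
  labels:
    istio-injection: enabled    # istiod injects an Envoy sidecar into
                                # every pod created in this namespace
```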
OK, so we've deployed an ingress gateway. Again, this is using the Gateway API, by the way. And you can see these things work the same, just as before. There's no difference right now. So if I'm hitting either one of those IP addresses, which are now tied to the ingress gateways on here (that's the middle boxes, the blue ones on the edge), it's exactly the same as before. It's being routed to the same cluster. And this is cool. This is great. But what's the benefit? Well, the benefit is two-fold, actually. So first of all, what happens with stateful workloads? With stateful workloads in the Istio world, you've got these pods, these services, deployed in one of my regions. If I'm going to the ingress gateway in the cluster where they're deployed, it just routes straight to them. Simple, no difference. However, then we go to the one where they're not deployed. And this is where the first benefit comes. Woo! When you hit an ingress gateway in your other cluster, Istio is clever enough to have configured Envoy to route that traffic, via mTLS, from your ingress gateway, which is effectively just an Envoy proxy, across to the east-west gateway, another Envoy proxy, which routes it to the stateful service. So as a user now, it doesn't matter. I can hit any one of those lovely four IP addresses. It doesn't matter. It's transparent. Woo! There are reasons why you might not want to do this, but it's just to prove a point again. Now, what happens in the case of failure? This is the second benefit we talked about: one was what we do on failure, and the second was how we handle stateful things, services not deployed in that same cluster. So what happens if we lose a stateful service, a stateless service, sorry, in one of our clusters? Well, just like with the stateful service: boom! Immediately, dynamically configured, Istio will route it over through the east-west gateway to the service in the other cluster. So I've got failover between clusters. I've got transparency; the user doesn't know any of this is happening. It's great. Really cool. And there's loads of different stuff you can do in here as well. Remember, Istio gives you so much flexibility in that layer. For example, the initial configuration I had in here was to route to the local service in the cluster first, and then, if that's not available, fail over to the other cluster. This is what you can do through destination rules, for example (there's a sketch of one just below). So I have the flexibility to configure whatever my requirements are. My requirement for this demo was to route to a cluster where it works: try locally to begin with, then route across to another one. That's what this demo was for.
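A hedged sketch of that kind of destination rule. The host, region names, and thresholds are assumptions, and note that Istio only acts on locality failover when outlier detection is also configured:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: prefer-local
spec:
  host: mysite.demo.svc.cluster.local   # assumed service host
  trafficPolicy:
    loadBalancer:
      localityLbSetting:
        enabled: true
        failover:               # serve from the local region first,
          - from: eu-west       # then spill over to the other one
            to: us-west         # (region names are assumptions)
    outlierDetection:           # required for locality failover to trigger
      consecutive5xxErrors: 3
      interval: 30s
      baseEjectionTime: 60s
```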
So we've sorted out stateful workloads. We've sorted out the ability to make that transparent to the user. And we've sorted out some failover. Yeah. So that's fine. But in this one I was talking about taking out the stateless service. That's easy: the entry point for the user, that ingress gateway, is still there. So from a user perspective, because of that routing logic, they don't know. But what happens if we lose a whole cluster? Or we just lose the ingress gateway? What happens to the user at that point? Our services might be running fine, but our ingress gateway may have gone down. Istio can't really help now. Of course you can scale it, you can put redundancy in, but say there's some catastrophic network failure. That's where K8GB comes to the rescue. But before we go there, we're going to jump back, because I did spend time doing these recordings, and I think you should watch them. They may not be very good, but it's fun. I really enjoy it. If you want to know about the tooling I used for this, I found it recently and it's brilliant; come and ask me about that as much as about any of the other stuff. Istio ingress global endpoint. It's just a slightly different address: I used a different DNS address so I could show it in a demo, Istio rather than Envoy. Exactly the same as before in this case, because everything's up and running. We get that same round robin, although it's skewed towards Europe; there's no particular reason for that. I recorded this in Europe, by the way; I was going to mention that. Let's see what happens if we scale down our EU deployment. Everything's going to keep working, right? Because I recorded it before. But look at the latency. Did you see that? It doesn't seem much, but some of you work in areas where latency is important. When we go a little bit further on, you'll understand why some of the things we're going to do with our load balancing, with our global server load balancing, are because of latency. That's just from the UK to Oregon, which I think this was running in. So it's not far, really, but it still takes time. Latency is an issue, right? So we saw that. But it totally works for a user. The transparency for the user: they don't know that the service in Europe has gone down. They have a little bit more latency, but they'll survive. Maybe. Here, see the latency difference between the European ones and the USA ones. You'll be blown away by it. Do you see that? That's recorded in real time, so it does show a difference. I just found that interesting. I find too many things interesting; that's why I don't sleep very well. What about stateful workloads? Same as we said; we've gone through the slide, so we know what's going to happen. Everything works. This is the same setup: those four IP addresses to those four load balancers, two in Europe, two in America. From a user's perspective, everything is brilliant. And so, the next bit. OK, yeah. So why don't we lose a region completely? We'll scale down our Istio gateway, that ingress gateway, and then we'll hit it again. And we know what's going to happen. I love spoiling stories; don't watch a film with me. And we can see that when it's trying to hit the European endpoint, there's a timeout, because our ingress gateway is gone. So from a user's perspective, it's fine when our services go down inside our clusters, but it's not OK when the entry point goes down. So we need to do something about that. Oh, yeah. That. Right, back to the slides. OK, so K8GB. What is K8GB? Who's heard of K8GB? I don't even know if I'm saying it right. K8GB, KGB. OK, so some of you. Who's heard of global server load balancing? More of you. Brilliant. Who uses global server load balancing? A few, but, you know. OK, so that's an interesting one. Everything I'm talking about here, by the way: some of you will be exposing services over the internet, some of you will be on your own intranets, your own networks, and everything we're doing today applies equally. It does not matter. Anything we do can apply equally in your own environment and over the internet. So, K8GB. It's "a cloud native Kubernetes global balancer". I think they should have put "load" in there, but that's what they say, so I'm just quoting them. We know what cloud native means. One of the key parts of that is how responsive it is; one of the things about cloud native is being responsive to change and to recovery. Right. We'll show that in a minute. Kubernetes is an interesting part of this. Why? Because it's everywhere. Everyone wants to run their stuff on Kubernetes. There are loads more global load balancers out there in the wild. There are proprietary ones, there are hardware ones, there are software ones, there are SaaS solutions. There are loads of different ones. The fact this one runs on Kubernetes is a real benefit. I love the Kubernetes ecosystem, and it means we can run it ourselves really easily. We'll go through it. That's one of the big things: not being reliant on other services. If you can run stuff yourself, and there's low maintenance, low overhead to that, then awesome.
So, it does it via DNS load balancing. What does that mean? It's going to configure DNS appropriately for the user, depending on which strategies you want to employ. As a user, I look stuff up. Remember what I said before: I look up a name, and I get back IP addresses, any one of which is valid. What we're doing with K8GB is looking up the DNS name and only returning IP addresses that are valid for that user. Those validity rules are becoming richer, shall we say. There's a lot more richness that needs to happen there; to be honest, right now there are a few rules. We'll go through them in a minute. It's using DNS to load balance, and we can only really say "load balancing" in the sense that it's directing traffic to the endpoints you want to balance across. It's not doing anything clever connection-wise or anything like that; it's purely at the DNS level. But that is pretty good, as we'll see. There's no single point of failure. This is another part of the cloud native bit: single points of failure are bad, and scaling, making it highly available, is a really important part. This scales across clusters, which we'll show again in a minute. There is no single point of failure. It uses Kubernetes-native health checks. The way most other global load balancers work is they do pings, or TCP or HTTP checks, from outside the cluster. That happens on a polling interval. It can be slow, it can be unreactive, and there are problems with it. It's not perfect. I think that approach is useful in conjunction with something like this. K8GB is, at the moment, based only on Kubernetes-native health checks. And when I say Kubernetes-native health checks, I mean probes. Readiness probes, really, are the main ones. You know how with readiness probes there can be checks inside your application: the kubelet checks every so often, and if the probes fail within a certain timeout and failure threshold, then your pod is taken out of service. No traffic will get routed to it. K8GB relies on those readiness checks. That makes it more responsive, for one thing, because we're running these in-cluster. It also makes it more granular, because the checks are totally up to you. It's up to you how you write your readiness probes. You can write them as granular as you want for your specific application, rather than as coarse as "if I hit that, do I get a 200?". This is the difference: you have the ability to make granular checks, because of what Kubernetes provides and K8GB consumes.
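So the signal K8GB keys off is just an ordinary readiness probe. A minimal sketch of a Deployment carrying one; the image, path, port, and timings are purely illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysite
  namespace: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mysite
  template:
    metadata:
      labels:
        app: mysite
    spec:
      containers:
        - name: mysite
          image: ghcr.io/example/mysite:latest   # hypothetical image
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz        # as app-specific as you want to make it
              port: 8080
            periodSeconds: 5
            failureThreshold: 3     # three misses and the pod drops out of
                                    # the Service, the signal k8gb reacts to
```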
So how do you configure it? It really is either one CRD or annotations. So, here we go. One sec, let me grab a drink while you're staring, in wonder, at YAML. I wish there was something better than YAML for this, but anyway. So, we can see we've got the CRD: there's the API group and version. I love the Kubernetes API. It's just so great: you know what you're looking at, you know what spec means. I haven't got status in here, but you know what spec means. This is what you want. Again, there's some work going on upstream: this uses Ingress, it doesn't use the Gateway API yet. It will do, but not yet. So you'll effectively declare your ingress, and K8GB creates that Ingress object for you and manages it, at this point. And there are some bits down the bottom around the strategy. So there's the ingress part, the declaration for your ingress: what host do you want stuff to come in on, what should we recognise, where do I want to route stuff, path prefix routes, all the kinds of stuff that you get with Ingress, normal stuff. And then there's the strategy. Down the bottom, you can see DNS TTL, split brain, and type. We'll talk about type and TTL in a minute. Actually, TTL is such a funny one, and it's a killer, especially over the internet, so we'll talk about it now. TTL is a timeout, right? DNS is quite chatty. You don't have to use DNS, but we're humans; we want something we can refer to easily. And because it's such a key part of all networking, it needs to be protected, and that protection is caching. There's a hierarchy of resolvers: I send my request to my local resolver (I've actually got one running on my laptop; I'm using systemd-resolved, don't kill me), which routes it to other DNS servers, which route it to other DNS servers. It's a hierarchy. The TTL, the time to live, is telling those caching DNS servers how long the answer from the authoritative source, the actual root source of this record, should be held before it expires. I can set this to 5. That's great if you're running in your own environment, where you control all the caching resolvers along that chain. If you're going over the internet, one thing you learn is that you've got no control over TTL. It's a suggestion. You can't say "I want 5 seconds", because going through Google you'll have a minimum of 60 seconds, and another one (I can't remember which; I found it the other day) was 300 seconds. So much for total freedom over that on the internet. You do have to be aware of it. One thing to bear in mind. And type: round robin in this case. What we're going to do with the initial one is just the same thing as we had before. You can also do it through annotations on Ingress. It depends what you like. I much prefer using an API; I much prefer explicitness over annotations, personally. Either works.
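Pulling that walkthrough together, and going from memory of the k8gb docs rather than the exact slide, the resource looks something along these lines; the name, namespace, host, and backing Service are stand-ins:

```yaml
apiVersion: k8gb.absa.oss/v1beta1
kind: Gslb
metadata:
  name: mysite-gslb            # hypothetical name
  namespace: demo
spec:
  ingress:                     # k8gb creates and manages this Ingress for you
    rules:
      - host: mysite.example.com
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: mysite     # assumed backing Service
                  port:
                    number: 80
  strategy:
    type: roundRobin               # other strategies: failover, geoip
    dnsTtlSeconds: 30              # only a suggestion once you're out
                                   # on the public internet
    splitBrainThresholdSeconds: 300
```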
This is the secret sauce that's going on. There are a few components in here, just to go through. The CoreDNS, the K8GB controller, and the external-dns are all part of the K8GB deployment. This CoreDNS is not your kube-dns; it's not your in-cluster DNS server. This is another one, and there's a reason for that, which we'll get to in a second. The K8GB controller watches that Gslb resource. External-dns we'll talk about in one second. What I love about this is the composing of projects. Each of these projects has its own purpose, tightly controlled, with a tight scope. Composing functionality from existing applications is a great thing. We don't need to build the world; use what's there, especially in the CNCF ecosystem. So the K8GB controller watches the CoreDNS ingress and gets the address for it, because it exposes CoreDNS as the authoritative DNS server for the zone, the delegated zone. Those may be unfamiliar terms, but delegation is again the key part; DNS works like this from the very root. The K8GB controller reads the ingress for the CoreDNS: what's the address, what's the IP address that external clients will come in to? It writes some DNS endpoints (that's part of the external-dns API) to the Kubernetes API. External-dns is another project; you may already be using it. For those of you that don't know what external-dns is, it's a way of using CRDs to tell external-dns to go and manipulate your actual DNS entries. It works with most DNS providers: the cloud providers, or your own, or BIND, or whatever. This is another deployment of it; if you're already running external-dns, this can work alongside that. So K8GB writes these DNS endpoints to the API server, and then external-dns reads those and writes some information to your DNS provider. That information is the name servers: the addresses of these CoreDNS pods, or rather the CoreDNS ingress. It writes those as the name servers for your delegated zone. A delegated zone in DNS works like this: when you look up a DNS entry, it asks your resolvers, and your resolvers then know where to delegate that to. This goes all the way back to the .com domain, or any of the TLDs; it goes back to that to know which DNS servers to delegate the request to. What we're doing here is setting up our own zone delegation to our own CoreDNS. The second thing that K8GB does is read healthy endpoints. We talked about that: it uses the readiness probes. It reads those and writes more DNS endpoints to the API server, but with a different label, a different annotation. External-dns ignores those, but CoreDNS, with a custom plugin, doesn't ignore them. That CoreDNS plugin reads these DNS endpoints and serves them. So users look stuff up just like normal, totally transparently, and that lookup, the final lookup, is actually delegated to the CoreDNS part of the K8GB deployment. All this change in your external DNS, setting up this delegated zone, is done through these pieces. All you've done is set up your Gslb resource, and some values when you deployed K8GB, and that's it. Before we jump there, back to my amazing demo. There's your Gslb. There are your endpoints. So there are a few things in here. These are the DNS endpoints for your zone delegation, for the CoreDNS ingress. You can see this initial one. I want to point with my finger, but you can't see that; if I wiggle my mouse, can you see that? This is setting up the name servers for your delegated zone, which in this case is that one. And it uses some naming conventions in here so that, with no single point of failure, both our deployment of K8GB in the EU and our deployment in the US are doing the same thing. They're both trying to write these addresses to our DNS provider, and they each have one that's different, which is the local ingress endpoint. Each one of our K8GB external-dns deployments updates the name servers for our delegated zone, and the glue records, which help DNS know where to route requests for our delegated name servers. And we just show the same in the other one: you've got different addresses, different targets at the bottom, depending on which one we're in. We'll skip through this quickly. So these are the actual endpoints. These things get updated on the fly. I'm just showing that it's the same in both regions. And then loads of stuff here, because I wanted to show you one thing. This is in round-robin mode. We're currently running a stupidly long command on an Amazon machine in Europe, and look: round robin. Other load balancing strategies I'll gloss over quickly, but the one we're going to look at is GeoIP, because we've nearly run out of time. It's fine, though; he said I could overrun a little bit. So we just patch it: we change this value to geoip. It uses the MaxMind database format. And so, if we're in Europe... now, this is the first bit of cleverness from K8GB; we got there in the end, right at the end. If I'm in Europe, oh look, I'm only sending stuff to Europe, because of DNS load balancing. So I've chosen a strategy which says: send clients to their closest endpoints.
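That patch boils down to one field in the Gslb spec. As a merge-patch body (something you could feed to kubectl patch --type merge, flags omitted here), it's roughly:

```yaml
# Merge-patch body for the Gslb resource sketched earlier.
spec:
  strategy:
    type: geoip   # was roundRobin; geoip answers with the endpoints
                  # closest to the client, using MaxMind GeoIP data
```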
If you want to find out more about MaxMind's GeoIP database format, it's really clever. It's not, really, but it is. And if I'm in the US, then I get routed to the US, obviously. It's clever. And that's just it shining. So we're done. I could keep talking about it. Thank you. If you want to run through the demo, it's at the bottom; there's the GitHub repo, and you can see all the stuff in there. Does anyone have any questions? You've probably got a couple of minutes, because I've talked too much. Do you have any questions? Yeah, you. Oh, it's a good one: would it be better to use anycast? Would it be easier to is a different question. Setting up your own anycast implementation is actually pretty tricky, for my demo anyway. And there are other requirements around that: the clients have to understand anycast as well. Come and talk to me afterwards. Your perspective is from the... oh sorry, hello, where's it coming from? "So your perspective is from the internet user accessing your service. But what if my application is running inside the service mesh, so the call is coming from inside the cluster? Are you able to solve the same issue?" For things running inside the cluster, I don't think you need to. I think your service mesh will take care of that for you. There may be plugins for your service mesh to do stuff like routing to local services. But this is all about your ingress, all about external access to your services. This is the layer that K8GB provides on top. "So it's not able to route from the inside? Let's say my backend fails; you're not able to route to another cluster's backend?" Um... no. No. Come and have a chat afterwards if you want. Anything else, very quickly? It's been brilliant. Thank you very much. I've really enjoyed it. Take care. Enjoy the rest of your week.