Hey everyone, welcome to another episode of Wired for Hybrid. Today we are joined by Maheep Diora. He's the product manager for Azure Load Balancer. So let's get right into it. Yeah, thanks, Pierre. And hi, everyone. I'm super grateful and excited to be here today, and I'm excited to do a deep dive on Azure Cross-Region Load Balancer, which is Azure Load Balancer's global-tier load balancing solution. So today I'd like to talk about the components involved with Azure Cross-Region Load Balancer, some of the scenarios it's really suited for and what we've seen our customers use it for, and then a short demo to really show the capabilities and power of Azure Cross-Region Load Balancer. Okay, Maheep, before we get started. Yeah. What is the main, the overall 5,000-foot, high-level difference between your traditional load balancer and your cross-region load balancer? Yeah, great question. The one-liner I'd give, at the very highest level, is that the standard Azure Load Balancer today is tied to one region. So you can only deploy a standard load balancer in East US or West Europe, right? Azure Cross-Region Load Balancer allows you to have multiple regions in the backend through a single endpoint. So you can have an application that's globally present that's all tied through a single load balancer. That is the main real difference. Okay, and that's very different than something like Front Door or other kinds of content delivery networks, correct? Yeah, that's correct, and that's a great question. The biggest difference is that Azure Front Door operates at the layer seven level, so it's really optimized for web applications and web traffic, right? And the other big point about Front Door is that it operates at the edge. I believe there are 186 edge sites that Microsoft utilizes. Azure Cross-Region Load Balancer, on the other hand, is really optimized for layer four traffic.
So customers that really require ultra-low latency for their applications. Okay. With Azure Cross-Region Load Balancer, the way we do our routing is also at the data center or regional level. Okay, so it's not like in the past where we had load balancers in every region and then we'd have to put another layer on top to make it go from one region to another. Actually, that is very similar to how Azure Cross-Region Load Balancer works. I do have a slide that shows the topology of how Azure Cross-Region Load Balancer works and how it interacts with the other components. Okay. Yeah, run us right through this. Yeah, so as we can see here, there are a few reasons why customers choose Azure Cross-Region Load Balancer. We talked about the layer four traffic, so TCP and UDP load balancing. It's also important to note that Azure Cross-Region Load Balancer is a transparent or pass-through load balancer. Any traffic that comes in isn't really touched by the cross-region load balancer. It's not inspected, no additional markup is done, which really contributes to its ultra-low latency, because as traffic comes in, it's immediately sent to its next destination. And to answer your question about the topology, the way that Azure Cross-Region Load Balancer works is that you have your regional load balancers that are deployed in each region. So we could say this first one is in East US, this next one is in West Europe, and this last one is in Southeast Asia, right? And in each region, you need to deploy a standard regional load balancer. But where cross-region load balancer comes into play is that you can then link all three of those load balancers to your cross-region load balancer. And this is what I meant by getting a single endpoint, right? All your customers need to do is send traffic to this one endpoint for the cross-region load balancer.
And then cross-region load balancer will direct traffic to the appropriate backend region and regional load balancer. Okay, yeah, that makes sense. Yeah. Maybe it's not related, and you can tell me whether or not I'm off base on this one, but considering that you mentioned that none of the traffic is touched or scanned or anything like that, so basically everything coming in gets routed to the other end: where would you put the firewall? In front of the regional or behind it? So that is one component. I guess you're asking about gateway load balancer support? Well, in infrastructure, in terms of security, securing the environment, I never expose anything from a data center or anything internal to the internet without putting a firewall in front of it. So in this case, where would we put that? And it may not be part of this conversation normally, but is this something that customers are asking? Yeah, so I will say this: the way that firewall and some of the other services customers are using in tandem with Azure Load Balancer today is that if it works at the regional level, all you need to do is chain it to the cross-region load balancer. Okay, perfect. All right, sorry, I didn't mean to take you off track. Oh no, you're good. Now we understand what the overall cross-region architecture looks like. What are all of the components, and how does that work? Yeah, that's another fantastic question, right? So a few of the components are related to Azure Cross-Region Load Balancer, and the main one is our global anycast IP address, right? The way that Azure Cross-Region Load Balancer works is we give customers a single endpoint for their global application. And this IP address is then advertised across multiple Azure regions across the globe, which gives customers a global presence for their IP address.
And the way your traffic is routed is a very big question customers always ask me: is Azure Cross-Region Load Balancer just deployed in one region, and does all my global traffic go there and then get distributed across the globe? The way I like to describe it to customers is that there's a little bit of abstraction there. The global load balancer works in tandem with all of our data centers around the world. And I do have another slide that really talks about how the components of Azure Cross-Region Load Balancer work and how our routing is done. So there are two big components for Azure Cross-Region Load Balancer: home regions and participating regions. When you deploy an Azure cross-region load balancer, you have to specify a region, East US, West Europe, and so on. This is where your Azure cross-region load balancer is deployed, but it's important to note that traffic will not necessarily pass through this region. I'll get into the reasons why and how the routing works, but this is where control plane operations reside. So anytime you're doing an update to your load balancer, adding more backends, removing some, all of that will be sent to this region. The other component is participating regions. These are around 10-plus Azure regions scattered throughout the globe that are advertising the global IP address. And so when you send traffic to a cross-region load balancer, your traffic is first routed to the closest participating region to you, and then it is forwarded to the closest regional load balancer to that participating region, if that participating region doesn't contain a regional load balancer itself. And before I get into the demo of how the routing works, I'd like to make one callout.
The backend regional load balancers that you can link to the cross-region load balancer aren't limited to the participating regions. Every Azure region is supported for the backend pool of the cross-region load balancer, but we only have a select few regions that are actually advertising this IP address. Okay. So can we think of it in these terms, let me see if I understand it properly: your home region, that central region, is basically just where the config sits. Correct. And then all of the participating regions are basically advertising that config. Correct. And the data path, or the way that data is routed and traffic patterns, is all through participating regions, right? Okay. So that's why I mentioned that traffic may not necessarily go through the home region, just based on where the users are sending traffic from, right? If all your users are in Asia, but your home region is in the United States, then all the traffic is really gonna get routed by our regions in the Asia or European area, and none of the traffic is really gonna hit anything in the Americas, just because there are no users there in that example. Okay. Because when I was looking at your architecture diagram on the last slide, it's not like we're deploying that one cross-region load balancer and then all traffic goes through that before it goes to the others. Correct. Yeah, that's a great point, right? It's a little bit abstracted away. We're working with the data centers across Azure's infrastructure to really make sure we have a global presence. Okay. Okay, perfect. Because for me, when I first started looking at the documentation, I found those diagrams, for those of us who've been in the game for a long time and are kind of new to cross-region, I'm looking at it going, okay, so now I've got one somewhere that will route across regions, but I still have to manage that one, but that one is basically abstracted. Right.
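To make the two-hop routing concrete, here's a minimal Python sketch. This is purely illustrative: the region names, coordinates, and distance math are invented for the example, and Azure's real geo-proximity algorithm is internal to the platform. The idea is that traffic first lands on the participating region closest to the client, and that participating region then forwards to the closest regional load balancer in the backend pool.

```python
import math

# Hypothetical (longitude, latitude) positions, for illustration only.
PARTICIPATING_REGIONS = {
    "centralus": (-93.6, 41.6),
    "westeurope": (4.9, 52.4),
    "southeastasia": (103.8, 1.4),
}

# Regional load balancers linked to the cross-region LB's backend pool.
REGIONAL_LBS = {
    "eastus": (-79.8, 36.7),
    "westeurope": (4.9, 52.4),
    "southeastasia": (103.8, 1.4),
}

def closest(point, candidates):
    """Name of the candidate nearest to point (flat distance as a stand-in)."""
    return min(candidates, key=lambda name: math.dist(point, candidates[name]))

def route(client_location):
    """Two hops: client -> nearest participating region -> nearest regional LB."""
    entry = closest(client_location, PARTICIPATING_REGIONS)
    backend = closest(PARTICIPATING_REGIONS[entry], REGIONAL_LBS)
    return entry, backend

# A client in Mexico enters via Central US and is forwarded to East US.
print(route((-99.1, 19.4)))  # ('centralus', 'eastus')
```

Note that the second hop measures distance from the participating region, not from the client, which matches the description above: the entry region asks "what is the closest backend load balancer to me?"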
I mean, when you talk about manage, right? You still have to make sure that the configs are set up correctly and you are doing the normal maintenance and checks that you would with a regular load balancer, but it's not a single point of failure. It's not that if that one cross-region load balancer region goes down, it brings everything else down, which is why we have these 10-plus regions that are advertising the IP address. In case one of the regions goes down, something goes wrong, we have nine other regions that are still advertising your IP address, and they will kick in as soon as the primary fails. That's fantastic. Okay, perfect. Yeah. Sorry, I didn't mean to cut you off again. Oh no, no worries. So just going in to show a quick demo of how the participating region works. Let's say our user right now is in Mexico, right? And our application and backend load balancer is located in the Pacific Northwest. The way the routing would work is your user will ping the IP address of the cross-region load balancer and will be routed to the closest participating region to them. Let's say that's Central US for us right now. So traffic will go there, and then this participating region will say, okay, what is the closest backend load balancer to me? And it will say, okay, it's the one in the Pacific Northwest, and it will send that traffic there. And it's important to note that the return traffic from your application back to the source will go direct. It won't take the same path as the inbound. So as we can see, the participating region is really where your traffic is sent, and then it is forwarded to the closest load balancer to it. Okay. Yeah. And this is where one of the other components of cross-region load balancer comes in, which is our geo-proximity algorithm.
One of the key components I touched on before is that Azure Cross-Region Load Balancer is really optimized for ultra-low-latency networking. And part of the way we achieve that is through our geo-proximity algorithm, which really figures out, based on where the client is located, what is the closest participating region and closest regional load balancer to them. Azure will try to route that traffic to that closest endpoint to really make sure we're reducing the latency that clients are facing, making sure that traffic takes the shortest path possible, to ensure that customers aren't feeling any latency and those really latency-sensitive applications are performing at their optimal requirements. Okay. Once it hits that advertising or participating region and then it gets forwarded, in this case to West US, is the traffic after that all on the Azure backbone, or does it go back out over the internet? Yeah, so if we talk about step one, right? The customer is gonna get into the Microsoft backbone network somewhere in step one, wherever the closest entry point is. From one, two, and then part of three, they're all on the Microsoft backbone. At step three, at some point it's gonna exit out and be routed back to the customer, but at one and a half and two, that's all done on the Microsoft backbone. Okay. So ultra fast, ultra secure. Correct. Perfect. Yeah. When you talked about a low-latency load balancer, so if we're looking at people that are across the pond, or in a global setup, if you will: your algorithm, your latency algorithm, does it use probes? How does it decide where the lowest latency is? Yeah, that's a great question. So the way that Azure Load Balancer, on top of its geo-proximity algorithm, knows which are the correct load balancers to send traffic to is we probe each of the regional load balancers every five seconds, asking them, hey, are you good? What's your health status?
Are you able to receive traffic? All of these regional load balancers will then probe back, saying yes or no. And that is then fed into the routing algorithm, right? As soon as a region says, I'm not good, I'm not healthy, don't send me any new traffic, cross-region load balancer will understand that and automatically route any new traffic to some of the other regional load balancers. Which is really where the disaster recovery and high availability scenarios that we built this feature for kick in, because as soon as something is unhealthy, all traffic is rerouted to the backups, the other regional load balancers, which really ensures that applications are highly available. They're not prone to a regional outage or any issues in that region, right? They're abstracted away from that, and they can kind of protect themselves by having a global presence. Okay, and you kind of alluded to something here. What workloads are the best candidates to take advantage of this technology? Is it IoT devices that are sending data back to the mothership, or is it some kind of web application, or is it database replication? What's your view on the biggest target application for this type of load balancing? Yeah, you touched on it perfectly, right? We've seen certain scenarios where this feature is being used heavily by Azure's customers, scenarios where we've really seen positive feedback from customers. So the two big scenarios that I would talk about are customers that want high availability for their application across a global presence, and those customers that also want disaster recovery capabilities. And both of these customers are really looking for a low-latency solution, right?
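The probe-driven failover described above can be sketched as a simple filter on the routing decision: unhealthy regions are removed from consideration before the closest backend is picked. Again this is an illustrative model only; region names, coordinates, and the `probe` helper are invented here, and the real five-second probing happens inside the Azure platform.

```python
import math

# Illustrative backend regional LBs with hypothetical positions and health state.
BACKENDS = {
    "eastus": {"loc": (-79.8, 36.7), "healthy": True},
    "westeurope": {"loc": (4.9, 52.4), "healthy": True},
    "southeastasia": {"loc": (103.8, 1.4), "healthy": True},
}

def probe(region, healthy):
    """Stand-in for the platform's five-second health probe result."""
    BACKENDS[region]["healthy"] = healthy

def pick_backend(client_location):
    """Route new traffic to the closest *healthy* regional load balancer."""
    healthy = {r: b["loc"] for r, b in BACKENDS.items() if b["healthy"]}
    if not healthy:
        raise RuntimeError("no healthy backends")
    return min(healthy, key=lambda r: math.dist(client_location, healthy[r]))

client = (-77.0, 38.9)               # a client near the US East Coast
print(pick_backend(client))          # eastus while it is healthy
probe("eastus", healthy=False)       # a probe reports East US unhealthy
print(pick_backend(client))          # new traffic fails over to the next closest region
```

The key property this models is the one called out in the conversation: nothing about the client changes on failover; only the set of healthy candidates behind the single endpoint shrinks and grows as probe results come in.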
Ultra fast, layer four, they don't need some of these, you know, header-specific routing capabilities, SSL offloading, right? For that, as I think we mentioned before, Azure Front Door is a better option for them because you get those capabilities at the edge. But Azure Load Balancer, you know, it's fast, it's transparent, everything is just sent to the backend as fast as possible. And you did mention IoT, right? It's actually one of the use cases we've seen a customer utilize Azure Cross-Region Load Balancer for. So in this scenario right here on the slide, the customer was a small IoT customer. They had a limited number of Azure deployments, but they were scaling up their application as well as the number of IoT devices that were gonna send traffic back to Azure. And the issue that they were facing was that, you know, as they're popping up IoT devices throughout the globe with a limited number of Azure deployments, for cost reasons and for optimization, they noticed that they were facing high latency. For example, if there's an IoT device here in Europe, right? And it's configured with the IP address of the Pacific Northwest application again, right? That's an incredibly long transit that that packet or traffic has to take, right? And then let's say they spun up something new here in Africa. Then the company has to go in themselves and reconfigure their IoT devices to point to this new Azure deployment, which is an incredibly manual process, right? And it's often tedious. And then I'd like to also throw out the case where, let's say, something happens to this region in Africa, right? All of their IoT devices that were configured to it are now sending traffic to a region or an application that may be unhealthy or may not exist anymore, right? So then that customer's losing minutes or maybe hours' worth of crucial data that they need.
So what they came to Azure and asked for is, can we get a way to link all of our load balancers together to ensure that all of our IoT devices have a single entryway into our application, but then it's routed to the correct load balancer? So what we ended up helping them do was roll out Azure Cross-Region Load Balancer. And so now what they do is they have all of their IoT devices point to this global anycast IP address. And then when the IoT device sends traffic, they first hit the cross-region load balancer. So as we talked about before, they're gonna hit the closest participating region to them. And then using our routing algorithm, the geo-proximity algorithm, we forward the traffic to the closest regional load balancer to them. So in this case, Azure Cross-Region Load Balancer would recognize, okay, this load balancer here in Africa is the closest one to my user, so let's send the traffic there. And then likewise, the return traffic goes directly to the customer. So in this way, the customer now avoids long-latency traffic. They're avoiding this whole transatlantic traffic pattern. They're sending traffic to the closest regional load balancer to them. But the other feature, right? Let's say something happens to this load balancer here in Africa. As long as the IoT device is still pinging this one cross-region load balancer IP, it'll be forwarded to this backup region here, right? While they would still face the longer latency, their application isn't completely down. They're still going to be able to preserve that data that otherwise would have been lost if they still used their two regional deployments. Okay, that brings up a point for me, because I've dealt with enterprises and government entities in the past that have compliance issues with storing data outside a specific region. So if your customer is in Europe, your data has to stay in Europe.
If your customer is in North America, it has to stay in North America, or whatever that may be. Is there a rule in the routing that we can set for that? Or is that something completely different, more in the lines of a Traffic Manager and Front Door type deal? Yeah, that's a great question. And as of right now, there is no routing rule that prohibits traffic from hitting a certain region or not. As you mentioned, that is something that I believe AFD or one of the other layer seven load balancers might provide. Azure Load Balancer, as mentioned, is transparent, right? So as the packets come in, from a load balancer perspective, we have no idea what the payload contains. We just forward it correctly. Okay, but if what you're trying to do is segregate data by geography, this is not the tool. This is to provide ultra-fast, unhindered traffic from your customers, your stakeholders, to whatever is the best endpoint for them to connect to and get their data in. Correct, right? If there are those specific requirements, as you mentioned, then this product might not be the best suited for them. This is for customers that really have a global presence for their application and are okay with traffic hitting different geos as long as it meets their requirements. Perfect. And you mentioned that this was for high availability and disaster recovery. Correct. In the past, I've personally stood on stage at Ignite, for example, and said that high availability is not disaster recovery. There are portions that kind of cut over to both processes, but if you're designing for high availability or you're designing for disaster recovery, there are really some things that you need to think about differently. How do you view this? Yeah, that's a great question. And I agree, right?
There are nuances of both that are very different, and there are characteristics that you need to consider if you're designing for availability versus disaster recovery, right? But I believe there is a good enough overlap between the two that Azure Cross-Region Load Balancer covers that overlap, right? There's fine-tuning that you can do as a customer when you're setting it up to, you know, gear more towards high availability or gear more towards disaster recovery. But from a cross-region load balancer point of view, we provide both high availability and disaster recovery to those customers. Because with high availability, the amount of data that you can lose in a failure is basically close to zero. Yeah. So if this server is not there, the load balancer is gonna send it to that server, and then your transaction still goes through and you don't lose any data. Disaster recovery takes into consideration RTOs and RPOs, recovery time objectives and recovery point objectives, in terms of how much data are you willing to lose? Correct. So how does that work? Yeah, so in terms of cross-region load balancer, I believe it's more geared towards that high availability, where if one region goes down, your application as a whole isn't lost, right? You're automatically routed to the next available region, as I mentioned in that slide, right? Where if the region in Africa goes down, your application, your IoT device, isn't sending stale data anymore. It's gonna be routed to that load balancer in the Pacific Northwest. But there's no actual service failover type within the cross-region load balancer? Well, I guess, let me ask you, do you mean service types within the application? Yeah. So that isn't available, right? What we provide is regional disaster recovery. So let's say a region goes down, or the entire application within that region is down, which is causing health probes to report unhealthy.
Then we mark that region as down and move traffic over to the next available region. So we're really providing regional redundancy. Perfect. So, Maheep, we've been talking about this abstracted load balancer, the participating regions, and you've mentioned IP addresses. So I assume there's a static IP address that's assigned there. How does that work, and who owns it? Can I use one of my IP addresses, or is it one that's provided by Azure? How does this all work? Yeah, great question. So the static IP address that you're gonna configure with Azure Cross-Region Load Balancer, that's yours. You configure it, you own it. As long as it's attached to an Azure cross-region load balancer, or even just in your subscription, it's yours, and it will never change, right? Okay. And the way that you provision it is the same way that you would provision any IP address right now in Azure. All you need to do is specify that it is a global IP address. That tells Azure that we need to advertise this from those 10-plus regions that I talked about before, but that is the only real difference between the standard SKU IP address and our global SKU. It's important to note that today you can't upgrade a standard SKU IP address to a global one. As of right now, any global IP addresses have to be a net new deployment. Okay. So it's not like you have an existing one and you say, I'm just gonna reuse this one because it's already configured somewhere else. Correct, right. That scenario isn't supported at the current moment. Okay, perfect. That static IP address, how does it scale? Yeah, Pierre, that's a great question, scaling, right? How do you scale behind a single endpoint? I'd like to go back to a slide to really showcase how the scaling behind a cross-region LB works today and how it's all done through our single global anycast endpoint. So going back into scenario mode, right?
This customer is an automotive customer with a large global customer base. They have customers all over the globe, and they have Azure deployments around the globe to really ensure they're meeting the low-latency requirements for their customers. And as different geos of users pop up around the globe, this customer is spinning up new Azure regions to, again, ensure their low-latency requirements are met as their business scales. But they've noticed a few challenges with their current scale-up plan. As users start popping up in different geos and demand grows, the customer has to add new Azure regions or additional deployments within a region to meet the demand. And this is creating IP management challenges, really due to the overhead, right? So if we look at our map here, for each of the different load balancer deployments or Azure deployments, a user is configured to that IP address. So the user in North America is pointed to the load balancer in North America. Same with the user in the Middle East, right? They're pointed to the Azure deployment in the Middle East. And that's the way they're meeting their low-latency requirement: they're manually making sure that customers are pointed to the correct Azure deployment to ensure low latency. But you can imagine, as this is scaling up, this is a bunch of IP addresses that the customer has to manage. They then also have to make sure that the customer is pointing to the correct IP address, and it's causing a burden for them. And on top of that IP management burden, there's the case where there's an issue in an Azure deployment: it could be an application issue, it could be an infrastructure issue. Any sort of issue at a regional level can impact customers. So let's say in Europe, again, there was an issue with the application. Something caused this Azure deployment to go unhealthy. Now the user in West Europe has no Azure region to point to, right?
Until the customer manually fails them over to one of the other backup regions. And again, this can cause a loss of business for the customer while their customers are waiting for their traffic to go through. And these are all adding on top of each other for the customer as they continue to scale up their business. So what they came to Azure for was, hey, we'd like to scale up our deployments to really meet the needs of our customers, making sure that we're deployed as close to the customer as possible. But our scale-up plan right now is causing too many headaches and too much overhead for us to handle. How do we solve this? And so what we ended up doing with the customer was integrating Azure Cross-Region Load Balancer. And with its static IP again, right? All the customer needs to do is send their traffic to that one global IP address. They don't need to worry about, am I pointing to the correct deployment? Am I making sure that the deployment I'm pointing to is even the closest one to me? All the customer needs to do is point to that IP address, and Azure Cross-Region Load Balancer does the rest. So here, if the customer here in Texas is sending data to the cross-region load balancer, we again, using our geo-proximity routing algorithm, will route it to the closest regional load balancer to them. So in this case, again, it's the Pacific Northwest, and traffic routes back. Now let's say it's the same automotive company, and they notice a lot of customers are popping up on the East Coast, and they need to deploy their application in a new Azure region to, again, meet the low-latency requirements. So let's say they spin up a new Azure region here in the Midwest, which they believe is optimal for all their users. Now, when the same customer pings Azure Cross-Region Load Balancer, they will then get automatically routed to this new regional deployment and sent back.
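That scale-up story can be sketched in a few lines of Python. This is an illustrative model only: the anycast IP, region names, and coordinates are invented here, and real routing happens inside Azure's backbone. The point it demonstrates is that the client-facing IP never changes; linking a new regional deployment only changes which backend is chosen behind it.

```python
import math

ANYCAST_IP = "203.0.113.10"   # hypothetical global anycast frontend; clients only ever see this

# Initial backend pool, with made-up (longitude, latitude) positions.
backends = {"pacificnw": (-122.3, 47.6)}

def resolve(client_location):
    """The single IP never changes; only the backend chosen behind it does."""
    region = min(backends, key=lambda r: math.dist(client_location, backends[r]))
    return ANYCAST_IP, region

texas_client = (-97.7, 30.3)
print(resolve(texas_client))            # served out of the Pacific Northwest

backends["midwest"] = (-93.3, 45.0)     # scale up: a new regional deployment is linked
print(resolve(texas_client))            # same IP, now routed to the closer Midwest region
```

Removing a region from `backends` models the scale-down case the same way: existing clients keep pointing at the same address and simply land on the next closest deployment.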
So as we can see, during a scale-up behind Azure Cross-Region Load Balancer, customers are routed to the correct load balancer for them, but the customer themselves, or their client in this case, didn't have to do any application configuration changes, right? It was all done automatically. It was done in an instant, as soon as the scale-up happened. Same story for a scale-down. Let's say we remove this deployment here in South America, right? Again, your end customers aren't going to notice, oh, am I pointing to the correct region? My traffic isn't going to that region anymore, what do I do? Azure Cross-Region Load Balancer handles all of that. It handles scale-ups and scale-downs, all through a single static IP address. Okay. So I assume there's a cost difference between a regular load balancer and a cross-region load balancer. Correct. So there is an additional hourly charge for the Azure cross-region load balancer, but it's important to note that there is no data processing fee for the cross-region load balancer. That is all charged at the regional level. So in that sense, we're not double-charging you for the amount of traffic that goes through. You're only really charged the hourly fee for the Azure cross-region load balancer. And again, we'll drop the pricing link in the description below, so you guys can, you know, see the cost and make sure that it works well for your application and company. But where my mind was going is, if this is for scaling up and scaling down, this seems to be a really, really cool solution. So that means that we, or our customers, whenever we're deploying a load-balanced application, by default should deploy it cross-region, because it gives you that opportunity. As opposed to, if you've already got a bunch of regular load-balanced applications, can you convert them to cross-region? Is there a migration path?
Because you mentioned that your IP address has to be net new. So is that the same for the rest of the configuration? Yes, that is correct, right? The cross-region load balancer needs to be a net new deployment. As of right now, we do not have a way for you to upgrade one of your standard load balancers to a cross-region load balancer. So both the IP address and the cross-region load balancer will have to be a net new deployment that you then link all your regional load balancers to. But can you link an existing one? Yes, yes. So all three of these load balancers here, right? They could have been deployed years ago, right? And let's say today we deploy a cross-region load balancer. You can instantly link all of them together. Oh, okay, perfect. So it's not like, if you know that you're gonna have to scale up at some point, you have to start paying for that cross-region load balancing up front. You can just deploy your application, and then later, when you need it, deploy a net new cross-region load balancer and a net new global IP address and link your existing sites to it. Correct. Fantastic. Yeah, okay. Do you wanna take us through a demo? Yes, I would love to. Give me one second to pull it up. Cool. Yeah, so I'd love to share with everyone a quick demo of cross-region load balancer. Specifically, I'd like to talk about the case where one of our backend regional load balancers is down for whatever reason, and how Azure Cross-Region Load Balancer will automatically route my traffic to the next available region, right? So looking at my cross-region load balancer, I have a global anycast frontend IP address right here, and I have three regional load balancers in the backend pool, and each of these is deployed in a different geo. This first load balancer is deployed in East US, the second one is in West Europe, and this last one is in Southeast Asia, right?
And if I give a quick ping to my load balancer right now, we can see hello world from East US, right? And so now, let's say something went wrong with the VM that's servicing my East US application. For demo purposes, we're just going to stop it to forcefully make it go unhealthy. And so what's going to happen is, as soon as this VM shuts down, the regional LB is going to start probing unhealthy. And as we talked about before, right? Every five seconds, the cross-region load balancer is pinging all of its backend pool members, saying, hey, are you healthy? Are you not healthy? Give me information so I know if I should send traffic to you. And so as soon as this VM shuts down, it's going to send a message to the regional load balancer, which will then cascade it up to the cross-region load balancer, saying, hey, I'm unhealthy, don't send me any new traffic until I'm healthy again. And that's an important point, right? The cross-region load balancer will keep probing all the unhealthy load balancers until they're healthy. As soon as one is healthy, it's ready to receive traffic again and will be automatically added back to the availability pool. Okay, one question about the probes. Is the probe configurable in terms of what to look for, or is it just a ping to the IP address to see if the server's there? Great question. So cross-region probes right now are automatic. They run every five seconds, as I mentioned, and we're looking at a whole list of things, right? We're looking at whether the IP is available. We're then asking the regional load balancer, are your backend pool members, the individual VMs, healthy? How many of them are healthy? We're getting multiple data points, which helps the cross-region load balancer understand the holistic health of your regional load balancer, right?
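The two-tier health cascade described above can be sketched as a simplified model: each regional load balancer probes its own VMs, and the cross-region tier only collects each region's aggregate verdict every probe interval. This is an illustration, not Azure's real probe protocol, and the "healthy if any VM is healthy" aggregation rule is an assumption of the sketch.

```python
# Simplified two-tier health model: the regional LB probes its own VMs;
# the cross-region tier asks each regional LB for an aggregate verdict
# every interval (about five seconds, per the discussion above).

PROBE_INTERVAL_SECONDS = 5  # stated cadence of the automatic global probes

def regional_health(vm_states):
    """A regional deployment's own verdict from probing its VMs.

    Assumption for this sketch: healthy while at least one VM is healthy.
    """
    return any(vm_states.values())

def cross_region_view(deployments):
    """The global tier aggregates per-region verdicts; it never probes VMs."""
    return {region: regional_health(vms) for region, vms in deployments.items()}

deployments = {
    "eastus":        {"vm1": True, "vm2": True},
    "westeurope":    {"vm1": True},
    "southeastasia": {"vm1": True},
}

# Stop the East US VMs, as in the demo; the verdict cascades upward.
deployments["eastus"] = {"vm1": False, "vm2": False}
health = cross_region_view(deployments)
print(health)  # eastus now reports unhealthy; the other regions are unaffected
```

Notice that `cross_region_view` only ever calls `regional_health`, mirroring the point made below that the global probe aggregates what the regional tier has already collected rather than re-probing individual VMs.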
Yeah, but if the server is there and the IP is there and it's responding and everything else is fine, but let's say IIS or Apache or whatever web service is running on it is hung, the service is hung. Is that going to report unhealthy or healthy? Well, I guess it really depends on how the health probes on the regional load balancer are configured, right? Okay. If the regional load balancer's health probes, as it pings its individual VMs, are reporting healthy, then in the eyes of the cross-region load balancer, that deployment is healthy. As soon as... Okay, so the probe at the backend works the same way it always has, and it's configured the same way it always has. Once it goes from that regional load balancer to the cross-region load balancer, that's just a different type of probing. Correct, right. When cross-region probes the regional, it's not then going to probe all the backend VMs. It's just asking the regional load balancer, hey, you've done the probes yourself, just send me the information so I can make the routing decision. So the probe for cross-region is more just aggregating the information that the regional load balancers have collected themselves. Okay, perfect. That makes sense. Thank you. Yeah, so we can see this VM is now shut off, right? So if we now ping the cross-region load balancer again, we can see that it's now saying hello world from West Europe. So as we saw right there, as soon as the VM became unhealthy, that marked our regional deployment in East US as down, and the cross-region load balancer automatically understood that and routed my traffic to West Europe, which is the next closest geo to where I'm located.
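The failover just demonstrated, route to the closest healthy region and fall back to the next closest geo when the nearest one is down, can be sketched as a small routing function. This is a minimal illustration of the decision, not Azure's routing implementation, and the per-region latency numbers are made up to mimic a client sitting near East US.

```python
# Minimal sketch of geo-proximity routing with health-based failover:
# among the regions currently marked healthy, pick the one with the
# lowest latency to the client.

def route(client_latency_ms, health):
    """Return the healthy region closest (lowest latency) to the client."""
    healthy = [region for region in client_latency_ms if health.get(region)]
    if not healthy:
        raise RuntimeError("no healthy regional deployment available")
    return min(healthy, key=client_latency_ms.get)

# Hypothetical latencies from a client located near East US.
latency = {"eastus": 20, "westeurope": 90, "southeastasia": 220}

# All regions healthy: traffic lands in East US.
print(route(latency, {"eastus": True, "westeurope": True, "southeastasia": True}))

# East US goes down, as in the demo: traffic moves to the next closest geo.
print(route(latency, {"eastus": False, "westeurope": True, "southeastasia": True}))
```

This matches the demo's behavior: with East US marked unhealthy, the same client, hitting the same frontend IP, is served hello world from West Europe instead.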
So that was a quick demo to really showcase how Azure Cross-Region Load Balancer understands the health of its backend load balancers and effectively routes traffic based on that information, making sure customers are routed to the correct load balancer and ensuring high availability for the applications hosted behind this feature. That's fantastic. I can really see a lot of benefits of using this worldwide. I like the fact that you're not locked into it, that you can take your existing load-balanced applications and tie them into something cross-region after the fact. The probing seems to be really straightforward: do your probing as you've always done on the regional side, and that information just flows up. It sounds really cool. If somebody wants to learn more about this, is there some documentation, some labs? What's available for them? Again, we'll share the documentation below, but our documentation really goes over how regional redundancy works and the ultra-low latency benefit of cross-region load balancer. We touch on all the points we covered in our conversation today, and we also cover the supported home regions and supported participating regions. And then, as we mentioned, there are a few limitations that customers should be aware of before using this feature, so we also go into the limitations and things to consider, right? And then we have a whole host of tutorials on all of Azure's major clients, CLI, PowerShell, portal, and ARM templates: how to deploy a cross-region load balancer and how to link it to your existing deployments, as we talked about. So we believe we have a decent selection there that I would love everyone to test out, and hopefully you end up using cross-region load balancer down the road. Well, thank you, Maheep, for this wonderful deep dive.
I think we covered load balancing, cross-region load balancing, how it works, how it routes, how it scales. Anything else you want to tell our audience at this point? No, just thank you again, Pierre, for this opportunity. It was a great conversation. And yeah, I hope everyone that's watching gives cross-region load balancer a try, and hopefully it helps your applications achieve high availability and those disaster recovery scenarios that we talked about. But again, thanks so much for having me here today. Hey, that was our pleasure. So Michael and I will be looking at further deep dives in the near future, but Maheep, thank you so much for spending this time with us. And for you at home, make sure to go into the video description below and take a look at the documentation that Maheep referred to, take it for a spin, and see if it fixes or addresses some of the issues you may have in your own environment. And with that, this was another episode of Wired for Hybrid. Thank you so much, Maheep. All right, thanks guys. Thank you. See you guys, bye.