Hello, everyone. Today's talk is about the challenges, architecture, and solutions for massive-scale LBaaS deployment at eBay and PayPal.

First, some numbers, a few of which our chief architect, Subbu, already shared in this morning's keynote. We have about 8,500 hypervisors deployed in our production OpenStack environment, about 300,000 virtual cores, about 70,000 VMs running in OpenStack, over 1.5 petabytes of provisioned block storage, and thousands of users. In our case, since it's a private cloud, our users are mainly business units inside PayPal and eBay. We are planning to cross the 10,000-hypervisor mark by mid-2015; the numbers on this slide are slightly out of date, and the revised figure from this morning's keynote is 10,000.

Today's agenda: first, the LBaaS deployment architecture we follow at eBay and PayPal. Then the enhancements we made to the LBaaS V1 API, and the LBaaS V1 integration we did with the Horizon UI, with a demo for each. After that, the challenges we faced when we migrated eBay and PayPal from our proprietary load-balancing management system, which managed all our LBs in production, over to LBaaS. And towards the end, a typical north-south and east-west load-balancing traffic topology that we follow at eBay.

This is the LBaaS deployment architecture we follow at eBay and PayPal. On the left we have a bunch of Neutron servers, and we integrate with two providers: provider one on top, provider two on the bottom. Say a client makes an API call. The Neutron server processes the call and figures out which provider the request is for; we made some enhancements to the LBaaS scheduler for this, which we'll look at later. If the scheduler decides the request is for provider one, it invokes the provider-one plugin and the request is published to the message bus, which stores the message. On the other side we have agents consuming these messages from the bus, running in HA mode. The provider designed these agents with a one-to-one mapping between one agent and one LB device, so only one agent can talk to a given device. To achieve HA, we installed the provider agents on several nodes and put Pacemaker on top: via Pacemaker, only one provider agent is up at any time, and if the live agent goes down, Pacemaker brings up a second agent so it can keep consuming messages from the bus.
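To make the provider-one path concrete, here is a minimal sketch of the asynchronous flow using oslo.messaging, the bus library OpenStack services use. The topic, method, and payload names are illustrative assumptions, not our vendor's actual interface.

    # Plugin side: fire-and-forget cast onto the message bus for device A's agent.
    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_transport(cfg.CONF)
    target = oslo_messaging.Target(topic='lbaas_agent.device_a')  # illustrative topic
    rpc = oslo_messaging.RPCClient(transport, target)
    rpc.cast({}, 'create_vip', vip={'id': 'vip-1', 'address': '10.0.0.5'})

    # Agent side: the single live agent (kept singular by Pacemaker) consumes
    # the cast, programs the LB device, then publishes the status back.
    class LbaasAgentEndpoint(object):
        def create_vip(self, ctxt, vip):
            # ... push the config to LB device A, then report status ...
            pass

    server = oslo_messaging.get_rpc_server(
        transport,
        oslo_messaging.Target(topic='lbaas_agent.device_a', server='agent-host-1'),
        [LbaasAgentEndpoint()])
    server.start()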
Once the provider agent consumes the message, it calls LB device A and makes whatever changes are required on the device. Device A responds to the agent, and the agent puts the response back onto the message bus. On the other side there is a consumer on the provider plugin, a callback, which listens for the message, processes it, and updates the status of the entity on the Neutron server. That's how our first provider is integrated with LBaaS.

We have a second provider that we will be integrating soon, and it uses a different topology. You make an API call to the Neutron server; the scheduler again analyzes which provider to pick, and say it picks provider two. The request goes to provider two's controller, which has all the intelligence about which LB device to pick. The controller calls the LB device synchronously, the changes are made on the device, and the response goes back from device B to the controller. The controller then updates the entity status on the Neutron server via the same callback plugin. So that's how we have LBaaS deployed in production.

The next slide is the list of V1 enhancements we had to make. We took the LBaaS code from the community, but we had to make a lot of enhancements to meet our business needs, and we'll drill into each of these in detail. The first is IP reusability for VIPs; we have a demo for that. We added SSL certificate support, with API, CLI, and UI integration, and a demo for that as well. We integrated LBaaS with Designate, and we'll look at why we did that and how it works. We added some advanced health-monitoring features. We also added a feature where, if a member associated with a VIP is backed by a Nova instance and someone deletes that Nova instance, we automatically remove the member from the VIP, so traffic stops going to a member whose instance is gone. And we did some advanced LBaaS scheduler work; some of those changes are already deployed in production, and some we would still like to finish and deploy.

Let's look at the first one, IP reusability for VIPs. This means sharing Neutron ports within a tenant so that multiple VIPs can use the same address on different ports. A VIP is essentially IP:port. Say a tenant has a VIP on 10.10.10.10, running on port 80, and the same tenant now wants to create a VIP on the same IP but on a different port. That was not possible with the stock V1 API, so we added this feature so our tenants could do it.
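As a rough sketch of what that looks like through python-neutronclient (credentials and IDs below are placeholders; stock V1 would reject the second call, while our patched plugin reuses the tenant's existing Neutron port):

    from neutronclient.v2_0 import client

    SUBNET_ID, POOL_A, POOL_B = 'subnet-uuid', 'pool-a-uuid', 'pool-b-uuid'
    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://keystone:5000/v2.0')

    # Existing VIP: 10.10.10.10:80.
    neutron.create_vip({'vip': {'name': 'vip-80', 'protocol': 'HTTP',
                                'protocol_port': 80, 'address': '10.10.10.10',
                                'subnet_id': SUBNET_ID, 'pool_id': POOL_A}})

    # Same tenant, same address, different port: with our enhancement this
    # succeeds and shares the underlying Neutron port.
    neutron.create_vip({'vip': {'name': 'vip-443', 'protocol': 'HTTPS',
                                'protocol_port': 443, 'address': '10.10.10.10',
                                'subnet_id': SUBNET_ID, 'pool_id': POOL_B}})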
I'll do a quick demo of how that works. Here there is already a VIP on port 443. In the UI I pick the same IP, and now I'm creating a VIP on port 80. I pick the TCP monitor and add the instance on the next tab. When I click launch, in the background it sees that this tenant already has a VIP on the same IP, so it reuses that IP on the new port instead of allocating a new one. The VIP on 443 was already there, and now it creates the VIP on port 80. If you notice, we now have two VIPs, one on 443 and one on 80. That's how we achieved IP reusability for VIPs for the same tenant on different ports.

Next, SSL certificate support for LBaaS V1. The V1 API did not have any support for SSL certs, but our tenants at eBay and PayPal, which are our business units, needed a secure way to create VIPs: they needed SSL termination on the LB for all their requests. So we added APIs to create certs, keys, and cert chains, and APIs to associate and disassociate certs from VIPs. We also added CLI support in the Neutron client: commands to create and delete certs, keys, and chains. And we integrated the SSL cert APIs with the Horizon UI; you'll see a demo of that as well.

Let's look at a typical SSL cert workflow. For creation, the first step is to create the cert; we added a CLI command, lb-ssl-cert-create, for that. The second step is to create the key with lb-ssl-key-create. The third step is to create the cert chain, which is optional: if you need a chain you create one, otherwise you just use the key and the cert. Once you have the cert and the key, you associate them with the VIP using lb-vip-ssl-cert-associate. That's our workflow for cert creation and association.

The deletion workflow runs the other way around: first we disassociate the VIP from the cert, then we delete the key, delete the chain if one was created, and delete the cert. The dotted lines on the slide mean that the key, chain, and cert deletes are optional, because you could be using the same cert on several VIPs: sometimes you just want to disassociate one VIP from the cert but keep the cert and key, so you don't issue any of the delete commands.
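To give a feel for the workflow, here is a hypothetical sketch of the same sequence against our custom V1 extension over REST. The resource paths and payload keys are assumptions for illustration; only the ordering (cert, then key, then optional chain, then associate) mirrors what we actually do.

    import requests

    BASE = 'http://neutron:9696/v2.0/lb'          # illustrative endpoint
    TOKEN, VIP_ID = 'keystone-token', 'vip-uuid'  # placeholders
    HDRS = {'X-Auth-Token': TOKEN}

    # Step 1: create the cert (lb-ssl-cert-create).
    cert = requests.post(BASE + '/ssl-certificates', headers=HDRS, json={
        'ssl_certificate': {'name': 'demo-cert',
                            'certificate': open('server.crt').read()}
    }).json()['ssl_certificate']

    # Step 2: create the key (lb-ssl-key-create).
    key = requests.post(BASE + '/ssl-certificate-keys', headers=HDRS, json={
        'ssl_certificate_key': {'name': 'demo-key',
                                'key': open('server.key').read()}
    }).json()['ssl_certificate_key']

    # Step 3 (optional): create the cert chain; skipped here, as in the demo.

    # Step 4: associate cert and key with the VIP (lb-vip-ssl-cert-associate).
    requests.put(BASE + '/vips/%s/ssl-certificate' % VIP_ID, headers=HDRS,
                 json={'cert_id': cert['id'], 'key_id': key['id']})

    # Deletion reverses the order: disassociate first, then delete the key,
    # chain, and cert only if no other VIP still uses them.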
This is what the Horizon SSL cert integration UI looks like. We enter the name of the cert, the certificate itself, and the key, and this attaches that information to the VIP.

Next, Designate and LBaaS integration. The way it works is that when any of our tenants creates or deletes a VIP, Neutron emits notifications on the VIP create and delete, and Designate consumes those notifications to create and delete the A and PTR records. For those who don't know, Designate is the DNS-as-a-service project of OpenStack: it manages the DNS system and has APIs to create and delete A, PTR, and CNAME records. So we have LBaaS integrated with Designate.

The next thing I'd like to talk about is advanced health monitoring. We added a feature for shared health monitors: there is a flag called shared, and when it is set to true, the health monitors created on the LB can be reused by multiple VIPs. Monitors like HTTP, ping, and TCP don't change much across VIPs, so we create just one of each, which reduces the number of monitors created on the LB device. We also added customization of the HTTP ECV monitor. The V1 API was missing the ability to configure the receive string on the HTTP ECV monitor, so we added that. On the next slide you see a dropdown with TCP, ping, HTTP, and ECV. If you pick TCP, ping, or HTTP in the Horizon UI, it just uses the existing shared monitors, which are created as part of the onboarding process when we onboard LBs. The slide after that shows how the HTTP ECV customization works. When you pick HTTP ECV, you get these configuration fields: the health-check interval, the retry count before markdown, and the send string were available in V1, but the receive string was not, so we added it. The requirement came from our tenants, who run various applications and want to monitor them, and the monitoring differs per application. Each application has a different endpoint, which you configure via the send string, but the string that comes back in the monitoring response also differs per application: one application might return "success", another might return "done". This field lets you configure that expected value.
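Here is what creating such a monitor could look like with the demo's values. The stock V1 health_monitor body carries type, delay, timeout, max_retries, http_method, and url_path; 'shared' and 'receive_string' stand in for our custom attributes, and their exact names here are assumptions.

    from neutronclient.v2_0 import client

    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://keystone:5000/v2.0')

    neutron.create_health_monitor({'health_monitor': {
        'type': 'HTTP',
        'delay': 5,                       # health check every five seconds
        'timeout': 5,
        'max_retries': 3,                 # mark the member down after three failures
        'http_method': 'GET',
        'url_path': '/status',            # the app's monitoring endpoint (send string)
        'shared': False,                  # our flag: True lets many VIPs reuse it
        'receive_string': '.*success.*',  # our addition: regex the reply must match
    }})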
Now let's look at a Horizon LBaaS demo. This demos the SSL cert integration we made between LBaaS V1 and Horizon. Here I have two instances created, instance one and instance two, and I'd like to create a VIP, a load balancer, on top of them. I click "launch load balancer", select "create new", give it a name, "test Vancouver demo LB", and a description. I have a list of load-balancing methods to choose from: least connections, least sessions, and round robin; I'll pick round robin. Then the instance port, the port on which the service runs on the instances: my applications are running on 8080, so I configure 8080. I pick HTTP as the protocol, and with HTTP I don't see any cert fields. But when I pick HTTPS, the port automatically changes to 443, and now I can create a cert. I give it a name. I've already generated a self-signed cert, so I copy the certificate from my file and paste it into the cert field, then copy the private key and paste it into the key field. For this one I won't add a cert chain, since it's optional.

For monitoring: if I pick TCP, HTTP, or ping, notice there is nothing to configure, because those are shared monitors in the system. But when I pick ECV, it lets me configure the monitor. I can say I want a health check every five seconds, and after three failed retries the LB should mark the member down. I want to send a GET request to this endpoint, and I expect a response matching a regular expression that says "success" has to appear somewhere in the response. Then I add the two instances; I can use the same UI to add and remove instances, and I only see instances that belong to my tenant. All the information is in place, so I go ahead and launch.

This goes ahead and creates the load balancer. You can see it has already been created, "test Vancouver demo LB", on 443, with the two instances I added. If I copy that address and open HTTPS on that address and port 443, I actually see a page, and you can see it load-balancing between .130 and .147, the two members that were added to the VIP. So that's the demo.
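You can reproduce that last check from any client. A quick sketch, with an illustrative VIP address and verify=False because the demo cert is self-signed:

    import requests

    VIP = 'https://203.0.113.10:443/'  # placeholder for the demo VIP address
    for _ in range(4):
        r = requests.get(VIP, verify=False)  # self-signed cert, skip verification
        print(r.status_code, r.text[:60])    # body shows which member answered

    # With round robin and two members, successive responses alternate
    # between the .130 and .147 instances.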
Next, LBaaS scheduler enhancements. Neutron LBaaS has a scheduler that decides which LB to pick, that is, which LB to make the changes on, and that scheduler could use a lot of improvement. We could have multi-level scheduling based on several factors. We already do the first one: scheduling based on class of service, whether the tenant needs a dev, QA, production, or external class of servers. Other attributes we can consider are capacity, vendor, SLA, and tenant.

Here is a pictorial view of what an advanced scheduler looks like. The Neutron server has a scheduler in it that could be based on class of service: depending on the tenant's class of service, it picks either provider one or provider two. Once a provider is picked, a second stage of scheduling runs inside that provider: each provider could have its own scheduler with an intelligent algorithm that decides which LB to create the entity on, based on factors like capacity and, as we saw, SLA. Based on capacity, provider one can go to LB one or LB two, and provider two would create entities on LB three or LB four.

The issue we run into a lot is that we have many load balancers at eBay and PayPal, and the typical case is that an overloaded load balancer cannot take any more entities: once we start overloading a device, we see latency in response times, we see the control plane going down on that LB, and entities fail to be created for various reasons. So we need a scheduler that does capacity-based scheduling: it knows the current capacity of each LB, and if one LB is over a threshold, it stops creating entities there and uses a different LB instead. That's where our requirements came from.
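Here is a sketch of that two-stage idea; the class-of-service table, capacity numbers, and threshold are all illustrative, not our shipped scheduler.

    # Stage 1: class of service picks the provider (this part we run today).
    COS_TO_PROVIDER = {'dev': 'provider1', 'qa': 'provider1',
                       'production': 'provider2', 'external': 'provider2'}

    # Stage 2: each provider picks the least-loaded device under a threshold.
    DEVICES = {
        'provider1': [{'name': 'lb1', 'used': 0.45}, {'name': 'lb2', 'used': 0.90}],
        'provider2': [{'name': 'lb3', 'used': 0.30}, {'name': 'lb4', 'used': 0.60}],
    }
    CAPACITY_THRESHOLD = 0.80  # stop placing entities on a device past this

    def schedule(tenant_cos):
        provider = COS_TO_PROVIDER[tenant_cos]
        candidates = [d for d in DEVICES[provider] if d['used'] < CAPACITY_THRESHOLD]
        if not candidates:
            raise RuntimeError('no %s device has spare capacity' % provider)
        return provider, min(candidates, key=lambda d: d['used'])['name']

    print(schedule('production'))  # -> ('provider2', 'lb3')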
Next, global load balancing. Global load balancing means load balancing traffic across data centers and regions. At eBay and PayPal we run the same applications and pools in multiple data centers, and we want a global load balancer to balance traffic between those pools across regions. This helps in cases like a data center outage, where you want traffic forwarded away from one data center or region to another. The global load balancer acts as an authoritative DNS server that resolves FQDNs to public VIPs: it's a smart DNS server on which you configure how the FQDNs resolve.

Here is the typical topology we follow at eBay and PayPal for global load balancing. At the top we have a global load balancer, and in each region we have local load balancers with public VIPs configured on them. Those public VIPs point to pools. In this example pool A is deployed with three VMs, though typically we have many more. So we have three VMs in pool A in each of three regions, a public VIP on top of each, and a global load balancer on top of those, forwarding traffic to each of them.

Now a real-world example from eBay: search.ebay.com is actually a CNAME. If you do an nslookup on it right now, the provider's DNS server has it as a CNAME pointing to search.g.ebay.com, and search.g.ebay.com is hosted on a global load balancer. When you do an nslookup on search.ebay.com, it returns a set of public VIPs, which are exactly the VIPs from the previous slide. What global load balancing does is decide which public VIPs to expose in that lookup.

Now the entities in the global load balancer. The global load balancing configuration has a DNS side to it, but also a load-balancing side. The first entity is global names. A global name is a global FQDN mapped to multiple pools: in our example, search.g.ebay.com is a global name, and it has two pools, search pool A and search pool B, in different data centers. On a global name you can configure the admin status, true or false, to enable or disable the global name, and the load-balancing method, for example topology-based or round robin.

Global pools are a logical grouping of the public VIPs in each region. The search pool is hosted in region A and region B, with public VIP A and public VIP B. A global pool configures a list of global members, and each global member is the public VIP for the pool on one local LB: it carries the local LB name and the public VIP hosted on that LB. A global pool also lets you configure monitors, so it can health-check those public VIPs. It lets you configure a time-to-live, in seconds: the number of seconds it takes for changes you make under the pool to be reflected on your DNS servers. And it lets you configure the max address return: an nslookup on search.g.ebay.com returns two or three addresses, and that value controls how many are returned; you could add ten members, ten public VIPs, but only that many come back per lookup. It's very configurable. If you look at it, one side of global load balancing is DNS, but the other side looks exactly like a local LB: it has pool entities and server entities.

The last entity is global servers. A global server represents an entire local LB. It monitors the local LB itself, and it carries all the public VIPs configured on that LB, the ports they are configured on, and monitoring for them. Essentially it's a representation of the whole local LB.
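Pulling the three entities together, here is one illustrative configuration document; the field names are ours for this sketch, not a vendor schema.

    GLOBAL_NAME = {                      # what the CNAME search.ebay.com points at
        'fqdn': 'search.g.ebay.com',
        'admin_status': True,            # enable or disable the global name
        'lb_method': 'topology',         # or 'round_robin'
        'pools': ['search-pool-a', 'search-pool-b'],
    }

    GLOBAL_POOL = {
        'name': 'search-pool-a',
        'ttl': 30,                       # seconds for pool changes to reach DNS
        'max_address_return': 2,         # addresses handed back per lookup
        'monitors': ['https-vip-check'], # health checks on the public VIPs
        'members': [                     # one public VIP per region / local LB
            {'local_lb': 'lb-region-a', 'public_vip': '203.0.113.10', 'weight': 1},
            {'local_lb': 'lb-region-b', 'public_vip': '198.51.100.20', 'weight': 2},
        ],
    }

    GLOBAL_SERVER = {                    # represents an entire local LB
        'local_lb': 'lb-region-a',
        'monitor': 'device-alive-check', # monitors the local LB itself
        'vips': [{'address': '203.0.113.10', 'port': 443}],
    }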
GLB is not in the V1 API, and I don't believe it is in the V2 API either, but our business needed APIs for this. So last year we implemented it in our eBay proprietary load-balancing management application, and those APIs are currently being used by eBay and PayPal to configure our global load balancers.

The benefits of GLB: it gives you health monitoring on the local LBs as well as on the public VIPs, things you would not get from a plain DNS server. It gives you load-balancing features like admin status, live status, and load-balancing methods on the GLB: you pick the method the GLB uses when it forwards traffic to the public VIPs in each of your regions. You can also configure the weight of each public VIP. Say one pool in one region is running really hot and you don't want a lot of traffic forwarded there: you can lower its weight and raise the weight of another region that isn't running that hot, and traffic shifts there. And you can configure the number of VIP addresses returned, the max-address-return field. Those are the benefits of GLB.

Now the migration use case. Earlier this year we had to migrate from our proprietary eBay load-balancing management system to LBaaS. We had an OpenStack deployment in each of our regions, but since we had not yet integrated OpenStack with LBaaS, OpenStack was integrated with the proprietary eBay LB system, and that system talked to the LB devices. That was our control plane. On the data-plane side we had a bunch of VMs on the right, running applications, pools, and services, with VIP traffic flowing from our GLBs to the LB device and from the LB device to the VMs. Our goal was to migrate everything to LBaaS with no control-plane and no data-plane impact: migrate all our existing entities from the eBay proprietary load-balancing application to LBaaS.
We had to honor all the existing LB entities on the devices that had been configured by the proprietary system, and we couldn't change any of them, because we wanted no data-plane downtime; and we wanted no control-plane downtime either, because our tenants had to be able to keep changing their entities throughout the migration.

The way we achieved this: first, we worked with our vendors to change their architecture a little, so that during the migration we make no changes to the LB devices at all; that is what gave us zero data-plane impact. Then we wrote automation scripts to migrate all our LB entities from the proprietary management system to LBaaS, tenant by tenant. We pull data out of the eBay load-balancing application through its APIs and push it into LBaaS through its APIs, syncing tenant by tenant. Once a tenant is migrated, the Horizon UI switches over to the new system for that tenant: before the migration, Horizon talked only to the old system; during the migration it talked to both, routing a tenant to the new system if its migration was done and to the old system otherwise. With this seamless plan we migrated over 1,000 VIPs to LBaaS in a period of a few weeks, with no control-plane or data-plane impact.
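In shape, the automation looked roughly like the following; both client objects and every method on them are illustrative stand-ins for the two real APIs.

    def migrate_tenant(tenant_id, legacy, lbaas, horizon_router):
        # One tenant at a time: read from the proprietary system, replay into LBaaS.
        for pool in legacy.list_pools(tenant_id):
            new_pool = lbaas.create_pool(pool)
            for member in legacy.list_members(pool['id']):
                lbaas.create_member(new_pool['id'], member)
            for monitor in legacy.list_monitors(pool['id']):
                lbaas.associate_monitor(new_pool['id'], monitor)
        for vip in legacy.list_vips(tenant_id):
            # The vendor-side change means this only records state in LBaaS;
            # the LB device itself is not touched, so no data-plane impact.
            lbaas.create_vip(vip)
        # Flip Horizon for this tenant; unmigrated tenants keep the old system.
        horizon_router.mark_migrated(tenant_id)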
The next thing I'd like to talk about is north-south and east-west load balancing, and the typical architecture we follow for it. VIP traffic coming from the north first hits our firewall, goes from the firewall to the LB device, and from the LB device to the VMs in each region, depending on how the VIPs are configured on the LB. For east-west traffic, where one VM wants to talk to another VM in the same region, we have another LB device in the cloud that handles it. The typical use case is a web-tier or app-tier application on one set of VMs that wants to talk to the database on another set of VMs: we put a VIP on top of the database application, hosted on that LB device. That's how our east-west traffic topology works.

Now, future work. We would like to work with the OpenStack community to build the GSLB APIs. There is probably some integration needed with Designate, because it has to create DNS records for the global names, so we would like to work with the community on the global load-balancing APIs.

The other thing we really want to work on with the community is a bulk API to add members. We have flex-up requirements, call it bursting, where we need to grow a given pool from 100 to 500 members, and it has to happen in 60 seconds. LBaaS V1 has no bulk API: one API call adds exactly one member, and we didn't want to make hundreds of calls to grow one pool. With a bulk API you make one call with a list of members and everything is added at once: LBaaS sends all the member information to the LB device, via the controller or the agent, whichever path your provider uses, and the device applies the changes in one batch, which is much faster.

Then advanced LB scheduling. I see a lot of potential in making the LBaaS scheduler smarter, because if you're managing many LBs you want scheduling intelligence that picks an LB based on capacity, SLA, and other factors. We haven't worked on all of them yet, but we would like to, because if we want to migrate all our load balancers into LBaaS, we will need it.

And quota support. One very important thing LBaaS doesn't have is quota support. Neutron has a quota framework, so we would like to work with the community on adding quota support for VIPs: we want to control how many VIPs a tenant can create, and assign quotas exactly the way there are quotas for Nova instances and everything else. So that's the future work we have. And this is our LBaaS team at eBay and PayPal.
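To make the bulk-members proposal concrete before questions, here is what such a call could look like; no such endpoint exists in V1, so the URL and payload are purely a sketch of the idea.

    import requests

    TOKEN, POOL_ID = 'keystone-token', 'pool-uuid'   # placeholders

    # One request carries the whole member list, so the driver can program
    # the device in a single batch instead of hundreds of round trips.
    members = [{'pool_id': POOL_ID, 'protocol_port': 8080,
                'address': '10.0.0.%d' % i} for i in range(100, 500)]

    requests.post('http://neutron:9696/v2.0/lb/members/bulk',
                  headers={'X-Auth-Token': TOKEN},
                  json={'members': members})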
Now questions. The first question was whether the load balancer supports multi-tenancy. Yes, it does: the tenant information is passed to the scheduler and the scheduler forwards it to the agents, so the whole path is tenant-aware. You had two questions, sorry: whether we use virtual networking, and what kind of LB device. The LB device is virtual-network aware, because our members live in virtual networks and the device has to be configured for the virtual network; without that, it cannot forward VIP traffic from the LB device into the virtual network. As for the type of LB device, that's a provider question: we support various kinds of LB devices, hardware and software.

"Firstly, thank you; it was a pretty comprehensive presentation. My question is on the global load balancer. It's not in the data path, right? It's really just giving out a set of IP addresses. In your global DNS scheme it's just a CNAME, so it hands out IP addresses, it can't load balance on the data path; it's really balancing based on some static algorithm." That's correct. If you look at the data-plane path, the GLB does no data-plane routing. All it does, based on the configuration on the global load balancer, is decide which public addresses to return. It works purely at the DNS level.

"Is the GSLB patch available, or is it not published yet?" The GSLB code was not written for LBaaS; it was written for our eBay proprietary system. There was no integration path, because no vendors had yet written drivers for the providers to talk to GLBs, so we couldn't write the APIs there. First we need to work with the community on a blueprint for all the GLB APIs, and then work with the providers to add support in their drivers, agents, and controllers, whichever deployment path they use. Thank you.

An audience member suggested we look at Senlin, S-E-N-L-I-N. Okay, I'll look at that, cool.

"I had a question on the load balancer, if it's an appliance: how do you plug that appliance into a logical network? Because I guess you're not using VLANs, so how do you make the load balancer talk to that web-tier logical network?" You're asking how the local LB talks to the members on the web-tier subnet. We have to configure that on the LB device itself: we define routes on the LB device, as part of the onboarding configuration we do. That doesn't change whether it's GLB or SLB. And right, in the diagram the LB is drawn connected to the network where the worker VMs are, but physically that's not the case.

The next question was how we achieve HA between the agent and the load balancer. The provider we use designed a one-to-one mapping between agent and LB: one agent can only talk to one LB, which by itself is not HA, because if the agent goes down you cannot talk to the LB. So we have multiple instances of the agent installed, but only one running, and via Pacemaker we achieve HA: if the live agent goes down, Pacemaker brings up a new instance of it. And to add to Kunal's point, we also have LB pairs, a primary and a secondary LB, deployed in active-standby mode, so we have redundancy for the LBs as well as redundancy for the agents via Pacemaker. And yes, the agent is a process.

No more time for questions? Cool, thank you.