Welcome, everybody. Do take a moment to put your name on the list if you're here. Considering what today is, does anybody have any stories they wouldn't mind sharing as an anecdote? Nobody fell for any April Fools today? Well, I had to think for a few seconds about what's special about today; time's running so fast. Some of the April Fools jokes get so subtle. I was reviewing a PR earlier today. This particular contributor is one of those engineers who actually enjoys documenting and writing things down, what a lovely thing, and consistently she'll include an animated GIF demonstrating the function or the thing that's changed. This time I sat there looking at it and finally figured out it was just a static screenshot. It took me about ten seconds of waiting for it to start before I thought, and I wasn't sure, is this a really subtle April Fools thing happening here? So that's about as interesting as my April Fools gets, I guess.

We're five after. We've got Mr. Owens with us, and Mr. Blake, whose last name, despite how long I've known Blake, I still don't have. Blake, can you give it to me one time if you would?

Yes, sorry, got a little background noise. It's Blake Covarrubias.

Covarrubias, very good. Ms. Robb is here, Mr. Farrell, Mr. Ranganath, and Yuri, Mr. T. Oh, good, and Mr. Bell as well. Good deal. All right, we're five after. Of the topics that we have listed, does anyone see items that we're missing today? If you do, please pop them in. Just as a brief overview of the agenda: we'll have at least probably half of our time for Yuri to take us through k8gb, and before that we'll cover some service mesh working group topics. There were a number of people who sent regrets today.
So we sort of cut down the agenda fairly quickly. To start off with Getnighthawk: I think all the votes are in, or, if you're on the call and you aren't familiar with this project, please go take a minute. Whether you're familiar with the project or not, if you want to make a remark or cast a vote on the logo for it, please do; indecision is awful. I'll give a project progress update on behalf of a couple of the contributors to the project, and Vinayak is here with us now, so he might have another portion to this update. There's been progress on the build of Nighthawk for a binary that's compatible with the base image that's used for the Meshery project. Those builds take a couple of hours in GitHub; there's a custom GitHub Action that's been written now, and I think the builds take a couple of hours in part because of all that's included, all of Envoy's toolchain that's pulled in for those builds. There'd been a recent contributor, Jubril (Jubril, are you on?), who was trying to figure out whether they could get Nighthawk on Alpine. I'm unqualified to speak to that, and Jubril isn't here.

The last item as an update on Getnighthawk, well, you'll have a few of these today, is a maintainer nomination. For my part I had intended to get to this a bit earlier, and that is to get an email out, in this case about Vinayak Sharma. So, Mr.
Sharma, just to embarrass you a little bit: your stewardship of the project site, and how you've been accepting PRs and giving direction to, I don't know, about five others, that work doesn't go unnoticed. You've shown great intentions towards the project, and the site is coming along nicely. You have my vote, or rather, I'd like to put you up for nomination for maintainership. I think we'll get an email out on the mailing list about that, so now is the moment to speak your piece, and to get out of this if it isn't something that you want: speak now or forever hold your peace.

Hi everyone, my name is Vinayak Sharma. For the last month, or a little bit more than that, I've been working on the Getnighthawk website and collaborating with a few other contributors, and it would be great to get nominated as a maintainer for that project as well.

Good. Your indentured servitude will start soon, hopefully. I didn't have to ask twice, but that's good, good. Service Mesh Performance is the next project; to rehash Service Mesh Performance, we'll cover Meshery in the same swoop: both of those two projects have been advanced through the service mesh working group. Thanks to Ken and Sunku, they've both been submitted, proposed for sandbox consideration, this last go-round, which was, well, it got rescheduled; it was a few days ago.
It was supposed to be a few days before that, I think. The TOC only made it so far down the list; if you're on the TOC mailing list, you've seen how far down they got. They got past k8gb, so, Yuri, I won't steal your thunder here, you'll talk about that. And so those two projects are up for review next time round. Usually that's a two-month gap; in this case it's a month out.

For Service Mesh Performance, though, as the contributorship and the maintainership have grown, and as it's being proposed for adoption, there's been a more concisely articulated roadmap. I wanted to bring this up as, hopefully, just a point of discussion and feedback. There's an open pull request on the roadmap to correct a couple of things, but I'm going to let this settle in. If those that are interested, those that are familiar with the project, could think on this for a minute and express an opinion about the breakdown.

Yeah, thanks for adding me to review this. I think it's a really good start, dividing it in terms of spec, publication, participation, and research; that covers the different areas of the roadmap. I do have a few items that we could add. Some of them are captured in the SMP sandbox proposal, and we could also add them here in terms of roadmap: for example, some of the load generation aspects, running the performance measurements across a distributed cluster for not-so-typical traffic, or east-west traffic scenarios. So there are some more details that could be added there. Also, one of the aspects that the SMP site talks about is MeshMark; that's another thing we could add. Internally we've been doing some work to provide a measure of the effectiveness of a service mesh. So, pretty much, once we have some definition there, I'd be happy to share as we go forward.
Yeah, some of these things could be added here. I'll definitely take a look at this a little bit more and provide an update.

Excellent. I missed it; do you recall what the first of the items you mentioned was?

Yeah: the aspects that are in the sandbox proposal, which we could add here, and MeshMark was another thing. And performance analysis, distributed performance analysis.

Okay, very good. Very good.

Yeah, I think it does touch upon the distributed performance analysis. Maybe we could specify some more details along those lines, so as to not just have a broad charter but to spell out more detail, so it's very clear where this is headed. Maybe it helps to divide it into short-term and medium-term; that way someone new looking at some of these things would come in and say, okay, if this is what you're focusing on short-term, maybe I could help you along those lines.

Wonderful. As this gets fleshed out a bit around goals and scope, there is fodder for consideration in the slides here. There's a slide called distributed performance analysis; maybe it's not just that slide, maybe it's one or more of these here. These may or may not be helpful.
I guess it's worth pointing out. Definitely; it'd be good to look at the existing material and see what we can add here.

One item, Sunku, that I'm reminded of in looking at and thinking about research, and your assistance specifically, and others that might be interested (Ken has actually brought forth some of this as well): we were able to meet with a professor at NYU, and the last time we got together we didn't get to include Mohit of NITK. But I'm glad that you're focusing here, because some of your help in managing those relationships, keeping them fresh, and having a consistent cadence to those interactions will be really helpful, I think.

Yeah, absolutely happy to. I think on one of the previous calls they had even asked about it, so I'm happy to help in this regard.

One item that's a bit of an action item; well, let me ask you all if this makes sense. The service mesh working group has a number of small initiatives that have been growing, and some of those, like SMP, have grown enough that they're as big as they are now. By the way, the next topic here is about maintainer nominations, Sunku being one of those. It strikes me that we can send out that type of nomination on the service mesh working group mailer; that's entirely appropriate and probably should be done. But also, there's a domain name associated with the group, and there aren't other mailing lists specific to the projects. So I guess I bring it up as food for thought: does the creation of a couple of those make sense as well? Or maybe there's just not enough traffic here anyway.

Right, yeah, I was thinking that too. So maybe, I think, the newly created service mesh working group domain.
It's a good start, but I think as we have a lot more traffic there, we can subdivide it, whether for Meshery or SMP or Nighthawk, as some of these initiatives get going on the CNCF list. While we're on this topic, one thing you had shared, Lee, was that there were a couple of volunteers interested in getting started on this. Is there any info?

Yeah, as a matter of fact. Sunku, thanks for asking. There are a couple of contributors, people who'd been interested for some time. Both of them had really studied some of the goals around MeshMark and SMP, and one of them had studied a bit more deeply around Nighthawk and Getnighthawk; it's all sort of intertwined. The two gentlemen are, well, I'll write their names down and make an introduction. I think I'd recently sent them both the draft of what that roadmap looked like, because in part I was letting them know that steam is building, and it's about time to jump in. They'll need some guidance, and it'll be your guidance. One of them is named Chanaka (I'm misspelling it), and the other, Nisaric, has done some Linux kernel work around networking. An introduction is coming, you know, immediately. I think they'll have questions that you can help answer, and I can help answer as well; I'll send you some of their questions. They're going to want to know the scope of the goals, the scope of the work, how closely they can engage.

Yeah, absolutely.

Okay. Oh, yeah, so then I sort of informally said that there'll be a couple of other nominations.
So: Sunku Ranganath on SMP as a maintainer. Nick Jackson, who's been on this call a number of times, has long been a supporter of SMP and actually helped shape a few of the initial roadmap items around the Open Application Model and SMI, and how these three specs, OAM, SMP, and SMI, line up. And then Otto van der Schaaf is quite keen on the spec. He and Jakub (sorry, I'm calling him by his GitHub name), Otto of Red Hat and Jakub of Google, have long been very supportive of SMP, to the extent that something like SMP marries up nicely with their focus on Nighthawk. And so we'd like to invite and nominate Otto for maintainership as well.

Yeah, thank you. Actually, I wasn't prepared for this, but thanks for putting my name in. Happy to help.

Sunku, to be very candid: it's actually because of your assistance specifically that there's been enough momentum to make this into what it should be. So that's great. Okay, it's good.

Very good. All right, a couple of SIG Network topics. Any last thoughts or questions on Getnighthawk, SMP, or Meshery? Some SIG Network topics, then. Unless I'm mistaken, Ambassador, or the project formerly known as Ambassador, Emissary-ingress, I believe is still out for review. Can anyone correct me on that? I don't think that its adoption at incubation level has been completed. Among the other reviews that are open, Linkerd is up for graduation.
Its review is in process. William Morgan, who was on last time, has been really helpful with lots of data and with helping make sure the write-up is getting complete; he's been so helpful that I've heard from him almost every day. That team is ready for your public review, and so the draft of the SIG review will be in your inbox later today for your input, feedback, approval, or disapproval.

That leaves us with Yuri and k8gb. Yuri, I guess I've got to say this: there was another one, was it Yelp's "yet another load balancer"? It was also up for sandbox review this last go-round.

Yeah, but I'm not sure it was focused on GSLB; our focus is specifically global load balancing.

Well, Yuri, with that, let me stop sharing. Today you're going to give a presentation on the project, introduce it to everyone.

Right. Hi guys, I'm Yuri. I work as a principal engineer for ABSA. Let me share my screen. Do you see my screen, guys? Everything is cool? Yeah. So, I tend to keep a minimal amount of slides, sorry for that, just to provide context, and then we go right to the live demo. k8gb originated in ABSA as a totally open source project from day zero, and the idea behind it is to create a cloud-native global service load balancing solution. Why we needed it is pretty much our business needs: ABSA is, first of all, a financial organization which serves the African continent, a South African bank, pretty much, and the usual deployment pattern is to have at least two geographically disparate clusters, or data centers, to achieve reliability and availability for financial applications. Given that a substantial amount of our workloads are already running on top of Kubernetes, we needed something to enable global service load balancing the Kubernetes way, the cloud-native way. We didn't find any kind of proper solution that worked for us, vendor or open source.
So that's why we decided to develop something on our own, using the operator pattern. We basically created a pair, a Kubernetes controller and an associated custom resource definition, to solve the problem. One of the things that differentiates us from existing solutions is the absence of a single point of failure: we do not have any instance passing traffic through itself, it just doesn't exist, and we do not have any form of control cluster. So there is no single point of failure or bottleneck; the controller, the operator, gets deployed right to the target clusters where the workloads are running.

With that in mind, we heavily utilize the standard Kubernetes primitives that are running in the cluster: standard Ingresses, Kubernetes Services, Endpoints, and everything gets drilled down to the pods and their associated pod probes. This gives application teams the power and control over the global load balancing strategy, and the actual health checks are effectively running from within the cluster, in the form of standard pod liveness and readiness probes. We intentionally do not have any kind of standard load balancer HTTP end-to-end checks; k8gb is aware of the internal workload cluster state, and that's how it reacts to workload healthiness or unhealthiness and steers the traffic according to the load balancing strategy, reacting on the pod status. Again, application teams have all the flexibility to define these probes in as much detail as they want, specific to their applications. The traffic steering itself is based on DNS, which is battle-tested by the internet, and we benefit from its proven reliability. Obviously we have some limitations around DNS; the most prominent one is the time to live, the TTL, and how fast end users and customers get these updates.
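To illustrate the delegation described above: the health signal k8gb consumes is whatever the application team already defines as ordinary pod probes. A minimal sketch of such probes (the endpoint paths and port here are hypothetical, not from the talk):

```yaml
# Standard Kubernetes probes on the workload's container; per the talk,
# k8gb reacts to the resulting endpoint readiness rather than running
# its own HTTP end-to-end checks.
livenessProbe:
  httpGet:
    path: /healthz    # hypothetical health endpoint
    port: 9898
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /readyz     # hypothetical readiness endpoint
    port: 9898
  periodSeconds: 5
```

Whatever these probes report propagates through the Service's Endpoints, which is what the controller watches.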
We will see it in the demo. The solution is designed to be as non-intrusive as possible, meaning that we do not create DNS records in the environment DNS, be it Route 53, Infoblox, NS1, whatever. We only automatically configure DNS zone delegation, which points and redirects DNS queries down to our CoreDNS pods, which are an integral part of k8gb. So we answer DNS queries with our own responses, dynamically modified according to the global load balancing strategy and the associated workload healthiness.

From an implementation standpoint, our solution was initially bootstrapped with the Operator Framework, and we've been following the recent upgrades, trying to keep up with the project. We started with release 0.6, where it was a little bit disconnected from Kubebuilder; now it's a pure Kubebuilder engine with something on top, and we migrated to the recent version and try to keep up with upstream, staying at most one release behind. CoreDNS is a very important part of k8gb: that's exactly the part that provides the DNS responses. ExternalDNS is used to integrate with the environment DNS in your infrastructure, like Route 53; AWS is one of the good examples of that. It's exactly the part that automatically configures the zone delegation, and it does pretty much nothing else, but we benefit from ExternalDNS given that it has quite a good number of DNS providers out of the box. We used to have a special dedicated etcd cluster and an etcd operator to act as a backend for CoreDNS, but we deprecated that by developing a special CoreDNS plugin, which is capable of reading the DNSEndpoint CRD right from Kubernetes, through the Kubernetes API, instead of using the standard backend for an etcd cluster, the SkyDNS-style one. We used to have quite a number of
So it's We used to have quite amount of Reliability problems with at cd base setup and at cd operator from core dns originally. It's already deprugated. So we invested in Developings as special plugin And KGB currently consists of just three components a controller core dns external dns made it Making it for the whole setup much more reliable and This project drives only single crd of kind gsob and that's it. So We try to keep things as simple as possible from a management perspective From integration with other projects. So we call it digit dns this environment one. So we tested heavily like Info blocks and route 53 they should be production ready ns one is already well very well tested. We just Do not use it yet and like our scale, but all the tests are passing and again Potentially works for other providers, but that external dns provides But we heavily tested only these three and There are another projects open source projects that we integrated with and admiralty is one of the good examples where admiralty is used for global workload scheduling across multiple clusters and kgb Is enabling global load balancing for these global workloads. So we have a nice tutorials there on admiralty project page yeah, and we can Get straight to the demo, but before that just Provide a context on the demo setup. So I'm on a kgbio page. 
We have some architectural diagrams there. In this demo we will operate two clusters, two AWS EKS clusters, basically: one in Europe, another one in Africa. It's a pretty standard setup for us. It's important to emphasize the fact that we have the environment DNS provider, Route 53 in the AWS example; k8gb gets deployed right to the clusters, next to the actual workloads, and the controller gets its information transitively through the Ingress, then the Service, the Endpoints, and the pods. So that's pretty much it: up to two data centers, and in the AWS case that means two AWS regions. And that's where we start our demos. So, any questions so far before I start?

Something of an ignoramus question, and that is, well: is the geolocation of a given service the primary factor that you're using? There are DNS zones, right? From the perspective of a client looking to get to a service, and the path they follow as they initiate a DNS request, is it primarily zones that are being used for that, and then the affiliation of services to a given DNS zone?

Not exactly; the DNS zone is the same, so everything is behind the same FQDN. Well, maybe I can just start the demo to unpack the answer to your question. In the right pane, we will run a test script which is basically curling the test application. We already have k8gb installed on the two clusters, along with a test workload; the workload is a standard test application from Weaveworks, a pretty popular one, podinfo, and we tag each deployment with its associated geographical location, just for visibility of what we are querying. Currently we are testing the failover strategy, where we have a primary data center in Europe and a secondary one
So how it looks like from Setup standpoint, so let's check where we are now So we can see that we are in europe by this information from node we have a kgb Already installed so we have exactly this three components kgb operator controller itself which tries all the real logic like a orchestrating it core dns To handle the dns queries and responses and external dns This one is special for route to this three, which is deployed according to the helm values configuration and it is handling this zone delegation ultimately So we have a test in workload In a test gsob nine space and we can get the ports quickly. So it's just a couple of ports running We have associated service And most importantly we have our gsob special resource So I think it makes sense to start with spec definition Sorry, it was Helm always yaml specs definition is there. So our api group for kgb kind gsob standard metadata and what we are doing here we have Embedded ingress spec as a part of gsob spec. So It's a pretty standard ingress with specifying host and an associated service, right? port and pass so it's Actually, it's the same ingress type in a Go link behind the scenes, right? So we just embed it into this gsob instance So controller reacts to it creates uh associated ingress for global load balancing and Makes additional And it performs additional in actions according to the strategy. So it's composes the spec is composed of the standard congress Plus gsob strategy to follow so in this specific case we We have a failover strategy and we are pinning primary geographical tech to be us block that's This gsob is already deployed Here as a test gsob failover And we can see it runtime What kind of status does it have so as you can see exactly the same spec? And Here we have A current cluster geotech and a healthy record. 
So it identifies Healthiness of the Of the workload again transits we proved through the service and a number of endpoints So basically it's again the state of Port liveness in healthiness props and it populates dns dns record visa Let's say healthy ap addresses so There is additional kind of internal dns endpoint crd Now each we we are using this The crd from external dns project From the CRD source so if we Get the yaml here so you can see at failover speculated visa IP addresses and According as given at the Has our special crd plugin it is capable to read from this crd and Actually responds To dns query according to this To the configuration specified in dns endpoint and dns endpoint crd is dynamically populated by controller So what are these ip's? so in our Edible scenario we have a Network load balancer like a local load balancer in Which is this in front of the workload. So it's like standard ingress and jinx deployed here in test setup and we have this Associated nlb deployed so if you make a dig to this nlb, that's exactly These three IP addresses. So we are Assuming the workload is healthy the populating dns response with healthy network load balancer IP addresses associated with the workload so Exactly the same setup is in Africa so same number of nodes and Exactly the same spec for for jsob for failover so We do not modify anything. We just apply The same spec on a another cluster without any modification. So and another secondary cluster also aware that primary is euro, so it is Returning system responses. So it returns also IP addresses for european that the centers know because the workload there is healthy So let's try to emulate some form of Overquote failure. So we again in europe and let's Just scale the test workload To zero replicas so pretty standard way to replicate something and Kubernetes a form of workload being done Yeah, so everything is terminating. 
We can get the status: it's already unhealthy, and you can see it has already returned a different set of IP addresses, the ones for the NLB in Africa. You can see some 503s there; that's clients still hitting the old endpoint, because we're operating within the DNS TTL limits. Currently the TTL is 30 seconds, plus some deviation associated with the reconciliation loop, and here we already see the switch. Those are the rules of a failover: there is a small downtime, and now we are already steering traffic to Africa, because the 30-second TTL has expired and we're already querying the healthy workload in the secondary data center. It all happened automatically.

And we can scale it back: the workloads are executing again, and we can check the status, already healthy, already the IP addresses of Europe, and you can see in this demo's querying loop that it has already failed back over to Europe. There was no downtime, because the workload in the African secondary was always healthy.

That's another use case: we can steer the traffic in a controlled way if you'd like to. I've seen teams doing a manual pin of the main data center from one to another, actually making a form of global blue-green; that was another unexpected use case of k8gb that we've seen from end users.

Yeah, so that's pretty much the failover strategy. And we have a second one, round robin.
It's like it's totally mixed mixed response So we also have an eroded map to make it more consistent to To steer the traffic and round robin in a more predictable way like 50-50 but currently it's a standard Very random round robin over the geographical data centers and Yeah, that's pretty much two Basic strategies that we utilize in apsa and it's enough for our business case and definitely have some Now more advanced stuff in our roadmap and trying to gather Some feedback from communities The next one will be probably something about geographical proximity and this kind of things In this case you have to create some advanced Coordinates plugins to modify Uh The responses don't fly according to In the situation currently the controller make it like a Composition way by populating dns and points crd and for Very dynamic geographical proximity geographical location closest location Strategy we will need to modify it already on like core dns level. Uh, so yeah Well, it's worth to mention. Yeah, as We mentioned today on tuesday cnc ftuc voted to kgb be accepted as a sandbox project To see the ocencia so we are super happy about it That's pretty much it Do you guys have any questions? So one question Thanks for the demo and the information. So in terms of load balancing And so what are some of the aspects? I know you mentioned about Failover reliability aspects. What are some of the other aspects that No for incoming traffic that you consider load balancing Well, we operate now only Two factors the underlying pot healthiness Of the target workload and the load balancing logic that that's it So from we do not imply any kind of end-to-end health checks. It's it is just Readiness and liveness pot props and they can be as sophisticated as application team wants to be So that's a cool idea to provide the Power and control to application team over the global load balancing for their applications Yeah, got it. Okay. 
And how does something like this coordinate with things like API gateways or service meshes, or really anything in this area? Just curious how this coordinates with those deployments.

Yeah, so far we didn't integrate with any form of service mesh, but the strategy we currently employ is to rely on the Ingress status. We are ingress controller agnostic: we take the set of IP addresses that gets populated into the associated Ingress status by the ingress controller, ingress-nginx in this case, but it can potentially be some service mesh, assuming it controls the Ingress and doesn't operate purely with special CRDs of its own. So currently the indirect integration point is this: whatever gets into the Ingress status, status.loadBalancer.ingress, be it a hostname or, in the alternative version, an IP address (which is also the case for our on-prem setup), gets populated into the DNSEndpoint by the controller. This way we get the information from the Ingress, and k8gb will serve, as I said, the global DNS response for it. That's how it currently works. Maybe in the future we will extend it to some other CRDs, even if you have some advanced service mesh deployment, but currently, you know, we never actually tested or integrated with a more sophisticated service mesh environment. I'd definitely be open to any kind of ideas and contributions.

Okay. Yuri, when you're asked, how do you classify the project: as a custom Kubernetes operator, or as a custom ingress controller? I'm assuming as an operator, because it's not really an ingress controller, right? It's something that works in combination with an ingress controller; it's not an ingress controller itself, and it doesn't work without one.

Exactly, exactly.
Yeah, an ingress controller needs to be there; otherwise there will be nothing in that status and there will be no information to populate the DNS record with.

And yeah, you're right. As you mentioned, some of the roadmap strategies with respect to geo proximity, geolocation, and some advanced calculations would probably require deep integration into CoreDNS. I think that's what I was having a really hard time framing a question around earlier: those types of strategies. That makes sense. It's elegant in terms of what you're relying on, and I guess you stipulated it in your goals, I don't know at what point, but it's pretty Kubernetes-native. The answer is more or less: whatever the readiness probes and the liveness probes report is what drives it, and it's done only through an operator.

Is almost all of the project Go, or is any material part of the project anything but Go?

It's a really good question. It's all Go; the only non-Go code is our pretty huge Makefile, but that doesn't count.

And the project does have its own Helm chart as well?

Yeah, for sure. The Helm chart is actually a pretty important part of the project, because it's not just installation; it also has important configuration points which affect the load-balancing operation. So we take a cluster with the initial Helm installation. We install the configuration for the first cluster, say the EU one: we specify its geo tag, and we specify the neighbour, another GSLB-enabled cluster's geo tag, that it's going to work with. Then, through that convention and configuration, they start to talk and share information over DNS. The similar configuration is for the Africa cluster, right?
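The flipped per-cluster Helm configuration might look roughly like this (a sketch; the geo tags, zone names, and exact value keys are illustrative and should be checked against the k8gb chart's values reference):

```yaml
# values-eu.yaml (installed into the EU cluster)
k8gb:
  clusterGeoTag: eu              # this cluster's tag (any string works)
  extGslbClustersGeoTags: za     # comma-separated list of partner clusters
  dnsZone: cloud.example.com     # zone k8gb answers for (hypothetical)
  edgeDNSZone: example.com       # parent zone holding the delegation
---
# values-za.yaml (the Africa cluster is the same, just flipped)
k8gb:
  clusterGeoTag: za
  extGslbClustersGeoTags: eu
  dnsZone: cloud.example.com
  edgeDNSZone: example.com
```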
So it's kind of flipped: it gets its own cluster geo tag, and the other cluster to talk to is the EU one. So, I already showed you this DNSEndpoint, and it's actually on the screen. It populates a special FQDN, a kind of service one, which is not exposed to the user; it's just there so that clusters can query each other for this special service FQDN. Basically they're asking about the health status of the associated workload under the control of the other cluster. They just ask each other on every reconciliation loop. For example, in the case of the round-robin strategy, each cluster will return all of the IP addresses from both GSLB-enabled clusters, and whenever the workload gets degraded in Africa, for example, the European cluster will learn about that fact through this special FQDN. Whether it's totally degraded, meaning no targets, or partially degraded, meaning one or two instead of the full three in this specific example, it will modify the final response accordingly. Does that make sense?

And the geo tags, the strings that you're using there: they don't have any special convention today?

No, no, it can be anything. In this example we just named them like AWS regions, but you can name them whatever you like.

A short question; it might not be an appropriate one. First: is it bivalent, or can it be multiple zones, say five zones? That's actually my question.

Yeah, that's a great question. By design we are not limiting the amount of clusters it can operate with; here we have a comma-separated list, right?
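An aside on the health exchange described above: in k8gb's public documentation the special, non-user-facing FQDN is a record with a `localtargets-` prefix that each cluster answers with only its own healthy addresses. A sketch (hostname and IPs hypothetical):

```yaml
# Excerpt from the DNSEndpoint each cluster publishes: alongside the public
# record, a "localtargets" record answered with only this cluster's targets.
- dnsName: localtargets-app.cloud.example.com   # not exposed to end users
  recordType: A
  recordTTL: 30
  targets:
    - 10.0.0.5      # this cluster's own healthy ingress targets only
    - 10.0.0.6
# On each reconciliation loop the EU cluster resolves the ZA cluster's
# localtargets record (and vice versa); an empty or shortened answer marks
# the remote workload as fully or partially degraded, and its targets are
# dropped from, or reduced in, the final response.
```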
And round robin already works out of the box there, but the failover strategy is kind of not really ready for it. It kind of works, but which cluster is the secondary will not be obvious. We actually have an issue in our GitHub to test k8gb in deployments of more than two clusters, to make the operation more ready for that scenario. We've tested it heavily in production in this two-data-center setup; by design we are not limited, but it's not tested well enough at this point.

I think you also covered the second question. The second question was about the strategy section that you had in your CRD, on the ingress type there. Is that something, whatever plugin or so, that is made by you?

Yeah, it's our stuff, it's ours: our custom resource definition. The k8gb controller reacts to the presence of this custom resource in the clusters and creates the associated DNSEndpoints and Ingresses, the overall automation according to the spec.

I'm good. I figured it out on the first question. Thanks.

Thank you. Cool. Cool, thanks a lot. And maybe, while we are on the spec, it may be worth mentioning that we have a single CRD, right, which is a pretty convenient way to steer the traffic. But during adoption at Absa we realized that even one additional CRD may be a little bit of overhead for teams, given that they already have their established Helm charts; a new type to drop in might be an overhead. Plus we're at a pretty reasonable scale.
We have more than 120 clusters, so propagating RBAC rules there, enabling a new kind, a new API endpoint, for every team is also a little bit of a burden for the operations team. So we provided a more relaxed way to enable GSLB, an alternative form, not totally replacing the original Gslb resource creation. Assuming your workload already has a standard Ingress, and it most probably does, you can create annotations on the already-existing Ingress, or extend your existing charts, to choose the GSLB strategy there, and in case of failover, a primary geo tag. The k8gb controller will react to it and will create the Gslb resource automatically out of the annotations, and it will link the existing Ingress with the Gslb CR, and they will co-exist this way. So that's another way to enable global load balancing for your workload, and it eased these internal adoptions for us.

As we're going to wrap up here, one item if you don't mind: a link to the roadmap for k8gb would be interesting to check out.

Yeah, we keep it in GitHub; we have the milestones there.

So you track it right in GitHub. Okay, got it.

Yeah, under GitHub Issues there is a Milestones section. Let me go there. We just keep it simple regarding management. So we have it here.

Yeah, got you.

Yeah, so the next one is 0.8; that's the current one.

Cool. Thank you for this, Yuri. Nice to dig in, and kudos on the project being accepted. I think Mr. Farrell, Daniel, is going to follow closely in your footsteps with Submariner.

Okay. Thanks a bunch, Yuri. Thanks all for coming, and we're out of time. Catch you in a couple of weeks, everyone. Cheers, bye-bye.

Thank you.
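The annotation-driven alternative described above might look roughly like this: instead of shipping a new Gslb resource, a team annotates its existing Ingress and the controller generates the Gslb resource from it (a sketch; the annotation keys follow k8gb's documentation, while the hostname and service names are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  annotations:
    k8gb.io/strategy: failover        # opt an existing Ingress into GSLB
    k8gb.io/primary-geotag: eu        # used by the failover strategy
spec:
  rules:
    - host: app.cloud.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  name: http
```

The controller then creates and links the corresponding Gslb resource automatically, so no new resource kind has to be added to the team's charts.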