Good morning, good afternoon, good evening, wherever you're hailing from — in the words of Chris Short. I'm not Chris Short; I'm Bobby Kessler, Chris's intern, filling in today. Chris is having internet issues, but he will be back later on OpenShift TV. This is another edition of Cloud Tech Thursdays, so without further ado, I'm going to pass it over to Josh and Amy to introduce our guests today.

Welcome, everyone. I'm Josh Berkus, this is Amy Marrich, and we're here with Cloud Tech Thursdays. Today we are very excited to welcome two of the people behind the MetalLB project, who are going to introduce you to the project and explain what it does: Marc Curry and Russell Bryant. Before we get started, a couple of things. If you have questions for either of our presenters, feel free to ask them in chat, whether that's YouTube chat or Twitch chat, depending on how you're tuning in. We'll see those questions and share them with the presenters. So with that, Marc, Russell, want to get started?

Sure, I'll go first. My name is Marc Curry, and I'm an OpenShift product manager responsible for networking, and that's partly where this comes from. The first question you might ask is: why MetalLB? Why are we here talking about this, and how is it important or relevant for OpenShift?
Well, as the name implies, MetalLB's primary use case is load-balancing Kubernetes services within the cluster, specifically for bare-metal deployments. If your preferred Kubernetes platform is one of the popular public cloud providers, they already have native load balancers you can choose for this function, and that choice would likely be best practice for your public cloud deployment. For bare-metal deployments, however, there is no prepackaged load balancer. This is where MetalLB comes in, it's why we're investing in the upstream community for our customers, and it's why we're targeting full support of MetalLB for OpenShift bare-metal deployments as an option in some forthcoming releases. At this point I'm going to hand it off to Russell, but I'll be back later to talk a little about how this aligns with the OpenShift roadmap.

Yeah, cool. Thanks a lot, Marc, and thank you all for having me. My name is Russell Bryant. I work for Red Hat in engineering, where I serve as an architect. My big interest areas are bare-metal, or on-premise, environments and networking, and I've worked with Marc for a number of years on different things. The intersection of those two areas is what brought me to MetalLB originally: looking at Kubernetes and the great things we do with OpenShift in different environments, particularly cloud environments, and looking at how we can improve our experience on bare metal. One of the gaps that bare-metal clusters tend to have brought me to MetalLB — well, it brought me to looking at the problem space and evaluating choices, then deciding that MetalLB looked like a great solution to the problem I wanted to solve, and getting involved.
As part of that, I got involved in the upstream project, and I'm currently one of the maintainers. So let me talk a little more about MetalLB: what it is technically, and what problem it solves. If you're using OpenShift or Kubernetes, one of the things you can create is a Service of type LoadBalancer, a key, fundamental feature of Kubernetes. In a cloud environment, what that's going to do behind the scenes is create a load balancer via a cloud API. Marc mentioned this before, but what about bare metal? There's no cloud API there; there's nothing you can do to magically bring up a load balancer. So how can we replicate that experience, that sort of functionality, in a bare-metal cluster? There are different ways we could do it; MetalLB is one solution, and one that I think is pretty good.

So how does it do it? One of the key architectural principles of MetalLB is that it's a network-based solution. It's not about spinning up a bunch of new software; it's not a software load balancer. It's about providing the right behavior by actually interacting with the network itself, and I'll say more about what that means in a moment. It operates in two modes: a layer 2 mode and a BGP mode.

Before I get into those two modes, the architecture: MetalLB has two components. One is a controller, a singleton component — one of these runs in a cluster. The controller watches for when a user of the cluster creates a Service of type LoadBalancer, and when that happens, it allocates an IP address to it. That's really the main thing it does.
It's IPAM, or IP address management. So the first thing that happens is that the service gets an IP address, and that address has to come from somewhere: part of how you set up MetalLB is that you allocate a set of IP addresses that are usable on the network the cluster operates on.

The real core of what MetalLB does is the other component, called the speaker. The speaker runs on every node of the cluster — as a DaemonSet, in Kubernetes terminology — and on a per-node basis it is responsible for speaking to the network: telling the network about these IP addresses and where they can be reached in the cluster.

Let's dig a layer deeper. Say I've created a Service of type LoadBalancer and I'm running MetalLB in layer 2 mode. An IP address will be allocated to that load balancer, and then the speaker will use either ARP, if it's an IPv4 network, or NDP — that's Address Resolution Protocol or Neighbor Discovery Protocol; we're getting down to network-protocol-level details here — to announce the location of that IP address. So we've got the load-balancer IP address, MetalLB decides which node should receive the traffic for that load balancer, and it announces the address's location to the network using one of those mechanisms. These protocols are core to how networks work. That's what MetalLB does here: it issues gratuitous ARP or Neighbor Discovery messages, and the network is then able to reach that load-balancer IP address on the node where MetalLB has announced its location.

Now, there are good things and bad things about this mode. The good thing is that it uses network mechanisms that exist on virtually every network, so it's very compatible.
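The controller's allocation step described above — hand each new Service of type LoadBalancer the next free address from a configured pool — can be sketched as a toy. This is illustrative only, not MetalLB's actual code, and the pool is an RFC 5737 documentation range:

```python
import ipaddress

# Toy sketch of the controller's IPAM role (not MetalLB's real code):
# hand out the next free address from a configured pool whenever a
# new Service of type LoadBalancer appears.
class ToyIPAM:
    def __init__(self, cidr):
        # All usable host addresses in the configured pool.
        self.free = [str(ip) for ip in ipaddress.ip_network(cidr).hosts()]
        self.assigned = {}  # service name -> allocated IP

    def allocate(self, service):
        if service in self.assigned:      # idempotent per service
            return self.assigned[service]
        ip = self.free.pop(0)             # IndexError if pool is exhausted
        self.assigned[service] = ip
        return ip

    def release(self, service):
        # Return the address to the front of the free list.
        self.free.insert(0, self.assigned.pop(service))

pool = ToyIPAM("192.0.2.0/29")            # RFC 5737 example range
print(pool.allocate("web"))               # 192.0.2.1
print(pool.allocate("api"))               # 192.0.2.2
```

The real controller watches the Kubernetes API and handles far more (sharing, pinning, IPv6), but the shape of the job is the same: a pool in, one stable address out per service.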
It works almost anywhere. That's the good thing, and it's good enough for some clusters — a lot of clusters, I would say, especially ones on the smaller side.

Russell, I have a question from the channel, and I kind of had it too. They were asking: is the target only bare metal, or can other on-prem installations work with it, like VMware? I was also wondering whether it can be used for VMs and other things besides bare metal.

Yeah, I speak of bare metal because that's where my head goes, but it's really applicable to any on-premise environment. I would say any environment where you have the same problem — where you're responsible for the infrastructure the cluster is running on, anything that's, let's say, not cloud, effectively — it's appropriate. Where we see the most demand happens to be bare metal, but it is applicable to other environments, and I expect we'll be expanding our support to them. So a VM environment would absolutely be appropriate. Well, I would say maybe, for some VM environments.
Let me give an example of where it may or may not apply. OpenStack is a good one: MetalLB is very applicable to OpenStack in some scenarios and not in others. If you're on OpenStack and using OpenStack's virtual networks — so the cluster runs on a virtual network very similar to the type of networking you'd have in a cloud — then it doesn't make as much sense. In that sort of environment you're going all in on OpenStack's own cloud networking, and you'd really look more toward its load-balancer service; that would be a better architecture, in my opinion. But OpenStack can also be used in a way where, in OpenStack terminology, you have provider networks. Provider networks mean you're taking the VMs and attaching them directly — well, I say directly, but from a network-connectivity standpoint the VMs are on a physical network that the environment has. In that environment it's absolutely applicable; MetalLB makes a lot of sense there. So to summarize: it's absolutely applicable to bare-metal clusters, and for VM clusters it's maybe-slash-probably, depending on the networking details.

It doesn't care whether it's OpenShift or OpenStack? It'll do VMware or anything else?

Yeah, I was kind of using OpenStack as an example, but VMware would be another one where this applies.

That's close to my heart; I'm an OpenStack person.

Yeah, and I spent several years working on OpenStack in the past, so I have a lot of background there.
It's always very top of mind for me.

And we did have the question of whether MetalLB is a Red Hat product — it's not; it's an open source project.

Yeah, and we'll come back to that too. Red Hat did not start this project. This is something we got involved with because we saw the value in it. At our core, we look at which open source solutions solve the problems, and quite often they didn't originate from us, but we want to join with other people who are working together to solve the same problems. This is a case of that: we've joined a community of people collaborating on solving this problem. We didn't originate it; we now contribute to it and help maintain it, and we are going to include it in a Red Hat product as part of OpenShift. Marc alluded to that earlier, and in a bit we'll come back and talk about the detailed roadmap of where the different features fall in our release schedule.

I'd like to jump in on that real quick and clarify: one of the things we do at Red Hat is, of course, support anything we ship, and as part of that support we fully test, vet, and enterprise-harden an upstream project that we invest a lot in. And I've heard the question: will it work with other things?
Sure it will, as Russell articulated. But keep in mind that the part we're supporting, at least initially, is going to be OpenShift bare-metal deployments, and with the success of that, I'm sure we could grow that footprint of support to other implementations.

Okay, now we're getting into some detailed technical questions, so decide whether or not this is going to be covered later. Our person who was asking about VMware, Fahad, said to follow up on that: are you planning to replace the use of keepalived on IPI clusters on-prem, or is the plan for this to coexist with that?

It's a fantastic question, and I was actually talking to someone about this this morning, so let me answer it. The first answer is that it's going to coexist at first. Where we use keepalived, it's solving some very specific use cases that are absolutely required to bring these IPI clusters up and make them function, and we're not going to rip that out immediately. Now, as you can tell — and I can tell you know this because of the question — MetalLB is performing, conceptually, the same thing we were doing with keepalived there. So will we replace it? We could. We haven't put that on the roadmap yet, but we absolutely could, and it actually makes sense. There are some catches as we get into the details. One of them is that with keepalived, we make that stuff work out of the box as soon as you install the cluster, whereas MetalLB is an optional component that you can add to your cluster. So we would have to make it present by default in the scenarios
where we're going to use it to replace some of our keepalived usage. So it's a maybe-slash-probably, but we haven't planned exactly when we would do it, because the way we're using keepalived works okay for now. It's not urgent, but it makes sense that we would consider it. Sorry if that's sort of a non-answer — great question, and it absolutely makes technical sense; we just don't know when we'll do it.

Yeah, so back to the two modes.

Let me tell you about the second mode. The second mode is pretty cool, if you're into this sort of thing: it's the BGP mode. This one is, I think, really the star of MetalLB. Both modes are effective, but BGP is pretty cool. Again, the speaker runs on every node, and every node then acts as a BGP speaker, so every node is peering with the routing infrastructure in your environment.
This can only work if your network environment can do BGP, so you have routers that your nodes are connected to that can speak BGP. The speaker on every node will connect to the router, and when you create a Service of type LoadBalancer and an IP address is allocated to it, MetalLB will figure out which nodes a route to that IP address should be advertised from. What's particularly cool is that it will advertise a route to that IP address from multiple nodes — maybe even all of them; it depends on the scenario, and I'll paper over that for a second — but in almost all cases it will advertise routes from multiple nodes. What's cool about that is the routing infrastructure itself can then provide load balancing.

To contrast this with the layer 2 mode: in layer 2 mode, all the traffic comes through one node, because we can only announce that IP address via ARP or NDP from one node. All traffic comes to that one node, and then it can be load-balanced within the cluster from there. But with BGP, since the BGP router has routes to that load-balancer IP address on multiple nodes of the cluster, it can use ECMP — equal-cost multipath routing — to send different connections to different nodes in the cluster. What's cool is that MetalLB actually isn't doing a lot: it connects to a router and announces routes, but it's enabling the network infrastructure itself to provide load balancing across the cluster. It's quite powerful, and again, it's relying on good use of existing network technologies to achieve the behavior we want, and it solves the problem quite well.
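As a rough sketch of what this looks like in practice, MetalLB's ConfigMap-style configuration pairs one or more BGP peers with an address pool. Every address, AS number, and name below is a placeholder, not a recommendation — adapt them to your own network:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.0.0.1    # your router's address (placeholder)
      peer-asn: 64501           # the router's AS number
      my-asn: 64500             # the AS number the speakers present
    address-pools:
    - name: default
      protocol: bgp             # "layer2" selects the first mode instead
      addresses:
      - 192.0.2.0/24
```

With `protocol: layer2` on the pool, the same addresses would instead be announced via gratuitous ARP/NDP, as in the first mode.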
We're not running additional software beyond the BGP speaker: we're not adding software that does processing of the traffic itself, and we're not adding a box somewhere that runs load balancing. We're using the network. So that's the second mode, and for the environments that can use BGP, it's quite powerful.

All right, two questions. The first one might be Marc-related: is there any integration with the OKD project? Will it be there initially, or should it be added after setup? That's kind of a workflow, release-process-type question. And the other one, which is most likely more for you: does OpenShift support Kubernetes 1.22.x?

Actually, that might be Marc too. I think that second question is basically asking what version of Kubernetes this aligns to, and it might actually be a more general OpenShift question. That's what I'm getting out of it, which means it's not really for the show. I actually answered it in text with the standard answer: OpenShift trails Kubernetes releases slightly.

Yeah, but I think we can probably say there's a good chance of this. OpenShift 4.8 is Kubernetes 1.21, and then it would be 1.22 and 1.23 for the releases where the two different modes Russell was talking about land. So: coming soon.

Any idea on OKD — whether it'll be part of it initially, or added after setup?

That's a difficult one for me to answer, because OKD is actually not a product; OKD is a project. So it depends on those project maintainers, but there's no reason why it could not be included.

Yeah, and one thing I can mention from an open source perspective: MetalLB is not going to be installed by default.
So if you install OKD, you will not get MetalLB out of the box, but you can install it afterwards. Like many things we do in OpenShift or OKD, we use operators, and the operator marketplace, to choose and install additional components. We've created a MetalLB operator that is now on OperatorHub as of the last few weeks — very brand-new from an upstream perspective. So you can actually go ahead and try this out, in an unsupported fashion, and use an operator to get it deployed in your cluster. Hopefully that answers the question.

Yeah, so I think I've completed my high-level overview of the different modes. Marc, you started talking about the roadmap there — do you want to cover where we officially support the different modes in different releases? Shall I cover that now? Sure. Yeah, let me go ahead and share. This is one of those things that's probably best viewed as a slide, so we prepared one. One moment and I will share it. Hopefully you get the big picture there.
So the idea is this. Today, the current default version of OpenShift is 4.7, and very soon we will be releasing OpenShift 4.8. OpenShift 4.9 is targeting this fall, and that's when the first mode Russell discussed, the MetalLB layer 2 mode, will be fully supported; simultaneously, that's also when the upstream FRR support for BGP capability will be resolved, in preparation for the next release, OpenShift 4.10, which is currently targeting the December-January timeframe. That release will provide MetalLB with BGP support, also targeting BGP with dual-stack IPv6 capability. Then, building upon the success of those two primary use cases that our customers have asked for, we will enhance that in the future — you can see a few of those topics listed here — and currently we'd be targeting OpenShift 4.12 onward, which would be the latter half of calendar year 2022, the latter half of the next calendar year. Keep in mind these are target dates, but so far everything is looking very good.

Thanks, Marc. You reminded me of some of the fun stuff we're working on in the upstream project. I mentioned I'm a maintainer, but there are other people from Red Hat contributing to MetalLB, and one of the first things we're doing from a feature perspective is this FRR support. FRR is an open source project that implements a BGP daemon, and it's something we already use at Red Hat.
So we're interested in it and confident in it, and we want to apply it to this use case. What MetalLB originally did was implement basically its own BGP stack: if you go look at the MetalLB code on GitHub, it has an implementation of the BGP protocol, or at least a minimal implementation of just enough of BGP to perform its use case. That has carried the project well so far, but as we look into the future and the additional use cases, we'd really like to support more BGP features, and we thought it would be advantageous for MetalLB, instead of implementing BGP itself, to switch to an existing, more featureful and mature implementation. So that's what we're doing: we're adding support so that MetalLB, instead of speaking BGP itself, will manage, configure, and control an instance of the FRR daemon running instead. We feel that's a better base as we move on to support more BGP. It will get us some features right away just by using FRR, and it gives us a much more flexible base for BGP in the future. That's going pretty well, and it's really a crucial base for all the other BGP features you talked about, Marc, so I wanted to mention it. The other stuff we're doing: we've been working on CI coverage — improving CI, adding CI tests and new CI jobs in the upstream project — and lots of general code review and bug fixing.

Okay, and we've got two questions from the channels. What is the recommended option to try MetalLB with OCP 4.8? Let's ask that one first. And then: are there any recommended SCC settings we need to apply on the MetalLB namespace?

Yes. With OpenShift 4.8,
I mentioned that there's a brand-new operator that you can all try, which should automate some of those requirements for you. You can also just go to the MetalLB upstream website, which includes instructions for how to deploy it and sample manifests. If you look at the MetalLB website, it also has some OpenShift-specific notes, including the security context constraint settings that are required for the MetalLB namespace. I don't remember the details off the top of my head, but I know we have them documented for OpenShift on the website. As the roadmap implied, with 4.8 we don't have an official way to try it: it's not included in OpenShift officially. But it's open source, it's a component you can run on your clusters — not supported by Red Hat — and that's the way you could try it out.

Okay. And the SCC settings?

Oh, that's what I was talking about with the website; I know we have that documented on the MetalLB site.

Okay. And we got another question in: is the FRR daemon an existing project, or is it a conceptual project you are planning on working on? It sounded like it was already a project.

Yeah, it is an existing project. The website is frrouting.org, and it's a Linux Foundation project; lots of companies have contributed to it. It's actually not new at all. If you're in the networking space, you may have heard of another project called Quagga, and I believe FRR was originally based on that. It's a very mature existing project — frrouting.org is where you can learn more about it — and a very nice, featureful routing daemon.

I'm checking the spelling and I'll get that in the chat for people. Yeah, cool.
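For anyone wanting to take the operator route Russell mentioned for trying MetalLB, subscribing through OLM would look something like the sketch below. Treat every name here — package, channel, catalog source, namespace — as an assumption and verify it against the actual OperatorHub listing:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: metallb-operator
  namespace: metallb-system    # namespace is an assumption
spec:
  channel: stable              # channel name is an assumption
  name: metallb-operator
  source: operatorhubio-catalog
  sourceNamespace: olm
```

Applying a Subscription like this asks OLM to install and keep the operator up to date; the operator then deploys the controller and speakers for you.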
Thanks. Some of the other really interesting things about it — this is not stuff we have on a detailed roadmap yet, just imagining the future — are another example of why it's powerful. I talked about BGP because that's where we see the most demand from a routing-protocols perspective, and it's what MetalLB already supports using its custom implementation. But there's also been interest, at least in the upstream community, in supporting other routing protocols, like OSPF for example, and FRR has support for many routing protocols beyond BGP. So we have a good base if we wanted to add support for another protocol, if we had good demand for it.

Or somebody could join the project and write that.

Absolutely, yeah, that's another thing. To pitch the project: the development community is certainly on the smaller side, which honestly I find incredibly fun. It's a tight-knit small group, so it's really fun to work on, and there's plenty of opportunity for contribution. There's more work to do than we have maintainers and contributors right now, which is a common story for any open source project that's seeing successful uptake early on.

Yeah, and if you're an OpenShift customer today and you have feedback on how we're implementing MetalLB — if there's something about it that's very important to you and your use cases — please communicate that to us. This is the right time to get it into the queue so we can prioritize it.

Okay, did you have more basics? Oh wait, you were going to — well, you actually covered the roadmap already. So, more basics, or should we just go further with questions?
I'm happy to take any questions that are remaining; I think I've covered the basics.

I actually have a couple myself.

All right, let's hear them while we wait for the audience to catch up.

First: imagine I have a real bare-metal situation. I have a cage somewhere, and I actually have the money to buy a dedicated piece of network hardware with an API — a Cisco box or something. What would be my trade-offs in going that route, with a more traditional load balancer, versus using MetalLB?

For me, it's a couple of things. One, it's not necessary, because you probably have enough in your network as it is, with your existing router, to do the type of load balancing we're talking about here. Another thing is that load balancing is a bit of a loaded term. Conceptually, it means balancing load across some environment, but it's used at different layers of the networking stack. We're talking about load balancing at layer 3 — really layer 3 only, in MetalLB's case — while a load-balancing product is probably providing a lot of features at higher levels, particularly layer 7, where you might do some fancy load balancing based on, say, HTTP destinations, that sort of stuff.
So it's sort of a different problem space. And another point is the beauty of MetalLB, particularly the way we do it with BGP: we're not putting traffic through a box. That's the point. We're not funneling traffic through any box, or even a couple of boxes; we're using the network infrastructure to provide, sort of, ultimate scale, where we can spread the load directly from where traffic comes in, through the routing infrastructure, across the entire cluster. We're not introducing a point that traffic has to go through. So, to summarize my answer in two parts: one, a load-balancing appliance is, in most cases, about features that aren't relevant to the use case here; and two, by design we don't want to push the traffic through any centralized point at all. We want to spread it as much as possible, and that's the beauty of the network-based approach: putting the right things in the right places so that the network infrastructure spreads the traffic by design.

Yeah, I'll also add that our customers have a very broad set of use cases. Some have pretty extreme performance considerations; some have pretty extreme configuration requirements. If you do have the luxury of a hardware solution at your disposal, and it closes the gap, then by all means use it. But for all the reasons Russell articulated, the overwhelming majority of our customers are satisfied with the MetalLB solution.

Okay, we have a couple of questions in chat. In a multi-tenant OpenShift environment, will there be a way for tenants to manage BGP local prefs — local preferences — in MetalLB?

Good question. No, this is a cluster-wide thing.
At least the way this is set up today, it's a cluster-administrator-level configuration, and you wouldn't be able to have multiple tenants with separate configurations.

Okay, and then the other question is: in layer 2 mode, what is the average failover time for an IP address from one node to another?

Yeah, what's the worst case you've seen?

That's a really good question, and I'm going to answer it with more than was asked, because it's another place where we can talk about the difference between the two modes and some of their benefits.

In layer 2 mode first: of course, if a node fails where an IP address was resident, we have to move it to another node. There are two parts to that: how quickly we can detect the failure, and how quickly we can bring the address up on another node once we've detected it. The second part is pretty quick, because all we have to do is decide which new node owns it, and then do what we did before — issue those gratuitous ARP messages out to the network, or NDP for IPv6, to say: hey, network, this IP address is over here now, on this node. The trick, of course, is detecting when the failure has happened. In all of my testing so far, it's been, at worst, under 10 seconds the way it is today, which is not the fastest. Now, in a previous version of MetalLB, it could be minutes, which was definitely not good, so it's improved to seconds as currently implemented. Under the hood, it's using a library called memberlist, which is an implementation of a gossip protocol.
It's doing its own sort of cluster-membership protocol, so all the speakers watch out for each other and can detect node failure faster. So at worst we're talking seconds. If you need sub-second failover times, layer 2 will absolutely not provide that. We might be able to tune the failure-detection parameters to speed it up, but I don't think it will reach sub-second failover. If you can deal with a failover time of, say, five to ten seconds — that sort of ballpark — then we're in good shape.

So, wait: will BGP support faster failover?

Yeah, BGP will be better for this. With BGP, first of all, we already have IP addresses actively functional on multiple nodes in the cluster, just by the nature of how it operates. Traffic can already be sent to multiple nodes, which means that if something fails, you're not immediately impacting all traffic to begin with; you're only impacting connections that happen to hit the node that fails. That's the first thing. The next thing is: how quickly can we ensure traffic is no longer sent to the node that died? That depends on how quickly the BGP router knows, because that's what actually matters in the BGP case — when does the BGP router stop considering a given node a valid route?

Well, that's one of the features we're going to get out of using FRR. There's another protocol — talking networking is just acronym soup; add another protocol to the list — called BFD.
Okay, so BFD is a protocol used to help detect link failures fairly quickly, much more quickly than you would otherwise. We'll use BFD on these BGP connections to more rapidly determine when the connection between our BGP speaker on a node and the router's BGP speaker dies, so the BGP router can say, "well, I don't see that node anymore," and stop routing traffic there. That will be quick, and I say "quick" because it depends on the specifics of your routing infrastructure and how we tune BFD. We haven't turned that on and tested it yet, so I can't quote specifics, but this is a standard way of doing this with BGP, and it's what we'll do. So that will provide much better failover behavior in BGP mode. We'll do our best with layer 2, within the limitations inherent in that mode, but there it will be seconds. So that's a much longer answer than what was asked; hopefully it's helpful.

That was a great question; it reminds me of things I should have talked about.

Somebody has decided he wants to be on the show. Oh, the cat! I have a dog hiding in the corner. I love office pets; I've got two under my desk. Luckily they're getting along.

Okay, do we have more questions? The kitty is standing in front of the chat screen for me. Not yet; I was just going to ask if anyone else had any questions.

Yeah, let's take more questions. Did you have more questions? You know, it's been so long since I've used BGP. We had a data center and we were processing credit cards, so it's actually kind of nice to hear that it's still out there, because getting two connections to another data center was always a challenge, financially and just in getting them, and knowing that your two connections were being routed in different ways. So it's nice to see that it's still out there and people are using it.
And I would assume that, yes, we are using not just one provider for these BGP networks, but two coming in, and you just configure it based on the information your provider gives you.

Yeah, so BGP is definitely alive and well. It is one of the things that runs the internet. It tends to pop up in the news every now and then when a mistake is made and part of the internet is taken out by accident. I think those things have improved over the years, but it's still crucial to how the internet operates.

And on the providers point: BGP as a protocol can be used in a couple of different modes. There's the case I've talked about, the "runs the internet" case, where all the different providers peer with each other. But as a protocol you can also use it internally within your organization, and that's more the use case here. In the context of MetalLB, we're only really talking about BGP peering between the nodes of your cluster and the routers they talk to. Upstream from those routers, or from further upstream routers in your organization, you might get into BGP peering with other providers and so forth. We're making use of BGP as a technology just within your data center, within your environment, for this MetalLB use case. It may be that your BGP infrastructure beyond that involves providers, but not directly in the context of MetalLB.

Has there been any integration with tools for managing Kubernetes clusters, things like OCM, kubeadm, and Cluster API? Like, Cluster API is pretty much public cloud only right now, isn't it?
Yeah, well, actually, another thing I was involved in over the last few years was starting a project where we implemented Cluster API support for bare metal, where we automated the provisioning of bare metal for Cluster API. That's Metal³, metal3.io. It's great.

So can you use Cluster API with Metal³ and deploy MetalLB? You absolutely can, though there's no direct integration. You can use that to bring up your cluster, and then once you have a cluster, you install MetalLB. We haven't done anything anywhere where it's just automatic, where "I've installed my cluster and it includes MetalLB out of the box." It's always a next step you take.

There are a couple of things that exist for doing that. There's the operator, so you can add MetalLB via that operator. Or, in the upstream project, we have a Helm chart, so if you want to use Helm you can use that to deploy MetalLB. We also have sample manifests for a quick hack of a deployment; you can grab our manifests as-is, or with slight modifications if necessary, and use those to install it on the cluster. But there's no direct integration with any of those tools today.

Okay, we have a question that came in: when OpenShift adds support for MetalLB, is the intention to use BGP or layer 2 for the cluster API and apps ingress, or just for user workload ingress? Yeah, great question.
So to start, it's just going to be user workloads. It's not going to be used by default for any of the cluster-level stuff like our cluster ingress controller. As I talked about a little earlier, MetalLB, technically and conceptually, is applicable to that, and we very well could migrate our default setup to use MetalLB at some point in the future. But to start, we're providing this as an optional component targeted at user workloads. It is a natural sort of evolution, though, that we may then replace some of the things we use today. We talked earlier about Keepalived, which is a somewhat manual, very focused solution for a couple of specific use cases; we could go back and use MetalLB for those in the future. It's just not what we're doing out of the gate.

So we've talked a lot about the OpenShift support for MetalLB. Do any of the other Kubernetes distributions have MetalLB integrated?

Wow, good question. Let me see how quickly I can pull up the MetalLB website to help me remember. There's a part of the MetalLB website with a list of different environments where it's been used. I don't know offhand which distributions include it. I mean, I've seen lots of people using it with K3s, just from being an upstream maintainer on the issue tracker and so on.

Okay, well, that would be a distribution. Yeah, I'm trying to dig into my memory to see if I can remember; so, K3s. Yeah, I can see there are some setup guides for using K3s with MetalLB, so yeah, there are some examples.
There are some NEPs (network equipment providers) out there with sort of bespoke, custom deployments where they have integrated MetalLB to some extent. But what the bulk of our customers out there are really asking for is FRR-based BGP in MetalLB, and that does not exist anywhere yet, so we're basically working with upstream to create it first.

Yeah, that's a good point. When we were looking at this, we had problems we wanted to solve, and we asked: what are our choices? What we chose was to use MetalLB, join the community, and help evolve it to solve what we want to solve. Another option we had was to do something more custom, basically to start something brand new around FRR, but that just didn't seem to be the right choice. This was a very active community, a very actively used project. It's like a lot of things in open source: "that's really cool, it does almost what I need, so let's jump in and collaborate with others to make it exactly what we need." That's what we're trying to do.

So, is anybody in the project talking about joining the CNCF?

Oh, yes. In fact, some upstream project history:
The MetalLB project was started by one guy originally, but he has since moved on, and now there's a team of maintainers, including myself, that takes care of the project. He still owns the domains, so there's an ongoing discussion of, "let's finalize the handoff of the project to long-term ownership." As a part of that, we did apply for the CNCF Sandbox, and it is in the queue for review at this stage. That's a discussion I helped drive: I put a proposal to the other maintainers saying this is what I think the right thing is for the project, we reached consensus, and I did the application. So yeah, we are in the queue for review. That's the future I expect, or at least hope for. It seems like a natural home for it, since it's a Kubernetes-specific project, so hopefully that goes through.

And it's a healthy dev community. There was a company, Kinvolk, which has now been acquired by Microsoft, with a couple of maintainers from there; another maintainer works for Rancher; and there are other contributors from elsewhere. So I think the CNCF would be a great home for us.

Well, I think that's all the questions we have, so any final thoughts on MetalLB, the future of load balancing?

The future of load balancing? No, I don't know about that. Just, you know, thanks for having us. This has been a very fun project. Of course, I do this as part of my job, but it's been one of those things that's been super fun for me, just because it's a nice intersection of things I'm interested in. So thanks for the opportunity to come talk about it a bit.
That was fun. Yep, I'd also like to thank everybody for their time and the chance to speak about this today, and I look forward to hearing from all the customers with their requests for enhancements and so forth about MetalLB. We'd love to have those discussions.

Okay. Yeah, actually, one quick question: is there a Slack channel or a mailing list where people can follow up if they have questions later on?

Yeah, for the upstream project, there is a metallb-users mailing list; it's a Google group, and you can find the link on the MetalLB website. The project also has two Slack channels on the Kubernetes Slack, metallb and metallb-dev, and both are active. So those are the places from the upstream project perspective.

And then, Mark, how do people reach you if they have product questions or requests, that sort of thing? What's the best way?

Basically through their account team. If they're an OpenShift customer, they should have a connection point there, and depending on the level of support they have, that differs, but that is definitely the way to approach it. Some customers can't submit requests for feature enhancements directly, but the account team does it on their behalf, and that gets it into our queue, at which point we evaluate it, and if it has merit, we will build it into the future of the product.

Cool, awesome.
Well, thank you so much for attending and letting us know all about MetalLB, how to use it, and what's going on with it. For anybody tuning in who, say, missed the beginning of this, this broadcast will be archived on YouTube, so you can go back and watch the beginning of it. And of course we have lots of different shows here on Red Hat and OpenShift streaming.

Just so everybody knows, this particular show, Cloud Tech Thursdays, is actually going to become Cloud Tech Tuesdays in three weeks. We're moving to a new Tuesday time slot at 10 a.m. Eastern time, in order to accommodate more of our European viewers. So thank you very much, and we will see you online. Thanks for joining us, everyone!