Right. I'm Yutaro Hayakawa, a software engineer from Isovalent, and thanks for having me here. I'm leading the development of the BGP control plane in the Cilium community. Thanks, Yutaro. So before we get into the details of BGP, Yutaro and I thought it would be a good idea to spend just a few minutes going through a networking primer to set the foundation for BGP, right? If we think of networking at its very basics, it's taking some computers, putting network cards in them, and connecting them to some kind of network device like a hub or a switch, so they're all fully connected. Then you assign an IP network, and here you see a network number and a mask, or what is called a CIDR block, for this network. You make sure each host on that network is assigned a unique host number from that network, and then hosts can freely communicate with one another, right? At layer two, they use the Address Resolution Protocol, or ARP, to map IP addresses to lower-level MAC addresses, and they can communicate directly with one another. Now, this doesn't go very far, right? If you try to keep scaling this out and keep putting hosts on this network, you get to a point where ARP generates a lot of broadcasts on the network, it becomes inefficient, and that's when multiple networks start getting involved. And when you start talking about multiple networks, a new device gets introduced. Here we see four different networks, each with a router. What you do is tell each of the hosts about their default gateway. So, hosts: if you want to communicate with a destination host that is not on your network, that doesn't have the same network number as you, then you need to send those packets to your default gateway.
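To make that default-gateway rule concrete, here is a small sketch of the decision a host makes for each outgoing packet. The addresses and the helper name are illustrative, not from the talk:

```python
import ipaddress

def next_hop_decision(dst_ip: str, local_network: str, gateway: str) -> str:
    """If the destination shares our network number, deliver directly
    (resolving the MAC address via ARP); otherwise use the default gateway."""
    if ipaddress.ip_address(dst_ip) in ipaddress.ip_network(local_network):
        return "direct"   # same network: ARP for the destination's MAC and send
    return gateway        # different network: hand the packet to the gateway

# A host on 192.168.1.0/24 with default gateway 192.168.1.1:
print(next_hop_decision("192.168.1.20", "192.168.1.0/24", "192.168.1.1"))  # direct
print(next_hop_decision("8.8.8.8", "192.168.1.0/24", "192.168.1.1"))       # 192.168.1.1
```

The same comparison of destination address against the local network number and mask is what every host does before deciding whether to ARP locally or forward upstream.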
And routers are responsible, very simply, for forwarding packets between the interfaces on that router to different networks. You may ask yourself, well, how do those routers know about those networks? Well, the first thing is that each router creates a routing table, which maps a destination network to a next hop IP address and an outgoing interface, right? So a packet comes in on a router interface, and the router says, what is this packet's destination IP address? Let me look at my routing table. Oh, it's going to go out this outgoing interface toward this next hop IP address, which is typically another router. And how do routers learn about all these networks to put in the routing table, right? You can statically configure those routes, but that doesn't scale well. In small environments it's perfectly fine, but when you start talking about large enterprise networks, service providers, and so forth, routing protocols are used. A routing protocol is a process that runs on each of these routers, and it sends and receives messages on those different interfaces. When a router receives those messages, it says, oh, here's another router speaking the same routing protocol that I'm speaking, and it's telling me about this network and the next hop to reach that network. So the routers are just sharing with each other what networks they know, and that information is used to build the routing tables. Now I want to talk about autonomous systems. This is a higher layer of abstraction than a network number and mask. An autonomous system is a collection of networks that's under a common administration, or administrative control, and it presents a common routing policy, right?
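The routing-table lookup described above can be sketched in a few lines. Real routers use longest-prefix match, where the most specific matching network wins; the table entries here are invented for illustration:

```python
import ipaddress

# A routing table maps destination networks to (next hop, outgoing interface).
routing_table = [
    (ipaddress.ip_network("10.1.0.0/16"), "192.168.0.2", "eth0"),
    (ipaddress.ip_network("10.1.2.0/24"), "192.168.0.3", "eth1"),
    (ipaddress.ip_network("0.0.0.0/0"),   "192.168.0.1", "eth0"),  # default route
]

def lookup(dst: str):
    """Longest-prefix match: among all networks containing dst,
    pick the one with the longest (most specific) mask."""
    addr = ipaddress.ip_address(dst)
    matches = [entry for entry in routing_table if addr in entry[0]]
    best = max(matches, key=lambda e: e[0].prefixlen)
    return best[1], best[2]

print(lookup("10.1.2.7"))   # ('192.168.0.3', 'eth1'), the /24 beats the /16
print(lookup("8.8.8.8"))    # ('192.168.0.1', 'eth0'), falls through to the default
```

Routing protocols are, in effect, the mechanism that populates a table like this automatically instead of by static configuration.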
So you may be a large enterprise with thousands of locations and thousands of networks, but you as an enterprise have a common policy on how you exchange routing information with other enterprises or with the internet service provider that you use. Each autonomous system has an autonomous system number that's assigned by internet authorities, which could be a local registry. And that autonomous system number is used by BGP, which we'll show here a little later. I also want to make a distinction between two different classes of routing protocols: interior gateway protocols and exterior gateway protocols. BGP falls under exterior gateway protocols, and exterior gateway protocols are not meant to be used within the enterprise to exchange routing information; they're meant to be used between autonomous systems. But what we're also going to show is that BGP is becoming very popular in data center environments as well, and that's where we're going to get into some more details a little later. Interior gateway protocols, on the other hand, are typically used within the autonomous system to exchange routing information between different routing peers, and things like OSPF, IS-IS, or EIGRP are very common interior gateway protocols. Well, let me hand it over to Yutaro to talk more about BGP. All right. So what is BGP? BGP is a protocol to exchange routes between autonomous systems; that is, the exterior gateway protocol that Daniel explained. The ultimate purpose of the protocol is to figure out the shortest path to the destination. It was originally designed to form the internet, but it is becoming popular in data center networks as well, and that's why we have use cases in Cilium, and in Kubernetes in general. And the more important thing here is that this is an IETF standard protocol.
That means you can exchange routes with any device, whether it's from Cisco, Juniper, Arista, or whatever the vendor is. You can exchange routes with that network gear as long as we support BGP. That is a very powerful thing, because if we support BGP, we can go beyond the Kubernetes cluster. So here's how it works at a very high level. This topology connects three routers, Router A, Router B, and Router C, in a line, and Router A and B, and B and C, have established BGP peering. At first, Router A only knows about the network connected to itself. Then Router A advertises that route over BGP, and now Router B learns the route to 10.0.0.0/24. What happens essentially is that Router A says: hey, Router B, if you see a packet going to 10.0.0.0/24, I know how to get there, so please forward the packet to me. That's what Router A said. Then Router B propagates that information to Router C, but with the next hop rewritten to itself. So what Router B says here is essentially: hey, Router C, if you see a packet going to 10.0.0.0/24, I know how to get there, so please forward the packet to me. Again, this is what Router B said. So now we have one-directional connectivity. The opposite thing happens in the return direction: Router C advertises the route to its own network to Router B, and Router B propagates it to Router A. Then we have connectivity in both directions. So let's now make things more complicated. What happens if there are multiple paths to the same prefix? In this case, hosts in the left network can reach the right network in two ways: Router A, B, D, or Router A, C, D. BGP usually chooses the shortest path, but let's say in this case both paths have exactly the same distance. So what to do? BGP can handle this situation in two ways.
So the first way is breaking the tie somehow (there are multiple ways to do it) and choosing only one route as the best path. That means forwarding 100% of the traffic to Router B, for example. Another way is using both paths. In this case there are two paths, so 50% of the packets go to Router B and 50% go to Router C. This is specifically called equal-cost multi-path, or ECMP. With this feature, we can essentially load balance the traffic within the network, so it is a frequently used technique in BGP networks, and we also have use cases that use ECMP in Kubernetes as well. So let's talk about Cilium: how is BGP used in Cilium? Cilium has a feature called BGP control plane. It is a BGP implementation designed from scratch to be fully integrated with Cilium, so it plays very well with Cilium-specific features like kube-proxy replacement, or Cilium-specific IPAM implementations like cluster-pool IPAM, multi-pool IPAM, and so on. It is also Kubernetes native: we eliminated hard-coded IP addresses from the configuration as much as possible, because IP addresses are very ephemeral in the Kubernetes world. Instead, we rely heavily on labels to configure things. So here's how BGP peering works in Cilium. In the Cilium BGP control plane, you can apply BGP policies to nodes by labels using the CiliumBGPPeeringPolicy resource. For example, in this topology we have two racks, each rack has a top-of-rack router, and the nodes in each rack need to peer with their rack's router, right? In this case, we can define two BGP peering policies: one selects the nodes in rack zero and peers with Router B, and the other selects the nodes in rack one and peers with Router C.
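A peering policy for the rack-zero case described here might look roughly like the following sketch. The label keys, ASNs, and peer address are invented for illustration, so check the Cilium documentation for the exact field names in your version:

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: rack0-policy
spec:
  nodeSelector:
    matchLabels:
      rack: rack0                 # applies to every node labeled rack=rack0
  virtualRouters:
  - localASN: 65000               # illustrative ASN
    neighbors:
    - peerAddress: "10.0.0.1/32"  # top-of-rack Router B (example address)
      peerASN: 65010
```

A second policy selecting rack=rack1 and pointing at Router C's address would cover the other rack, with no node-specific IPs hard-coded anywhere.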
So one of the popular use cases of BGP in Kubernetes is making the pods directly reachable from outside of Kubernetes by advertising the PodCIDRs assigned to the nodes. When you want to do that, you can just turn on the flag exportPodCIDR. That's how we advertise the PodCIDR assigned to the node to the upstream router, Router A. Then a workload outside of the cluster, in this case node C, can directly reach the pod through this router. With BGP, you can also implement type LoadBalancer services. In a public cloud environment like GCP or AWS and so on, they provide a load balancer implementation for you, right? But in an on-prem environment, that's not the case; you need to prepare the load balancer somehow by yourself. With the BGP control plane, you can easily implement LoadBalancer services without having special equipment like load balancer appliances. You simply select the services by labels, then Cilium will advertise the load balancer virtual IP for those services, and the upstream router can load balance traffic to each node using the ECMP we explained just a few minutes ago. This means you can use the upstream router as an external load balancer. This is nice because, one, we don't need extra equipment, and two, the packet forwarding happens at the speed of hardware on the router, so it is very fast. The last BGP use case we want to introduce is the integration with multi-pool IPAM. Multi-pool IPAM is a feature introduced in the latest release of Cilium that allows you to define multiple pools of PodCIDRs within the cluster. The Cilium BGP control plane can now selectively advertise the PodCIDRs allocated from a specific pool, by selecting the pod IP pool resources by label, again. We'll demonstrate that to you. So now I'll pass the mic to Daniel, who contributed this feature to Cilium. Thanks, Yutaro.
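The ECMP load balancing the upstream router applies to the service VIP can be modeled like this. Real routers use their own hardware hash functions, so this is only a conceptual sketch with invented names:

```python
import hashlib

# Next hops that advertised the same LoadBalancer VIP (example node names).
nodes = ["node-a", "node-b"]

def ecmp_next_hop(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                  proto: str = "tcp") -> str:
    """Hash the flow's 5-tuple so every packet of one flow takes the same
    path, while different flows spread across all equal-cost next hops."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return nodes[digest % len(nodes)]

# Each client flow to the VIP is sticky to one node, because the hash of an
# unchanged 5-tuple is deterministic:
flow = ecmp_next_hop("192.0.2.10", "10.96.0.5", 40001, 443)
assert flow == ecmp_next_hop("192.0.2.10", "10.96.0.5", 40001, 443)
```

Because the hash is computed per flow rather than per packet, connections stay on one node while the aggregate traffic spreads roughly evenly across all nodes advertising the VIP.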
So if you'd like to follow along, or perhaps follow along at another time, we have a QR code to the repo with instructions on how to run through the demo that we're going to talk through today. We thought it would be a good idea, instead of just talking about BGP and the BGP control plane, to actually show you the BGP control plane in action. We have a very simple demo environment that includes an external router, depicted here as Router A. Then we have a Kubernetes cluster with two nodes, node A and node B; those are a control plane node and a worker node in our demo environment. And we have an external node that's simply represented as a Docker container. This acts as an external workload, and we're going to show reachability to the pod IPs from this external workload. After the pod CIDRs are announced using the BGP control plane from the nodes to the external router, we'll have reachability from the external workload to the internal workload running in our cluster. To speed things up a little bit, I have already pre-provisioned the demo environment, which consists of creating a kind cluster, using Containerlab to create a bunch of other nodes, and then stitching them all together. Again, if you take a look at the demo repo, you can learn more about the demo environment. After creating the demo infrastructure, we install Cilium in the environment and wait for Cilium to be ready. After Cilium's ready, what you'll see is that Cilium also created two pod IP pools, one called default and one called other. The other thing to note is that these pools have no labels associated with them. If we take a look at these pod IP pools, you'll see that the pool called default is the 10.0.0.0/16 CIDR, and it will allocate /24 networks from that larger pool to Cilium nodes.
And the same is done for the pool that we called other, but it's using the 10.20 network. The next thing we do is create a workload in the cluster, and we'll check that the workload is ready, so it is running. The thing to note here is the IP address that this workload gets: it was assigned an IP from the other pool and not the default pool, and the reason is that when we created this DaemonSet, we specified which pool we wanted to use through an annotation. Pretty simple, right? Okay, so now we're going to create the BGP peering policies. We have two of them, one called control-plane and one called worker, and what you'll see here is that those policies were realized: here's control-plane, here's worker. You'll see that we have BGP running on these two nodes, the control plane node and the worker node, and you see the local autonomous system, the peer autonomous system, the peer IP address, and, the key item, the session state. BGP has different session states, but ultimately, if a BGP session is not in the established state, then no routes are being exchanged between those two BGP speakers. One other thing to point out is that received and advertised are both zero. So no routes are being received or advertised, but we do have an established peer: each of the nodes, the worker node and the control plane node, is peered with that external router that you saw in the presentation. So if we look at the peering policies, here's the BGP peering policy on the control plane. What you'll see here, as Yutaro showed you in one of the slides, is that we use a node selector to attach this policy to a node within the cluster. So you could even have one policy that gets attached to multiple nodes based on the selector, giving you that flexibility, right? The other thing you see here is virtual routers, right?
So we could actually run multiple BGP instances on this node. We have just a single BGP instance, and we specify the local autonomous system along with the peer configuration for this instance of BGP, right? If we look at the worker, we'll see a very similar configuration. The only differences for the worker policy are that it gets attached to the worker node, it has a different local autonomous system, and it has a different peer IP address, because that external router has multiple interfaces with different IP addresses, all right? So what do we do next? We have peers, but we're not exchanging any routing information. So first, let's update the peering policies of the control plane and the worker. What you're going to see with this update is that I'm now adding a pod IP pool selector to both the control-plane and worker policies that matches on the label foo=bar, all right? I'll just show you again really quickly for the worker: you see that the worker has been updated, and there's that pod IP pool selector. Then I'll do the same for the control plane: there's the pod IP pool selector, right? We're still not advertising any routes, because advertising the pod IP pools is a two-step process. Remember I showed you that the default and other pod IP pools that were created during installation had no labels, right? So if we have this pod IP pool selector saying, hey, match pod IP pools with the label foo=bar, it's not matching anything, right? So let's go ahead and label them. Here I add the label foo=bar to both of those IP pools, the other and the default. And now when we check the peers, the advertised routes are there, right? Let's go ahead and test reachability from our external workload. So we ping the pod's address in the 10.20 range, and you see we've got reachability. And, well, I probably should have actually tested this before advertising.
But if I go to the control plane policy and get rid of this (give me a second, I'll get rid of this here), and do the same thing for the worker, then the pod IP pools will still have the labels, but we have updated the peering policies to say we no longer want to advertise pod IP pools. We can verify that with the BGP peers: you see we're not advertising anything here, right, for either of the nodes. And now if we go back and try pinging again, the ping fails. And if we patch it and add it back in, and check the BGP peers, we're advertising those networks again, and we're reachable again. So, a pretty simple demo. We didn't have a lot of time to get into all the different details of the BGP control plane, so please take some time and check out the documentation. And if you'd like to get involved, we'd love more involvement, not only in the BGP control plane; as many of you know, Cilium has a lot of great functionality and we're always looking for more help, whether it be submitting code, fixing issues, or documentation. It's a great community to work in. We appreciate your time. Please use the QR code to provide feedback. And do we have any time for questions? Thank you. And yes, we can take one or two questions. Hi, I have one comment on the previous slide: the weekly meeting is at four o'clock European time, not at five. And one question: what are your other plans for BGP? What do you plan on doing next? To me, it seems that everything is now done; we can use it with multi-pool. Yeah, so do you want to start off with kind of the BGP roadmap? You want to touch on that? Yeah, so we obviously want to expand the use cases of BGP. At the same time, what I personally want to start doing is improving the operator experience. In the demo, we used the Cilium CLI to check the BGP status; we want to expand it to allow you to check the advertised routes and the received route details.
Also, I personally want to support BGP unnumbered, which allows you to dramatically reduce the number of BGP peering policies. That's pretty much it for me for now. Do you have anything, Daniel? Well, just a heads-up as well: there is a lot of work that is just starting to happen with, I don't know what the official term is, but BGP Control Plane Version 2, let's call it. What you saw today was a single API kind, CiliumBGPPeeringPolicy, that did everything. For BGP Control Plane Version 2, we've gotten feedback from early adopters, and part of that feedback was: wow, it would be great if this was a little more composable. So for BGP Control Plane Version 2, the initial alpha APIs just landed, I think, two weeks ago, and post-KubeCon a big focus is going to be on BGP Control Plane V2, using multiple API kinds to construct the BGP environment. Thank you for answering that. Thanks a lot. Thanks for your presentation. I have two questions. Number one: one of your use cases was to expose the IP address of the pod publicly, right? How does this compare to just using Ingress? You know, you have a service that exposes the pod, and you use Ingress; your Ingress is reachable from anywhere, as opposed to hitting the pod directly. Just for my understanding. Yeah, so with Cilium and the BGP control plane, there are different ways to get things done, right? We can use a standard service IP, or we can use a pod CIDR or multi-pool IPAM and expose those pod IPs directly, and we've gotten feedback from users that want that functionality. And you say public, but it doesn't necessarily have to be public, right? It's just outside of the cluster, and there are many environments where even outside of the Kubernetes cluster is still considered private. And that could be okay: I have VPCs everywhere, right, virtual private clouds throughout my whole organization with different clusters.
And I want those pods to communicate directly so that they don't have to go through a service IP. So, to answer your question directly: exposing outside of Kubernetes doesn't always mean to the public, and we want to be able to achieve those use cases.