Someone is raring to go here. Good afternoon, and welcome again to EuroBSDCon. The next session is Henning Brauer, well-known OpenBGPD and network hacker, who will help us celebrate the 10 years of OpenBGPD. Take it away, Henning.

Yes, thank you for coming to the birthday party. So 2014, as the title kind of indicates, marks the 10th anniversary of OpenBGPD. I initially started it in late 2003. And, well, a 10th anniversary is of course a very, very good opportunity to recycle an old talk. I mean, to look back at the design and implementation and the lessons learned.

Some background: what do I do with BGP, why am I interested in that? I've run an ISP since '96, '98-ish. We're heavily using OpenBSD; we're basically only using OpenBSD, since 2000-ish. Back then, our core routers were running OpenBSD, of course, but with a software package called Zebra for the BGP needs. Back then there really was no choice. There was nothing else. There was one alternative implementation that didn't work at all, basically. So it was Zebra, or buy Cisco.

Zebra is interesting. These days it's called, what was it now? Quagga, right? It's a good example of how not to design a network daemon, and especially not a BGP daemon. First big, big, big mistake: they use threads. Second mistake: cooperative threading. They have a central event queue, which means that everything that's supposed to happen at some point goes into that queue. Now, there are some critical events, like the keepalives you have to send to your peers so that a peer doesn't think you died in between. If those don't arrive in time, the peer will drop the connection to you and drop your routes. And while losing routes is bad, in the worst case you're offline. So by the time the queue gets around to sending the keepalive, the session is already gone. We notice the session is gone, it gets re-established, and of course that generates another big pile of events, flooding the event queue even more and making the problem worse. (The sketch at the end of this section shows the alternative: treat keepalive deadlines as hard timers.)

Next big problem with that thing: there was almost no documentation. The little bit that was there was in Japanese, and my Japanese is not very good. Even the comments in the source code are partially Japanese. So I tried to deal with that. I found and fixed the worst bugs, the ones that disturbed daily operations the most. I got it to a reasonably stable state, where it was still slow as hell. Throwing hardware at the problem was not an option, because that was already pretty decent hardware. A little later, the Zebra author tried to commercialize it and make a living off it, which ended like it usually does when people try that with open source software: it died. The most frustrated users forked it, that's Quagga, and still tried to cope with it. But since the basic design is already so far off and so wrong, that has no chance of working.

So, starting with BGPD. I sense a pattern here: Theo is to blame. At one point when I was in Calgary, the beer is to blame too, of course. But I think you bought it. So I mentioned all this to Theo, and I mentioned that I was considering writing my own implementation, but that it seemed way too big a task. And unfortunately, Theo didn't drink enough, so he did remember this the next morning and kept nagging until I actually started. Was it late 2003, or was it late 2004? Where's Claudio? Not here. Anyway, so I eventually started hacking.
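To make that design lesson concrete, here is a minimal, hypothetical sketch (not Zebra's or bgpd's actual code) of an event loop that services keepalive deadlines as hard timers before touching any bulk work queue. The helpers and the interval are assumptions for illustration:

```c
#include <poll.h>
#include <time.h>

#define KEEPALIVE_INTERVAL 30			/* seconds, illustrative */

struct peer {
	int	fd;
	time_t	next_keepalive;			/* absolute deadline */
};

extern void send_keepalive(struct peer *);	/* hypothetical helper */
extern int  work_pending(void);			/* hypothetical helper */
extern void do_some_work(void);			/* hypothetical helper */

void
event_loop(struct peer *p)
{
	struct pollfd	pfd = { .fd = p->fd, .events = POLLIN };
	int		timeout;

	for (;;) {
		/* never sleep past the keepalive deadline */
		timeout = (int)(p->next_keepalive - time(NULL)) * 1000;
		if (timeout < 0)
			timeout = 0;
		poll(&pfd, 1, timeout);

		/* protocol-critical timers are serviced first ... */
		if (time(NULL) >= p->next_keepalive) {
			send_keepalive(p);
			p->next_keepalive = time(NULL) + KEEPALIVE_INTERVAL;
		}
		/* ... bulk work only runs afterwards */
		if (work_pending())
			do_some_work();
	}
}
```

The point is simply that the poll timeout is derived from the protocol deadline, so a backlog of bulk work can never starve the keepalives.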
By the time I was able to speak to another BGP implementation, like the entire session management, but not actually exchanging any routing information, I started to show the prototype to a couple of people. The one person who sent a meaningful reply was Claudio. Did he leave the room? Yes, he did. And in December we finally reached a state where we could exchange very basic routing information with another peer. At that point we imported my initial BGP implementation into the OpenBSD tree, and while doing so, we also imported Claudio.

The protocol itself is surprisingly simple, actually. BGP, the Border Gateway Protocol, is defined in an RFC. ISPs talk it to each other. Every ISP basically announces the list of networks that are reachable through it, not just its own ones, but everything reachable through it. Since dealing with that at the individual network level is a lot of data, traceroute-style, and sometimes inaccurate, you summarize the networks into autonomous systems. So typically, one ISP is one autonomous system. Instead of dealing with IP paths, BGP looks at AS paths. So each and every ISP is just one hop, one AS. A BGP speaker will announce this ISP's own networks, and it can, it doesn't necessarily have to, but it can, announce the networks it learned from other BGP speakers. Typically, that would be downstream customers. The AS path is just the AS numbers written one after the other: this is the first one, this is the next one, and this is the final destination. In this case, the OpenBSD AS, and to reach it, we have to go through these two ASes. That has probably changed to another path by now.

BGP really only knows about four messages. There's a fifth that was added later, but basically four messages. There's the open message that you send once the TCP session is established; it tells your peer your own AS number, a couple of timing parameters, and stuff like that. There are the keepalives I already mentioned, which are frequently exchanged to make sure that the connectivity between the BGP speakers is still there. We have update messages, which are the interesting ones, because those contain the actual routing information. And there's the one you don't want to see, the notification. That means a fatal error happened, and when you send a notification or receive one, you are required to tear the session down and delete the routes. (A sketch of the wire format follows at the end of this section.)

So, BGPD design. I did not want to go for threads, surprisingly. So we went for three processes. One, the session engine, is just responsible for dealing with the TCP connections to the other BGP speakers. It does not itself deal with the routing information in any way; it does not even parse that part. It just passes it on to the route decision engine, which deals with all the routing. That's also the point where the decision of which route to a given destination is best is made. And there's the parent process, which talks to the kernel and starts the session engine and the RDE. So that's the basic design. It's slightly more complicated than that, but that's the basic design. The master process, to the left, runs as root. It has to, because it wants to change the kernel routing table, which requires root. It also deals with encryption keys, which is also a function that requires root permissions.
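As an aside, the wire format behind those four (plus one) message types is tiny. A sketch of the fixed header per RFC 4271, with illustrative constants rather than bgpd's own definitions:

```c
#include <stdint.h>

/* the fixed 19-byte BGP message header from RFC 4271 */
#define BGP_MARKER_LEN	16

struct bgp_msg_hdr {
	uint8_t		marker[BGP_MARKER_LEN];	/* all-ones these days */
	uint16_t	len;	/* 19..4096, network byte order */
	uint8_t		type;
} __attribute__((packed));

/* the four message types, plus the later addition */
#define BGP_OPEN		1	/* AS number, hold time, ... */
#define BGP_UPDATE		2	/* the actual routing information */
#define BGP_NOTIFICATION	3	/* fatal error, tear session down */
#define BGP_KEEPALIVE		4	/* "I'm still alive" */
#define BGP_ROUTE_REFRESH	5	/* RFC 2918, added later */
```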
It forks off the session engine and the route decision engine, which then drop privileges to an unprivileged user and chroot themselves into /var/empty, which, as the name indicates, is empty. Almost: up until recently there was a logging socket in there. The communication between those processes happens over socketpairs. The master process sends configuration information to the other two. The session engine sends the routing information it learns from its neighbors to the route decision engine, and of course events like "that peer just went away". The route decision engine talks to the master process to validate the next hops it learns from its peers; you don't want to install a route into the kernel where the gateway is not reachable, right? And eventually it feeds the routes to the master process, so the master process can enter them into the kernel routing table. The session engine is the only one talking to the network. And, as I mentioned, this is an important point: these run without any meaningful privileges. They have no special privileges in the system whatsoever, and they are chrooted. We are, once again, following the principle of least privilege here: each and every process only has the privileges it really needs to fulfill its task. When one of the unprivileged processes needs some operation done that requires privileges, it asks the parent process to do that and send back the result, kind of.

The session engine needs root privileges for one single operation: binding to TCP port 179. And as I mentioned, the parent process obviously needs root permissions, since it modifies the kernel routing table and IPsec flows. But we'll get to that later. So, to be able to bind to that low port, you could just bind to the port and then drop privileges. But that doesn't quite work out, because we might have to do this again later, after dropping privileges. So the way we do this is: the parent process creates the socket, binds to it, and then uses file descriptor passing to send the socket over to the session engine. The parent, unfortunately, has to keep track of which file descriptors the session engine has open, so it doesn't bind to the same port on the same IP address again. Since the session engine is not doing that bind itself anymore, it's perfectly fine to run without any special privileges.

So, the session engine. As I've probably made clear, I don't quite believe in threads. So if you're not doing one thread per connection, and the other extreme, one process per connection, is also stupid, the only other option is to go for a non-blocking, asynchronous design. That last but not least means we have to put all sockets into non-blocking mode. What does that mean? Usually, when we have a socket and we call write on it, say with 64 bytes, no, let's make it a little more, say 128 bytes, and at that time the kernel can only get rid of, say, 50 of those bytes because the peer is too slow, the write call will block and will only return to your code once it's done with the 128 bytes completely. It will sleep. Once we switch the socket into non-blocking mode, as soon as it cannot proceed immediately, it will not sleep. It will return and tell us: hey, I wrote 50 bytes, and the remaining 78 bytes you will have to write again later. But you need to take care of that in your code.
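A minimal sketch of that bookkeeping, assuming a fixed-size output buffer; bgpd's real buffer API is more general than this:

```c
#include <sys/types.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>

/* hypothetical minimal output buffer for a non-blocking socket:
 * remember what write(2) did not take and retry it later */
struct obuf {
	unsigned char	 data[4096];
	size_t		 len;		/* bytes still to be written */
};

/* try to flush: 0 when done, 1 when more remains, -1 on error */
int
obuf_flush(int fd, struct obuf *b)
{
	ssize_t n;

	while (b->len > 0) {
		n = write(fd, b->data, b->len);
		if (n == -1) {
			if (errno == EAGAIN)	/* kernel buffer full: */
				return (1);	/* retry when writable */
			return (-1);
		}
		/* shift the unwritten remainder to the front */
		memmove(b->data, b->data + n, b->len - n);
		b->len -= n;
	}
	return (0);
}
```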
So the consequence of that is that you have to do the buffer management yourself and keep track of which of those bytes you already got rid of and which you still have to write out. To abstract that away, I designed a pretty easy-to-use buffer API that hides all these details. On top of that, I added an API for the messaging between the processes, which I called imsg. It has nothing to do with that company from the US. That turned out to be, I'm surprised myself, a pretty smart thing, because we're now using this very same API in almost all newer OpenBSD daemons. Of course it evolved, and of course it's not just my code anymore. But that was the groundwork. So these days it's actually in libutil and not duplicated in all the programs anymore. That's an incomplete list of programs in OpenBSD using it; I think that's kind of impressive. You have to add httpd now, for example. (A small usage sketch follows at the end of this section.)

So, the messaging. Obviously, when you do privilege-separated daemons and have multiple processes, each only running with the privileges it really needs, the messaging is a core component. BGPD turned out to be kind of complex in that area; we went up to 66 different message types. That's a lot; to compare, OpenSSH has fewer. We're not just using that messaging framework internally between the processes. We're also using it between the little control utility, bgpctl, which talks to bgpd using the very same library functions. But instead of using a socketpair, it goes over a Unix domain socket. And imsg doesn't really care what the underlying transport mechanism is. It works over TCP just as well.

So, the session engine. As mentioned, it just maintains the sessions. That's its job, nothing else. As soon as a session is established and all the parameters are negotiated, it frequently sends out the keepalives. It keeps track of the keepalives it gets from its neighbor, so it can drop the connection if the peer is dead. And it does not deal with routes at all. I cannot stress this enough. Due to that, we can be reasonably certain that we'll never miss sending out a keepalive in time. The session engine is very, very lightweight; it's typically under five megabytes of RAM. If it does get bigger, that's an indicator that one of your peers is very, very, very slow, because then it has to buffer a lot.

The RDE is where most of the magic happens. It maintains the so-called routing information base; that's where all the routes are stored. It's implemented mostly as massively interlinked tables. The most important ones are the prefix tables, that's the networks, the routes, the IP prefixes, and of course the AS path tables. The filtering runs there, because you don't want to accept everything a random peer sends you. It decides which of the paths it learns for a specific prefix is the best. Well, there's an algorithm saying "this is the best"; what you call the best is of course up for discussion. And it generates the routing updates to be sent out to the peers itself, hands them off to the session engine, and the session engine hands them off to the peers. So, as mentioned, heavily interlinked tables, the main point here being that we do not want to do any table walks. That's the way a certain commercial router vendor, starts with a C, implemented this, and it turned out to be a giant performance issue for them, because every couple of minutes this table-walking cleanup process started and blocked other operations. So I did not want to repeat that mistake.
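Since imsg became such a widely used building block, here is roughly what using it looked like in that era, following the imsg_init(3) API from libutil (link with -lutil); the message types are made up for this sketch:

```c
#include <sys/types.h>
#include <sys/queue.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <imsg.h>

/* made-up message types for this sketch */
enum imsg_type { IMSG_CONFIG = 1, IMSG_ROUTE_UPDATE };

void
child_main(int fd)	/* fd: one end of a socketpair to the parent */
{
	struct imsgbuf	 ibuf;
	struct imsg	 imsg;
	ssize_t		 n;

	imsg_init(&ibuf, fd);

	/* queue a message for the other process and flush it out */
	imsg_compose(&ibuf, IMSG_ROUTE_UPDATE, 0, 0, -1,
	    "192.0.2.0/24", 13);
	imsg_flush(&ibuf);

	/* read and dispatch whatever the parent sent us */
	if (imsg_read(&ibuf) <= 0)
		return;
	while ((n = imsg_get(&ibuf, &imsg)) > 0) {
		switch (imsg.hdr.type) {
		case IMSG_CONFIG:
			/* imsg.data / imsg.hdr.len carry the payload */
			break;
		}
		imsg_free(&imsg);
	}
}
```

The same code works whether the file descriptor is a socketpair to a sibling process or a Unix domain socket from a control utility, which is exactly the property bgpctl relies on.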
Back to the RDE: it obviously has to be pretty memory efficient, because we potentially deal with a lot of routes here. And of course, I want it to be fast.

The decision process I mentioned decides which of the paths is best. The first check is: is the prefix eligible, which really means, is the gateway reachable? Because, once again, we don't want to install such routes. The second step is the so-called local preference, which comes from your configuration; you can use it to force BGPD to pick a certain path for a certain prefix. The third one is the one that is supposed to kick in usually: you want to pick the shortest AS path, since that's likely the fastest. These days, with ISPs becoming bigger and bigger and some ASes covering half the world, that is not the best measure anymore, but well, still. Everything after that is really mostly there to make sure that we decide on one best path. So the next step is origin, which indicates whether the route originally comes from OSPF, or was statically configured, or the like. Then there's the multi-exit discriminator; please don't make me explain that. Then it kind of already gives up: external BGP sessions win over internal BGP sessions, where internal means both peers have the same AS number. We added weight, which comes from the configuration as well. You can use it to indicate a preference for a certain link. The next one is another extension we added: the older route is better, because older means more stable. Well, and then it becomes kind of hilarious. The lowest BGP ID wins. What's the BGP ID? The BGP ID is the numerically lowest IPv4 address on the system. If that still doesn't give a winner, which is kind of impossible, but still, the shorter cluster list wins; ignore that cluster stuff for a moment. If that still doesn't work, the numerically lowest peer address wins. Good indicator. And if that doesn't work, we are screwed. (A compact code sketch of this cascade follows at the end of this section.)

You see equally long AS paths more and more, because the bigger ISPs all peer in the same spots, at least in Europe. And for traffic engineering, you want to be able to express a preference for one of your upstream providers. This is not local preference; with local preference, you force traffic onto that one. You just want to say: if they are equally good, prefer that one. And that's really what that weight extension we added is for.

Coming back to the parent process, which really is BGPD's interface to the kernel: besides getting the actual routes into the kernel, it does the next hop validation for the RDE. As mentioned, you don't want to install a route whose gateway is unreachable, so you have to figure out whether the next hop, the gateway, is actually reachable. To be able to do that, it maintains its own copy of the kernel routing table. That can be quite big, right? We're talking roughly 400,000 entries. So it fetches the entire kernel routing table on startup, and the interface list as well. And now it obviously has to keep that in sync with the kernel. To do that, it listens on the routing socket. To change a route from userland, you send a message on the routing socket to the kernel, and the kernel will relay that message to all listeners on the routing socket. So as long as you see all the messages and don't miss one, which actually is possible, you can keep your internal copy in sync. And, well, that's exactly how we do it. That also means that if you manually modify routes on the BGPD router, BGPD will notice and cope with it, which many other implementations don't.
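Here is that tie-breaking cascade in a compressed, hypothetical rendering; bgpd's real code is structured differently, and the cluster-list step is omitted:

```c
#include <sys/types.h>

/* one candidate path for a prefix; fields mirror the steps above */
struct rpath {
	int		eligible;	/* next hop reachable */
	u_int32_t	localpref;	/* higher wins */
	u_int16_t	aspath_len;	/* shorter wins */
	u_int8_t	origin;		/* lower wins */
	u_int32_t	med;		/* lower wins */
	int		external;	/* eBGP beats iBGP */
	u_int16_t	weight;		/* local extension, higher wins */
	time_t		learned;	/* older (smaller) wins, extension */
	u_int32_t	bgp_id;		/* lower wins */
	u_int32_t	peer_addr;	/* final tie breaker, lower wins */
};

struct rpath *
better(struct rpath *a, struct rpath *b)
{
	if (a->eligible != b->eligible)
		return a->eligible ? a : b;
	if (a->localpref != b->localpref)
		return a->localpref > b->localpref ? a : b;
	if (a->aspath_len != b->aspath_len)
		return a->aspath_len < b->aspath_len ? a : b;
	if (a->origin != b->origin)
		return a->origin < b->origin ? a : b;
	if (a->med != b->med)
		return a->med < b->med ? a : b;
	if (a->external != b->external)
		return a->external ? a : b;
	if (a->weight != b->weight)
		return a->weight > b->weight ? a : b;
	if (a->learned != b->learned)
		return a->learned < b->learned ? a : b;
	if (a->bgp_id != b->bgp_id)
		return a->bgp_id < b->bgp_id ? a : b;
	return a->peer_addr < b->peer_addr ? a : b;
}
```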
We also need the list of interfaces, because when an interface goes down, we want to invalidate all routes that use that interface, obviously. We keep that in sync by listening to the routing socket too; interface link state is announced there. So the BGPD process actually notices when you pull the cable and reacts immediately, as opposed to a certain commercial implementation that has to wait for the next run of that table-cleanup process.

That internal view of the kernel routing table can be coupled to and decoupled from the kernel. Why? Because I could, basically. Well, you have to have a mode to run BGPD without modifying the kernel routing table: if you're not actually running on a router, but on a system that just relays BGP information to other BGP speakers, there is no point in updating the kernel routing table. And since the code to couple and decouple was there, we could as well make it a switch and allow doing this at runtime. And, surprisingly, this was super fast. On hardware that is 10 years old now, with code written 10 years ago, it took under 10 seconds to feed the entire table, back then 250,000 or 300,000 routes, into the kernel. That's pretty impressive. The memory efficiency is still there, too: we still manage to squeeze 400,000 routes into roughly 32 megabytes.

It's kind of obvious that your BGP sessions are a nice attack vector. If somebody manages to make a session go away, you remove the routes, so in the end you might be offline. If somebody manages to smuggle packets onto your TCP session, you're even more screwed, because he can make you route traffic towards his sniffing box. So you really want to protect those sessions. The BGP standards have an extension, TCP MD5, which really sits at the TCP level and not at the BGP level. And it turned out that BSD always had code for that. Well, that code was not just full of dragons; it was even worse. There was no way this code had ever worked. Impossible. So they carried code around for years that never worked. Brilliant. So we just deleted it, because it was pointless and non-fixable. Instead, we re-implemented TCP MD5 within the IPsec framework, because it really is kind of a special form of IPsec, right? That unfortunately meant that I had to add the PF_KEY interface to BGPD, which is the interface the kernel provides to manipulate IPsec information. That interface has been designed by a committee, which is a guarantee for the specification to completely suck and be completely decoupled from reality. Still, it's the standard. So I implemented it. It was painful, but it meant that I could already talk to the kernel about IPsec stuff, and that obviously made it much easier to add real crypto later, like using real IPsec.

There's a nice example here of how not to implement MD5 signatures, and it's from FreeBSD. In 2006, they added code to be able to calculate that MD5 signature and send it out, but they did not bother adding the code to actually check the signature on incoming packets. Really, really useful. Another way how not to implement this is provided by a certain commercial vendor. They do the TCP MD5 signature check before they do the regular TCP checks, like sequence number and checksum matching. So that actually became a denial-of-service vector. Why would you do the most expensive check first? It doesn't make sense to me. This unfortunately spread the myth that TCP MD5 is dangerous because it opens the door for denial-of-service attacks.
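The lesson in code form: a hypothetical segment-acceptance check (not any real stack's code) that orders the cheap tests before the expensive one, so spoofed garbage is dropped early and cannot be used to burn CPU:

```c
struct tcp_seg;				/* opaque here */
struct tcb { int md5_enabled; };	/* minimal stand-in */

/* hypothetical helpers, assumed to exist for this sketch */
extern int seq_in_window(const struct tcp_seg *, const struct tcb *);
extern int checksum_ok(const struct tcp_seg *);
extern int md5_signature_ok(const struct tcp_seg *, const struct tcb *);

int
segment_acceptable(const struct tcp_seg *seg, const struct tcb *tp)
{
	if (!seq_in_window(seg, tp))	/* cheap: sequence number */
		return 0;
	if (!checksum_ok(seg))		/* cheap: TCP checksum */
		return 0;
	if (tp->md5_enabled && !md5_signature_ok(seg, tp))
		return 0;		/* expensive: done last */
	return 1;
}
```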
It seems like only Juniper and OpenBSD got this right, and that is astounding, because it's not actually all that hard. I think FreeBSD has fixed this by now, and I'm not sure about Cisco.

So instead of this half-baked TCP MD5 crypto, why wouldn't we be able to do real IPsec? We're talking to that kernel interface already. So let's do that, with static keys. It's really not all that hard. How does it work? BGPD loads the security associations, that's basically the keys, into the kernel, and it sets up the flows, and you don't have to do anything manually. Turns out Juniper can do the same thing, and we are perfectly compatible and it all just works. Unfortunately, and this is one of the lessons we learned: even though Juniper machines support that, basically no ISP enables it. It's not being used. Most ISPs don't even use TCP MD5; they keep their sessions entirely unprotected and easy to attack. Cisco can't do that, of course. It's entirely possible that there's some feature set you pay extra for that implements it, but as far as I'm aware, it doesn't even exist. And yeah, well, as mentioned, unfortunately it's very, very uncommon to use any of these techniques to protect the TCP sessions.

Static-keyed IPsec is nice, but how about dynamic keying? That would be even better. So you'd use isakmpd, these days you could probably use iked, to do the keying, so the keys are changed regularly. The implementation actually is not that hard. BGPD gets an unused pair of SPIs, that's some kind of identifier, basically, from the kernel and uses them. It still sets up the flows itself, which would be hard to do manually in the isakmpd configuration. And BGPD already knows the endpoints, so there's no point in the administrator having to repeat that information in another config file. Which also means that isakmpd only needs to do the keying now, not all the other stuff. And that in turn means that you can run isakmpd without any configuration. It really is as simple as copying over the key files, which are automatically generated, starting isakmpd with the -K flag, and, well, going for beer.

The TCP window size is an interesting topic. In 2006, I think, there was a kind of famous attack where people realized that it was kind of easy to smuggle a TCP RST onto an existing TCP session from the outside. The reason being that the TCP windows were too big, so the window of allowed sequence numbers was too big. For BGPD, this is critical, because session gone means routes gone. So we use the default window, unless you turn on TCP MD5 or IPsec; then we grow the window as far as we can, to 64K. So, conclusion: IPsec, or cryptography in general, improves performance.

At some point, we figured out that BGPD is not just good for exchanging routing information. I mean, a route is just an IP address and a netmask, basically, right? Or a prefix length. So can't we use this for something else? Yes, we can. One of the more interesting ideas, which finally is being implemented now, is the integration with spamd, to exchange IP addresses of hosts that send too much spam. Peter is giving a presentation on that. Right now, I think, right? Oh, he's doing the other talk, sorry. I'm adding confusion here, sorry for that. So, to make use of that, BGPD needs to be able to talk to PF. And, well, I might be biased here, but I also work a lot on PF, so I was kind of interested in integrating the two. How does that work?
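On the PF side of that integration, a daemon pushes addresses into a table through ioctls on /dev/pf. A hypothetical sketch, assuming a table named "spammers" that already exists in pf.conf, with error handling mostly omitted:

```c
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <netinet/in.h>
#include <net/pfvar.h>
#include <arpa/inet.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* push one IPv4 prefix into the PF table "spammers" */
int
pftable_add(const char *prefix_ip, u_int8_t prefixlen)
{
	struct pfioc_table	io;
	struct pfr_addr		addr;
	int			dev, ret;

	if ((dev = open("/dev/pf", O_RDWR)) == -1)
		return (-1);

	memset(&addr, 0, sizeof(addr));
	addr.pfra_af = AF_INET;
	addr.pfra_net = prefixlen;
	inet_pton(AF_INET, prefix_ip, &addr.pfra_ip4addr);

	memset(&io, 0, sizeof(io));
	strlcpy(io.pfrio_table.pfrt_name, "spammers",
	    sizeof(io.pfrio_table.pfrt_name));
	io.pfrio_buffer = &addr;
	io.pfrio_esize = sizeof(addr);
	io.pfrio_size = 1;

	ret = ioctl(dev, DIOCRADDADDRS, &io);
	close(dev);
	return (ret);
}
```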
In the BGPD filter language, we have a way to pick the prefixes we want to insert into a specific table in PF. With that table in PF, you can do anything that PF can do. You can filter based on that information. You can redirect packets based on that information; that's what's being done in the spamd blacklist distribution case. You can also do quality-of-service processing, if you want. It opens up some pretty interesting options.

We have route labels now. That's an extension I did to the kernel routing table, where we are able to attach originally 32, I think it's 64 bytes now, of free-text information to a route. If you do a route get, you see the label. It's clear text. It's stored in the kernel routing table with the route, and it's being set in the BGPD filter language. So here we match everything from a specific neighbor, from a specific autonomous system, and add that specific label, which then ends up in the kernel routing table. PF can filter based on those labels. I said you can block traffic based on that, but it's much more interesting to use it for other things, like quality-of-service processing. So you can put all the routes that point to a specific ISP into a specific queue. And of course you can slow this queue down. And you can tell your customers: I always told you that ISP is slow. It is really, really powerful. There are more useful applications; I listed some here. You can limit the states per source address depending on where the source sits. So if you know that most of your attacks come from a specific ISP, you can apply those limitations only to that ISP. It really helps in fighting these denial-of-service attacks.

CARP. Your BGP router obviously is kind of important: if it's down, you're offline. So how about being able to use two in a redundant setup and fail over? We have CARP to do that, and now we just need to integrate the two. BGPD is aware of the CARP master/backup state. When the CARP interface is in backup state, BGPD just keeps all the sessions depending on that one in idle state, doing nothing. And as soon as the CARP interface becomes master, BGPD immediately tries to establish all those sessions, to cut the failover time. And it's actually pretty damn efficient. It works very, very well. That's exactly the setup I've been using for more than 10 years now. Sorry, for 10 years.

The other way around works as well: BGPD can influence the CARP master/backup decision. I mean, the CARP interface is typically the default route for the inside machines, right? And you don't want your freshly rebooted BGP router to become master before it actually has learned the routes. So BGPD can demote the CARP interfaces to prevent that. You mark the important sessions in the configuration, and it will undemote CARP when those important sessions are established and the routing information is being exchanged.

One of my favorite topics: IPv6. Do I need to say anything about that, really? Can you read the commit message, or do I have to read it out? "These bytes have to be zero by definition"? No, I don't know what to say about this, really. It's so horrible. I mean, next. Here's another example. Here's a function, the IPv4 version. It takes a netmask and converts it into a prefix length. That's four lines of code. And it's only four lines of code because the default route is a special case.
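A sketch of what such an IPv4 helper looks like, assuming the usual netmask-in-network-byte-order convention:

```c
#include <sys/types.h>
#include <arpa/inet.h>	/* ntohl() */
#include <strings.h>	/* ffs() */

/* convert an IPv4 netmask to a prefix length;
 * 0.0.0.0 is the default-route special case mentioned above */
u_int8_t
mask2prefixlen(in_addr_t ina)
{
	if (ina == 0)
		return (0);
	return (33 - ffs(ntohl(ina)));
}
```

For 255.255.255.0, ntohl() yields 0xffffff00, ffs() finds the lowest set bit at position 9, and 33 - 9 gives the expected 24. The all-zero default-route mask has no set bit at all, hence the special case.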
Otherwise it would be one line. Now let's look at the IPv6 version. Sorry, it doesn't fit on the slide. And yeah, it's incomprehensible.

So, the BGPD filter language. Being so involved in PF, I tried to make the filters really look like PF and used the same approach. So it's one big filter rule set, instead of having filter blocks or one filter per neighbor. The syntax is like PF, and, like PF, it is last match wins. It actually is a properly designed language. It is nice to use. It's not just a software accident that happened, which is the case for certain commercial implementations.

The filters are typically very important. Unfortunately, once again, most ISPs don't do any filtering. They'll accept anything their peers announce to them. So if your five-employee peer suddenly announces "hello, I'm Microsoft", they believe it. And this has happened, and this frequently happens. This is a big problem. With proper filters in place, this would not be possible. At exchange points, it's even more important. At exchange points, everybody typically peers with a route server, because everybody peering with everybody obviously doesn't quite scale. So there is a route server, everybody peers with the route server, and the route server redistributes the information. These route servers had better filter what they accept from the exchange point members before redistributing it to everybody else, right? Well, only some exchange points do that; it's roughly half by now, I'd say. The other half of the exchange points, once again, don't do any filtering at all. This is especially problematic since everybody trusts those route servers, right? And since some exchange points get quite big, the filters get very big. We did load the filter used at DE-CIX in Frankfurt; the resulting rule set was some 150,000 lines or so. Very massive. And that's exactly the problem here. With the filters implemented just like PF, one big rule set, sequential evaluation, last match wins, we do have a big performance problem there. That was a mistake, which we still haven't fixed, unfortunately. It would be much better to have smaller filter blocks and apply them on a per-neighbor basis. At some point we'll have to do this, but we have kept saying that for at least five years now, not finding the time to do so. Mistakes happen.

The ecosystem is quite important. A random Unix machine doesn't suddenly become a good router just by installing a BGP speaker on it. So it's not just the BGP implementation. We modified the kernel a lot to make OpenBSD a better router. We got route priorities. We got multipath routes, multiple routing tables, and eventually routing domains. We even got an MPLS stack. And it's also not just BGP: we also wrote ospfd, and dvmrpd for the multicast stuff. And for the few places that still run RIP, we even got ripd, a RIP re-implementation.

Hot topic: so-called software versus so-called hardware routers. Many ISP employees keep claiming they have to buy hardware routers, because all that software stuff is just bullshit and too slow and whatever. In reality, most of the so-called hardware routers share exactly the same design: it's basically a PC running software. It's only the really, really, really big and expensive routers, way beyond 100,000 euros, that implement more bits in dedicated hardware. The software routers can realistically be used up to somewhere around 10 gigabits. That is quite a bit.
In reality, you want to go a little bit lower, because you want to maintain some headroom for attacks. On the other hand, of course, that limit moves upwards every year, because the hardware becomes faster. And really, 90% or so of the installed BGP routers don't handle that much traffic, so they are perfectly fine with a software implementation. And obviously, BGPD gives you much more flexibility than the dedicated hardware. Last but not least because you can run tcpdump, which you can't do on your commercial routers. And especially use cases like route servers, which don't forward traffic, really cry for a software implementation.

So, status: it's rock solid. It's reliable as hell. I can stand here and not worry about dropping off the internet. It is pretty much feature complete. It's in use by many ISPs and exchange points. I just learned that, despite the filter performance problem, we have a market share of about 30% at exchange points. One third; not too bad. There are some really, really, really big ISPs using it. So when you're sending your traffic across the internet, the chance of it passing an OpenBSD machine running BGPD is quite high. Of course, it's much cheaper to buy, run, and operate than the big-brand, so-called hardware routers. And for those who don't trust themselves, you can of course buy commercial support from a couple of companies, including mine. And that's it, basically. We don't really have time for questions. Okay, so it's coffee or questions, huh?

Well, hi. How do you cope with routing messages that are lost? That's almost a separate talk, because it's kind of hard. We modified the kernel so we can figure out when that happens. So in the kernel, we know when it happens, and then we send a special message up the routing socket, indicating: hey, you lost a couple of messages. You can use that in userland to refetch the entire thing. (There's a sketch of this at the end of this section.)

And how do you debug OpenBGPD, this three-headed beast? That's not very specific. Most of the time, by just adding a couple of printfs; sometimes by inspecting the core image with gdb, if it actually dumped core, which hasn't happened to me in a long time. It really depends on what you're after. But usually, you get away with a couple of printfs, and it's faster than all the other techniques. Thank you.

Hi. What number of routes would you consider production-safe? Production-safe, hmm. You mean the maximum number of routes learned via BGP? From the BGP side or from the kernel side? Actually, both. From the BGP side, the only limit really is memory, so I can't give you a number. It's not infinite, but for practical purposes, it is. The biggest setup I'm aware of has over 2 million. Okay, so that's just four full feeds? No, no, no, no: more than 2 million prefixes. It's a very large ISP that has all the customer routes in there, like the de-aggregated ones that it's not going to announce to the outside. It's a very large one. And they do load them into the kernel, and it works. So 2 million is definitely not yet a problem. In the kernel, the limit really is memory too, but the kernel memory infrastructure is different from userland. I can't give you a number; it's, once again, for practical purposes, not limited. Of course, the bigger your kernel routing table, the slower the lookups. So for the packet forwarding path, that could be a performance problem. Okay.
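Coming back to the lost-messages question: on OpenBSD the kernel reports that condition with an RTM_DESYNC message on the routing socket. A hypothetical sketch of the userland side, assuming a fetchtable() helper that re-dumps the whole table via the NET_RT_DUMP sysctl:

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <net/route.h>
#include <unistd.h>

extern void fetchtable(void);	/* hypothetical full-table refetch */

void
dispatch_rtmsg(int rtsock)	/* rtsock: socket(AF_ROUTE, SOCK_RAW, 0) */
{
	char			 buf[4096];
	ssize_t			 n;
	struct rt_msghdr	*rtm;

	if ((n = read(rtsock, buf, sizeof(buf))) <= 0)
		return;
	/* simplified: a real loop would walk all messages in buf,
	 * advancing by rtm->rtm_msglen each time */
	rtm = (struct rt_msghdr *)buf;
	switch (rtm->rtm_type) {
	case RTM_ADD:
	case RTM_CHANGE:
	case RTM_DELETE:
		/* update the internal copy of the kernel table */
		break;
	case RTM_DESYNC:
		/* the kernel dropped messages: resync from scratch */
		fetchtable();
		break;
	}
}
```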
And another question. I think two years ago, I tried to use OpenBGPD, and it did not have support for BGP confederations. Has this changed now? No. Okay. So we are not quite feature complete. Oops. Any further questions? Further questions? Everybody is shy. Well, if everybody's shy, let's thank our speaker. Thank you.