Did you get enough coffee and cake or apples? Now, Claudio Jeker is going to speak about OpenBSD as a routing platform. Yeah, good one. So yeah, I will talk about using OpenBSD as a router. I will cover more or less where we're coming from and where we're trying to go in the next couple of releases, and I will focus mainly on what happened during the last year in OpenBSD in this area.

So, the past: where are we coming from? A lot of people are using OpenBSD as a firewall, and I think that's one of the key strengths of OpenBSD: PF, which is probably one of the best firewalls available at the moment. Then we have ALTQ, with which you can implement quality-of-service policies. Another feature that we added a couple of years ago was carp, and together with pfsync you can build highly reliable firewalls that fail over without even losing TCP connections. Then trunk was added to bundle Ethernet links; in FreeBSD this is now the link aggregation driver, called lagg, I think. You can load balance, you can round-robin, and you can even fail over between the Ethernet links. So with trunk together with carp you can build a system that will always be reachable even if one switch is lost, because you can use the trunk failover and add carp on top of it, and by combining the two you get a really reliable system.

One other thing that a lot of people don't know we can do is so-called mbuf tags. Mbuf tags live entirely inside the kernel and are not visible to userland; they let you attach metadata to packets as they flow through the network stack. This is mostly used by PF to tag packets on the way in, so that you can do specific matching on the outgoing rules, and by doing that you can normally simplify your rule set quite drastically.
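As a sketch of that tagging idea in pf.conf (the interface names and addresses here are made up for illustration):

```
# tag packets as they arrive on the internal interface
pass in  on fxp0 from 10.0.0.0/8 tag INTERNAL

# outgoing rules match on the tag instead of repeating address lists
pass out on fxp1 tagged INTERNAL
```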
In the routing area we started with OpenBGPD, which appeared in 3.5, and over the years we have built up support for quite a lot of RFCs, plus some features that are probably unique to OpenBGPD. One of the most important things about bgpd is that it has an atomic configuration reload: everything is based on your configuration file, and you reload that file as a whole. This is really different from the approach most other routing implementations use, like Zebra, Quagga or Cisco. They normally have a CLI where you start editing or changing your configuration, and the moment you hit enter it already starts trying to make a connection or whatever, and that can cause quite some trouble when you reconfigure your systems.

We have support for BGP communities: you can check, set and remove communities. We have multi-protocol capabilities to support IPv6. We implement a few BGP-specific RFCs like route reflection and route refresh. Route reflection lets you simplify your iBGP mesh, which means you don't have to build a full mesh in your network; this is normally handy when you have a lot of routers. Route refresh is a mechanism to request that your neighbor resend the whole routing table, but it's a bit of a crappy RFC. We have inbound and outbound soft reconfiguration, which means that when you reload your configuration and your filters have changed, you no longer have to clear the neighbor sessions for the filter changes to take effect; it happens as part of the reload. Outbound soft reconfiguration does not need additional memory, but inbound soft reconfiguration needs more memory, because it has to keep around the information about what the peer originally sent.
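A sketch of that reload workflow (the file path is just the default location): bgpd can verify a configuration without starting, so you can check the file first and only then apply it atomically with bgpctl.

```
# check the new configuration first; only reload if it parses cleanly
bgpd -n -f /etc/bgpd.conf && bgpctl reload
```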
We're using a copy-on-write scheme there, so if you don't modify too much you don't actually need a lot more memory for inbound soft reconfiguration.

Then there are the OpenBGPD specials, stuff we support that you normally don't see in other BGP daemons. We have the possibility to add prefixes to a PF table; even prefixes that are not valid for the forwarding path can still be added to a table. We have so-called route label support. A route label is a label that we attach to a route in the kernel, and PF can then look at this label and make decisions based on it. This comes in handy if you want to do quality of service, if you want to prefer certain links, or if you say, okay, my upstream provider is the one where I have to watch that I'm not pushing too much traffic out, or I want to reduce the amount of traffic sent to him; I can use these route labels for that. We have support for carp interfaces in bgpd, so we can do BGP failover with carp. This is handy if people want to run it at, say, an internet exchange point where they only get one IP address but still want a redundant system: you have your two BGP routers running, one is in backup and has no active sessions, and the master makes all the connections. If the master disappears, the backup starts to open up all the connections, and that normally works a lot faster than waiting for all the timeouts and the renegotiation of the sessions. We support TCP MD5 and IPsec as security mechanisms. They're directly integrated into the configuration, so you don't have to fiddle around with a separate IPsec configuration, and TCP MD5 is just a simple line in the configuration: you give the key and it just works. Another really special thing is a special macro, the neighbor-as value, that you can use in communities.
The idea behind this is that a lot of configurations try to add communities based on the neighbor a prefix was received from or is being sent to, and this macro makes that a lot easier because you can simplify your configuration. As an example, at the Telehouse Internet Exchange in Zurich we switched from a Zebra system to an OpenBGPD system. On Zebra we had to configure special route-maps for every neighbor, and the config normally ended up at a couple of thousand lines; we replaced it with an OpenBGPD configuration that does the same thing in 20 lines. So that's pretty cool.

The other routing daemon we have is OSPFD, an interior gateway protocol daemon. We added basic protocol support and we are able to redistribute static and connected networks. So that's not a lot of supported features, and why is that? OSPFD has now been in the tree for two and a half years, and in the first one and a half years we mostly fought with the state machines and the various interactions between all the OSPF daemons in the network. It's a pretty big mess, because it's all multicast and everybody talks with everybody; you have to make the right decisions based on your own view, and in the end all machines need to have the same information. Even small errors normally end up as crazy instabilities in your network. One of the big problems is that the RFC that describes OSPF is not written very explicitly; there are some gray areas where you have to figure out what you actually have to do. Another problem is that we found a few systems that actually violate the RFC, so you have to add workarounds to get them working. In the end, OSPF is a lot harder to implement than BGP, because BGP just uses a simple TCP session.
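Pulling those bgpd features together, here is a minimal bgpd.conf sketch; all AS numbers, addresses, keys and table names are invented for illustration:

```
AS 65001
router-id 192.0.2.1

neighbor 192.0.2.2 {
	remote-as 65002
	tcp md5sig password mysecretkey    # the whole TCP MD5 setup
}

# tag every learned prefix with a community encoding the sending
# neighbor's AS, instead of one rule block per neighbor
match from any set community 65001:neighbor-as

# feed blackholed prefixes into a pf table, usable even when they
# are not valid for the forwarding path
match from any community 65002:666 set pftable "blackhole"

# attach a route label that pf can later base decisions on
match from 192.0.2.2 set rtlabel "upstream-b"
```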
You talk to your neighbor, you get his stuff, you get all the prefixes, then you do a local calculation and you're done; and if you don't get something quite right, okay, your routing table is probably not 100% correct, but it doesn't result in a meltdown of your network. In OSPF, on the other hand, every router communicates through multicast messages, flooding its information out into the network, and if you miss updates or interpret them wrongly you get a totally wrong view of the network, and that can result in things like evil routing loops that bring down networks.

What did we do during the last year? This is mostly 4.0 to 4.1. In the kernel we added a new thing in PF: unicast reverse path forwarding checks. The idea is that you take the source address of an incoming packet, do a route lookup with it, and then compare the outgoing interface of that route with the interface the packet actually came in on; if they do not match, the check failed and you can deny the packet based on that. So it's a little bit of an antispoof on steroids. It normally works very well for routers with a default route, or for firewalls at the border of your network. It does not really work well in the middle of your core network, where routing can take various paths. We also added support for multipath routing, or actually equal-cost multipath routing: it's possible to define multiple next hops for a specific network, and the system will then load balance the connections.
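Going back to the uRPF check: in pf.conf it is exposed as the urpf-failed address keyword; a minimal sketch:

```
# drop incoming packets whose source address, looked up in the routing
# table, would not be reached through the interface they arrived on
block in quick from urpf-failed
```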
The algorithm we use is built so that we take the source and destination address to compute a hash, so that all connections from one host to another host always take the same path, unless a path is removed from the tree. The idea behind this is to avoid the problems you normally get with multipath routing and TCP: when packets take different paths, they can overtake each other, which leads to window changes; the receiver closes down the window because it thinks there is network congestion, and you get pretty bad throughput when that happens.

Another thing we added is the carp demotion counter. The carp demotion counter is a feedback mechanism from userland to the carp interface, to make the carp interface fail over even if the system itself is still running. For example, on boot-up we raise the demotion counter, and by doing that a system that is still in the process of booting does not become master. This is used by sasyncd, our IPsec failover daemon: when it boots up it raises the demotion counter, then does all the public key calculations, which can take a while, and the key exchanges and everything, and only when it's finished does it lower the demotion counter again, at which point the master system will normally take over again from the backup system. It really helps, and it can be used by other daemons too: in current we have it in bgpd, in ospfd and in sasyncd, so multiple daemons can signal whether the system is in good condition or not.

The next part is mostly from 4.1. In 4.1 we added support for multiple routing tables. It's still very preliminary, but the basic idea is working. So you can have multiple tables in your kernel.
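The demotion counter can also be driven by hand through the carp interface group with ifconfig; a sketch (the count of 50 is arbitrary):

```
# raise the demotion counter so the backup takes over
ifconfig -g carp carpdemote 50
# ... do maintenance, let daemons resynchronize, etc. ...
# lower it again so this box is allowed to become master
ifconfig -g carp -carpdemote 50
```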
You can add routes to these tables, and you let PF select which table should be used for a rule. So what it actually does is let you do policy routing at a higher level than with the route-to statement that is already part of PF. You can do a lot more with it, but it comes at the cost that the setup is more complex than a simple route-to rule. We added rapid spanning tree protocol support in bridge, and it's actually the default now, so no more 40-second wait times with spanning tree. It's a lot better than the old spanning tree protocol, but I'm still not a real fan of it.

Then we added a third routing daemon, or actually a rewritten one: ripd, a RIP version 2 routing daemon. It's mostly intended to replace the really old routed, which is an evil piece of code. It's largely based on ospfd, so it looks very similar, but it's a lot simpler than ospfd, because RIP as a protocol is pretty simple. There are still small appliances out there that can only talk RIP and not OSPF or BGP, so you still need a RIP-capable routing daemon in the system. Another thing that was added is hoststated. There was already a talk about it, so I will keep it really short: it's a load balancing and monitoring daemon for layer 3 and layer 7, and I think it's a pretty cool application because you can do really crazy stuff with it.

This is the stuff we changed in bgpd during the last year. As I said, we added the capability to influence the carp demotion counter, so it's possible to flag your most important sessions: when you lose a really important session, like your full feed, you can signal your system to fail over to the backup that probably still has the feed. The other thing we added is a max-prefix timeout.
Maximum prefixes are normally used on peering sessions, where you say, okay, we limit the number of prefixes to 10, to 50, to 100, to 2000, depending on your neighbor. Until now, if they hit the limit, the session was administratively taken down, and somebody had to log in and clear the session to bring it up again. Now we added a timeout that just takes the session down for some minutes and then tries to bring it up again, in the hope that the other side has realized in the meantime that they made a fuckup and fixed their side, so the session comes up again. This is mostly a thing to let you sleep longer at night, because stuff like this normally happens in the middle of the night, and then suddenly somebody calls you: could you please clear your session? So it's something for the operators.

Two other things we added are the bgplg and bgplgsh tools. One is a CGI, written in C, to do looking glass operations directly against OpenBGPD. It runs, or at least is able to run, inside our chrooted httpd. It uses the restricted control socket of bgpd, so it can only read: you can do all the show commands, but everything that would change the state of bgpd is not allowed. bgplgsh is more or less the same application, just as a CLI that mimics the way a Cisco works; it's mostly intended for remote looking glass use, where you can telnet or SSH into the machine and get the information out.

In ospfd we finally started to add a couple of new things, because it started to get more stable. We added support for, or actually more or less rewrote, the redistribution code, so redistribution can now depend on route labels. It's also possible to negate redistribution rules, so you can say no redistribute 10/8 to avoid redistributing that internal network.
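The max-prefix timeout described a moment ago might look like this in bgpd.conf; the numbers here are invented:

```
neighbor 192.0.2.2 {
	remote-as 65002
	# tear the session down at 1000 prefixes, but automatically
	# try to bring it up again after 15 minutes
	max-prefix 1000 restart 15
}
```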
We added reload support, which I think is one of the most important things that was missing for a long time: it's no longer necessary to restart ospfd to change the configuration. It's also now possible, if you have multiple networks on the same interface, to run OSPF on all of those networks. This was a specific feature that Quagga and Zebra had, but a Cisco, for example, is not able to do that: you cannot have multiple networks on a Cisco interface and run OSPF on all of them. The last part is that we're now able to define the metric and the type on redistribution rules, so you can specify whether it is an AS-external type 1 or type 2 message, and you can specify the metric. The idea behind this is that you can have multiple routers connected to the same network that all redistribute it, and you want to define which router the traffic should actually flow through and which ones are just backups.

Then 4.2. One big thing that happened in the kernel is that we looked a little at the network stack and tried to improve performance, and we found a few things that were slowing the system down quite massively. One of the most important ones for PF performance was the mbuf tags I talked about earlier. The PF meta information that we attach to every packet was pretty expensive, because every time a packet passes through PF it attaches this tag, and the tag is allocated with malloc, and malloc is not really fast. By changing that, moving this information, which is just about 12 bytes or so, directly into the mbuf packet header, we improved performance by 100%, which is quite massive. Then we started to skip stuff that is not actually active on most systems, like IPsec.
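Coming back to the ospfd redistribution rules: a sketch of what the rewritten code allows, with made-up networks and labels:

```
# never leak the internal network
no redistribute 10.0.0.0/8
# redistribute routes carrying a particular route label
redistribute rtlabel "dmz"
# announce static routes as AS-external type 2 with a high metric,
# so this router is only used as a backup path
redistribute static set { metric 100 type 2 }
```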
IPsec was doing route lookups all over the place even when nothing was configured, and those are not really fast, so we now skip these checks if there is no IPsec flow defined; by doing that we gained something like 5% in packets-per-second rate. In PF we skip unnecessary checksumming. We once added these checks because somebody found out you could use them to detect a firewall: a firewall would reply with a reset even if you sent in a packet with a corrupted checksum, whereas an end system would just drop the packet because the checksum is wrong. We used to verify the checksum on every packet that came in, and now we only do it before sending out a reset; by doing that we skip quite a lot of checksumming that is not necessary.

One last thing we did is profile the kernel with a 10 gig card we had a driver for, trying to figure out why the performance we got was so low. Looking at the call graphs, we found two functions that took an extreme amount of time. One was the kernel random pool stirring function, which was using something like 20% of CPU time. It turned out we stirred this pool on every incoming packet, even though the packets that came in all added the same randomness to the pool, and it would be enough to do it once per interrupt. We changed that, and it gave quite a boost again.
The other thing is that our pool allocator also did a stupid thing: it accessed the timer on every free. Accessing the clock is pretty slow on a couple of architectures; on AMD64 we still had, I think, the old normal clock driver, and it took so long to access the clock that we lost a lot of time in this call. Removing those calls gave us another 20% more time to actually process packets. There is still a lot more to be done, but these were just the two calls that stood out in the profile, and they were really easy to fix.

In bgpd, one really important thing we added now is 4-byte AS number support. The AS number space is slowly running out; I think we're already seeing AS numbers in the range of 50,000 or so, and until now an AS number was just two octets, so we will be out of numbers pretty soon. An AS is an autonomous system: every ISP, or actually everybody who wants to do default-free routing, needs an AS number, and it's just a 16-bit value, so it's actually the first thing we're running out of, before we run out of IPv4 address space or anything else. There was an internet draft, now finally an RFC, to move the BGP protocol to 4-byte AS numbers, and we were probably one of the first projects to add it officially to our tree. There were patches floating around for Zebra and Quagga, but I think they're still not committed to their systems. Internally, bgpd now always uses 4-byte AS numbers, so everything is converted to the 4-octet form. The only thing we don't do is open native 4-byte AS sessions, which are only needed when you talk to a system that actually has a 4-byte AS number; it's normally turned off because we found that quite a lot of systems are currently not able to handle this capability correctly, and then you have trouble getting the session up. We added filtering support for IPv6, and we fixed a lot of issues in the IPv6 multi-protocol handling. It seemed that not too many people were actually using IPv6 together with OpenBGPD; that changed in the last half year, and that's why we started to find all these bugs.

In ospfd we added support for stub router advertisements. The main issue this is trying to address is the following: if you start up your OSPF router without forwarding enabled, or without coupling your forwarding information base, so the kernel routing table doesn't get the updates from the OSPF daemon, you end up with an OSPF daemon running in your OSPF cloud that is unable to actually forward traffic, and the other systems don't realize that. It can then happen that traffic is sent to this host that should pass through it to another OSPF router, so it's actually a transit point, and the router doesn't know how to transit the traffic; either it drops it, which is probably the good case, or it creates a loop, which probably brings down your network. So we added these stub router advertisements: if one of those conditions holds, if the system is not able to run correctly, it announces itself only with the highest metric possible, and by doing that the system is just a leaf node in your OSPF graph, and no traffic flows through it unless it actually has to hit that router. We added carp demotion to ospfd; it's now possible to set it on interfaces and even on areas. On the interfaces we already tracked the interface state in ospfd, so we just check the interface state, and if the interface goes down we raise the carp demotion counter. On areas it's a little more complex: an area is considered active for us as soon as we see another neighbor that is able to transport traffic with us. The other really cool thing is that we are now able to map route labels to AS-external tags, and by doing that it's possible to distribute policies based on these route labels to other systems. We have a one-to-one mapping, so you can define, say, multiple ring levels: this network is in ring one, this network is in ring two, this network is in ring three, and on all your border routers you have the same PF config everywhere that just checks the route labels and decides, based on them, whether traffic is allowed to pass from one ring to the other. This normally simplifies things: where you previously had to manage all these networks and their classes in a PF table or somewhere, by using OSPF to redistribute this information you no longer have the ability to forget to update one of your systems, because they learn it automatically.

Now let's look a little at how to use this stuff. Carp and OSPF, that's always a slightly tricky relationship, because carp does the same thing as OSPF: both try to figure out the state of your link, and carp switches between machines while OSPF just doesn't work correctly with that. So a few things are needed so that you don't end up with a half-working setup. ospfd now honors the carp state, master or backup: if it is master it will redistribute the network, if it is backup it will not. We can now use a demote group to preempt the carp interface based on the state of a different network, so you take your outgoing interface and say, okay, we demote carp based on that. But now the problems. Redistribute connected does not really work with carp. There are two scenarios. One is that you have a numbered parent interface, and then redistribute connected will actually monitor your parent interface and not the carp interface, so it will start to redistribute the route even though you're actually in backup state. The other is that there is some strangeness in carp when it comes up: it announces the route in some strange ways, especially if you have a non-numbered parent and carp has to add its network to the routing table itself. It messes around with the routing table, and what it does is still not 100% correct, so the redistribution code normally doesn't notice when the system comes up, because when the network is first added in the backup state, bgpd doesn't recognize it as a connected route, and ospfd neither; so we have to fix that. I think I have a fix for it, but it's not really done yet; it's routing table stuff. In the end, everybody should just use "interface carp" in their config; that just works.

So how does it work? Normally you want the client machines in your network, say in your server housing, to have a redundant outgoing router. You have your two OSPF routers at the edge that both have a carp interface to the inside network, and you use this carp interface IP as the default gateway on all your servers. If one machine fails, the carp IP will flip over, and that's all fine. But until now, if one of the outgoing interfaces towards the OSPF cloud failed, the carp interface didn't change. Now, with the demotion counter, we can handle that as well: we demote based on the state of fxp1, and if we lose the link there, the carp interface will switch, the other side becomes master, and the traffic is routed out again. So that's how you set it up, and this is the configuration.
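A sketch of ospfd.conf for such a carp setup; the router ID, keys and interface names are invented for illustration:

```
router-id 192.0.2.1

area 0.0.0.0 {
	# cryptographic authentication with two keys, so a key can be
	# rolled over: send with key 2, still accept key 1
	auth-type crypt
	auth-md 1 "oldkey"
	auth-md 2 "newkey"
	auth-md-keyid 2

	# the uplink towards the OSPF cloud: demote the carp group
	# if this interface goes down
	interface fxp1 {
		demote carp
	}

	# announce the inside carp network
	interface carp0
}
```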
You assign a router ID to your system; you have your area 0 configuration, where the first four lines are for the cryptographic authentication: we have two keys, and we tell the system to use one key as the master key for sending but also accept the other one during the transition. We have the main link, the interface fxp1, where we just demote carp, and we announce the carp network with "interface carp0". That's the whole OSPF config.

Another thing a lot of people probably don't know about yet is multipath routing. So how does it work, and is it actually a way to break your network in multiple ways? With multipath routing you have to be aware that the return path may not be the same as the one you send your traffic out on, so you get asymmetric routing. Asymmetric routing is normally a problem when you have firewalls in between, especially stateful firewalls, because one machine gets the incoming state and the other one gets the outgoing state, and that will not work. One really big issue that we still have to figure out how to solve concerns traffic that originates on the machine itself: if you have a TCP session going out from your system, the problem is that we do a bind and choose the source IP address first, and only later do the multipath route lookup, so it's possible that we choose an IP from one interface but actually send the packet out on the other interface, which will normally get you into trouble with antispoof or uRPF checks. So you have to watch out there; forwarded traffic itself is not affected, only traffic originating on your box may hit this problem. Another big issue is that we do not notice when one of the multipath links is lost, so if you have two routes and one is no longer reachable, you end up with something like 50% packet loss: roughly half of your connections are affected and the other half isn't. That's still a bit of a problem to figure out, but at the moment you can work around it with ifstated and a shell script that monitors the next hop and removes the routes if necessary. As usual, incoming traffic cannot be load balanced; that's like with everything. You need to enable multipath with a sysctl; you need to add the -mpath switch to route when you add a second route, else the system will not accept it; and if you remove a multipath route you have to specify the next hop as well, so you identify the route by the prefix and by the next hop.

Now to the policy routing stuff. This is still not really finished, but here is the information that is probably interesting. We support multiple routing tables; at the moment we're limited to 256 tables, and the main routing table always has ID 0. PF does the packet classification. What is not yet supported: it can only be used in the forwarding path, so outgoing connections always use the main table. We don't have any link layer information in the alternate tables, so it is not possible to bind interfaces to routing tables, and it is not possible to use the same network twice, like having the same network on both fxp1 and fxp0; that doesn't work. There is no fallback to the main table: if the route lookup in your alternate table fails, the packet is dropped. The next hop is looked up in the main table, so you need to have all next hops in the main table. As already said, it's not possible to bind interfaces to specific tables. And multiple tables and multipath routes can be combined; that's nothing special.
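The multipath and routing-table knobs just described can be sketched as a short session; the prefixes, next hops and table numbers are invented, and this all needs root on an OpenBSD box:

```
# enable equal-cost multipath routing for IPv4
sysctl net.inet.ip.multipath=1

# two next hops for the same prefix; -mpath is required for the second
route add 192.0.2.0/24 10.0.0.1
route add -mpath 192.0.2.0/24 10.0.1.1

# deleting a multipath route needs prefix *and* next hop
route delete 192.0.2.0/24 10.0.1.1

# alternate routing tables are addressed with -T; table 0 is the main one
route -T 1 add 198.51.100.0/24 10.0.0.254
route -T 1 show
```

With the tables populated, a pf.conf rule such as `pass in on fxp0 from 10.0.0.0/8 rtable 1` does the packet classification into the alternate table.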
To add something to a routing table you have to specify which table you want to use, with the -T switch to route; you can also show a table with the -T switch. You have to specify it for every action; you can also use -T 0 for the main table, but that's the default.

Now, what's happening at the moment, where are we going? One of the things that already hit the tree is the RTM version bump. With it we changed the routing messages: we cleaned them up, we now use 64-bit counters everywhere, and we included the routing table IDs and the routing priority in the various headers. The whole thing is more or less compatible with old binaries, and it's completely compatible with applications that include net/route.h; when you recompile, you just get the new stuff. As I said, it's already part of current. That was quite a large change, and a few things broke while doing it. So what was the problem? The problem is rtsock.c. rtsock.c is one of the most evil parts of the network stack I know; there is a lot of bad magic going on in there, and a lot of dragons hide in that file. It's insane. Another thing that broke was OpenVPN. OpenVPN did something very stupid: they ripped out part of net/route.h and added it to their source files directly, so they have this "#ifdef FreeBSD we have this and this and this defined, #ifdef OpenBSD or NetBSD we have this and this defined", and suddenly nothing worked anymore. So yeah: include net/route.h, don't do silly ifdef games, and especially if you have the capability to run autoconf for 10 minutes or so, you should actually use it to do this correctly.

Another thing we're working on is virtual routing and forwarding, using the multiple routing tables. We are now able to do this because we changed the routing messages, but we still need to change a few other things. We need to extend struct ifnet with a default routing table ID, so that incoming traffic is bound to the correct virtual routing instance. We need to modify the ARP lookup code, and the same thing in IPv6, to support multiple routing tables, so that the ARP lookups go to the right table and not all end up in table 0. And we have to support some way of cross-VRF routing; at the moment I'm not yet sure if we want to just abuse PF, because it's already able to select the routing tables, or if we should implement a special loopback interface that has one end in one table and the other end in the other table.

Another thing is routing priorities. That is more or less done now and should probably be committed in a couple of weeks. The idea behind it is to simplify the synchronization of the routing daemons: currently the routing daemons need to keep track of what else happens on the system to detect conflicting routes and to decide whether the BGP route or the OSPF one should win. Routing priorities remove that synchronization need: we just track everything in the kernel, and the kernel makes the decision, so in the end userland no longer has to, which simplifies a lot of code. We add a new metric to every route; we abuse the multipath code for that, so instead of equal-cost multipath routing we now have something like different-cost multipath routing, and we always use the route with the lowest priority, because lower is better. When you add or delete routes you then have to specify which one you actually mean. The last point here, that route delete behaves strangely because of this, I fixed on the train ride to Copenhagen, so it is no longer true. The basic functionality is running, it's working on my laptop, but there are still dragons, so it needs a lot of testing.

What will happen in ospfd in the next couple of months? We want to add equal-cost multipath routing to OSPF.
We have it 95% done; just the last kroute bit is missing. We want to support stub areas. We want to make use of the routing priorities, once they get into the tree, to simplify the code. Another thing we are considering is not-so-stubby area support. And in the end we want to fix more bugs, because there are still some cases where strange things happen that are hard to find.

Another thing that will start, or actually was started, is OSPF version 3, the IPv6 support for OSPF. This will be a new daemon, because the protocol is quite different. We are making slow progress; we are now able to send out the first hello packets, but a bit more is needed. It is slow mostly because IPv6 is, at least in my view, painful: a lot of stuff that is really simple to do with an IPv4 address is very complex with an IPv6 address. One of the problems is link-local addresses, which you don't have in IPv4; that just makes everything a bit more complex. I hope we have at least minimal OSPFv3 support in OpenBSD 4.3.

In BGPD we have to fix the MRT dump output. Until now nobody realized that since we switched to 4-byte AS numbers the output format is no longer correct, so we need to fix that. We want to use routing priorities here as well. Another thing I want to add is the extended communities attribute. Then one thing that could be fun is the graceful restart mechanism together with CARP failover: you can have a hot-standby router running with the whole routing table already in place, and when the master fails it just brings all sessions up, starts routing immediately, and resynchronizes the tables, so you will no longer see even the slightest drop in connectivity. The last one is also on my to-do list: BGP MPLS VPN support. That is an extension to the multiprotocol stuff, and especially with the prospect of MPLS support in 4.3 these things are getting more and more interesting.

We are working on MPLS using the old Ayame codebase that was written for NetBSD; I think it is at least three years old now. We need special routing daemons for MPLS, because you have to exchange the label-switching information, and we have to add a lot of stuff in the kernel. One thing we are probably doing differently from Ayame is that we do not start doing ip_input/ip_output hacks for the forwarding decisions; instead we are trying to implement a generic MPLS interface as an endpoint, so that you can just normally route your traffic to the MPLS endpoint, or bridge your traffic to the MPLS endpoint, and it will just flow through your MPLS network and come out on the other side. In my opinion that is the more logical approach.

So yeah, that's it. Are there any questions?

[Q: Could you describe what kind of systems you use for a router?] We normally use i386 or amd64 machines. We don't add too much RAM; at the moment I think most of our systems have 1 GB, and that should be enough for the next couple of years. Normally we use high-end gigabit cards, so either em(4), sk(4) or maybe bge(4), and we do a little bit of tuning, but not that much. These systems are normally capable of routing quite a lot of traffic. Using SMP systems with more than two CPUs does not make sense; two CPUs is probably okay, so you have one CPU running the kernel stuff and the other one running the userland BGP application. Any other questions?

[Inaudible question] It's on my list; the problem is I don't have enough time, so if anybody else wants to do it, I'm happy to take it. It's probably not that complex, but it's tricky. The trick is whether you do L2TP version 3 or version 2, or how you mix them together so that it gets easier. Any other questions?
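As a side note to the MPLS discussion: the per-label forwarding decision that the label-distribution daemons would have to program into the kernel can be sketched, very roughly and with entirely invented names (this is not the OpenBSD or Ayame code), like this:

```python
from typing import NamedTuple, Optional

# Toy label forwarding table (LFIB): for each incoming top label, an
# operation ("swap" or "pop"), an optional outgoing label, and a next hop.
# All entries and names here are made up for illustration.

class LfibEntry(NamedTuple):
    op: str                    # "swap" at a transit LSR, "pop" at the egress
    out_label: Optional[int]
    nexthop: str

LFIB = {
    100: LfibEntry("swap", 200, "192.0.2.1"),   # transit: swap 100 -> 200
    300: LfibEntry("pop",  None, "192.0.2.9"),  # egress: pop, then route as IP
}

def forward(label_stack, lfib):
    """Apply one label operation to a packet's label stack (top is last)."""
    top = label_stack[-1]
    entry = lfib[top]
    if entry.op == "swap":
        return label_stack[:-1] + [entry.out_label], entry.nexthop
    if entry.op == "pop":
        return label_stack[:-1], entry.nexthop
    raise ValueError("unknown label operation")

stack, nh = forward([100], LFIB)   # swap: stack becomes [200], next hop 192.0.2.1
```

The appeal of the generic-interface approach mentioned above is that ordinary routing or bridging delivers packets to the MPLS endpoint, and only this kind of label lookup happens inside the MPLS path, rather than scattering special cases through ip_input/ip_output.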
[Q: What is the status of the MPLS work?] Yep — MPLS: we have a bigger diff outside of the tree, and I hope the guy who is working on it finally reappears, because he just disappeared over the last couple of months. I am trying to push him again to get his stuff, and I hope we can commit it within the next month or so, so that we already have something in the tree and can then fix it in tree. It should apply and it builds, but I haven't seen it in a long time; I have a very early version of it, and a lot of other stuff has been fixed since. People who just disappear are a big problem in the open source community.

[Host:] We have to move on with the program now; he is available afterwards if you want to ask about any subject.