He's an infrastructure architect at UQ. And he's going to deliver his talk on firewalling with BSD. David, over to you.

Hello. I'm actually really nervous because I haven't talked in front of a group of people for quite a long time. And last time I talked, it was about something I'd just finished coding, so it was fresh in my mind. I've been working on SCSI stuff recently, so this is a bit of a shift. As I was told, you're able to interject, but just say so first. So, introduction: who am I? This is what we're going to be covering, at a very, very high level. Everyone's cool with that? Please feel free to ask questions whenever you want. As indicated, I am the infrastructure architect in a faculty called Engineering, Architecture and IT at the University of Queensland. That job title doesn't actually mean much, so I get to do whatever I want, pretty much. Some of what I do is running the firewalls there, which sit between the faculty and the rest of the university and the rest of the Internet. We obviously run OpenBSD in that role, and I'm here to talk about how we do that. I am also a core developer in the OpenBSD project. As I said, I generally play with storage, but I have to play with the network stack sometimes because work needs it, pretty much. So, does everyone here know what OpenBSD is? Does anyone not know what OpenBSD is? Ouch. I'll skim through the next few bits then. OpenBSD is just another Unix. It actually descends from the original Unix by way of Berkeley and NetBSD. The website states that it aims for portability, standardization, correctness, proactive security and integrated cryptography. We currently run on about a dozen architectures, and we like it that way. Different architectures expose different code behaviors and different bugs in software, so just running on another CPU lets us improve things in other portions of the tree in a machine-independent way.
For example, one guy ported OpenBSD to something called the MVME88K. Has anyone heard of those? Yeah, they're very old and they're very, very rare, but some guy sat down and ported OpenBSD to it. We're possibly the only Unix, apart from the original Unix that shipped on it, to run on that hardware. During the development of that, he was fixing a machine-independent bug about once every three days. So there are benefits to running on multiple architectures. Unlike some other projects, we have one source tree for everything. Our kernel, our userland, all our documentation, our build infrastructure, our source control system: it's all in one source tree. All the code, new code especially, is BSD, ISC or MIT style licensed. We have some historical exceptions, such as the toolchain. There aren't any good alternatives to GCC, so we're using GCC, which is GPL. There's a few other bits like that, but the compiler's the big one. We do a release every six months. We have different focuses at different times of the year, with different types of development, but we religiously make a release to put on mirrors and on CDs every six months. You can install third-party software, and by far the easiest way to do it is via packages. The ports tree is there, so you can go make, make install for your own third-party software, but we really recommend using the packages. We build them for you, so they just work. If you wanna update software, then feel free to take the ports tree, tweak it slightly, submit patches. That'll be great. The interesting thing with OpenBSD over the years is that the people who developed it tended to work on networking things. So along with all the other security stuff that they do as a matter of course, they tended to focus on implementing network things, which makes it a very good platform for secure network services, such as firewalls.
The other thing to note about OpenBSD is we are very aggressive, and not just in our personalities. We make changes across the whole system to improve its security. We have modified GCC: we've added extra code there to do stack protection with ProPolice and all that sort of stuff. We have made changes to the kernel to make it more secure. We have deprecated interfaces and added new interfaces and such. We honestly don't care about backwards compatibility; we only care about the running system. With that in mind, that means we don't have 32-bit versions of system calls or 64-bit versions of system calls. We just have the system calls. When you compile software, you always get the single working interface, which is good. As new types of bugs are discovered, we do make an effort to go through the rest of the source tree and look for those same bugs. We also push the use of randomization as much as possible all over the place, which is kind of important. So IP IDs, process IDs, lots of things, all random, all consuming randomness. We're also extremely conservative. Our tree must compile at all times. Because of the six-month cycle, we tend to put the big changes in at the start of the cycle and let them settle for four or five months. Having said that, if it doesn't work, we don't sit there waving our hands around going, we should fix this or we should fix this. We just back it out, and the person responsible for the change has to fix it before it goes back in. So having a working tree is extremely important to us. I say peer review is necessary, but that's just for the slides, pretty much. There are exceptions where you own a piece of the infrastructure and it's on the periphery. If you're working on a device driver, you can do whatever you want, pretty much. If you're working in the UVM subsystem, the memory management things, you must get peer review, because if you screw that up, then everyone's porn gets lost and things like that, and they get really upset.
We do back away from some tweaks for the sake of usability, which probably surprises a lot of you. For example, our malloc implementation is quite customized, and it offers an extremely randomized and extremely harsh environment for programs to run in. A lot of the options available in there are turned off by default, because we don't trust third-party software to survive in that environment. We're talking about randomizing the contents of allocations before we give them to the program. We're talking about allocating buffers up at the end of a page and unmapping the next page, so accesses over the end of the buffer cause faults and segfaults. We're talking about randomizing data as it's given back to the allocator, so use-after-frees fault on invalid addresses or trip over invalid data, things like that. We would like to ship with that on by default, but we can't, because people run more than just the base system on OpenBSD. So what I'm here today to talk about is PF. Do we all know what PF is? Does anyone not know what PF is? We're very imaginative at writing names for software, so PF is short for packet filter. It does packet filtering. It was the successor to IPFilter, which was removed possibly 10 years ago now because the license had some interesting terminology and was deemed not as free as required to exist in our source tree, so it was removed. We did try for a few days to get the situation clarified, but it was too unclear, so it just got removed. That meant there was a three or four month window where OpenBSD did not have a packet filter. That really upset me, because I used OpenBSD at the time to share the internet at home, and you need the packet filter to do the network address translation. So yeah, I was quite young. It was probably not as bad as I thought it was at the time. But the executive summary of PF is that it is a stateful filter, so I'll explain stateful in a short bit. It does a little bit more than that though.
The other interesting thing is we ship it on by default. I think a lot of other distributions, other types of Unixes, do this too now, but yeah, PF is on by default. So, what is stateful filtering? Does anyone not know what stateful filtering is? Darren, good. The firewall basically tracks connections through it. As you establish a connection through an OpenBSD firewall, it will remember something about that connection: things like the source and destination IP, the protocol, the ports that you're talking on if it's TCP or UDP. It will also keep track of the TCP windows, so you cannot spoof other packets within a connection going through our firewall unless you're in the window, but then you're in other sorts of trouble anyway. Because we record something about each connection going through, we have to allocate memory for it, and boxes have a limit on the amount of memory in them, so there's a limit on the number of states that PF can handle. If there is no state for a packet, it falls through to the ruleset evaluation. So when you're running PF, most of the time there's just this chunk of memory that it uses to look at packets all the time. That's largely opaque to you; it just works. However, your policy is defined in the ruleset, which is what creates those states. PF rules are basically a list of things to match on. They're criteria to apply to the packet, pretty much: whether it's an IPv4 or v6 packet, its IPs, ports, protocol, all those sorts of things. You can get very fine-grained stuff with the TCP flags, and there's ICMP types. The interesting thing in recent times is you're able to filter on the user IDs and the group IDs of the process that the packet is destined for, which is kind of cool. So you can make sure that packets to a certain port only get processed by a certain user on the system.
It also means that you can have a multi-user system and you can allow some people to connect and prevent other people from connecting, based on their user or group membership. So you can do HR-type policy stuff. If you have a multi-user box and they all use it to run a web browser, then you can configure PF to block Facebook for one or two people. Not Google, that would be wrong. What you actually do once you've matched on a packet is you can either pass it, so it'll fall through and create a state by default, or you can block it, which basically drops it on the floor. You can also do things like return a packet that says the port's closed, or the connection's dropped, or there's no route to host, or something like that. Another thing you can do in the ruleset is match packets. There are these special rules called match rules which apply actions to packets as the rule matches. By default, for pass and block, the last pass or block rule that the packet matches is the one that applies to it, but you're able to use a match rule earlier on to change some characteristic of the packet for later pass and block rules to match on. Does that make sense? No? Yes? It basically means as a packet comes into the system, you can write a match rule to rewrite the addresses on it, for example, and then the rules that occur after that have to match on the changed IP addresses. Things like that. So as I said, the last match wins. However, there is a quick keyword you can use that will short-circuit: as soon as a packet matches that rule, it is the rule that applies. There is an implicit keep state. When we had IPFilter, the syntax basically required you to tell it, on every rule, about the state you wanted to create. By default, it just let the packet through, but it didn't create state for it. Recently, like in the last two or three years I think, OpenBSD made the decision to make creating state the default action.
So you don't actually have to put keep state on your rules; they just do it. There are also things like making sure the TCP flags look like an opening connection. There are some things which you tend to always put on your rules, which we now imply, and keep state is one of those. You can write rules that do not create states, but you have to use the no state keyword. Because most packets going through your system are for existing connections, you only have to write rules to allow states to be created. So you only have to allow the connection to be opened. Does that make sense? Yeah. The alternative to that is to write rules that allow packets to go in both directions of a connection. I was talking to someone yesterday about how they had to write rules to let SNMP from one network through to another one. They basically had to write a rule that said, on one side of the firewall, allow connections from any UDP port to port 161, and the same on the next interface. And then for the replies, they had to allow any UDP packets from port 161 back to any high port on the other side. So they basically opened the network they're monitoring from up to any traffic from what they're monitoring on port 161. With stateful firewalling, you only have to care about where the connection comes from, not what happens in the middle. The other interesting thing with PF is ruleset loads are atomic and they do not disturb existing states. So you can change your policy at any point in time, and it either works or it doesn't. It either parses and loads or it doesn't; whether it actually does what you think it does is a bit different. Same with programming, I guess. So PF in OpenBSD sits between the traditional network stack, the bit of the kernel that processes incoming connections or forwards packets, and the interfaces that the packets are received or transmitted on. PF is run twice for forwarded packets.
So as your packet comes into the system, it'll come off an interface. It goes through PF to enter the stack, and once it's finished being processed by the stack, it will be sent through PF on the outgoing path and then onto your interface. So if you're routing, you have to be aware that you have to write rules on both sides of the box. That is a bit simplistic these days, because there are so many hooks from PF into other parts of the kernel. Like having your packet filter look up the user IDs of what it's connecting to. That requires some layer violations, obviously. So, is anyone confused on that? This is a point where someone would ask a question, I imagine.

Hi, my question's really about what you touched on there with the layers. Is there any work being done with the firewall around applications, being application aware?

PF has a policy of only going as deep as the ports on TCP and UDP. If you wanna go deeper than that, you have to send it up to userland for some sort of processing, and there's a variety of mechanisms to get the packets up to userland for that processing that give you a lot of information about what's happening, but generally we don't do anything deeper than TCP in the kernel. Does that answer your question?

It does. Just one point, I suppose: it's still possible to subvert it because it's just port and IP?

Correct. It's really hard to firewall port 80 these days because so much other stuff runs over it. But as I said, you can shove that up to userland and do some other processing there. However, there are some caveats which I'll touch on later. Any other questions? Good. So in practice, you actually operate and run PF from userland. PF is a chunk of code in the kernel. You use a program in userland called pfctl to run and change the behavior of PF in the kernel. Since it's on by default, you can use pfctl -d to disable the packet filter.
If you realize that's a mistake, you can always enable it again with pfctl -e. If you wanna see what it's doing at a high level, like the number of rules that are being matched or the number of states it's created, there are different metrics it keeps track of; you use pfctl -si to show the info. If you wanna see the states that have been created, you can use pfctl -ss. There's also -v flags to increase the verbosity, so you can get some excruciatingly boring detail on things. You can have a look at the rules that are currently loaded. You can also tell pfctl to just parse the rules with -n to make sure they're valid syntax, and if you're confident that they're okay, you can load them just by going -f. If you wanna see the statistics tick over, there is a program called systat we have with different views of system metrics; one of the views is PF, so you can watch the stats tick over. So I'm gonna show a few examples highlighting different bits of PF. The easiest one to think about is if you have a home connection: you get a single IP from your ISP and you have private addresses on the inside of the network. The basic PF configuration for supporting that on the edge is this. We start by saying there's a default deny policy, so by default packets are blocked going through the firewall. However, on our internal interface, we wanna allow all traffic. So this allows traffic from the firewall to the private network, and it allows traffic from the private network to the firewall. That's the pass on em0, the internal one. We also wanna allow the firewall to make connections out to the internet as itself. So we pass out quick on pppoe0 from pppoe0. To explain some of the syntax here: there are these from and to keywords, and what follows them is something that has addresses. That can be a table of IP addresses, it can be an IP address itself, it can be an IPv6 address, but you can also use interface names, and it will use the IPs on the interface at that time.
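Pulled together, the home-gateway ruleset being described here looks something like this. This is a sketch using the interface names from the talk (em0 internal, pppoe0 external) and the nat-to syntax on pass rules:

```
# default deny
block

# allow everything on the internal interface
pass on em0

# let the firewall itself out to the internet
pass out quick on pppoe0 from (pppoe0)

# let the internal networks out, rewriting their source
# addresses to whatever IP pppoe0 has right now
pass out on pppoe0 from em0:network nat-to (pppoe0)
```

The parentheses around pppoe0 are the dynamic-address lookup explained next.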
However, some of your interfaces have dynamically changing addresses. So if you wrap the interface name in brackets, that means you want to dynamically look up the current IP on the interface at run time, rather than just use the IP addresses that were on the interface when you loaded the ruleset. So far, we haven't allowed traffic from the internal network out to the internet, though, which is what the next rule does. Because the traffic from the private network is already allowed into the box by the second rule, the pass on em0, we now have to consider getting it out of the box, which is what the pass out on pppoe0 does. It's allowing traffic out from the em0 network. em0:network actually looks at the em0 interface, takes the networks that are currently configured on it, and uses those in the rule. Without the next statement though, the nat-to pppoe0, we would be sending the frames out to the internet unmodified, so we'd see private addresses leave the PPPoE connection. In that situation, you want to NAT them to pppoe0 so other people know where to send the packets back to, obviously, which is what that statement does. That's all pretty easy. Good. PF can do some other cool stuff, which I'll talk about now. PF supports the use of macros, so you can go mgmt_net equals something in the config file, which in this situation would be the addresses of all the machines used to manage the network, or the network address. In our situation, by default, we want to block all traffic going into this server. The next rule allows connections from the management network to the SSH daemon, and the next rule is the interesting one. We want people to be able to access the web server, right? But we don't want anyone to make too many connections to the server at one time. So we tell it we want to create states, but we want to limit the number of states that any IP from the internet can create to 80.
So if one host tries to make a thousand connections, only 80 of them will succeed. The next bit, the tcp.closed, specifies that we want states for closed connections to expire really rapidly. Five seconds after the TCP connection is closed... there's this state where it's supposed to wait around so things don't reuse those ports, and PF keeps track of that. We want those to expire really quickly, so we tell it: don't wait half a minute or whatever the default is, wait only five seconds. The last bit there, the synproxy state, is also interesting. Synproxy basically gets the firewall to proxy the SYN handling for the thing you're protecting. If a client connects to your web server, they have to go through PF. PF, when it gets that SYN packet, will actually attempt to complete the three-way handshake with the client without talking to the web server at all. Does that make sense? Once the client has completed the three-way handshake, PF will then go and make a connection itself to the web server, join those two connections together, and start forwarding the traffic. This means that PF can take the hit of a SYN flood before your application or your web server has to. Does that make sense? Any questions on that one? We've got a hand up.

Sort of an obvious one. If the application can't handle it, how does that seamlessly tell a good person on the other end that it didn't work? Is that all seamless, or?

Since it's completed the three-way handshake, it looks like a valid connection, and it just has to close it or time it out.

If PF's already said it's valid and then it's not...

PF's checking the behavior of the client, not the server.

Yeah, if the server...

If the server is down, it doesn't know about it until the client already thinks it's set up and running. So it's a bit of a change in behavior. However, there are some things that can help.
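The server-protection rules just described come out roughly like this; the management network address and interface name here are made up for illustration:

```
mgmt_net = "192.0.2.0/24"   # hypothetical management network

# default deny
block

# the management network can reach the SSH daemon
pass in on em0 proto tcp from $mgmt_net to port 22

# anyone can reach the web server, but each source IP is
# limited to 80 states, closed TCP states expire after 5
# seconds, and PF answers the three-way handshake itself
pass in on em0 proto tcp to port 80 synproxy state \
    (max-src-states 80, tcp.closed 5)
```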
There's a program called relayd which can check the validity of a server and modify the rules at runtime, and it will take things away when they're not working. So you can have these synproxy state rules and have relayd make sure that the server's there for it to do the three-way handshake for. But if you just load a ruleset like this, there's no checking of the upstream. Another example: say you have a remote site office, you have a DSL connection to the internet, and over that you have a VPN connection to the head office. It's pretty much the same as the home site, except you want to control where the traffic to the central office networks goes. The VPN link is on an interface called gif0. So by default we block, and we allow the internal network to talk on em0. We want to allow traffic from the central site to come back to our network, so we just pass in on gif0; VPN traffic coming in is allowed. However, we only want traffic going to the central network to go out on gif0. The other interesting thing on this rule is the received-on em0. That basically says: you're allowed to pass traffic out on gif0 if it's going to the central office and if it was received on the internal network. Good with that? It basically makes a trust relationship between interfaces, so you can say traffic going out on gif0 has to come in on another interface. Very easy to do that. However, we also want to make sure that when the VPN interface is down and the routes are no longer valid, we don't send traffic destined for the central office out over the internet. So we block out any traffic going over the PPPoE connection to the central office network. Cool. And then we have the same stuff that we had from the home network: it allows connections out on pppoe0 from the firewall itself, and it allows connections out on pppoe0 from the internal network. Cool. Yes. So, PF in practice. This is where I start to get interested in PF: I have a firewall at work with 16 networks attached to it.
But I don't want to have to write rules separately for each network. So OpenBSD, and PF in particular, offers a feature called interface groups. You can tell multiple interfaces that they belong to a super virtual interface called an interface group. In this situation, all the VLAN interfaces from 0 to 60 have been configured to go into the staff interface group. To start the ruleset off, we have a default deny as always, but we wanna make sure that each of these networks only uses IPs that are directly connected to those interfaces. We don't want unknown IPs on networks, or routed packets and such, to come in from our staff interfaces. There is a feature called antispoof, which is syntactic sugar around writing these anti-spoofing rules. It basically blocks traffic from networks you know about from coming in on any other interface on the system. So if you have vlan0 with 192.168.0.1 and you have vlan1 with 192.168.1.1, the traffic for those networks has to come from those interfaces. Right? Good. Continuing. Our external link is on trunk0. We wanna just allow that traffic to come into the box, but we only wanna allow traffic to go out from the staff networks. So instead of specifying each interface separately, we use the interface group here. Staff can talk out to the internet. It's worth noting here that there are no rules that allow staff to talk to other staff, so we keep them segregated that way. We have no rule that says pass out on staff from staff, or pass out on staff received-on staff. So they're protected from each other. And finally, we have a DMZ at work which has a web server on it and a file server. We want everyone to be able to connect to the web server, so we just allow any connection that's entered the box to go out again on vlan100 to the web server on ports 80 and 443.
The curly brace notation there is syntactic sugar in pf.conf again; it basically does automatic list expansion. Any of the keywords within the braces will cause that rule to be repeated with each of those arguments separately. So the first one there actually expands to two rules: one that says to web port 80 and another rule that says to web port 443. It's just syntactic sugar to make these things a lot easier. And the last rule there says we want to allow packets received on the staff interfaces to connect to the file server. All good? Makes sense? Cool. As the gentleman up the back said, there are some things you want to look deeper into the packet for, and to do that we send the packets up to userland for handling. The most common one is FTP, because it has the data connection on one port and then it tells the client on the other side to connect back in, which doesn't work too well with NAT. Basically you run the FTP proxy. It's all set up and knows how to talk to PF and such. You create a ruleset anchor: in your main ruleset you can allow other programs to write sub-rulesets, and they're referenced using anchors, pretty much. So you tell PF there is an anchor that ftp-proxy will manage, with that statement there. And once ftp-proxy is running, it will add rules and remove rules to set up the connections coming from the servers back into the clients. So it does dynamic rule changes on the fly, but just in its own little area. There's a question there.

David, could you give us some examples of other common uses for anchors?

I tried to set it up so I could have the policy for each of my interface groups in anchors. It shortens the main ruleset, so the evaluation is slightly faster. However, the way I was defining it there were some namespace issues, but you can do it. You can load anchors from rulesets. The other interesting one is there's an SSH thingy. What's it called? authpf, yeah.
People can SSH to the box, and instead of running a shell it'll run an authpf process, and that will populate an anchor with a ruleset specific to that user. So you can have a sort of pseudo VPN gateway type thing where, instead of having to establish a connection, people just SSH to something and then ports are open for them. It's like port knocking, but grown up. And there's an example of allowing the user the FTP proxy runs as to make connections; it basically says pass out quick user proxy. There are a lot of other useful config bits, and I don't really wanna go over them all because there's so much. I touched on tables: instead of having IP addresses in rules, you can put addresses in tables, which are radix trees with lots and lots and lots of addresses in them, very fast to look up, and they make the ruleset a lot shorter to evaluate. You can have macros, as I showed you before when I was using a dollar-something. To define a macro you just go macro equals and then a string, and the parser just substitutes the string wherever you use $foo. You can use lists, which is the curly brace notation. So pass to foo port { 80 443 } will expand to two rules, one which is pass to foo port 80 and one which is pass to foo port 443. If you look at a ruleset, it does tend to get quite long, and it does get slow to evaluate if you have a lot of rules. So a lot of effort has been put into a ruleset optimizer, which basically reads your ruleset in and reorders it so it can be evaluated quicker, and it adds something called skip steps. It'll order the rules so you're matching on interfaces in one block, and if you fail to match a rule further up on an interface, it'll skip down to the next block of interface rules. So it's reordering and then skipping past common blocks. It gets a lot faster to evaluate. Still not as fast as state matching, but it's not as slow as a linear search in memory. Cool.
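A fragment pulling those shorthand bits together, with a macro, a table and a list; the names and addresses are invented for the example:

```
# macro: the parser substitutes the string wherever $web is used
web = "192.0.2.10"

# table: a radix tree of addresses, very fast to look up
table <nasty> persist { 198.51.100.0/24, 203.0.113.7 }

block in quick from <nasty>

# list: expands into one rule for port 80 and one for port 443
pass in proto tcp to $web port { 80 443 }
```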
However, you've done all this on one box, and suddenly the power supply dies, or someone trips on the power cord, or something bad happens, right? The obvious solution in networking is to buy two of something. However, because your ruleset only allows connections to start, not continue, if you do lose a box and it fails over, you're kind of screwed. You need the states on the spare box for the failover to work, which is where pfsync comes in. pfsync was invented eight years ago, I think. It was 2002 that Mickey did the first commit to synchronize states between PF firewalls over the network. It doesn't actually concern itself with the active or passive role of the thing. It basically says all firewalls are equal: if you get a change, I'll tell the other one; if I receive a change, I'll merge it into the local thing. It really doesn't care who gets the traffic. It just tells whoever's connected to it that there's a change. As I said, as states change in PF, pfsync is told, it builds packets and transmits them to peers, and pfsync merges updates from received packets into the local state tree. While we're talking about that, pfsync will try to mitigate the number of updates it sends, because a packet goes through PF twice. That means there'll be updates to states on two sides of the stack, which means you'll get two pfsync updates, which means if you didn't mitigate, you would send two packets to a peer for every one packet you forwarded, which is not good. Most of the time, the thing that limits a firewall is the packets per second, not necessarily the amount of bandwidth it's using. So tripling the number of packets per second it has to process is not a good way to scale performance. The initial versions were kind of rudimentary. As a packet came in and the state was modified, it would build the pfsync packet, but because it's mitigating, it's got this packet in memory for the next one.
The next one comes in, and it actually re-parses the packet it had just built to figure out if the state was already in the packet or not. So yeah, you were parsing quite a lot. We also do IPsec synchronization over pfsync now. So if you have two VPN servers next to each other, you can set them up in high availability, and pfsync will make sure that they know what the replay counters are, to avoid bad things. I did a big rewrite of this a couple of years ago to better handle cases where traffic for a single connection was going over both legs of the firewall, and there were a lot of code speed-ups in there at the same time. To actually use pfsync, it's pretty easy to configure. You basically create a pfsync0 interface, you tell it which physical interface you want it to use, and that's it. That's all you have to do. It's about two shell commands, or one line of config in the startup scripts. pfsync only synchronizes the states, though, and some other little bits. It's your job to keep the rulesets in sync. The reason for that is there is some machine-specific configuration that you want different config in place for. Each machine will have a different management interface, so you'll want to write a ruleset that allows connections to its own management interface. So yeah, two questions. This is exciting.

How do you handle the situation of dynamic rules in that situation then, if they're not being synced?

It gets more difficult, as I'll mention later, because you've got a proxy in userland handling the connection. It would also require the state of that application to be synchronized across. But we don't do process migration or network socket migration, so you lose anyway. In that situation, you've just lost. It's bad. However, ftp-proxy doesn't actually proxy the data connection. So in that situation, because it's set up a rule which creates a state, that state will go across and the data connection will keep going.
But the control connection, which went up to user land, will eventually fall apart. So a lot of these solutions are designed to work with two firewalls, basically in the same rack. Yep. Is it possible to use pfsync in a situation where there's, like, a couple of hundred kilometers between the two offices? So you've got a private link between two offices and you have two internet connections, for instance, and one of them is a backup path for the other. Yes and no. It depends. If you have the same thing behind each firewall, then it makes sense to maintain the connections to them. If you've got completely separate sites with different hosts behind that you're protecting, it doesn't make sense to synchronize the states. So there are separate networks behind the two, but they also have a link between the two offices, basically a private link, that they could use as a backhaul for the internet connection for the other site. And it's whether you can sort of scale to that kind of distance and still cope. I would like to say yes, but with caveats. I would like to say yes too. I think if you only care about the connections from one site to the internet, you can sync the states over and have them use the other site as a backhaul. And that would work, yes. Any other questions? As I said, pfsync doesn't actually care or do anything about prioritizing the firewall. You have to use something else to do that. Usually that something is carp. Is anyone familiar with HSRP on Ciscos? Yeah, it's basically that, but we made our own because there are no patent issues and it's simpler. Carp is a virtual interface you build on top of an Ethernet interface. The master broadcasts its state to the network, and all the backup peers will wait until the broadcasts from the master stop, and then one will take over. Make sense? It's just like HSRP, but simpler and dumber. That works okay. Yeah.
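To make the carp idea concrete, here's a minimal sketch of a two-box pair. The interface names, password and addresses are all made up for illustration; the shared address is the same on both boxes, and the box with the lower advskew wins mastership:

```
# on the master -- lowest advskew wins the election
ifconfig carp0 create
ifconfig carp0 vhid 1 pass s3cret carpdev em0 advskew 0 \
        10.0.0.1 netmask 255.255.255.0

# on the backup -- same vhid, same password, higher advskew
ifconfig carp0 create
ifconfig carp0 vhid 1 pass s3cret carpdev em0 advskew 100 \
        10.0.0.1 netmask 255.255.255.0
```

Hosts on the network use 10.0.0.1 as their gateway; whichever box is currently master answers for it.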
There's also VRRP, but it has patent issues, which is why we made our own. Carp's good enough for us. As I said, in some situations you have Ethernet on one side of the firewall, but you have something like PPPoE on the other side. There's a program called ifstated, which lets you execute actions based on interface state changes. So if you have one firewall that's connected using PPPoE and is the carp master on the inside, and it blows up, all of a sudden the other firewall is the carp master. ifstated will go, well, I just became the carp master, I'll make the PPPoE connection and continue forwarding traffic. So yeah, it gives you HA on non-Ethernet things if you have Ethernet somewhere. So, pfsync in action. As I said, to configure it, you create it and then you just specify the physical interface you want the packets to travel over. There are a couple of tweaks you can do in some situations. maxupd is the number of updates to a single state before it must send the update out. By default it's 128, but if you want to better deal with failover of high-speed connections, it's probably a good idea to bring the max updates down a bit so it more rapidly syncs those high-speed connections. The other one that's interesting is the defer keyword. So say you have two firewalls, and one of them is the carp master on one side of the network but the backup on the other side. That means packets sent through that one will have their replies come back through the other. However, pfsync mitigates. So when you send the connection creation through this firewall, it's possible that the reply will come back before pfsync has decided to send the synchronization over. So this box will get the reply, but it has no state to match it against, so it will drop it on the floor, because there are also no rules that allow it through. What defer does is it tells the master to hold on to the packet until it gets an acknowledgement of the state insert from the peer.
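The pfsync setup described above really is just a couple of commands. Something like the following, assuming em1 is a dedicated sync interface (the device name is illustrative); the second form shows the two tweaks mentioned:

```
# /etc/hostname.pfsync0 -- one line of config in the startup scripts
syncdev em1

# or by hand, with the tuning knobs: lower maxupd so busy states are
# synced more often, and defer for active-active setups
ifconfig pfsync0 syncdev em1 maxupd 64 defer
```

See pfsync(4) and ifconfig(8) for the full set of options; defer is off by default for the memory and latency reasons covered next.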
So it avoids this race with birthing connections. Does that make sense? It's off by default because it does require more memory and it does add latency to the initial connection. And most people have fairly simple failover situations where one box is the master for everything and the other one takes over everything when you fail over, so that birthing-connection race doesn't occur. To actually switch between the peers, to actually get one box to take all the traffic and the other one not to, you muck around with the carp demotion counters. So the master that's advertising says, I am at this priority. You can get the backup firewall to claim a higher priority and it will start advertising, and the master will then give up. And so you can do manually directed failovers using carp demotions. Can you all read that? That's what the configuration for pfsync and carp looks like in practice. The important thing to note there is the pfsync syncdev and the carp master state. So it's the master and its status is master. There's a bit of redundant information there, but yeah. So, pfsync at home. I would question your need for two firewalls at home. However, pfsync does help you a little bit here. pfsync obviously has a serialized representation of a state, and the cool thing is pfctl now lets you dump the entire state table as a file. So you can write the current connections your firewall is handling to disk, reboot, and then load them back and continue forwarding traffic. I think that's cool. It means I don't drop off IRC. It's really awesome. However, if you're at work, you should press for a budget to buy two boxes. In a simple situation, you will have static IPs on both sides and a single default route. So basically you configure the default route on both boxes, you configure the carp interfaces on both sides of each box, and you can do a graceful failover between the two.
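Two of the tricks just mentioned, as commands. The demotion counter can be raised on the whole carp interface group to push traffic off a box; and the state-table dump/restore is a pair of pfctl flags (check pfctl(8) on your release for the exact flags; the file path here is just an example):

```
# manually directed failover: raise this box's demotion counter so
# the peer starts advertising at a better priority and takes over
ifconfig -g carp carpdemote 50

# ... patch and reboot at leisure, then take the demotion back off
ifconfig -g carp -carpdemote 50

# single-box reboot trick: dump the live state table to disk ...
pfctl -S /var/db/pf.states
# ... reboot ... then load the states back and keep forwarding
pfctl -L /var/db/pf.states
```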
It's great for patching systems and things like that, because most people don't patch firewalls because everything relies on them. With the pfsync and carp stuff, you can patch the backup box, reboot it, bring it up, fail over to it, and then do the other box, all without losing connections. Cool. Unless there's ABI breakage, but we try very hard. Well, no, we don't. I do have patches for backwards-compatible pfsync stuff a lot of the time. It's hard to get them integrated because it is a lot of code that will be used once, and it's hard to test and maintain and stuff like that. But if there is a bump, just email me and I'll probably have a diff. As I said, when the master loses its state or, like, power, the backup will take over and forwarding continues. So, pfsync at my work is a little more complicated than that. As I said, I have 16 networks. However, I only have three physical interfaces on the firewall. I have a 10 gig and a one gig link in a failover trunk. So as long as the 10 gig link is up it'll be used, but if it isn't, then it'll switch over to the spare one gig link. So it's degraded service. And I have a one gig physical link dedicated to pfsync traffic. Cool. On top of the trunk interface, I then stack the 16 VLAN interfaces, and on each of those VLAN interfaces I then put a carp interface. My external access is over just straight VLANs with upstream providers. I'm plugged into two Cisco routers and we talk OSPF to each other. So, is everyone familiar with OSPF? I'll talk a bit about it. OSPF basically tells people on the same network what routes you have connected to yourself. So because I have statically configured interfaces in that area, I tell the upstream about them so they know where to send the packets, and upstream tells me where the internet is. That's all it does. It also ties in a bit with carp.
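The interface stacking just described, a failover trunk, VLANs on top of it, and a carp interface on each VLAN, might look like the following hostname.if files. Device names, VLAN and vhid numbers, and addresses are illustrative, not the actual work config:

```
# /etc/hostname.trunk0 -- 10 gig primary with a 1 gig spare
trunkproto failover trunkport ix0 trunkport em0 up

# /etc/hostname.vlan363 -- one of the VLANs stacked on the trunk
vlan 363 vlandev trunk0 up

# /etc/hostname.carp363 -- the carp interface on top of that VLAN
inet 10.63.0.1 255.255.255.0 NONE vhid 63 carpdev vlan363 pass s3cret
```

One file per VLAN; the startup scripts bring the whole stack up in order.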
So if it can't see any OSPF peers on that network, it will then demote carp so it doesn't take the traffic, and hopefully the other firewall has the OSPF adjacencies. Also, when carp is the master, the interface is considered up, and OSPF will look at that state and advertise it; when carp is the backup, the interface is considered down and OSPF won't advertise it. So using that, you can prioritize traffic onto a firewall, still using carp priorities. That's what the config looks like. I have my two VLANs upstream, which are VLAN 363 and 364. I have keyed access to them, so there's some MD5 happening there, and all my carp interfaces are monitored by OSPF, but I don't actually announce OSPF on them. They're in the area. As I said, there's also a cool hack I've done with the Ciscos for one-second failovers with OSPF. I'm getting wrapped up. Okay, so that's what it looks like: one firewall says it's the master, the other one says it's passive, and the dashed lines are the backup ones. Makes sense? Cool. There are some caveats. As I said, connections terminating on a firewall can't be synced, because the application and socket state can't be synced. For high-speed connections in active-active, because pfsync mitigates, you are limited by what the pfsync traffic can exchange, and there are still some newer PF features that aren't represented in the pfsync messages. There are lots of other things you can do with PF and pfsync for getting traffic onto the boxes. I mentioned those. We're moving towards supporting MPLS fully, and VRFs, so you can have the same networks in different routing domains and things like that, which is kind of cool. PF is aware of all this stuff and can filter on it and move packets between routing domains and things like that. We have a lot of VPN stuff as well, so yeah. Sorry, I think I'm like two minutes over. Timing, mate, perfect timing. We've got a little presentation to make for you. You've done a great job there. Put your hands together for David, please. Well done.
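For reference, the ospfd setup described in the talk, MD5-keyed adjacencies on the two upstream VLANs, carp interfaces in the area but never forming adjacencies, and carp demotion when peers are lost, might look roughly like this ospfd.conf. The key, key ID and the single carp interface shown are illustrative; see ospfd.conf(5):

```
# /etc/ospfd.conf -- sketch of the setup described in the talk
# raise the carp demotion counter if adjacencies in this area are lost
area 0.0.0.0 {
        demote carp

        # upstream links to the Cisco routers, with MD5-keyed adjacencies
        interface vlan363 { auth-type crypt auth-md 1 "s3cret" }
        interface vlan364 { auth-type crypt auth-md 1 "s3cret" }

        # carp interfaces are in the area, so their networks get
        # advertised while carp is master, but they're passive:
        # no hellos, no adjacencies on them
        interface carp363 { passive }
}
```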