Hello. Okay, I've got my User-Mode Linux window trying to come up on the other side here, so let me just launch into the presentation while that's happening. Hi, everyone. My name is Michael Rash. I'm a security research engineer for Enterasys Networks, and I also release some open source software on my website, cipherdyne.org; that software will be the subject of this talk. So first let me just say that I like iptables. I want to extend the functionality of iptables: I write software around iptables that provides functionality iptables doesn't have and that shouldn't necessarily be included within iptables itself. Today's talk has two main parts. The first concentrates on a new tool that I have written and released today, at this conference, on my website, called fwknop. It stands for the FireWall KNock OPerator. It is an implementation of a port knocking scheme, but it combines port knocking with passive OS fingerprinting, using a strategy borrowed from p0f. The second part of the talk concentrates on a patch I wrote for the iptables string match extension, which allows the same content replacement that is performed by the "replace" keyword in snort_inline, except that it happens exclusively within the kernel. I'll show some Netperf benchmarks at the end. Let me also say that everything you will see here today is fundamentally made possible by iptables. I don't want to have to appeal to a packet capture library like libpcap; the fwknop implementation exclusively uses iptables log messages.
A lot of people think iptables logs are not very interesting, but you can get a lot of really useful information from them: most of the interesting fields in the network and transport layer headers are logged and decoded for you by iptables log messages, and you can even log application layer data if you're using something like the iptables string match extension. So, port knocking. I'm sure many of you are familiar with it. The term was coined by Martin Krzywinski; I apologize if you're in the audience and I butchered the pronunciation of your name. It is a method of encoding information within sequences of connections to closed or open ports; it doesn't really matter which. It is commonly used to let someone modify the access controls implemented in a firewall, so that only a person who knows the sequence is able to connect through your firewall. Sequences can be encrypted, or they can be shared; by shared we mean that the client and the server agree beforehand on exactly what the sequence is. If anyone saw David Worth's 20-minute turbo talk at Black Hat, he presented a really good concept for implementing a port knocking scheme around one-time pads. That was really cool. fwknop doesn't support this yet. I'm very interested in accepting patches, so any comments, bug fixes, or suggestions you have, please email me. Multiple protocols can be involved in port knock sequences; they don't just have to rely on TCP. fwknop supports sequences sent over TCP and UDP, and even ICMP can be thrown in there if you'd like, even though that's strictly a network layer protocol and doesn't have a notion of a port number. If you encrypt your port knock sequences, you can have a third-party IP address, which is part of the encrypted sequence, authenticated through the firewall.
Even if someone were able to sniff the traffic between the client and the server, they would not be able to deduce which IP the firewall will actually allow access for. Probably the best thing about port knocking is that it's ideally suited to being implemented around something like firewall log messages, because the server side is completely passive: I never have to allow the TCP/IP stack the firewall runs on to respond in any way to the client. If you have a firewall configured in the most secure stance, which is a default-drop stance, then you can very easily implement a port knocking scheme around it, as long as you're also logging packets, which is something firewalls do pretty well. Let's take a look at a couple of knock sequences. First is a shared sequence, agreed upon by the client and the server. The knock client sends a series of five connection attempts, TCP SYN packets to ports 1001 through 1004, and the last port, TCP 22, is interpreted by the knock server as the port the client wants access to, so it opens up TCP port 22 after it sees the knock sequence come across from the client. Not very complicated. Let me just say here that knock sequences of course suffer from the weakness that someone privileged enough to sniff the traffic between the client and the server can easily replay the exact sequence you sent across, so port knocking does not provide bulletproof security. It is simply another hurdle that someone needs to jump through to be able to connect to sshd, for example. I run fwknop on my home system: I can authenticate to it with an encrypted sequence from anywhere on the internet, but only clients that know that sequence can then connect to sshd. I want iptables to provide the security for sshd, because if I protect sshd with iptables, not only can attackers not talk to the daemon itself, they can't even talk to the TCP stack, or even the IP stack.
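A minimal sketch of what a shared-sequence knock client does, assuming the example sequence above (the `send` parameter is just an injection point for testing; a real client would simply let the SYNs be silently dropped by the default-drop firewall):

```python
import socket

# Hypothetical shared sequence: four knock ports, then the server opens TCP 22.
KNOCK_SEQUENCE = [1001, 1002, 1003, 1004, 22]

def send_knock(host, ports, send=None, timeout=0.5):
    """Send one TCP connection attempt (a SYN) per port, in order.

    `send` may be injected for testing; by default we fire a short-timeout
    connect and expect it to be dropped by the firewall, which is fine --
    the knock server only watches the firewall log, never replies.
    """
    sent = []
    for port in ports:
        if send is not None:
            send(host, port)
        else:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(timeout)
            try:
                s.connect((host, port))  # expected to time out or be refused
            except OSError:
                pass
            finally:
                s.close()
        sent.append(port)
    return sent
```

The server side never acknowledges any of these packets; it only sees them in its iptables log.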
So let's take a look at another shared sequence; this one is a little more complicated. This sequence involves multiple protocols: you can see TCP and UDP involved there, plus a couple of ICMP echo requests, and we can introduce timing delays that are made significant. The last connection in the sequence is a packet to UDP port 5003, but the sequence will only be honored if that packet is seen at least 10 seconds after the previous ICMP echo request. If someone is able to sniff your traffic, they may see all the packets go across from the client to the server, but they may not realize that the timing delays are significant. So if they try to replay the sequence without introducing that time delay, the sequence will not be honored. Also, incidentally, if you're watching with an Ethernet sniffer as a manual process, people frequently restrict their view to, say, just the TCP protocol or just the UDP protocol. So they may see the TCP packets involved in the sequence, but they may not even see the ICMP echo requests in the first place, and even if they do, they may not realize that those are also significant. Remember that we're just trying to make it a little bit more difficult to connect to sshd, or whatever daemon you prefer. In this particular sequence, note that the two ports at the end, TCP port 22 and UDP port 5000 (which is used by OpenVPN), are opened after the knock sequence is seen, but they do not appear at any time within the sequence itself. So the server and the client must agree beforehand on which ports will actually be opened. Okay, encrypted sequences. We can encrypt our port knock sequences by noticing that an IP address is essentially four 8-bit values.
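On the server side, honoring such a sequence means checking both the order and the timing of the logged knocks. A small validator sketch (the data shapes here are hypothetical, not fwknop's actual code):

```python
def sequence_honored(events, expected, min_delays):
    """Check a knock sequence with significant timing.

    events:     list of (timestamp, proto, port) tuples seen in the firewall log.
    expected:   list of (proto, port) tuples agreed on beforehand.
    min_delays: dict mapping a sequence index to the minimum number of seconds
                that must elapse after the previous knock, e.g. {3: 10}.
    """
    # The protocols and ports must match the agreed sequence exactly.
    if [(proto, port) for _, proto, port in events] != expected:
        return False
    # Any required inter-knock delays must also be satisfied.
    for i, min_gap in min_delays.items():
        if events[i][0] - events[i - 1][0] < min_gap:
            return False
    return True
```

A replay that gets the ports right but not the 10-second gap fails the second check.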
A port is a 16-bit value, which can be treated as a concatenation of two 8-bit values, a high-order byte and a low-order byte. Protocol numbers can be completely described by a single 8-bit value: 1 is ICMP, 6 is TCP, and 17 is UDP. And we need to select an encryption algorithm, so why not Rijndael? It was selected for the AES standard; sounds like a good way to go. Its block size, it being a symmetric block cipher, is 16 bytes. Notice that so far we've only used four bytes for the IP address, two bytes for the port number, and one byte for the protocol, so we have a total of nine bytes left over. fwknop makes use of those nine bytes; I'll cover that later. If you look at the plaintext line there, we are going to have the knock server allow access through to TCP port 5500. Of course, that's arbitrary; you can make it whatever you'd like. And it's for IP address 174.4.3.2, which notice does not correspond to either the knock client address or the knock server address. Sorry, this screen is a little hard to read. Okay. Now if you take a look at the ciphertext line, notice that we have a sequence of 16 ports, although not all 16 are displayed here, only the first few. Notice that the next-to-last port is port 80. You probably aren't going to be logging TCP SYNs to port 80 in your iptables firewall; if you are, you'll probably notice after your disk fills up, if you have a high-traffic website. So we need a way to use ports other than ones that can easily be confused with well-known ports you may be running legitimate servers on. We will define a special range of ports which is 8 bits wide, a total of 256 possible values, and we'll define that range on the server side to be 60,000 through 60,255. Then on the client, we will add 60,000 to each of the encrypted port values we saw on the previous slide.
And then we will connect in sequence, starting with the first, through all 16 port numbers. The knock server, which is monitoring that range of ports, will, assuming the decryption works (it's a symmetric block cipher, so you must have a shared key), allow access for IP 174.4.3.2 to TCP port 5500. Okay, so that's port knocking in a nutshell. Now let's cover a little bit about p0f. p0f is a passive operating system fingerprinter. It offers three main methods for fingerprinting the TCP/IP stack of a remote operating system. The first is with TCP SYN packets; this is the method we'll illustrate how to accomplish with iptables log messages. It also offers SYN-ACK fingerprinting and RST fingerprinting, but since those generally require bidirectional communication between the client and the server, we won't be discussing them; we would like to be able to completely characterize an operating system with only a single TCP packet. p0f uses libpcap to monitor traffic across the network. So how does it fingerprint an operating system? It looks at several fields within the transport layer header, the TCP header specifically. It looks for the TCP SYN flag being set, of course; it looks at the TCP window size; and the remaining options there, maximum segment size through the timestamp, are carried within the TCP options field of the TCP header. It also looks at the TTL value, fragment bits, and the overall packet length, which are fields in the network layer header. So let's take a look at a couple of p0f signatures. The first is for FreeBSD 4.4. The way you read this is as a series of values separated by colons. The first value represents the TCP window size: FreeBSD 4.4 systems hard-code a TCP window size of 1024 bytes. The next field is the TTL value.
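To make the byte layout concrete, here is a sketch of the packing and the port mapping just described. The actual Rijndael encryption step is omitted (fwknop would encrypt the 16-byte block with the shared key before the bytes are mapped to ports); the spare nine bytes are simply zero-padded here:

```python
import struct

PORT_BASE = 60000  # agreed 8-bit-wide range: 60000 through 60255

def pack_access_request(ip, port, proto):
    """Pack (IP, port, protocol) into one 16-byte, Rijndael-sized block:
    4 bytes of IP + 2 bytes of port + 1 byte of protocol leaves 9 spare
    bytes (zero-padded here; fwknop uses them for other fields)."""
    octets = bytes(int(o) for o in ip.split("."))
    block = octets + struct.pack("!H", port) + bytes([proto])
    return block + b"\x00" * (16 - len(block))

def to_knock_ports(block):
    """One destination port per block byte, shifted into the agreed range."""
    return [PORT_BASE + b for b in block]

def from_knock_ports(ports):
    """Server side: subtract the base to recover the 16-byte block."""
    return bytes(p - PORT_BASE for p in ports)
```

The server subtracts 60,000 from each logged destination port, decrypts the resulting 16 bytes, and reads back the IP, port, and protocol.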
It's looking for a TTL value of 64. Of course, since every intermediate router between the client and the server decrements the TTL, you have to be prepared with a sort of threshold that allows you to extrapolate what the initial TTL value probably was. To do that, you can rely on the idea that routes between clients and servers on the internet generally take about 15-20 hops; that gives you a bound. The second signature identifies the Linux 2.4 kernel. The S4 in the first portion of the signature means the TCP window size must be exactly four times the maximum segment size. It doesn't care what value Linux has set the window to; as long as that requirement is satisfied, the packet is probably coming from a Linux system. Of course, if you've gone around in /proc and messed with the kernel data structures exported there, so that you've changed your initial TCP window size and so on, these signatures will not work; this assumes you have gone with the default values in the stack. Also, I believe Kathy Wang gave a presentation about using packet purgatory, which was pretty cool; that can change things as packets traverse a system, including outbound packets, so you can spoof some of this information. What we're trying to do is just make more information significant in the actual port knock sequence. So how do we implement this around iptables? First, we have an iptables rule that instructs iptables to log all TCP packets received on Ethernet interface eth0. We add a logging prefix of "DROP" so that packets logged as a result of this rule are easily identifiable in our syslog, and we use the somewhat less commonly used option --log-tcp-options.
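That TTL extrapolation can be sketched as follows, assuming the common initial TTLs of 32, 64, 128, and 255 and the ~20-hop bound mentioned above:

```python
COMMON_INITIAL_TTLS = (32, 64, 128, 255)

def guess_initial_ttl(observed_ttl, max_hops=20):
    """Guess the sender's initial TTL from the value seen at the firewall,
    assuming most internet paths take at most about 15-20 hops."""
    for initial in COMMON_INITIAL_TTLS:
        hops = initial - observed_ttl
        if 0 <= hops <= max_hops:
            return initial
    return None  # observed value doesn't fit any common initial TTL
```

So a logged TTL of 50 is attributed to an initial TTL of 64 (14 hops away), and 115 to an initial TTL of 128.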
Incidentally, about that last rule there in parentheses: you cannot have iptables both log and drop a packet within the same rule, so if you want to be sure your firewall is actually dropping things as well as logging them, you have to have that additional rule. The next three slides examine a single log message; the first shows the log message up through the network layer, and then we'll continue on from there. In this first slide you can see that it's a syslog message. We see our "DROP" string there, the logging prefix we added on the previous slide. We see that this packet was logged on the input interface eth0, and that the output interface is blank; that's because this packet was logged in the INPUT chain. The packet was destined directly for the firewall, as opposed to being destined for a machine behind the firewall. If it were destined through the firewall, you would see the output interface filled in, and it would be logged in the FORWARD chain. iptables does a great job of decoding a lot of information: we see source and destination IP addresses, the TTL value, the IP ID, that the Don't Fragment bit is set, and that the protocol one layer higher in the stack is TCP for this particular packet. We also get information from the transport layer. In addition to all the fields we had before, we see source and destination port numbers, we see the TCP window very clearly delineated there (5840), and we see that the SYN flag is set with all other flags cleared. And finally, because of that --log-tcp-options command line argument, iptables also logs the TCP options portion of the TCP header, but it doesn't decode it for us. It just says: okay, --log-tcp-options, I know where that is in the TCP header, here it is in your log as a string of hex data.
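fwknop's server side has to pull these fields back out of each syslog line. A rough sketch of parsing such a message (the example line below is abbreviated and hypothetical, built from the fields just described, not a verbatim iptables log line):

```python
import re

# Abbreviated, hypothetical iptables LOG line with the fields described above.
LOG_LINE = ("Jul 30 15:01:02 orthanc kernel: DROP IN=eth0 OUT= "
            "SRC=192.168.10.2 DST=192.168.10.1 LEN=60 TTL=64 ID=32251 DF "
            "PROTO=TCP SPT=33121 DPT=22 WINDOW=5840 SYN "
            "OPT (020405B40402080A0073B52D0000000001030300)")

FIELD_RE = re.compile(r"(\w+)=(\S*)")

def parse_iptables_log(line):
    """Extract FIELD=value pairs, flag words (DF, SYN), and raw TCP options."""
    fields = dict(FIELD_RE.findall(line))
    fields["DF"] = " DF " in line          # Don't Fragment bit was set
    fields["SYN"] = " SYN " in line        # SYN flag was set
    m = re.search(r"OPT \(([0-9A-Fa-f]+)\)", line)
    fields["OPT"] = m.group(1) if m else None
    return fields
```

Everything up to the options field comes pre-decoded; the OPT hex string still needs its own decoder.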
iptables does a great job of decoding information, but here it says: you know what, I don't want to burden the kernel with having to decode TCP options, I'll let some userland application do that for me. Which is probably a good choice. So we have to decode this information ourselves. No problem. TCP options come in two formats. The first format is a single type field, 8 bits wide. Examples of such TCP options are NOP instructions, which are used to align TCP options on word boundaries. Not all TCP stacks actually make use of NOP instructions, so any TCP implementation has to be prepared to deal with TCP options that may not be aligned on word boundaries. The only other type with a single 8-bit field is the end-of-options code, used at the end of the TCP options. The more important format contains three fields: a type, a length, and then the actual value. And you can see there that 020405B4, which was included in that iptables log message, decodes to a maximum segment size of 1460. So if you decode that hex dump, you will see the following characteristics: a maximum segment size of 1460; selective acknowledgment is permitted by whatever operating system generated this packet; a timestamp is included; there's one NOP instruction; and a TCP window scale value of 0. In summary, the packet we logged has the following characteristics: in addition to the decoded TCP options, we see a length of 60, the Don't Fragment bit set, a TTL value of 64, et cetera. So what does p0f tell us? The operating system that generated that packet was most likely a Linux 2.4 kernel. Okay. It should be mentioned that we have not talked about the IP ID, which can be used in other fingerprinting strategies. Toby Miller wrote a paper, "Passive OS Fingerprinting: Details and Techniques," which makes use of IP IDs.
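Decoding that options string in userland is straightforward. A sketch of a decoder for the hex that --log-tcp-options emits (the option kind numbers are from the TCP specification; this is illustrative code, not fwknop's):

```python
def decode_tcp_options(hexstr):
    """Decode the raw TCP options hex appended by --log-tcp-options."""
    data = bytes.fromhex(hexstr)
    opts, i = [], 0
    while i < len(data):
        kind = data[i]
        if kind == 0:                      # end-of-options list
            opts.append(("EOL", None))
            break
        if kind == 1:                      # NOP padding byte
            opts.append(("NOP", None))
            i += 1
            continue
        length = data[i + 1]               # kind, length, value format
        value = data[i + 2:i + length]
        if kind == 2:
            opts.append(("MSS", int.from_bytes(value, "big")))
        elif kind == 3:
            opts.append(("WSCALE", value[0]))
        elif kind == 4:
            opts.append(("SACK_OK", None))
        elif kind == 8:
            opts.append(("TIMESTAMP", None))
        else:
            opts.append((kind, value))     # unknown option, keep raw
        i += length
    return opts
```

Run against the hex dump from the slide, this yields exactly the characteristics listed: MSS 1460, SACK permitted, a timestamp, one NOP, and window scale 0.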
The method by which IP IDs are generated varies quite a bit depending upon the operating system. But I would like to be able to characterize an operating system with a single packet, and if I were to make use of the IP ID, I would have to look at relative differences between successive IP IDs from the same operating system, which would let me deduce whether the algorithm that generated them was random or followed a simple additive rule. I don't want to have to rely on multiple packets from the source, so I didn't implement that. Xprobe, of course, should also be mentioned. It's an active operating system fingerprinter, and some of the same information contained in its database could be used, but I don't want to have to send packets back to the source, so I don't use it. So: fwknop strictly uses iptables log messages to implement both port knocking and passive operating system fingerprinting, as we've seen. It supports shared and encrypted knock sequences. You can send sequences over TCP, UDP, and ICMP. fwknop makes relative and absolute knock timing significant: if you require that a knock sequence sent to the knock server have a five-second delay between each connection, then the server will not honor the sequence unless it sees that delay. Local username identification is used in encrypted sequences; remember those nine bytes of the Rijndael block that we haven't used yet. You can install the fwknop client on a multi-user system but have the knock sequence honored only for a specific username on that system. So even though anybody can execute fwknop on the system, if they're not the specific user you want to allow through, the knock sequence will not be honored.
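One plausible way to picture the username usage, building on the 16-byte block from earlier (this layout is purely illustrative; fwknop's actual encoding of the spare bytes may well differ):

```python
def pack_with_username(base_block, username):
    """Hypothetical use of the nine spare bytes: embed a local username so
    the server honors the sequence only for that user. base_block is the
    16-byte block holding IP (4) + port (2) + protocol (1) in bytes 0-6."""
    user = username.encode()[:9].ljust(9, b"\x00")
    return base_block[:7] + user            # still exactly 16 bytes

def username_from_block(block):
    """Server side: recover the username and compare against the allowed user."""
    return block[7:16].rstrip(b"\x00").decode()
```

Since the whole block is encrypted before being mapped to ports, the username never appears on the wire in the clear.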
And last but not least, you can either require an exact match on the operating system that generated the knock sequence, or match based upon a regular expression. So if I want, say, any arbitrary version of FreeBSD to be able to connect to my sshd, I can use a simple regex to match that, or Linux, or whatever you'd like. Just a couple of notes about the implementation. It's implemented as a client and server. fwknop runs in server mode on a Linux system that's running iptables, where it has access to syslog messages. Of course, you can have iptables or syslog logging to a remote syslog server, but we won't really get into that. Sequences are encrypted via Rijndael. fwknop in server mode reconfigures syslog to write to a named pipe. The reason it does this is that I don't want fwknop to have to worry about the possibility of an automated log rotation script running out of cron that rotates my log file away so that I lose my file handle. It doesn't take p0f signatures directly from p0f; it uses the /etc/pf.os file that's included with OpenBSD, and it's available now on my website if you'd like to download it. Okay, let's see if this will work. Is this on? Okay, down at the bottom there's a tail -f on my iptables log in the /var/log/messages file. Sorry, you can't really see the whole thing here; I'll try to compensate. So the first thing we're going to do is send a simple shared sequence across. First, I want to show that sshd is not accessible from my User-Mode Linux installation, which is this top white window here. The black window is my host operating system, where fwknop is running in server mode. So hold on just a second. Okay, you can see the iptables log down at the bottom; it'll scroll up here. And fwknop monitored a successful port knock sequence through the iptables log and allowed access to this particular source IP address for 10 seconds.
If you have your iptables firewall configured such that you're accepting packets that are part of established sessions first, then iptables can have the rule that accepts the traffic removed, but your session won't die: I'm still logged in up here even though I no longer have access to sshd. I'll demonstrate that. Hold on. Is that better? Okay, great. So I can't reach the SSH daemon at this point anymore, because access has been shut off. Okay, now I'm going to send a different knock sequence across to the firewall. This one involves TCP, UDP, and ICMP; fwknop supports multiple knock sequences per individual source. But if you notice down here, this particular knock sequence was not honored, because the time delay was not met. If I add a time delay between each port, the timing requirement will be met this time. However, I also specified for this particular sequence that I expect it to come from a FreeBSD system, and because this User-Mode Linux system does not match FreeBSD, the sequence was not honored. And finally, we'll demonstrate an encrypted knock sequence. Of course, I have to type in my encryption key, which is just an eight-character password. You can see the iptables messages scrolling by on the screen below; notice that I'm rotating between the TCP and UDP protocols for this sequence. And then down here we had a successful decrypt. It matched a regular expression designed to detect Linux systems across my iptables logs, and it allowed me to connect to TCP port 22. So I'd like to take questions on fwknop, and then move to the second part of the talk. Yes? That's a good question. I could implement that; it's not currently used. I'm sorry, let me repeat the question: he asked why not make use of the IP IDs, since I'm going to be sending multiple packets anyway. Good point. I would like to just take the signatures as they come from p0f.
So I could make use of the IP IDs, but I'd have to combine the passive operating system fingerprinting in, say, Toby's paper with that of p0f to come up with something useful. I'm probably not going to get more granular data by using the IP IDs than what I already get with p0f. So it could be added; it just seemed a little unnecessary, at least for what I'm doing here. Anyone else? Yes? I'm sorry, the ULOG target? Can I restrict it to a specific IP address, maybe? Oh, yes, right. He's asking whether I could use the ULOG target to implement this as well. Yes, it could be used. The ULOG target will give you pretty much the entire packet as a data structure to a userland application, at least I think that's right. Is that right? Yeah. Okay, so you could do that. I'm not sure whether ULOG returns exactly the same format as the normal LOG target, so you'd have to reinterpret what's coming out of ULOG. But if you're running on a Linux distribution, there's a much greater chance that the normal logging facility is being used by iptables rather than ULOG. Any other questions? Yes? Yeah, what I have right now to emulate that is a knock limit: you can set a knock limit to define how many times any particular knock sequence will be honored. But a much, much cooler implementation would be something like David Worth's one-time-pad-based port knocking. Because, say you set a knock limit of one: then you have to be sure to either restart fwknop, so you can use the same sequence again, or redefine your sequence on both the server side and the client side to be able to gain access again. So that is an option if you want to deploy it. So let me move on to the second part of the talk.
The iptables string match extension is a kernel module that allows iptables to match sequences of bytes at the application layer. It uses the Boyer-Moore searching algorithm, which is commonly used by intrusion detection systems because it's very fast. The constant BM_MAX_HLEN is defined as 1024, which means that if you attempt to match a string in a packet whose application layer payload is larger than 1024 bytes, the string match extension will not attempt a match in that packet. That's for performance reasons, although we'll see later that performance isn't really too much of a worry, depending upon your application. So we're going to look at a couple of Snort rules. This is Snort rule SID 940, a rule designed to detect someone trying to access a specific DLL associated with FrontPage, and we can write an iptables rule that detects the same string Snort is looking for. Note that we're logging this in the FORWARD chain, presumably because you're not running FrontPage on your Linux box. We're looking for TCP packets with a destination port of 80. In "--tcp-flags ACK ACK", the first ACK refers to the TCP flags we'll be examining and the second to the ones we expect to be set; here we're just saying that we expect to see the ACK flag set in the packet, and we don't care whether the PSH flag or the URG flag is also set. The reason for adding this is the "flow" keyword in Snort, which we can't exactly emulate with iptables rules, but we can come close by making sure the ACK flag is set in the packet. Then we use the string match extension to look for the string "/_vti_bin", etc., and we'll throw a log message saying: I saw Snort SID 940 on your network. Note that we're just logging it; we're not dropping it or rejecting it. Frequently, though, we're going to look for stuff that doesn't involve just nice ASCII data.
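As a rough illustration of why Boyer-Moore searching is fast, here is a minimal Boyer-Moore-Horspool sketch (the bad-character-shift variant; illustrative Python, not the kernel module's actual C code):

```python
def bm_search(haystack, needle):
    """Boyer-Moore-Horspool search. Returns the match offset or -1.

    On a mismatch we compare from the end of the pattern and shift by the
    bad-character distance, so most haystack bytes are never examined."""
    n, m = len(haystack), len(needle)
    if m == 0:
        return 0
    # For each byte in the pattern (except the last), how far we may shift
    # when that byte appears under the pattern's final position.
    skip = {needle[i]: m - 1 - i for i in range(m - 1)}
    i = 0
    while i <= n - m:
        j = m - 1
        while j >= 0 and haystack[i + j] == needle[j]:
            j -= 1
        if j < 0:
            return i                      # full match at offset i
        i += skip.get(haystack[i + m - 1], m)
    return -1
```

The kernel's version additionally returns a pointer into the packet data rather than an offset, which is what later makes in-place replacement possible.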
So we need to be able to search for non-printable characters specified by hex codes; if you look at Snort rules, many, many of them look for hex data in network traffic. With the hex string patch that I wrote, which was accepted by the iptables maintainers, you can take Snort content fields directly out of Snort rules and put them into iptables rules, and we can see one here. This is an old exploit for named, Snort rule SID 261, and again we will just log the packet if it's seen over TCP port 53; we're not dropping it. The string match extension is compatible with iptables targets such as ACCEPT, if you want to accept the packet. You can DROP it if you'd like; of course, if you drop a TCP packet mid-stream, that packet will continually be retransmitted, because that's a requirement of TCP. You can RETURN; this target applies to custom chains, so if you have a custom chain that's doing more advanced processing within iptables, you can use the RETURN target to jump out of that chain and continue on through the rest of your ruleset, so as not to have it examine successive rules within that same custom chain. And of course you can also REJECT the packet so that it never actually makes it through to the target system: you can send a TCP reset for a TCP session, or various ICMP messages in response to UDP traffic if you'd like.
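A sketch of the kind of conversion the hex string patch makes possible: turning a Snort content field with pipe-delimited hex (e.g. `"abc|DE AD|def"`) into the raw bytes to search for. This is a hypothetical helper, not the patch's actual code:

```python
def snort_content_to_bytes(content):
    """Convert a Snort content string like 'abc|DE AD|def' into raw bytes.

    Text between pipe characters is hex (spaces allowed, as in Snort rules);
    text outside the pipes is taken literally."""
    out, in_hex = bytearray(), False
    for part in content.split("|"):
        if in_hex:
            out += bytes.fromhex(part)    # fromhex ignores the spaces
        else:
            out += part.encode()
        in_hex = not in_hex
    return bytes(out)
```

With this kind of translation, a Snort content field can be dropped into an iptables rule essentially verbatim.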
So if you're curious about how many Snort rules can be translated directly into iptables rules, you can run fwsnort, which makes use of the string match extension and the hex string patch; it can translate about 70% of them. Notice that "equivalent" is in quotes: we can't do exactly what Snort does within iptables rules. We cannot search for application layer content that starts at a certain depth or offset; we can't support multiple content strings within a single iptables rule; and we can't take care of the standard IDS evasion techniques, which will defeat the string match extension. We can only search for a sequence of bytes within a single packet, so if you can fragment your attack, spreading that sequence of bytes across multiple packets, you will defeat the string match extension, and any clever encoding technique you use will also defeat it. But if there are certain well-defined things you don't want to see, or at least that you want to send logs for, you can make use of the string match extension. So, snort_inline. It's a patch to Snort, and I think the IPS functionality is going to be incorporated into Snort before too long, though I'm not sure about that. It's designed to run on a Linux bridge. It makes use of the Netfilter libipq library to queue packets from kernel space up to the userland application, snort_inline, and after packets pass through Snort's detection engine, if they're deemed okay to pass on to the target system, libnet is used to send them out on the egress interface toward the target system. snort_inline implements a new Snort keyword called "replace" which allows you to change application layer data. For example, if you don't want the target system to see the string "/bin/sh", you can replace it with "/ben/sh", and since that will not correspond to any legitimate path on the target system, presumably your attack is stopped. Let's
examine what a packet has to go through when it's being processed by snort_inline. We start out in kernel space: the packet appears on the ingress interface. Still in kernel space, iptables gets a chance to look at it, and note that this happens in the FORWARD chain: since we're running on a bridge, the machine is not directly addressable, so any packet examined by snort_inline is examined first by the iptables FORWARD chain. iptables then sends the packet through libipq up to user space. Once the operating system scheduler gets around to running snort_inline, after the context switch is performed, snort_inline has a copy of the packet in userland and gets a chance to run it through the detection engine. If the packet is okay, snort_inline constructs a packet with libnet, and then another context switch down to kernel space puts the packet out on the egress interface. Now, I'm not sure how many packets can be queued at one time; I don't think every single packet requires two context switches, but I'm not sure what the exact number is. If you know the answer, come see me after the talk. But note that at a minimum we're going to be doing a lot of context switching and copying of data between kernel space and userland. The string match extension, by contrast, runs in kernel space. If you look at the file ipt_string.c, in the function prototype for the entry point to the search algorithm, you specify a character pointer to what you're searching for, and there's a character pointer that points back into the sk_buff struct of the packet, giving you a pointer to the application layer data, the haystack. And note that it returns a character pointer: it does not return a simple return code of zero or one indicating whether the string was found within the packet, it returns a pointer to the actual data. So at that point it's easy to also
modify that data, with one exception: if you are modifying application-layer data in a TCP packet, you must also recalculate the checksum, because TCP mandates that a checksum be calculated and verified for every segment sent across the network. If the source of the packet happened to compute the UDP checksum, which is not required by the UDP RFC, then you will have to recalculate that as well, but that is an optional step depending on whether the client or server actually calculated the checksum in the first place.

So this patch adds a new replace-string keyword to the userland portion of iptables that allows you to say, for our first Snort rule: take the "_vti_bin" string and replace it, removing the underscore and replacing the ".dll" extension with ".dab". This time, instead of just seeing Snort SID 940 fire, I have nullified Snort SID 940 on your network for you. We can do a similar thing with hex data; the patch also adds a replace-hex-string option. Taking a quick look (I am running out of time here) at the packet's path, we are able to stay in kernel space the entire time: the packet appears on the ingress interface, it is matched in the FORWARD chain, we run the string match function and the data replacement within kernel space, and the packet goes out on the egress interface.

So what does this mean in terms of benchmarks? I have to admit the description of the talk said it was three times faster; that is a mistake, and I am sorry. It is not three times faster. It is significantly faster, but not three times faster, and we will see that in a second. This is on my home network: I set up a Snort/iptables machine (I am using both), plus a netperf client and a netperf server. Netperf was released by some engineers at HP and is a nice piece of network benchmarking code. However, there is a little bit of difficulty when you try to use netperf to benchmark something like an IDS or
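To make the shape of that search entry point concrete, here is a rough, hypothetical sketch in plain C (not the actual ipt_string.c code): the detail that matters is that it returns a pointer into the haystack, the packet's application-layer data, rather than a boolean, which is what makes in-place replacement easy.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical simplification of a string-match search entry point.
 * On a hit it returns a pointer INTO the haystack (in the kernel, the
 * application-layer data inside the sk_buff), not a 0/1 result, so the
 * caller can overwrite the matched bytes directly. */
char *search(const char *needle, size_t nlen, char *haystack, size_t hlen)
{
    size_t i;

    if (nlen == 0 || hlen < nlen)
        return NULL;
    for (i = 0; i + nlen <= hlen; i++) {
        /* byte-wise comparison at each offset */
        if (memcmp(haystack + i, needle, nlen) == 0)
            return haystack + i;   /* pointer into the packet data */
    }
    return NULL;                   /* string not present */
}
```

The real extension uses a configurable search algorithm, but the returned pointer is the hook that the replacement patch builds on.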
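Purely as an illustration of the checksum step (the kernel has its own helpers for this), here is the standard RFC 1071 Internet checksum, the ones'-complement sum of 16-bit words that both TCP and UDP use; after a replacement, it would be recomputed over the pseudo-header plus the modified segment.

```c
#include <stddef.h>
#include <stdint.h>

/* RFC 1071 Internet checksum: ones'-complement sum of 16-bit
 * big-endian words, carries folded back in, result complemented. */
uint16_t inet_csum(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;

    while (len > 1) {              /* sum full 16-bit words */
        sum += ((uint32_t)data[0] << 8) | data[1];
        data += 2;
        len  -= 2;
    }
    if (len)                       /* odd trailing byte, zero-padded */
        sum += (uint32_t)data[0] << 8;
    while (sum >> 16)              /* fold carries into low 16 bits */
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}
```

A handy property: summing a segment that already contains its correct checksum yields zero, which is how receivers verify it.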
iptables. Because of the way netperf is architected, you first connect to the netperf server with the netperf client, but the port that the actual data stream or test will run over is not well defined: it is a random port that the server decides to hand you after you connect. So I wrote a trivial little patch that hardcodes that port to a specific one we can specify, so that we can easily say that port 50,000, or whatever, is where the rule will be applied in both Snort and iptables.

Looking at the benchmarks, the first of the three sets is iptables forwarding with no iptables filtering turned on, just standard IP forwarding: I get about 1.95 megabits per second. Snort inline, because of the context switching, gets 1.72 megabits per second, which I actually think is still pretty good; normal iptables forwarding is about 12% faster. If we turn on the replacement, iptables is about 10% faster than the Snort inline replacement. This does not log anything; it just accepts the packet on both sides, but it has done the data replacement inline. Note that Snort inline has not slowed down at all with the replacement turned on. (DC702 interrupted my speech there, and all they got was some lousy points; sorry, that was distracting. All right.) So, Snort inline has not slowed down at all from the first test, and that is because we already have to construct the packet with libnet in userland and send it back down, so actually replacing content in the packet as well is a very inexpensive operation; we have already got it in userland. And iptables is a little bit slower with the replacement turned on, because the replacement is actually doing searching and replacing within the kernel; that is why it is a little bit slower. But you can see 1.92 megabits per second versus 1.95, so it is not that
expensive an operation. The third set is if you actually log things too, which you probably want to do: Snort inline takes a hit there, and logging plus replacing makes iptables about 43 percent faster. As for applications: well-defined exploits. The Slammer worm was a single 404-byte UDP packet; yeah, I probably don't want to allow that one into my network. And maybe, if you want to preserve application-layer responses, you could do things like translate URLs into 404 errors and have the web server respond for you. Maybe. And with that, I will take your questions; I think we have about one minute left.

Yes? Yeah, absolutely. An IPS that does things like this content replacement is highly hyped up; I think we are probably all aware of that. I can give you maybe one possible legitimate use for this: when you execute an attack against a server, you might not be aware of whether the attack is actually going to succeed. If you execute an attack against a system and something like an inline IPS drops the entire session as a result of that attack, it might tip you off that there is an IPS-type device between the client and the server more easily than if the device selectively modifies certain application-layer data. But that is a stretch; I totally admit that. I am not promoting IPS in any way, shape, or form; this is just enabling technology if you want to play around with it, essentially. So is it useful? Probably not. Other questions? Yes: if something triggers a Snort rule, can you rewrite the IP headers, like redirect to a different destination IP? Currently you cannot; I don't have that implemented within the string match extension. It is maybe possible; you could potentially use the ROUTE target within iptables to do something like that, but I haven't really done it yet. I am being instructed that I have to stop, so thank you very much.
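As a footnote to the benchmark figures quoted above, the relative overheads fall straight out of the throughput numbers; a trivial, purely illustrative helper to make the arithmetic explicit:

```c
/* Relative slowdown of 'slow' versus 'fast' throughput, in percent.
 * Hypothetical helper; the inputs below are the Mb/s figures quoted
 * in the talk, not fresh measurements. */
double overhead_pct(double fast, double slow)
{
    return (fast - slow) / fast * 100.0;
}
```

Plugging in the quoted throughputs gives roughly 1.5% overhead for the in-kernel replacement (1.95 versus 1.92 Mb/s) and roughly 12% for the Snort inline forwarding path (1.95 versus 1.72 Mb/s), which lines up with the figures in the talk.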