Hi all, sorry for the delay in starting. I had some problems with my laptop and the web application firewall we were going to use for the demonstration. Well, who am I? My name is Lando. I have worked in information technology for more than 10 years. I work for a company in Brazil called Security Labs. We work for credit card companies, the government, and the other big companies in Brazil. I'm affiliated with Hakiaholic, which is an international security research group. I have been working with penetration testing for the last three years. I have discovered vulnerabilities in webmail software, like Open WebMail and others, in access points like D-Link, in Citrix MetaFrame, and in other software. I wrote some tools that were featured in magazines famous around the world, like Hakin9 and PC World. I also spoke at some very famous conferences in Brazil, like the Hackers to Hackers Conference, and was invited to speak at others, like IT Underground in Italy and It1Take1 in Mexico. Well, I am from Brazil; the slide speaks for itself. When I arrived here, everybody spoke to me about Brazil: "Are you from Brazil? Soccer, what do you think about the next games? How are things?" So yes, I'm from Brazil, the city of Pelé, and the slides again speak for themselves. Well, what will you see in this presentation? What a WAF is, which is what we are speaking about; types of operation; common topologies; passive and reactive mode; tricks to detect WAF systems; tricks to fingerprint WAF systems; generic evasion techniques; specific techniques to evade WAF systems; and what they fail to protect. We have a short description taken from the Web Application Security Consortium glossary that basically says that web application firewalls are a kind of firewall specific to layer-seven applications, in this case web applications, which try to protect the application from hostile attacks at that specific layer.
Well, they are also called deep packet inspection firewalls because they look at every request, and are able to look at every response, looking for problems in HTTP, HTTPS, SOAP, XML, web services, and others. Some web application firewalls look for attack signatures, as you see in many firewalls, and also for abnormal content and behavior problems, which are basically the same things we see in IDS and IPS technology. Web application firewalls can be software, hardware, or an appliance, and basically they try to protect a web server. Some notes about the definition. Some newer web application firewalls can work with both attack signatures and abnormal behavior. Some systems do not necessarily need to sit in front of the web server, which is where most of us are accustomed to seeing them; they can be installed directly on the machine where the web server is running. And they are not, as described in some older documentation that is still widely used as reference, able to detect attacks only in incoming packets; they can detect attacks in both inbound and outbound packets. Basically, there are three types of operation modes: negative mode, which is based on a kind of blacklist; positive mode, which is based on a kind of whitelist; and mixed mode, which is what the most famous web application firewalls nowadays use, a combination of negative and positive mode. The negative security model recognizes attacks by relying on a database of expected attack signatures, like most of the IDS and IPS that we have today on the market. For example: look for any page or any argument where the user input matches any potential cross-site scripting string. This can be, for example, the strings <script> and String.fromCharCode, and combinations of these strings. The good point of this kind of operation mode is that it is very fast to install and use; you basically have a pre-built group of rules that try to detect the most common attacks.
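A negative-model rule like the cross-site scripting example above boils down to matching user input against a blacklist of signatures. A minimal sketch in Python; the signature list and function name are illustrative, not any vendor's actual rules:

```python
import re

# Illustrative blacklist signatures for cross-site scripting, in the
# spirit of the <script> / String.fromCharCode example. Real products
# ship far larger (and still bypassable) rule sets.
XSS_SIGNATURES = [
    re.compile(r"<\s*script", re.IGNORECASE),
    re.compile(r"String\.fromCharCode", re.IGNORECASE),
    re.compile(r"javascript\s*:", re.IGNORECASE),
]

def looks_hostile(user_input: str) -> bool:
    """Return True if any blacklist signature matches the input."""
    return any(sig.search(user_input) for sig in XSS_SIGNATURES)
```

For example, `looks_hostile("<ScRiPt>alert(1)</script>")` matches even with mixed case, while ordinary text passes through, which is exactly why such generic rules produce both false positives and evasion opportunities.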
So you basically install it, apply the rules, and it is running. The bad point is that it is so generic that in general you have a high number of false positives, more processing time, and also less protection. Now the positive security model. Basically it is a kind of enforcement of the application logic. For example, if you have a script called news.jsp and it has a field called id, for identification, that only accepts numbers, always starting from 0 and going up to 65535, you basically construct a rule that only allows that kind of data, digits in that range, and any other data will be an exception and will not be processed. By the nature of the rules, which are much smaller, this is much faster, has better performance, and also produces fewer false positives, if you have good rules. The bad point is that it takes much more time to implement. Some web application firewalls on the market provide tools to automatically generate this kind of rule, what they generally call automatic learning mode or a similar name. This kind of feature is good, and it helps a lot in generating these rules. The big problem is that these rules are never perfect when they are generated automatically; they need human review. Otherwise they will be made loose enough not to generate a lot of false positives, which makes the attacker's life much easier. Well, the mixed mode is basically the use of the positive and negative models together, which is what happens in most of today's web application firewalls. However, in general one model is predominant, negative or positive. In general, WAF systems can be used in three different network topologies between the web server and the web client. The first is the most common one we see on the internet, what people call a reverse proxy.
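The positive-model rule described earlier for the news.jsp id field, digits only, values 0 to 65535, can be sketched as a simple whitelist check; the function name is hypothetical:

```python
import re

# Whitelist rule for the hypothetical news.jsp "id" field: digits only,
# value between 0 and 65535. Anything else is rejected, not sanitized.
ID_RULE = re.compile(r"^[0-9]{1,5}$")

def id_allowed(value: str) -> bool:
    """Positive-model check: only digit strings in 0..65535 pass."""
    return bool(ID_RULE.match(value)) and 0 <= int(value) <= 65535
```

Note that a hostile value like `|id` never even reaches the range check; the rule describes what is allowed rather than trying to enumerate every attack, which is why positive rules are smaller and faster.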
We have the web application firewall in a reverse proxy appliance or machine that sits between the web client and the web server. The second is integrated into the web server itself, which is more common in small companies. The third option, which is not so common, is directly connected to the switch via port mirroring, or SPAN, or TAP, which are other names for the same thing: the switch basically copies all the data from one port to another port, where the WAF system is running. Passive or reactive? Most WAF systems work in both passive and reactive mode. In general, in the first days they run in passive mode to identify false positives, especially when you are using some kind of automatic learning technology, so you can detect these problems and fix them before putting it into a production environment. In general, once it is running well without generating false positives, it starts to run in reactive mode. A little break for Brazil. Here we have some famous guys from Brazil that probably all of you know. The second thing everybody asked me when I arrived here: "Hey, and Brazilian Jiu-Jitsu? Does it really work?" That's the answer: Brazilian Jiu-Jitsu really works. We can see Anderson Silva, Royce Gracie, Rodrigo Minotauro, and many others. Well, here we have some tricks to detect WAF systems. WAF systems leave several traces that permit us to detect them. Example: cookies. We have the example of Citrix NetScaler. Citrix NetScaler, in some configurations, adds its own cookies to the application, which allows us to detect that it is running. As we can see in the cookie field, we have ns_af, where NS is short for NetScaler, and other such names. This kind of cookie allows us to identify this kind of system. We can see it on a real-life site here. This site is a real site on the internet using NetScaler from Citrix.
We can see the cookies it added to the application, which consequently allow us to detect the presence of the web application firewall. It is a common example of a fingerprint that web application firewalls add, and it should never happen, since they should be stealthy to attackers. Header rewriting. Some WAF products allow rewriting of HTTP headers. The most common is the Server field. What happens? Sometimes, when you request a normal, valid URL or URI, some web application firewalls respond with a valid 200 code and the Server field set to the name of the web server: Apache, Internet Information Services, whatever. However, some products, when we request an invalid or hostile URL or URI, simply remove the Server field. So that is one way we can identify that there is some system, which may be a WAF, protecting the site. Also, some other WAF applications only rewrite the header when we send some bad content. For example, below we have an example of requesting hostile content. A valid, known non-hostile request returns a 200 OK status code with the real server content in the Server field: Apache 2.2.9 running on UNIX. On the same server, if we send a hostile request, for example a well-known web exploit that the web application firewall triggers on, we get the following response: the Server field is changed to Netscape-Enterprise/4.0. So that is another giveaway: some WAF products only apply the HTTP header rewrite in certain circumstances, which allows us to identify their presence too. Some WAF vendors return different HTTP response error codes for the same URL depending on whether a parameter is valid or hostile. Just to be more clear, since my English is far from good: suppose we have a page, for example index.php, where id=123 is a kind of valid request, and it is processed.
If you make that same request, it always returns code 200 and is processed OK. But if you just change the 123 to something like |id, pipe id, which is hostile content, in general people trying to execute operating system commands, this can return a different HTTP response code, for example 404, as if the page index.php doesn't exist, which is not true. So we can be sure it is not the web server responding but an application in the middle, like a web application firewall. The same happens when you request parameters that don't exist: it will in general return errors like 404, 403, 500, and others. So these kinds of responses can help us identify web application firewalls. Here is a real example from one of the web application firewalls we tested. A valid request where we filled in a parameter that doesn't exist returned a 501 status code, method not implemented, which for sure is not valid. We could also note that the OPTIONS response, sorry, the Allow HTTP header, was changed: the Allow header in the normal request never returned the TRACE option. So there are a lot of small points that can be used to detect these kinds of systems. Also, some WAF vendors provide a filter to close the connection; you can use it to drop the connection and block users, and they also have the possibility of interacting with external firewalls. ModSecurity is a common example. It has a feature called the drop action that immediately initiates a connection close using a FIN packet. Attackers can launch well-known attacks that match the built-in rules from ModSecurity, analyze the returned FIN packets, and try to identify the presence of a web application firewall, possibly ModSecurity. This kind of filter is not available in older versions of ModSecurity.
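The detection tricks so far, NetScaler-style cookies, Server-header rewriting on hostile input, and differing status codes for the same URL, can be combined into a simple differential probe. A minimal sketch, assuming the benign and hostile responses have already been fetched and parsed into dicts; the cookie-name list and function name are illustrative:

```python
# Each response is represented as a plain dict: status code, headers,
# and cookie names, as if already fetched from the target site.

# NetScaler-style cookie name prefixes (illustrative, not exhaustive).
WAF_COOKIE_HINTS = ("ns_af", "nsc_", "citrix")

def waf_hints(normal: dict, hostile: dict) -> list:
    """Compare a benign and a hostile response and list WAF indicators."""
    hints = []
    # Trick 1: telltale cookies added by the device in front of the server.
    for name in normal.get("cookies", []):
        if name.lower().startswith(WAF_COOKIE_HINTS):
            hints.append("suspicious cookie: " + name)
    # Trick 2: Server header removed or rewritten on the hostile request.
    if normal["headers"].get("Server") != hostile["headers"].get("Server"):
        hints.append("Server header changes on hostile input")
    # Trick 3: different status code for the same URL with a hostile parameter.
    if normal["status"] == 200 and hostile["status"] in (403, 404, 500, 501):
        hints.append("status code differs on hostile input")
    return hints
```

Running it on the Apache-to-Netscape-Enterprise example above would flag all three indicators; any single hint is enough to suspect something sits between you and the web server.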
Some of these techniques we use to identify web application firewalls can also be used to identify the intrusion prevention systems that are available on the market. What more about Brazil? Well, back to tricks to fingerprint WAF systems. All WAF systems have a built-in group of rules in negative mode, rules based on blacklists, as we spoke about at the start of the presentation. These rules are different in each product. These rules can be specific to a well-known vulnerability, for example the old IIS unicode attack, or can be a generic rule for a well-known class of vulnerability like SQL injection, cross-site scripting, and others. And in general, these rules are associated with actions like dropping the request, redirecting to another page, et cetera. Attackers can create a set of attacks that test for a range of vulnerabilities that most WAF systems protect against, or not. In this way, we are able to identify the built-in rules of a product and, consequently, which product it is. What does that mean? We can generate a database of a big variety of web application attacks that some web application firewalls detect and others do not. Based on detection or non-detection, we can identify the software. For example, suppose we send a request with an HTTP version different from 1.0 or 1.1, for example 0.9. Some web application firewalls detect this and block the content, or return a different HTTP code, or redirect you to another page, et cetera. A request with a Content-Length header in a method different from POST: same thing. It is not a valid request; not all web application firewalls detect it, but some do. A URI with a recursive path, even an invalid one: we can request, for example, a URI accessing recursive paths that do not even exist. Some web application firewalls will detect the recursive path in UNIX notation, others in Windows notation, and this can also be used to detect the difference between them.
A request where a cookie matches a certain name is also detected by default by some web application firewalls and not by others. A request where the URI matches a telltale string, for example /usr/X11R6/bin/xterm. In general, we can create a really big database of these kinds of checks and run it against most web application firewalls; then, based on which checks each one detects and whether it provides a different response for us or not, we can identify which web application firewall it is. It is more or less like OS fingerprinting probes; the idea is basically the same. The attacker can go deeper and create several mutations of the same attack using, for example, evasion methods. These evasion methods can make a well-known rule be bypassed or not, and this can also be really useful to identify systems more precisely, sometimes even to distinguish versions, a new version from an old version. Also, some of the tricks presented at the start, in the first slides on detecting WAF systems, can be used here and help to identify the system, like the Citrix NetScaler example. These techniques can be used to generate a big database and, as we said, to create tools that detect these kinds of web application firewalls automatically. Well, generic evasion techniques. Today we have a wide range of techniques to evade IPS and some WAF systems. Most of these attacks work because of bad normalization, of encodings, for example, and canonicalization that is not implemented very well in the WAF; because of weak rules, the default weak rules built into WAF systems, and we can see a big number of products on the market with pre-built rules that are really weak and allow a big number of evasion possibilities; and because of evasion at the network and transport layer in some IPS and some WAF systems, depending on how they are deployed, for example if they are running on port mirroring. Here we have some examples of common generic evasion techniques.
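Several of these common generic evasions are simple, mechanical transformations of a payload. A minimal sketch of three of them, SQL comments instead of spaces, mixed case, and double URL-encoding; the function names are illustrative and not tied to any particular product:

```python
import urllib.parse

def space_to_comment(payload: str) -> str:
    """Replace spaces with SQL inline comments, e.g. UNION/**/SELECT."""
    return payload.replace(" ", "/**/")

def mix_case(payload: str) -> str:
    """Alternate character case to defeat case-sensitive rules."""
    return "".join(c.upper() if i % 2 == 0 else c.lower()
                   for i, c in enumerate(payload))

def double_encode(payload: str) -> str:
    """URL-encode twice, so the percent signs themselves are encoded too."""
    once = urllib.parse.quote(payload, safe="")
    return urllib.parse.quote(once, safe="")
```

A rule that looks for the literal string `UNION SELECT`, case-sensitively, or that decodes the URL only once, misses all three mutated forms while the back-end server still interprets them as the original attack.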
A very common generic evasion technique to bypass some IPS systems is to add SQL comments to parameters instead of spaces, to try to bypass WAF systems and some IPS systems. Another is to play with case, since some of these rules are case-sensitive and others case-insensitive, to try to affect some of these rules too; this trick is very common. SQL query encoding is common: we use resources that the database itself provides for us, like stored procedures, et cetera, that allow the attacker to encode and decode data, for example to hex or decimal, directly in the database. In this way, we can bypass some IPS and WAF systems in some cases. Double encoding is basically the same idea as ordinary encoding, but the encoding sign itself, for example the percent character, is encoded too, so you have double encoding. URI encoding: for example, unicode-encoding the forward slash was possible some time ago, I don't remember exactly how long, in the TippingPoint IPS, for example; this kind of technique also applies to some IPS and some WAF systems. HTTP request smuggling is basically a technique where the attacker creates an HTTP packet that will be parsed differently by the back-end web server and by the web application firewall, reverse proxy, or whatever is in the middle of the connection. And IP packet fragmentation, which in some cases, depending on the topology, is possible; it basically abuses fragmentation and reassembly to bypass them. Brazilian carnival, which everyone always asks about: those of you who know it, and who don't know it, should go to Brazil for the carnival and appreciate the boys, the girls, everybody. Specific techniques to evade WAF systems. Just as attackers can fingerprint WAF systems, as presented, they can use a technique to precisely identify which restriction of a rule applies to a specific class of vulnerability. What I mean is, an attacker can insert a hostile SQL injection, for example, into a parameter and expect it to be detected and the action taken.
For example, a different HTTP error code is returned, a page redirection happens, et cetera, or simply an error code is generated or the normal page is displayed, depending on the web application firewall. The big point is, we can try a high number of different hostile queries and discover exactly which SQL queries are blocked and which are not; which words, for example select, from, where, exec, xp_, and other query elements, are used. If we are able to detect the strings that the web application firewall's built-in rules use, we can try to bypass them. Using trial and error, it is possible to identify which combinations of these strings are allowed and which are not. It is also common to have rules based on combinations: for example, select alone is not blacklisted, but if you use select, from, and where together, it becomes blacklisted. So, based on this information, we can try to construct SQL injections that bypass the web application firewall. This can also be used to detect how the rule was built. Let's take a real example. And it cannot only be used with SQL injection; it can be used with a lot of other attacks, like cross-site scripting and others. Real life: in a penetration test we did maybe three months ago, we had a client in Brazil, which is a bank, where we had done a penetration test before and had gotten access to the system. So they bought the solution from Citrix, NetScaler, and now we did the penetration test again.
All we had to do to bypass the Citrix, for the same vulnerability, was to identify which strings were blacklisted and which were not, and which combinations were and which were not, as in the previous slide. Then we took the same query we had used the last time to retrieve a lot of information from the bank, and we could reuse it: just rewriting the query, putting new words into it, using some database query encoding, and removing all the single quotes, and the Citrix NetScaler was bypassed. So, a lot of the WAF systems people sell today are really good, but we cannot trust the built-in rules; they help, but they are really far from perfect. In general, what are the big pitfalls, the things web application firewalls have problems detecting? Cross-site scripting. Cross-site scripting is really hard to effectively detect and block. It is extremely mutable and consequently very hard to detect, especially when we are dealing with outbound traffic. For outbound traffic, we have rules that in general cannot be enforced as strictly as inbound ones. For example, in general we pull a lot of news from third-party sites and other content. These other sites or other content can be hacked and used as a source to infect your site through the outbound connection, even if you are using a web application firewall. File uploads: many web application firewalls do a great job with file uploads. However, we have some tricks that are really hard for them to stop, for example when you insert web shells inside images and call them, for example, via LFI, local file inclusion, via comments in JPEG files, and others. Also, remote command execution detection based on the server response is very hard to get right. Some web application firewalls, in the last year maybe, have tried to detect some remote code execution. For example, the output that comes back when we type, on a UNIX machine, the commands id, uname, ps, or ls has a kind of standard format.
And there are rules that try to identify this standard format in the outbound traffic. It is a good idea, at least when it works, since it could detect a lot of people who had broken into the web server. But if the attacker has access and is able to execute commands on the web server, it is not really possible to detect, since he can use any of the tools that Linux provides to encode the output, or even the shell itself, like bash: you can turn the output into hex codes, you can do shift right, shift left, and a lot of other encodings on the output that will not trigger the rule. So this kind of detection is not always useful and can be easily bypassed. Another problem, probably one of the hardest to detect, is logical and design flaws, which are extremely hard by the very nature of the problem. Basically, suppose you have a bank, and this bank has a place where you can check how much money you have in your account. It is common that data like the agency and account number is passed as a parameter. If there is not a strong check in the application to guarantee that your session, your cookies, is tied to the data passed in this parameter, attackers can manipulate this kind of data and access information from other users. This kind of problem is very hard for web application firewalls to detect and block, and the same goes for the others built on the same idea. Well, our time is finishing; we had 10 minutes and it is over. I would like to thank all of you. Also, the site Hakiaholic is looking for new members. We have a private forum where we share a lot of stuff. If someone is interested, we are looking for new members. Thank you all, and sorry for the problems with the laptop and the delay in starting.