It's my great pleasure to introduce Eileen. I want to just put this little bug in your ear that conferences are a great way to get out and introduce new people to the community. And it would be a shame if this was the very last conference in Salt Lake. So, Eileen Uchitelle. Hi everybody. I'm Eileen Uchitelle. I'm a programmer at Basecamp, where I'm on the security, infrastructure, and performance team. I'm so happy to be speaking at Mountain West again this year. As Mike said, this was the first conference that I ever spoke at, back in 2014. During that talk, I discussed situations I had faced with Active Record and how those problems were consequences of not paying attention to the SQL that Rails was generating. When I gave this talk, Aaron Patterson was in the audience, as he is today. And after trolling me during Q&A, he told me he had found a bug in Active Record. He suggested that we pair on Rails to fix the issue, and I've been contributing to Rails ever since. Now I'm on the Rails committers team and the Rails security team. Between Basecamp security and Rails security, I spend a lot of time working on, well, security. Many people think that means I've been a security expert for a while now. But in reality, I ended up working in security because I cared more. That's not to say that other team members of mine don't care about security, but that I had a passion for it. I want to make languages and applications safe for everyone to use, even those who don't know how to be secure. Security is a difficult field to work in because it's easy to become jaded. There's a lot that's broken, and sometimes there's no way to fix everything and make all systems secure. It can be somewhat overwhelming. While everyone is celebrating new releases of software applications, those of us who do security work wonder what new vulnerabilities will appear. New things mean new threat surfaces. 
Fixing those threat surfaces can mean breaking functionality. It's frustrating that to keep everyone secure, we have to take something away. Security is paternalistic and protective, and users fight against it because security makes their lives harder. Today we're going to talk about how and why security is broken, but we're also going to take a closer look at common vulnerabilities like CSRF, XSS, and XXE. These three well-known attacks have been around for a long time. They are products of how browsers or systems were built, and they are easy to accidentally introduce into your application. First, let's go back to what I mean when I say that security is broken. For starters, one of the hardest things about making a secure system is that it's impossible to test for all vulnerabilities. Attack surfaces are everywhere. You could have a vulnerability in the application code, the database setup, or on your server. The possible attack vectors are not finite. This is because hackers are always one step ahead. The majority of security procedures are reactionary. Even if vulnerabilities are responsibly reported, everyone else is left playing catch-up to patch their language, application, or server. And those kinds of patches only work if they're actually implemented and correctly fix the problem. Sometimes patching one vulnerability can expose you to another. In one of the recent Rails security releases, there was a DoS attack vector in XML parsing. If you couldn't upgrade, the workaround was to change the XML parsing backend from REXML to one that wasn't vulnerable to this attack, like LibXML. The problem was that LibXML allows entity replacement, and depending on the version you had on your servers, it may have allowed entity replacement by default, which can lead to an external entity (XXE) attack. This type of attack is worse than a DoS because it can be used to gain access to sensitive server information. We'll talk more about exploiting XXE later on. 
So how did we get to this place where security is so broken? A big issue with web security comes from the failure to create and enforce web standards that browsers and programmers are expected to adhere to. When the idea of the internet was born, no one considered or could have predicted the level to which simple features or hidden bugs would be used for malicious purposes. The browser wars were a catalyst for many common vulnerabilities. When Microsoft created Internet Explorer to compete with Netscape Navigator in the late 90s, they started an arms race where features were more important than anything, including security. The W3C was created in 1994 as an attempt to standardize the web, but vendors were more concerned with market share than following the rules. This left the W3C playing catch-up and implementing standards based on what already existed. This created a history of "browser knows best." The push for web standards was just business and easily ignored, because market share was more important to vendors. When the dust settled and Microsoft won, there was a myriad of committees attempting to enforce web standards at the same time. Those committees, mainly ECMA, ISO, and the IETF, were not working together and often provided contradictory advice. Decisions in building browser technology continued to be dominated and influenced by market share rather than by standards and security experts. Eventually, we got to a point where legacy behavior had precedence and couldn't be easily removed. Because a clear set of standards was never established, we also failed to implement a guide to what a secure system looks and behaves like. In the book The Tangled Web, Michal Zalewski explains that we have "completely failed to come up with even the most rudimentary, usable frameworks for understanding" the security of modern software. 
What he means by this is that to have a secure system, we need to be able to clearly and simply define what a secure system looks and behaves like. Consistently, security is a reaction, and we can't be proactive against something that we don't yet know about. What we've ended up with instead is risk management: constantly assessing where your largest targets are and trying to close the holes in order of importance. Are your employees the weak link who are going to click a phishing email and give access to the entire system? Or is it that you're using an old, unpatched version of Ruby with an SSL vulnerability that can lead to a man-in-the-middle attack? We can build tools to detect and assess known problems, but we can't protect against future problems. We have to plan to have everything compromised, because we have no way of determining beforehand if a system is completely and unfailingly secure. The territory that security needs to cover is too broad. We have to secure languages, servers, application code, and users. With all those attack vectors that need to be covered, there are just too few people who understand how the vulnerabilities work. We need everyone who builds applications to understand security: ops, programming, even your support team. This is everyone's job, not just the job of your security team. Understanding security vulnerabilities isn't easy. Not long ago, I didn't quite understand the risks of CSRF or XSS, and I had never even heard of XXE. You'll leave here today with a better understanding of how these attacks work, how to mitigate them, and how to build better software in the future with security in mind. By demystifying these attacks, we can work towards a more secure internet for everyone. With that, the first well-known vulnerability that we're going to talk about is CSRF, or cross-site request forgery. A CSRF attack uses a malicious site to trick a browser with an active session into performing an unwanted action. 
There's a bit of confusion about the difference between clickjacking and CSRF. They're very similar, but clickjacking requires the end user to interact with the page and requires a view layer to perform the attack, whereas CSRF attacks don't require the user to perform an action and can be carried out in the background. Both attacks require the user to have an active session. CSRF attacks happen in two parts. An attacker builds an exploited URL or script and then social-engineers someone into visiting a malicious page, sometimes through phishing. So let's say that we have a website where you can have a profile. Only you can change your profile, by being logged in, just like Facebook, Twitter, or any other social media site. This form requires a POST request to update the user's profile, and the form has no CSRF protection. It's a simple form with fields for name, email address, and website. An attacker can take the source code and build an identical site that posts to your website. The submit can be done automatically with JavaScript. The only action required by the victim is to visit the malicious page. So here we have the malicious website and the victim's website. The attackers copy the source code from the original site to make it look exactly the same, and the only clue that it's not the right site is the URL. If we look at the source code of the attacker's page, you can see a couple of notable changes that successfully perform the CSRF attack. First, the email address is no longer eileen@example.com. It has been changed to the attacker's email address. When submitted, this form will update my email address to be the attacker's email address. Then the attacker adds JavaScript to automatically submit the form when the page is visited. And the attacker ensures the form submits to the original site. Remember, the point of CSRF attacks is to forge the request. The attacker is trying to submit data to a profile they don't own so they can take over the user's account. 
Because we are still submitting to the victim's site, the user will automatically be redirected back to the victim's site and may not even notice the attacker changed their email address. This is of course a very simple example, but you can imagine, without mitigating CSRF attacks, the types of malicious activities that could occur. At worst, CSRF can be used for account takeover or unauthorized transfers of money at a bank. At best, CSRF can be used to embarrass another user by changing their name or photo. The amount of damage an attacker can do depends heavily on the purpose of sensitive requests. So now that we understand how to exploit CSRF attacks and how dangerous they are, let's talk about how to protect your application from these kinds of attacks. An easy way to mitigate CSRF attacks is to use frameworks that have already implemented protection against request forgery. Rails, Django, Java Spring, and .NET have all included CSRF protection automatically in their frameworks. This is what CSRF protection looks like in Rails. If a request is made from another website, it will be caught by a protect_from_forgery callback and rejected with an exception. Rails does not protect GET requests, so you'll need to make sure that any sensitive request is not a GET. CSRF protection in Rails adds a hidden input field to your forms and a meta tag in the document head with an authenticity token. It also stores the token in the session. If the form's authenticity token does not match the expected session authenticity token, the request will be rejected. If no token is in the request at all, it will also be rejected, because Rails will generate a new token for that request and they won't match. One caveat about CSRF protection in Rails that you may not know about is that CSRF protection is part of the callback chain, which means Rails forgery protection is order-dependent. 
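The unconditional form of the protection described above can be sketched roughly like this (standard Rails 5 style, not the literal slide code):

```ruby
class ApplicationController < ActionController::Base
  # Reject any non-GET request whose authenticity token doesn't match
  # the token stored in the session, raising
  # ActionController::InvalidAuthenticityToken.
  protect_from_forgery with: :exception
end
```

The form helpers emit the hidden authenticity_token input automatically, and the csrf_meta_tags helper in your layout emits the meta tag in the document head.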
For example, here's an application controller with a before_action that authenticates the user and calls protect_from_forgery with a conditional. The catch here is that the forgery protection depends on how we authenticated. In this case, it will only be run if we authenticated for the browser and not for our API. Generally, this isn't a problem, but in another controller, called OtherController, we skip the authentication callback and immediately call a different authentication method. If we're depending on CSRF protection to be run after authentication, in this particular case CSRF protection will be run first and silently skipped, because we told it to only run if we authenticated by web. If the user is not authenticated, then this conditional is not true. Because we didn't authenticate by web first, CSRF protection is not run. This is because the callbacks in your application controller will always come first, followed by those in the controllers that inherit from ApplicationController. By skipping authentication in OtherController, the conditional for CSRF protection is false. To make sure that we're protected from CSRF, we must call protect_from_forgery again after the authentication method in OtherController so that it is run in the correct place in our chain. You don't have to worry about callback order for CSRF protection if you're always protecting against CSRF and not using conditional logic in your protect_from_forgery call. The default behavior of protect_from_forgery is to be inserted into the callback chain at the point at which it's called in your application, which is fine as long as it doesn't depend on any other action in your chain. This is how I feel about CSRF protection being order-dependent in Rails. Luckily, most of you don't have to worry about this, because you probably aren't using conditional logic in your controller filters. Rails 5 now supports per-form authenticity tokens, which means that each individual form will use its own authenticity token. 
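The order-dependence gotcha described above might look roughly like this; authenticated_by_web? and authenticate_by_token are placeholder names of my own, not the exact code from the talk:

```ruby
class ApplicationController < ActionController::Base
  before_action :authenticate_user
  # Conditional protection: only runs when the earlier callback
  # authenticated via the browser.
  protect_from_forgery with: :exception, if: :authenticated_by_web?
end

class OtherController < ApplicationController
  # Skipping the parent's authentication means the inherited
  # protect_from_forgery condition is false, so protection is
  # silently skipped.
  skip_before_action :authenticate_user
  before_action :authenticate_by_token

  # Re-declare it so it runs after this controller's own
  # authentication, in the right place in the callback chain.
  protect_from_forgery with: :exception, if: :authenticated_by_web?
end
```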
If your application is using Content Security Policy, this protects you from form hijacking and increases the security of your application. CSRF protection relies on tokens being unique. If you don't refresh session tokens when users sign out, the authenticity tokens can be reused for that user in future sessions. Rails itself has no way of knowing that the user logged out, so be sure to add reset_session to your sign-out code. This will set the CSRF token to nil on logout, so that Rails will generate a new token in the next session. If you don't reset the session, an attacker could gain access to the session's authenticity token and reuse it in the user's future sessions. You want to make sure that you're using unique authenticity tokens to protect against CSRF attacks. Lastly, ensuring that your site has no XSS attack vectors is helpful for mitigating CSRF attacks. XSS is not required to carry out a CSRF attack, but it makes it easier to spoof the token, change referrers, and double-submit cookies. With that, let's talk about XSS attacks, or cross-site scripting. XSS attacks inject malicious JavaScript into trusted websites. XSS is easily exploitable because of the places where user input is not sanitized by the application. There are three types of XSS: stored, reflected, and DOM-based. Today we're going to talk only about stored XSS, because it's the most dangerous and the easiest to demonstrate. Stored XSS means an attacker inserts a malicious script into the database or file system, which is executed when the victim visits the page containing the malicious script. Testing for XSS is a pain, because you have to check that each place you're allowing user input isn't outputting unsanitized data. I know what you're thinking. This doesn't sound like something you want to do, and testing for this is not fun, but we have to, because it's a very easy attack to carry out and can potentially be dangerous enough for an account takeover. 
XSS attacks don't require user interaction and can be executed easily. The simplest way to check for the most obvious form of XSS is to see what happens when you submit scripts in your form fields. Let's take a look at this user profile, for example. We have a user who has entered an image tag with an onerror alert as their name, because they think they're funny. Alerts are good to use for XSS testing because they're very obvious and incredibly annoying. You can't miss them. Because we're using Rails, this output is automatically escaped by your application code. If you're using any version of Rails after Rails 2, this is the default behavior. You literally have to ask Rails to introduce this type of XSS attack. That said, it's still relatively easy to enable accidentally. If we take a look at our code, it's not obvious that this is automatically escaped, but that's just how new versions of Rails work. When Rails parses our ERB tags, it ensures that no injected tags are output, or at least evaluated. Unfortunately, it's pretty easy to accidentally disable this automatic protection. Let's say for some crazy reason you wanted to allow the user to dress up their name by adding HTML tags. To do this, you might add .html_safe to the string. This will allow users to add bold tags to their name, but it also exposes application users to XSS attacks. Now that our name attribute is allowing HTML, we have also allowed JavaScript tags. The string is no longer escaped, and when our victim visits the attacker's profile on our website, the malicious script will be run automatically, because the XSS is stored in our database. Of course, attackers won't use an obvious alert, and the malicious script will likely run in the background without the victim ever knowing. Another less obvious way that XSS can be exploited is through the JavaScript URL scheme. This is a legacy feature of browsers. This XSS won't automatically fire by visiting the page. 
Instead, it will be executed when the link is clicked. This can be a problem for auto-linked URLs like this one that obscure the location of the link. Any site that allows URLs and doesn't validate user input against the JavaScript scheme may be vulnerable to this attack. How does the JavaScript scheme work, though? First, we have the javascript scheme instead of HTTP or HTTPS. Browsers don't limit the types of protocols that can be created. Many mobile apps utilize the ability to create any possible scheme to handle specific behaviors in their applications. The JavaScript scheme tells the browser that visiting the URL should execute JavaScript. The domain can literally point to any website, because once the victim tries to visit the page, JavaScript will be executed instead of a redirect; that's just how the JavaScript protocol works. So we're not actually going to go to example.com at all. It could be google.com. It doesn't matter. Then the %0a is an encoded line feed, which indicates the line has ended but doesn't move the cursor to the next line. This is what separates the URL from the JavaScript so that the browser will execute the malicious code. Without the %0a, the JavaScript alert doesn't actually get executed. Finally, we have the JavaScript code that we have injected, with an alert. This is the malicious script that is now stored in our database. The JavaScript scheme is less talked about, but it's an interesting way to obscure stored XSS attacks from the victim. This type of XSS attack does require user interaction, though, because the user must click the link to execute the JavaScript. These seem like pretty simple attacks, but what can they be used for? XSS can be used to access cookies, session data, and other sensitive information. This retrieved data can then be used to take over a victim's account, steal an online identity, transfer money, or prompt the user to download viruses, among other things. 
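The anatomy described above can be poked at with Ruby's standard-library URI parser; a minimal sketch, with a made-up payload:

```ruby
require 'uri'

# A stored link using the javascript: scheme. The %0a is the encoded
# line feed that separates the "URL" part from the code to execute.
link = 'javascript:%0aalert(document.cookie)'
uri  = URI.parse(link)

# The scheme is "javascript", not http or https -- clicking the link
# runs the code after the scheme instead of navigating anywhere.
puts uri.scheme  # => javascript
```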
The amount of damage XSS attacks can do relies heavily on the purpose of the application and the type of XSS attack that the site is vulnerable to. So what can we do to avoid XSS attacks? All user-supplied data should be considered dangerous. While the majority of your users are innocent, you want to protect them from the hackers who are trying to harm them. The best way to ensure your application does not allow XSS injection is by ensuring that user data coming out of your database is escaped. Make sure that you don't call html_safe on user-supplied data without sanitizing it first. This is the easiest way to accidentally enable XSS injection in your Rails application. Rails and other modern frameworks disable this automatically, but if you actively enable it, your framework can't help you. If you absolutely must output tags in a user-provided string, then your application should utilize a sanitizing library. Rails' built-in sanitizer allows you to whitelist specific allowed tags. You could allow a bold tag, but not an image tag. Additionally, if you do allow an image tag, Rails will strip off the onerror attribute, allowing images while protecting your application from XSS attacks. Make sure that you're validating any user-supplied data, especially URLs. The JavaScript scheme can be easily mitigated by whitelisting allowed protocols. Here's a simple Rails validation that checks the URI scheme. First, we add HTTP and HTTPS to the whitelisted protocols. URLs that are added by your users are now required to have HTTP or HTTPS schemes. This doesn't protect your users from someone inserting a malicious website, but it does prevent untrusted JavaScript from being executed. When a user is created or updated, we validate the URL by using Ruby's built-in URI parser to check that the scheme is in the whitelist. I recommend using a URI parsing library instead of a regex, because regexes are really hard to get right. 
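A minimal sketch of that whitelist check using only Ruby's standard-library URI parser; safe_url? and ALLOWED_SCHEMES are hypothetical names, not the exact validation from the talk:

```ruby
require 'uri'

# Whitelist of schemes we accept from user-supplied URLs.
ALLOWED_SCHEMES = %w[http https].freeze

def safe_url?(url)
  scheme = URI.parse(url).scheme
  ALLOWED_SCHEMES.include?(scheme&.downcase)
rescue URI::InvalidURIError
  # Unparseable input is rejected outright.
  false
end

puts safe_url?('https://example.com')    # => true
puts safe_url?('javascript:%0aalert(1)') # => false
```

In a Rails application, a check like this would typically live inside a custom model validation that runs when the user record is created or updated.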
Don't use a blacklist, because there is not a finite number of schemes that can be used, which makes it really easy to bypass your protections. You don't know when someone's going to make a new scheme, allowed by a browser, that can carry XSS attacks. The data scheme is another one where you can encode the URL and do some crazy stuff. You should always use a whitelist over a blacklist to ensure it's very difficult to circumvent your data validation. The next exploit we're going to talk about is XXE. While unrelated to CSRF or XSS, this is a vulnerability that has been around for a long time and can be easily avoided if you know about it. XXE is a tongue twister: the XML external entity attack. In some XML parsing libraries, it's possible to insert values dynamically with entity replacement. In your XML document, you can add an entity reference, and when the XML is parsed, it will replace the entity with your content. I'm going to quickly demonstrate how entity replacement works so that this attack makes more sense. Here's an example of a to-do list in XML. We can use an entity declaration to load a system file named listitems6-7.xml in place of the entity reference. If we look at listitems6-7.xml, we can see there are just two more to-do items, about booking flights to Salt Lake City and finishing this talk, that will be dynamically loaded when we parse the XML file. When we parse it, the entity will be replaced with the two lines from the system file listitems6-7.xml, and now we have a full to-do list. How does entity replacement result in a security vulnerability, though? This seems like a pretty useful feature of XML from before we had Rails or other dynamic frameworks. The problem is that this can be used to load any file on the server. Exploiting XML entity replacement requires that you're parsing XML in your application and that you're using a parsing library that supports entity replacement. 
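To make entity replacement concrete, here's a small sketch using Ruby's standard-library REXML with an internal entity. Note this uses an internal entity only: REXML will not fetch external files, which is exactly why it's considered safe against XXE. The entity name and to-do text are my own illustration:

```ruby
require 'rexml/document'

# A DTD declares the entity "extra"; the document references it
# as &extra; and the parser substitutes the value on read.
xml = <<~XML
  <!DOCTYPE list [
    <!ENTITY extra "book flights to Salt Lake City">
  ]>
  <list>
    <item>Finish this talk</item>
    <item>&extra;</item>
  </list>
XML

doc = REXML::Document.new(xml)
# The second item's text comes back with the entity expanded.
puts doc.root.elements[2].text
```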
The default Rails XML parser is REXML, which does not support entity replacement. If your XML parsing library allows entity replacement, your system and application may be vulnerable to XXE. Because any system file can be loaded into the dynamic entity, it can be used to get sensitive files, which could compromise your entire server or application. This is far worse than an account takeover. Let's take a look at an application with an XML endpoint. Here we have the create method in our users controller, which accepts XML requests. An attacker who knows that the application allows XML can try to exploit XXE through a curl request. First, the attacker sets up a payload to create a new user. The payload is set up to try to retrieve our secrets.yml file. The attacker has guessed the path on our system to the sensitive file. Of course, if you're correctly using secrets.yml, then your production secret will not be in this file, but this is just an example of one file an attacker could get access to. The payload will be sent to our application as the name of the new user being created in the curl request, using the name entity. Entities can be whatever you want, so I use name here. If successful, the user's name will be stored in the database as the contents of our Rails secrets.yml file. The attacker then uses a curl script to send a POST request to the create user XML endpoint with the payload that requests the secrets.yml file that we looked at before. When run, the user will be successfully created, with secrets.yml as the user's name, and returned back to the attacker. And when we look at the app, we can see the user was successfully created with the secrets.yml file's contents as the user's name. Now the attacker knows a file was successfully retrieved, and they can continue to load other files on the system that can be used to compromise the server. It's unlikely that your secrets.yml file is going to have production secrets in it. 
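A payload along the lines of the one described above would look something like this; the file path and document structure here are my guesses for illustration, not the exact payload from the talk:

```xml
<?xml version="1.0"?>
<!DOCTYPE user [
  <!ENTITY name SYSTEM "file:///path/to/app/config/secrets.yml">
]>
<user>
  <name>&name;</name>
</user>
```

When a parser with entity replacement enabled processes this, the &name; reference is replaced with the contents of the file on the server before the record is saved, so the stolen file comes back to the attacker as the new user's name.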
I would have used database.yml, but it was throwing a syntax error, so that demo doesn't actually demonstrate the vulnerability as dramatically as I wanted it to. The real issue here, though, is that once the attacker knows that they can get one file, they can write a script to get any other file on your system. And one of those is going to be sensitive. XXE attacks can be more dangerous than XSS or CSRF attacks because they can be used to obtain sensitive server files. This can result in an entire system being taken over, rather than just a user's account. This attack depends heavily on how your server's file system and user permissions are set up. It's a lot harder to exploit than XSS or CSRF, but if the system is successfully compromised, there is no telling how much damage an attacker can and will do. I know the idea of your server getting owned makes you feel sad, but don't worry, we can fix this problem. First, if your application doesn't need to support XML parsing, then you should just not parse XML. That sounds a bit easy, but if you're building a new application, it's likely that you're using JSON instead of XML, so you wouldn't have to worry about XXE. If you absolutely must parse XML, you should avoid XML parsers that allow entity replacement. LibXML used to allow entity replacement by default, but in current versions it's off. Even with entity replacement off, though, sending documents with dynamic entities to LibXML can cause a DoS, because the system may attempt to read the file. The best way to avoid this attack is to use REXML or another parser that does not support XML entity replacement. If you're using LibXML, you can easily test whether entity replacement is enabled by checking the default_substitute_entities method in your Rails console. If it returns true, then entity replacement is on. On current versions of LibXML, entity replacement is off by default and requires your application to turn it on explicitly. 
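The console check mentioned above looks roughly like this, assuming the libxml-ruby gem is loaded; treat this as a sketch of its API rather than a guaranteed signature:

```ruby
require 'libxml'

# true  => parsed documents will have entities substituted (dangerous)
# false => entity replacement is off (the safe default in current versions)
LibXML::XML.default_substitute_entities
```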
If you absolutely must use entity replacement, then you should whitelist known and expected entities instead of allowing entity replacement from any file. I didn't wake up one day and understand how these vulnerabilities worked and how to mitigate them. It took a lot of research to make sure I was patching them correctly and was able to properly test for them. Every time you patch a vulnerability, take the time to figure out how to exploit the attack and why the patch fixes the problem. If I patch a Rails vulnerability instead of doing a full upgrade on my application, I always write a test that demonstrates the vulnerability. This helps me understand how the vulnerability works and shows the intent of my code, so someone doesn't accidentally remove my patch. One of the tools I used to research CSRF, XSS, and XXE was owasp.org. OWASP is the Open Web Application Security Project, and its site contains information on all types of vulnerabilities, with explanations of how to exploit and how to patch these known problems. OWASP is an invaluable resource for security education. Security is hard, and it's easy to get wrapped up in the "everything is broken" mentality. Maybe you're feeling a bit down right now. But if we work together to make software applications and systems more secure, the human element will follow suit. Instead of treating security like a secret society, we should embrace teaching everyone on our teams, from ops to support, how vulnerabilities work, why mitigation is important, and how to be more secure. I made an application in Rails 5 to demonstrate the three vulnerabilities that I talked about here today, so you can test the exploits yourself and understand how each of these vulnerabilities works. The master branch has the vulnerable application, and it includes a branch for each vulnerability to demonstrate how to patch them. 
While most of these vulnerabilities are simple to avoid, they're easy to accidentally introduce into your application, and it's paramount that we all understand how they work. We should appreciate that new software releases increase our threat surface and have an impact on our security teams. When designing a new system, consider security first and shiny new features second. This will create a more resilient system that is ready for attacks. While we can't predict new vulnerabilities, we can at least build new software with them in mind. Don't assume that because you use Rails, which tries to make security easy, you are secure by default. Your system and application do not know best; only you do. I hope that you leave here today feeling empowered to secure all of the things with me, now that you have a better understanding of how security and common vulnerabilities work. Together we can make the internet more secure. We can plan for the future by knowing about the past and being intimately familiar with the different types of attacks that can harm our users' security. We can work together on legacy software and future software with security in mind, and make security less of a burden for everyone. Thank you. Now it doesn't, but it used to. It's one of the other ones that did allow it. Oh, sorry. The question was, does Nokogiri support entity replacement by default? And the answer is, it used to, but it does not anymore either. I think that most of the parsers know about this vulnerability and have patched it. So that's good. Yes. The question is, how can you figure out which XML parser your Rails application is using? You can ask in your console. If you're not actually changing it in your initializers, it's probably REXML, because that's the default. But you can go into the console and just ask what the XML backend is. I think it's ActiveSupport::XmlMini. I don't know. It's in the documentation, how to look up which one you're using. 
Yeah. The question is whether I'm suggesting not to use secrets.yml. It doesn't matter, because whatever files someone is trying to get, you're eventually going to have a sensitive file. I used secrets.yml because it didn't throw a syntax error when I tried to retrieve it on my local computer. The database.yml threw a syntax error, which was disappointing. So if you're using secrets.yml correctly, your production secret won't be in the file, so it's not technically a sensitive file. But if you're using it incorrectly and actually putting your production secret in there, it is a sensitive file. Aaron wants to know if Rails supports XML POST requests by default. And no, it doesn't. I actually had to add a lot of stuff to the Rails 5 application to get it to break. You have to add the actionpack-xml_parser gem, and you have to actually write a to_xml method in your model. I used Rails 5 because I didn't feel like installing Rails 3 and building an application with that. So the application I made shows that even with new software, you can still make yourself vulnerable to old attacks. It's harder. But if you're upgrading an application from Rails 3 to Rails 5, you might have that stuff in there and not even know about it. And then once you get up there, you're like, oh, my XML parsing is not working. You just slap that gem in there and then you're done. All your tests pass. But then you might be vulnerable and you don't know. Yeah, that's a problem. The question was, if you're upgrading from 3 to 4, or 4 to 5, in your Rails application, how do you know which gems are vulnerable? Well, going from 3 to 4 is usually going to force you to upgrade to more recent versions of the gems, depending on what's required by Rails at that point. So once you get to 5, you should be using the most recent versions of those gems. But you can also use the Brakeman gem, which will tell you what's vulnerable in... I don't know. 
I don't think it will look as far as into all of your gems, but it will at least tell you what in your Rails application is considered insecure for the version of Rails that you're using. And it was just upgraded for Rails 5, so we're good there. No more questions? Okay. It's lunchtime.