Yeah, so just letting you all know, I posted a copy of the slides on my Twitter, which is @Kenesan. So if you want to follow along, you can download a copy of the speaker deck straight from there. All right, let's begin.

Hi, DEF CON. So this is my talk: how to use Content Security Policy to stop cross-site scripting. First, a little bit about myself. My name is Ken Lee. I'm a product security engineer at Etsy.com. In a previous life, I worked at a financial software company. As I told you literally 30 seconds ago, my Twitter handle is @Kenesan. If you have any questions, feel free to email me or send me a tweet.

So let's talk about Content Security Policy, because I assume that's the reason you're all sitting here today. The best way I've heard Content Security Policy described is that it's essentially defense in depth for the web. By that I mean it's a way for a server to tell your browser: you're allowed to execute certain things, and only these things that I tell you to execute. In doing that, it provides a mechanism to stop cross-site scripting from happening.

Just by a show of hands, how many people in this room know what cross-site scripting is? Okay, all right, then this slide is going to go extremely quickly. As an example, I take an HTML page and throw a very simple cross-site scripting payload inside of it. This page has a content security policy on it, and if you look at the bottom of the page, the execution of the script is blocked by the presence of that policy. That's basically how it works. In more detail: a browser that is enforcing a content security policy only renders or executes elements on the page that match the specific directives the policy sets out.
In particular, there are two important things to note about this. Content Security Policy, by default, disallows the use of inline JavaScript on a page, which is a big thing I'll get to later. In addition, it prevents the use of the eval-style family of functions in JavaScript.

I'm just throwing this up here as an example Content-Security-Policy header. You're not expected to understand this, and there will be no quiz on it later. It's just to demonstrate what a content security policy looks like: it's specified as a set of directives, each with a set of URIs and keywords, that tells the browser where you are allowed to load JavaScript and other elements from.

So a content security policy, as I've said, is broken up into a series of directives. The most common ones you will probably end up using are script-src, which controls the use of JavaScript on a page, and style-src, which controls the use of CSS and other styling on a page. And as you can see here, there's a directive for pretty much every type of thing you can embed on a web page.

In addition to that, there are special keywords you can use in combination with the directives to modify your content security policy. For example, specifying the 'none' keyword for the script-src directive tells the browser: don't allow any JavaScript from any source. The 'self' keyword is pretty self-explanatory: it says only allow content from the same origin, that is, the same scheme and host. And 'unsafe-inline' and 'unsafe-eval' are special keywords that override the default behavior I mentioned before, where content security policy blocks inline JavaScript and eval.

An important aspect of content security policy that I'm going to go into is the use of report-only mode.
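As a rough sketch of the structure just described, a policy can be thought of as a mapping from directive names to lists of sources and keywords. The directives and sources below are illustrative examples, not a recommended policy or the one from the slide:

```python
# Minimal sketch: assemble a Content-Security-Policy header value from
# a mapping of directives to their allowed sources. The directives and
# sources here are illustrative examples only.
def build_csp(directives):
    # Each directive is rendered as "name source1 source2 ...";
    # directives are joined with "; " to form the header value.
    return "; ".join(
        "%s %s" % (name, " ".join(sources))
        for name, sources in directives.items()
    )

policy = build_csp({
    "default-src": ["'none'"],
    "script-src": ["'self'", "https://cdn.example.com"],
    "style-src": ["'self'"],
})
```

A server would then emit this string as the value of the Content-Security-Policy header (or its Report-Only variant).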
Report-only mode, basically, is specified with the Report-Only variant of your content security policy header or content security policy meta tag. What this does is tell the browser: I don't want you to actually block elements that are disallowed by the policy. It's essentially a dry run. And what makes this functionality even more powerful is the use of a report URI. At the end of your content security policy, you can specify a reporting endpoint, which tells the browser: hey, if you've seen some bad shit, some content security policy violations, send those violations my way. This provides a mechanism for the server to learn what kinds of content security policy violations clients are seeing. And again, to emphasize what's important about report-only mode: it doesn't actually block anything. You can deploy it without affecting the content the client sees.

An important thing to note is that content security policy as a standard is still evolving. This little snippet of a screenshot was taken from a version of Firefox not too long ago. If the text is hard to read, what it basically says is: CSP warn, failed to parse unrecognized directive, 'unsafe-inline'. Firefox, at the time the screenshot was taken, had a bug where it didn't recognize the 'unsafe-inline' or 'unsafe-eval' keywords. And I think the latest version of Firefox, Firefox 23, still has a bug where it doesn't allow you to whitelist unsafe inline style-src elements. So browsers these days are mostly up to spec with regards to CSP 1.0 compliance, but if you happen to notice some unexpected behavior while testing your content security policy, there is a strong possibility that it's not your policy at all. It could just be the client's browser acting like a smelly lobster.
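To give a concrete sense of what arrives at that reporting endpoint: the browser POSTs a JSON body whose top-level key is "csp-report", containing fields like the document URI, the blocked URI, and the violated directive. A minimal sketch of parsing it, with invented sample values:

```python
import json

# Sketch: pull the interesting fields out of the JSON body a browser
# POSTs to the report-uri endpoint. Field names follow the CSP 1.0
# violation report format; the sample values are invented.
def parse_violation(body):
    report = json.loads(body).get("csp-report", {})
    return {
        "document": report.get("document-uri"),
        "blocked": report.get("blocked-uri"),
        "directive": report.get("violated-directive"),
    }

sample = json.dumps({"csp-report": {
    "document-uri": "https://www.example.com/",
    "blocked-uri": "http://third-party.example.net/tracker.js",
    "violated-directive": "script-src 'self'",
}})
violation = parse_violation(sample)
```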
So let's talk a little bit more in detail about the inline JavaScript bit that I alluded to earlier. So as I mentioned, content security policy 1.0, this allows by default the use of inline JavaScript on a page. This is actually, as it turns out, kind of a big deal. The way that is recommended to sort of deal with inline JavaScript is to create external scripts out of all your inline JavaScript. This has kind of a number of implications. The biggest one being that you're essentially turning all these inline bits of JavaScript into synchronous calls that the browser has to retrieve the JavaScript from. Alternatively, if you want to get around this, you can also specify the, as I mentioned before, the unsafe inline directive. But this has implications for defeating or basically weakening the strength of your content security policy. And in addition, if you use any kind of asynchronous JavaScript loading libraries such as RequireJS, this also has implications as well because RequireJS and other asynchronous JavaScript loading libraries, what they like to do is they essentially like to say, hey, I want to load this bit of JavaScript. So at some point during the rendering of this, of the page, I'm going to basically call append child to the head of the document with the contents of the JavaScript. And because this is all done asynchronously, it's very fast. There's no additional HTTP call. But the problem is that that's going to cause issues with content security policy 1.0. So as a result, you know, there are potential performance implications to deploying CSP 1.0 by turning all of your scripts into externalized bits of JavaScript. Hopefully content security policy 1.1 will fix that as right now they're sort of working out the final bits of the spec. But there will essentially be a way to safely whitelist inline JavaScript on the page. So let me talk about some real world implications about deploying a content security policy to your production website. 
So, oh my God, this is terrible. There are a couple of questions a lot of you probably have if you're thinking about rolling out a content security policy to your website. For example: how should you go about rolling it out? How should you test the validity of your policy? And which parts of your site should you apply the policy to? A number of websites, Twitter for example, have chosen to focus their content security policy on specific segments of their site. And this actually makes a lot of sense from a metrics standpoint, because it gives you a very focused approach to applying the policy and fixing the issues it detects.

These are two graphs from Splunk showing a hit count of content security policy violations that client browsers, running under the report-only directive, have sent to an endpoint I specified. The bottom graph shows a list of the top blocked URIs. What's really powerful about content security policy is that if you specify a reporting endpoint, you can use it to learn everything about the violations clients are seeing. And this has some really interesting implications. In the process of looking into content security policy during the evolution of the spec, one of the things I noticed was that you end up seeing a lot of mixed content on your website. The nice thing about content security policy is that it's actually really effective at helping you root out and stamp out all those instances of mixed content.
So essentially, the technique you can use for detecting mixed content is this: if you specify a content security policy with a reporting endpoint, the browser sends a JSON blob to that endpoint containing information such as the document URI, the URI of the blocked element, and so on. If you parse this blob and detect that the document was retrieved over HTTPS while the blocked URI was HTTP, then you have an instance of mixed content sitting on your website.

Now granted, certain headers such as HTTP Strict Transport Security can also be an effective approach to removing mixed content, because HSTS will force all your subdomains to HTTPS. But from an implementation standpoint, from experience, I've discovered that most of the mixed-content issues you run into when rolling out a content security policy won't be from your own subdomains. Surprise, surprise: in most instances it'll be coming from third-party vendors who, for example, simply don't have an HTTPS endpoint for their service.

A couple of additional thoughts about content security policy. As I mentioned before, 'unsafe-inline' and 'unsafe-eval' severely nerf the protective abilities content security policy gives you, so when deploying a policy it's important to consider whether or not you should include those keywords. In addition, if you choose to implement content security policy as a header, you can potentially be including a very large number of sources in your policy, and that can cause your header sizes to grow, which can have an impact on performance. And the final, sort of obvious point: you should always make an effort to test your content security policy before you roll it out to production.
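The mixed-content heuristic described above (HTTPS document, HTTP blocked element) is straightforward to apply to the reported JSON. A sketch, with invented sample values:

```python
from urllib.parse import urlsplit

# Sketch of the mixed-content check described above: a violation whose
# page was served over https but whose blocked element is plain http
# is mixed content. Field names follow the CSP 1.0 report format.
def is_mixed_content(report):
    document = urlsplit(report.get("document-uri", ""))
    blocked = urlsplit(report.get("blocked-uri", ""))
    return document.scheme == "https" and blocked.scheme == "http"
```

Run over a day's worth of reports, a filter like this surfaces every HTTP resource still being pulled into HTTPS pages.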
So let me get to the good bits before talking about the tool that I created. If you want to test content security policy right now, live, it's a thing that exists in Firefox 23 and Chrome 25. Previous versions of these browsers used, I believe, the X-Content-Security-Policy header for older versions of Firefox and the X-WebKit-CSP header for Chrome. And if you want to test this out, you can apply the report-only header, or a meta tag, to have the browser report on, rather than block, the instances of bad JavaScript or style elements that it sees. In addition, the report URI is tremendously powerful from a metrics perspective, just because of how much information it can give you about the blocked content clients are actually seeing. And with the ability to monitor all of this blocked content via Splunk, StatsD, Graphite, or whatever other logging and metrics tool you use, you gain the ability to look into and fix all of the inline JavaScript issues that deploying a content security policy surfaces.

All right, so let me talk a little more in depth about the tool I'm releasing. One problem I initially ran into when deploying a content security policy was that it was really annoying to test my policy in a development environment and then push it to production, because of the simple fact that in production I would have to specify prod hosts, and in dev I would have to use dev hosts. It was annoying to the point where I didn't want to constantly poison my hosts file to handle it. So I decided, you know, F it, I'm going to make some tools to fix this problem. CSP Tools is essentially a set of three Python-based tools that do the following. The proxy tool is a Python proxy, written using libmproxy, that intercepts all HTTP and HTTPS traffic.
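Since the prefixed and standard header names coexisted in that period, one transition approach (a sketch, not necessarily what any particular site did) was to emit the same policy under all three names:

```python
# Sketch: emit the same policy under the standard header name and the
# older vendor-prefixed names mentioned above, so older Firefox
# (X-Content-Security-Policy) and older Chrome (X-WebKit-CSP) see it too.
def csp_headers(policy, report_only=False):
    suffix = "-Report-Only" if report_only else ""
    return {
        name + suffix: policy
        for name in ("Content-Security-Policy",
                     "X-Content-Security-Policy",
                     "X-WebKit-CSP")
    }

headers = csp_headers("default-src 'self'; report-uri /csp-report",
                      report_only=True)
```

Browsers ignore header names they don't recognize, so the duplicates cost only a little response size.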
And what the proxy will do, if you connect to it, is dynamically insert a content security policy report-only header with the policy you've specified. In addition, the proxy will capture any content security policy violations the browser sends back to the proxy endpoint. The browser tool is basically a Selenium-powered instance of Firefox, and it essentially allows you to create unit tests for content security policies that you've deployed to specific pages on your website. And finally, the parser tool, which is fairly self-explanatory: what that does is it takes...

Hi. So, have you attended any talks? Um, yeah, at past DEF CONs, yes. Yeah, okay. So we're here to actually help you out. This is your first time speaking, right? Yeah. How's he doing? As many of you know, we have a little tradition here, which our speaker does not seem to know about, where all first-time speakers must do shots. We also have some other first-time attendees to DEF CON. There you are. Thank you, sir. All right. Congratulations. Cheers. Oh, my God. Wait a second. And where's Heather? Heather, raise your hand. Yep, it's Heather's first time at DEF CON too. And now back to our regularly scheduled... oh shit, your time's up. Sorry, folks, no demo. Just kidding.

So here's a little demo of the CSP proxy at work. I'm browsing to this one website, www.etsy.com; you may have heard of it before. Wow, that really hit me hard. I'm just going to the console to demonstrate that there are no tricks up my sleeve: no content security policy here. Going to the network tab, just showing you that the GET request does not actually have a content security policy specified in the response. Scrolling back down, you can see no tricks here. So in a second, when this really slow video demo finishes, I'm going to change my proxy settings to use a proxy on localhost port 8080.
If I'm talking a little slower, it's because I just had a lot of alcohol, and now more alcohol. Yeah. So I'm starting up the proxy and specifying a host of www.etsy.com, and now I'm going to reload the page now that the browser is using the proxy. You can see here, from loading the page, that it's seeing some violations. If you look at the initial GET request, we can see that a content security policy report-only header is in the first response from the server. And we can see here a content security policy violation being sent back to the proxy, and in addition, a number of violations being logged on the console. So that's essentially how the proxy tool works. And if you actually look in the log file, all it is, like I said before, no lies, I wouldn't do that to you guys, is simply the JSON blobs of the content security policy violations that the proxy has logged to disk.

Now, this next video is demoing the parser and browser tools. I'm going to load up the proxy again, this time on port 8090, and fire off the Selenium-powered browser tool. I've specified a number of URIs, HTTP and HTTPS links, for the browser to visit. And since I've told the browser to use my proxy on port 8090, it's going to connect directly to the proxy and, as you can see in the background, send content security policy violations straight to it. You can see here I'm browsing a couple of URIs, first the front page of Etsy, now some HTTPS link, and the proxy is chugging away happily because it just logs all the content security policy violations right off the bat. I'm going to jump ahead. So I implemented this using Selenium, and if you really wanted to, you could modify it to run the whole thing headless, so you don't actually have to see the browser running.
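To sketch what a parser tool like this might do with those logged blobs (the one-JSON-blob-per-line log layout and the whitelist-by-origin rule here are my assumptions for illustration, not necessarily how the released tool works):

```python
import json
from collections import defaultdict

# Sketch: fold a log of CSP violation blobs (one JSON object per line)
# into a policy that whitelists each blocked origin under the directive
# it violated. Report field names follow the CSP 1.0 format.
def policy_from_log(lines):
    sources = defaultdict(set)
    for line in lines:
        report = json.loads(line).get("csp-report", {})
        directive = (report.get("violated-directive") or "").split(" ")[0]
        blocked = report.get("blocked-uri") or ""
        if directive and blocked.startswith("http"):
            origin = "/".join(blocked.split("/")[:3])  # scheme://host
            sources[directive].add(origin)
    return "; ".join(
        "%s 'self' %s" % (name, " ".join(sorted(origins)))
        for name, origins in sorted(sources.items())
    )

log = [json.dumps({"csp-report": {
    "violated-directive": "script-src 'self'",
    "blocked-uri": "http://cdn.example.net/app.js",
}})]
generated = policy_from_log(log)
```

The output is a starting-point policy that a human still has to review; blindly whitelisting every reported origin would also whitelist any attacker-controlled ones.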
So now we've shut down the proxy after it's hit all the URIs we specified, which is excellent. Now we can go about the process of making a content security policy using the parser tool, and bam, we've just created a content security policy for our website.

So yeah, if you want to get CSP Tools, it's available on GitHub at the following URL. Feel free to issue pull requests if you find bugs in my implementation. Also, hit me up afterwards in the Q&A lounge, or on Twitter, which again is @Kenesan, if you'd like to view a copy of these slides. I'd also like to give a huge shout-out to Kaizong for helping me tremendously with the initial implementation of CSP, and a general shout-out to the Etsy security team for being tremendously supportive of my efforts in implementing content security policy. Oh my gosh. Outside is the appropriate place to be asking questions, at least from what I've been told. Thank you.