kind of nice and it's too warm. You guys, just a little bit less warm and it'd be great. This is the problem I have with San Francisco, just a little bit less cold and it'd be great. So I'm just gonna keep going back and forth, I guess. Anyway, I'm here to talk about security. So first, let me introduce myself. I'm André Arko. That's my picture on the internet. I'm also "indirect" on basically all the internet things. I work at Cloud City Development. We do mostly Rails contract work and have a bunch of devs who do cool stuff. I specifically do pairing and team training with teams doing Rails development. If that's the kind of thing your company's interested in, talk to me later, it's cool. The other thing I work on is called Ruby Together. Actually, one of the things that Winston and Alan did as organizers of this conference is that everybody's goodie bags have Ruby Together stickers in them. Ruby Together is a non-profit company that I started specifically to make sure that all of these tools that every Rubyist uses every day keep working. Bundler and RubyGems and RubyGems.org and all of those things that basically all of us sit down and use any time we're doing work have spent years and years just getting maintained by people in their spare time. I don't know how much all of you know about the history of those tools, but it's basically been people who are like, "man, it would be cool if...", and then spending their weekends and their nights improving things. And that means that when people don't have weekends and nights, it goes totally unmaintained. And sometimes that means that there are security issues, and sometimes that means that RubyGems.org goes down, and sometimes that means that Bundler doesn't work with the newest version of RubyGems, because no one had time to make sure that those things kept happening.
So Ruby Together is a trade association for Ruby developers and Ruby companies that funds developers to make sure that all those things keep working. So right now, Ruby Together is paying for maintenance work on Bundler to make sure that Bundler keeps working with new versions of RubyGems and new versions of Ruby as they come out. And it's paying for maintenance on RubyGems.org to make sure that the servers get security patches applied and it's not an easy target for hackers. We would love to do more, including additional development work, adding new features, making things faster and more awesome. We'd even love to eventually pay people to improve Rails and even Ruby, but we need all of your help. So check out Ruby Together. So far, companies that have joined Ruby Together include Stripe, Engine Yard, Cloud City, where I work, and Bleacher Report. We'd love to have you and your company help us with that as well. So back to security. Security, unfortunately, is really hard. It would be nice if, when things were hard, we could just go off and do something easy. I find shopping much easier than security. I recognize that this may be different for individual people, but I'm drawing from my own life experience. So I get sad when I have to deal with security instead of going shopping. So the motivation for me to give this talk is actually now maybe a couple years old. And it was when there was a huge string of security releases. Depending on how long each of you has been programming Ruby, maybe you remember this, maybe you kind of glossed over it, maybe this was your nightmare for multiple months in a row. So this thing happened where Ruby went through a bunch of security releases in a few weeks. It's really not normal to have a Ruby release more than maybe two or three times a year, and there were three releases in like a month. That's really a lot of releases. It was actually so many releases that the release manager apologized for releasing too much.
But it would have been worse to not release, because the releases contained fixes for critical security problems. So everything's terrible. In the same time period, there was an amazing, record number of Rails security releases. Look at that, that's amazing. How did that even happen? That's a really huge number of releases. So while that was happening, and I was talking to other Ruby developers, I would say, oh, there was the security release because of this security problem. It's really important, you should update so that all of your stuff isn't easily hackable by anyone who wants to hack it. And a really common question that I suddenly got when I started talking about security issues was: wait, okay, a security issue means that there's some CVE number that goes with it, but what does that mean? So a CVE does mean a security issue, but it actually also has additional meaning. CVE stands for Common Vulnerabilities and Exposures, and it's a number issued by an organization that tracks security issues for all software. It's not just Ruby, it's not just Python, it's not just Oracle; it's security issues with all of the software in the world. The organization that issues the numbers used to track these security issues is called the MITRE Corporation. And lots of organizations, including huge corporations and even governments, including the US government, have agreed to use these numbers issued by MITRE as the way to tell that you're talking about the same security vulnerability. Because when you say, as you just saw, "did you see that big security vulnerability in Rails yesterday?", you could be talking about three different things. So having the CVE number is a good way to tell them apart. The MITRE Corporation assigns blocks of numbers to big software companies like Oracle or Apple or Microsoft, Red Hat, other companies.
Those companies, then, when they encounter specific security issues, use one of the numbers that they already have in a block to say, okay, this security issue has this specific number, and now we're just gonna use that number to talk about this problem from now on. The Ruby community has been getting help with security issues from Red Hat. The Red Hat security operations group has mostly been the source of CVEs for Ruby and for software written in Ruby, like Rails and Bundler and other gems. Both MITRE and the US government's National Institute of Standards and Technology have websites that provide a complete list of all the CVEs that have ever been issued, so you can go look them up. You can see what the problem was, you can see what software the problem was with, and you can see if there are workarounds, or if there's a version of the software that fixes the problem, or if there's nothing that can be done and you're just screwed. All of those things are there on those websites. So once I've explained that part, and I'm telling people that these security issues could cause really severe problems with their application because there are horrible people out there who would be really happy to take advantage of these security problems, the most common response that I get is: but Matz is nice, and we're nice, so why is security a big deal? Come on, everybody's just great, right? So unfortunately, in that time period where Ruby and Rails both had tons and tons of security issues discovered at the same time, there were so many security issues in such a small period of time that security researchers, many of whom are not Rubyists and many of whom are not nice, said, wow, if there are that many bugs, there must be a lot more other bugs that are pretty easy to find too.
And so that's ultimately the reason why it's really important to start talking about this stuff and thinking about it now: because, unfortunately, for the last year and a half or two years, the Ruby community has been kind of a bigger, juicier-looking target than other software and other communities, just because there's this well-known history of "wow, that was a whole lot of problems so fast, maybe there's some more problems, I should go check them out." And so it's actually really important that we start paying attention to this, because it's not like those security researchers are gonna go away satisfied that there are no bugs in our software. There will be bugs in our software, and now they're paying attention and looking for them. So as you have probably noticed, there are some other people in the Ruby community who maybe aren't as nice. But before all of these security issues happened, Rails was actually the only project that had any serious experience dealing with security issues at all. Rails was part of this big flood, but before it happened, Rails had already established procedures for dealing with security issues, and they had a security team, and they had a security reporting address, and they had a bunch of stuff that meant that they were actually able to deal with this flood of security issues when it came up. And so that means that the rest of the Ruby community can definitely learn from the things that the Rails team has already figured out as a result of having to be on top of that stuff just by being a bigger, more attractive target to security researchers. The other reason why, for a very long time, Rails was the only gem that had any experience dealing with security issues is that for a really long time, Rails was the only gem that really was worth attacking. When Rails was the only thing that everyone used, there wasn't really anything else that mattered to attack.
But starting with Rails 3, one of the big changes is that things broke down into smaller and smaller pieces: you can use only some of those pieces, or you can swap some of those pieces out for other pieces. Ultimately, the combination of Rails 3 becoming modular and Bundler providing the ability to add a new gem to your project by copying and pasting one line of code and then typing bundle means that now we have hundreds of chances for security issues in every project, hooray. So I actually went back and tracked an eight-week period just after the Ruby and Rails issues that I showed you, and I counted up which gems had security releases with fixes for critical security issues in them. So this is just eight weeks. RubyGems, yes, RubyGems had a security issue. It was in the documentation generator; there was an XSS attack. Bundler had a security issue. JSON had a security issue. REXML had a security issue. Rack had a security issue. Arrow had a security issue. ActiveRecord had a security issue. ActionPack had a security issue. ActiveSupport had a security issue. RDoc had a security issue. Yup, that was eight weeks. Unless you spend a lot of time running bundle outdated, chances are good that you do not actually make sure that RDoc is up to date in case there's a security issue. So I am here to talk to you about what we should do about that problem. Just keeping up with the super critical security releases is sometimes really hard, right? Like, Ruby has a critical security release, and you're like, oh, but something's slightly different and my application doesn't quite work seamlessly, and that means we have to upgrade production, and that means that the other developers also have to upgrade their development environments, and maybe we should just do this a little later... and then you have a critical security vulnerability that everyone knows about in production on your servers, which is not good.
So how can we handle the problems that we face when other people's code has security issues? Well, it's really frustrating, right? Updating is a pain; security updates are important, but it's not always really clear what the downside of ignoring them is. Again, when I talk to Ruby developers about security updates, they're like, oh yeah, that's important, security's important, I know security's important. We'll do that when we have some free time, after this important thing, and then that important thing, and then the other important thing. Well, one day we'll do it. It's also really hard to sell updating to bosses sometimes. You're like, well, we need to spend some time to do this update, and the boss is like, oh, okay, that won't make your shipping schedule slip at all, right? You're gonna be done with everything at exactly the same time after you upgrade? And you're like, yeah, yeah, maybe. So the way that I have found is most useful to think about updating is that it's a lot like insurance. Businesses are actually really fond of insurance. If something really disastrous happens, the business is not instantaneously destroyed by whatever the problem was. They talk to the insurance company, and the insurance company says, wow, that was really unlikely. Good thing you've been paying for insurance. We've got you covered. Updating is basically like that. It's a small amount of developer time that you pay on an ongoing basis when there are new updates. But without that small payment over time, when something goes wrong, it goes really wrong, because your system is full of security issues that anyone who pays attention already knows how to use to exploit and hack your stuff, and hack your users' stuff, and hack their stuff, and it just gets worse from there. As you can maybe imagine, this is not really great for anyone, honestly, but for you as a developer in particular. And just one security breach can be a really big deal.
Ask me: I worked on RubyGems.org when there was a security breach, and we had to throw away the servers and build new servers from scratch. And download every single gem and check to make sure that no one had replaced any of them, and build new servers. And a week later, RubyGems.org was back up, and everyone was really relieved, but it was not good. So, assuming that you're not doing this insurance-level maintenance of upgrading when there are critical security upgrades available, when you have a problem, it takes down your site for an amount of time that is more than five minutes and less than months, I guess, but it's not really clear how long it's gonna take, right? Like, there was a company whose entire service was offering backups: give us some money and we'll hold your backups. And one of their not-even-really-important servers hadn't had security updates applied, and it got hacked. And they had been using the same AWS key for every single thing that they did on AWS, on every machine, and their not-important server had a copy of that key. And the people who hacked that not-important server, the one they hadn't bothered to upgrade because it wasn't important, used the AWS key to delete every single piece of data that the entire company had ever put onto AWS and make sure that it couldn't be recovered, because it was the admin key for the account. So that entire business literally replaced their website with "we have gone out of business, sorry", and they were out of business the next day. That is actually a real thing that happened. So security issues are somewhere between bad and your-business-is-now-over, go find a new job.
Even if it's not that bad, security issues matter, especially in places where there are laws around disclosing being hacked. In the US it's not super even, it's a state-by-state thing, but in California, which matters for a lot of tech companies because this law applies to them if they have some corporate existence in California, you actually have to inform every single person whose information might have been stolen when you get hacked. That means lawyers, that means having to figure out how to contact hundreds or thousands or hundreds of thousands of people, and "a gigantic problem" is a nice way to put it. So updating is hard work. I totally get that. I experience the hard work myself, it sucks. But updating is worth it, because updating, like insurance, turns a gigantic disaster into something that is not actually a complete disaster, and then you can do things like sleep at night, because you don't have to find a new job, which is great. So that covers security issues with other people's code. Now let's talk about what happens when you find security issues that aren't actually causing critical security updates yet. So, we're developers, we use other people's code. Chances are really high that while you use other people's code, at some point you will find a problem with other people's code. Anybody who's never found a problem with other people's code, raise their hand. Okay, that's what I thought. So if the problem that you find with someone else's code is maybe a security problem, what should you do about it? Well, fortunately, we don't have to try to figure that out. Software developers and security researchers and companies that sell software have, for the last several decades, been engaged in a kind of low-level argument slash war about how security issues should be reported, and they've kind of hashed out something that is considered the best practice for a software developer who finds a security issue.
So that's what I'm gonna talk about. It's called responsible disclosure. As I said, it's kind of worked out from years of trying different options. It is not super great, except everything else is worse. And it's the best thing that we've come up with yet, because although everyone involved in the situation usually ends up unhappy, everyone involved in the situation hopefully does not end up screwed over. So: "responsible" and "disclosure". I'll talk about disclosure first. Companies hate disclosure, right? If you announce that your software has a problem and isn't perfect, companies are like, no, no, that's not okay. We can't tell people that we make not-perfect software. And companies are really motivated to make sure that problems with their products aren't publicized. If they can spend money, or spend money on lawyers to sue you, to make sure that no one will find out about the problem, it's actually often worth it to them. And then there's the "responsible" part. Security researchers really hate this part. Responsible disclosure means that you don't say, hey, I found the security issue, aren't I so cool?, and put it on the internet. It means that you go talk to the company and you say, hey, I found a security issue. There's a generally accepted amount of time, depending on how bad the issue is and what company it is, usually between 30 and 90 days. And you say, hey, I'm gonna give you this window. At the end of this window, if you haven't made an agreement with me about when you're gonna have this fixed, then I'll make it public. This is the stuff that security researchers hate, because it means that they can't go talk about how awesome they are. And that's basically why security researchers do security research, as far as I've ever been able to tell. So everyone ends up unhappy, but hopefully nobody ends up screwed over by it. And one way for this to possibly end up with no one screwed over is rewards.
Some companies are trying to change the equation of security researchers being really unhappy by saying, hey, we'll pay you money to reward you for finding this, and then you can talk about it, but we'll only give you the money if you don't talk about it until after we've fixed it. Which is cool; maybe everyone ends up happy, that'd be great. So here are some examples of what companies do. Google, for example, has a very clear application security researcher reward policy. They rank how big a security issue is and then they issue a reward somewhere between $100 and $20,000. Last time I checked, the max was $20,000 per issue. But I mean, don't let that stop you. There was one guy who reported three bugs in Chrome at one time and got a payout of $35,000. Google has actually paid out a lot so far this year, or that might be last year, but yeah, lots of money going to security researchers. This is actually a thing that is totally working out for Google. Other companies explicitly provide responsible disclosure guidelines, but what is this? This is, oh yeah, this is the payout schedule. I feel like someone thought that they were just so clever when they set the low end of the scale to $1,337. Anyway, there are other companies that have really explicit agreements about responsible disclosure, but then state that researchers will not be given any money. I'm not sure why companies would decide to do that, but they definitely exist. Other big-company examples who are willing to pay out include Facebook. Facebook has kind of a similar bounty setup to Google. They say if it's an actual security bug, they will give you money. It will be at least $500. The highest payout that I know of that Facebook has issued was, I think, like 40 or $50,000 for a single bug. It was a really bad bug. There are other systems that are kind of semi-alternative: GitHub has set up a bounty slash Game Center leaderboard for security issues. You can earn points for finding bugs in GitHub.
They give you both points and rank you on a leaderboard, and give you t-shirts, and give you money. It works for GitHub. Oh yeah, they don't explicitly say whether they'll give you money or not, but they definitely have paid a lot of money to people who have found bugs in GitHub. Where else was I going? Oh yeah, Engine Yard is one of those companies like I was talking about a minute ago: they have a policy, and they explicitly say $0, ever. So to sum up: if you find a security issue, and there's a company behind the code, definitely talk to the company whose code you found the security issue in. Try to make sure that they know what's going on. So, right. If you find a bug, back to your work as a developer: once you have the bug, stop, think for a second. There are two questions that you can ask yourself that are really straightforward, that let you know how to handle the bug that you found. The fact that it's a bug means that something isn't working the way it's supposed to, but the thing that makes it a security issue or not is: first, can you get to something that doesn't belong to you? Can you see someone else's information? Can you change someone else's information? Can you cause problems for people who aren't you? And second, can you disable something for people who aren't you? This is an entire class of attacks that mostly get called denial of service, even though they're not like the distributed denial of service attacks that happen to GitHub every couple of months. They're an attack like when Symbols used to not be garbage collected, and you could just send requests to a Rails server in a loop until the Rails server died and locked up and you couldn't do anything. So, if the answer to either one of those two questions was yes, then you should go talk to whoever owns that software before you go out onto the internet and say, hey guys, I found this problem. Everybody, you should attack it. That's what happens, right? Even if you don't actually say that.
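The symbol-exhaustion attack mentioned a moment ago can be sketched in a few lines. This is a generic illustration, not code from any real app; the parameter name is made up, and on Ruby 2.2 and later, Symbols are garbage collected, so this particular hole is closed:

```ruby
# Before Ruby 2.2, Symbols were never garbage collected, so any code path
# that called .to_sym on attacker-controlled strings leaked memory until
# the process fell over. A hypothetical vulnerable lookup:
def unsafe_key(params)
  params["sort"].to_sym # every unique string becomes a permanent Symbol
end

# An attacker just loops with fresh strings:
#   unsafe_key("sort" => "attack-#{rand}") # repeated forever

# The safe pattern: only symbolize values from a fixed whitelist.
ALLOWED_KEYS = %w[name email created_at].freeze

def safe_key(params)
  value = params["sort"]
  ALLOWED_KEYS.include?(value) ? value.to_sym : nil
end
```

With the whitelist, arbitrary input can never mint a new Symbol, so the attacker's loop accomplishes nothing.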
There are people who pay attention to stuff on the internet looking for new problems that they can use to attack things. So, contact the author. Even if it's a false alarm, the author will be able to say, hey, that's not actually a security issue, but thanks for letting me know, and then you can go talk about it if you want. But if it is a genuine security issue, it's a really good thing that you checked in with them before you went and announced it publicly. It means that they have a chance to fix it, and it means that everyone else can potentially have a chance to upgrade to the version that's fixed before there's a script that you can download that lets you auto-hack anything with that flaw. So, when you go to report: if there is a company behind it, definitely go talk to the company. Most companies, as we just saw, have specific security reporting guidelines. If it's not a company, check the README, check to see if there's a security policy. If there's not, check to see if there's at least an email in the gemspec of the gem that you can send an email to. Sadly, sometimes there's not even an email in the gemspec, but find the repo on GitHub and see if there's an email on the author's account on GitHub; just try to get in contact with them somehow. Definitely try to contact the author if you can. If you can't contact the author, or if you try to contact the author and you don't get any response at all even after waiting for a couple of days, then you have to kind of make a decision: what are you gonna do about it? I'll talk about that in a second, but ultimately, when you're talking to developers about security issues, please try to remember that they are also developers, and you could be on the other end of this at some point with your own code. Have empathy for the other person. It is not their primary job in life to fix the security problem that you found.
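Checking whether a gem ships a contact address in its gemspec can be done from Ruby itself. The gem name and email below are made up for illustration; in practice you would load a real installed gem's spec with Gem::Specification.find_by_name:

```ruby
require "rubygems"

# Pull the first usable contact address out of a loaded gemspec.
# spec.email may be a single string or an array of strings.
def contact_email(spec)
  Array(spec.email).compact.reject(&:empty?).first
end

# Building a spec in memory just to show which field to look at; a real
# gem's spec would come from Gem::Specification.find_by_name("somegem").
spec = Gem::Specification.new do |s|
  s.name    = "example-gem"            # hypothetical gem
  s.version = "1.0.0"
  s.email   = "maintainer@example.com" # the field worth checking
end
```

If contact_email returns nil, that's when you fall back to the README, the repo, or the author's GitHub profile.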
Try to figure out a solution that means that they can fix their problem and that you can get credit for it, without anyone having to be vulnerable to attacks. Most authors, most of the time, will say, hey, I really appreciate that you let me know about this, I have figured out how to fix it, and I'm going to announce the fix; let's announce the bug and the fix at the same time, and then everyone can upgrade and it should be fine. Fortunately, that is actually the most common thing that happens. But in the worst case, you won't be able to get ahold of the author. The author will have disappeared. You can just stop using that gem. I've done that a couple of times. If there's some other option that you can use that does have a maintainer, or that you can take over maintaining instead, do that; I've done that too. But you could also just fork it and fix it yourself and be like, hey, I fixed the security problem in the version that you have, and then start using that. At least then you don't have the problem. So that, in I guess not that short of a summary, is responsible disclosure. And last, I want to talk about what happens when you have a security problem in one of your gems. So, maybe show of hands: how many of you have ever written a gem that is published on RubyGems? That's a pretty good number of hands. How many of you have written Ruby code that's on GitHub that other people can use? Even more hands. Okay, cool. So this actually applies to all of you. Even if it's not a gem, your code is a security vulnerability for someone else waiting to happen. Hooray. You know, unless your code is perfect, in which case I'm sure everything's fine. So, there are three kinds of people who find problems with your code that you'll have to deal with. The easy one is someone like you. They say, hey, I found this problem. Maybe we can get it fixed, and then announce that it was a problem, and you can release it. That's totally the easy case, right?
You fix it, you release it, you announce there was a problem, but now it's fixed. Everyone's happy. There's a medium-hard one, where the problem is already out there and people are maybe already figuring out how to use it to do bad things. In that case, if you can announce it without making it even more dangerous for the people who already have the problem, do it right away, but fix it. That's basically the answer always, but in this case it's even more important, because people might already be having problems as a result; then just announce it, and basically everything's fine. The hard one is when it's a security researcher who's like, I'm amazing, you better fix this, I wanna talk about how cool I am. And if you run into a person like this, I'm really sorry. There are definitely people in this room who've had to deal with this kind of security researcher, possibly many times; Aaron. And so it's really important to respond soon enough that they know that you heard them, which usually means within 48 hours, 48 business hours. It's really important to tell them what you plan to do, and it's really important to let them know that you're still working on it if you're still working on it. Security researchers will usually wait until they haven't heard from you for two or three days and then say, oh, you haven't told me anything, I guess you're not fixing it, I better go tell everyone in public now. That totally sucks; be careful. Ultimately, if you own code, make it as easy as possible for people to let you know if there are problems with it, so that you can fix it. Put your email, or an email that people who use the gem can use to contact you, in the gemspec; put it on GitHub. If you're on a team, have a security address for that team, and write a security disclosure policy, even if it's just one paragraph of "please let us know, we will thank you, we will fix it." Yeah, that's basically the plan.
So I created a mailing list specifically for gems that have security issues. There's a Rails-only mailing list for security issues, but there wasn't a mailing list for Ruby and Rails and Bundler and gem-whatever until I created this one. It's a Google Group, you can subscribe to it, that's the address, and the Rails team and lots of other gems send all their security issues there, so you don't have to track down individual security announcements for gems; you can just subscribe to that one. Ultimately, I hope that if we're all paying attention, security will become just kind of a background thing that we all know how to deal with, and that's when we can go shopping. Thanks. Do we have any questions for André? Thanks for the great talk. So our team uses things like Brakeman or rails_best_practices. What do you think about them? And also, are there any automated ways of running these gems and then automatically updating them? Do you think that's a good practice? Some automation tools for security patches, thanks. Sure. So definitely, I think Brakeman is a super valuable tool, because it watches for things that you can do in Rails that create security holes. It's kind of unfortunate that it's so easy to potentially create security holes that you can find them by running a regular expression against the code base. But Brakeman's good at finding them, and Code Climate integrates Brakeman if you use that, or you can run Brakeman itself against your code base. That definitely protects you against the set of really obvious things that Brakeman knows about. Definitely keep in mind that it's not actually even hard to write a security issue that Brakeman doesn't know exists. It's not a fix for everything. It just protects you against some of the well-known things that people have done a lot in the past.
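The kind of "really obvious thing" a static scanner in the Brakeman vein looks for can be as simple as string interpolation into SQL. This is a generic sketch, not Rails code, and the table and parameter names are invented:

```ruby
# Interpolating user input straight into SQL is the classic injection
# hole that static scanners flag:
def unsafe_sql(name)
  "SELECT * FROM users WHERE name = '#{name}'"
end

# A crafted value escapes the quoting and changes the query's meaning:
payload = "x' OR '1'='1"

# The fix is to keep the query and the data separate, for example with
# placeholders that the database driver fills in safely:
def safe_sql(name)
  ["SELECT * FROM users WHERE name = ?", name]
end
```

The unsafe version turns the payload into a query that matches every row; the safe version hands the raw string to the driver as data, so no quoting trick can change the query's structure.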
As for updating things, it is possible to get a list of outdated, er, I guess a list of updates to all of the gems that you have in an application just by running the bundle outdated command. That will at least tell you if you're behind. It's harder to know if there are critical security updates in those changes. There actually isn't any super good, easy-to-read collection of "here's what changed". Ultimately, it just comes down to an ongoing, actual amount of work, and doing things like subscribing to the security list, so that if there is a big critical security issue, you at least know about it. And then just, over time, look at one outdated gem, spend a few minutes on it. Sometime later, come back and look at one more outdated gem and be like, well, do I need this update? Is it gonna break things? Does it work? Do the tests pass if I upgrade it? Cool, let's do it. That's pretty much the best that we know how to do right now. Any other questions? Over here. Yeah, so actually I really love Ruby gems. I think they really help projects. I write gems and maintain some of them myself. But when we're on a project, we can easily see that big, long-running projects have easily 50 to 100 gems. As the maintainer of Bundler, do you think it's a problem? What's your take on it? I guess there are a lot of applications that use 50 or 100 or 200 gems, and I've seen applications that use 400 gems, and as the author of Bundler, I'm sorry. It's trading one problem for another, right? You were able to spend a lot less time doing things by using those gems, but using those gems means that you have to spend more time doing some things, like upgrading those gems when there are security problems. Unfortunately, we have yet to discover whatever it is that makes the computer actually do it for us. I'm waiting for Matz to ship that in Ruby 3.0. But in the meantime, decide if it's worth the trade-off.
If what the gem does for you is give you a single function that you could write in five lines, it's probably not actually worth keeping the gem; write those five lines yourself, and then you don't have to pay attention to security updates for it. If the gem implements an entire login system and has active maintainers, that's probably worth it; you don't have to write a bunch of code. It's a trade-off. I know you mentioned that there isn't really a good place online to find out about it. There's something called the Ruby Advisory Database, which is on GitHub, that is trying to track all those. Are there any plans for maybe a CLI in Bundler, like bundle outdated security or something? I don't have any explicit plans to add bundle outdated security, but that actually is a pretty good idea. If somebody wants to submit a pull request, let me know. I would be interested in bundle outdated security. Yeah, for sure. Great, thank you so much, André. Let's thank our speakers again.