So, my goal here today is to get you thinking systemically about security. Not thinking about the what: cryptography and hashing and password security and template auto-escaping. I'm not talking about the technical details, although Kelsey gave a talk earlier today that covers a lot of those technical details in depth, so if you want to know more about that, watch the video. What I'm going to talk about is: how does your organization think about security? What is your security program? Let me get a quick reading. How many people here work for an organization that you would say has an established security program? Wow. Okay, that's awesome. That's actually more than I thought. And how many of you do some form of security in your day job but don't really understand exactly how it fits into the larger picture of your organization's security posture? All right. Okay, cool. So, I'm speaking to the people who raised their hand the second time, and I'm speaking to the people who didn't raise their hands at all, who don't think that they have a part, or don't understand their part, in an organization's security. Really, I want to answer a simple question here, which is: what would a minimum viable security program look like? I work for Heroku, which is part of Salesforce, right? Salesforce, 15,000 employees. We've got a security program. We've got lots of security programs. I want to answer: what does this look like if you're four people, if you're 12 people, if you're a small development team within a larger context that needs to sort of get its act together? And remember, when we talk about minimum viable products, we're not talking about just one part, right? I love this image because it describes exactly how you should think about what minimum viable means. It doesn't mean let's just build one part of it; it means let's build something that satisfies the whole need.
So in this example, I want to tell you how to build a skateboard. So the conceit of this talk is, let's say you've got one week, you're going to sprint on this for a week, you're going to sit down with your coworkers and at the end of that week you want to have an established, defined, measured, successful security program ready to be iterated on over the next five weeks, five months, five years. And that's what we're building. So here's what we'll do. Monday, we're going to develop our training program to make sure that developers understand what building secure software means. Tuesday, we're going to develop an SDL, which is a fancy version of saying what is security here and how does it work. Wednesday, we're going to plan for when the shit hits the fan, excuse my French. Thursday, we're going to talk about what a lot of people think of as sort of the boring parts of security, governance, risk and compliance, formal security programs. And Friday, you're going to tell the world that you've just done some awesome work. Let's dive in. So train your staff. So security is a shared responsibility. A system is only as strong as its weakest link, and this means we need to strengthen all of the links. Every single person at your organization is in some sense accountable for the security of your organization, whether they are a developer who needs to write code that protects against SQL injection, or an admin who needs to not fall prey to a phishing attack and share corporate calendars, or a janitor who needs to keep the doors locked, or a manager who needs to not let someone approve a change that would be a bad idea for the company's overall risk profile. These are all actions that people need to take to ensure that we're doing our best job protecting our organization and, most importantly, our customers. So you really need to have holistic security awareness training for everyone at the company. This isn't optional, and I'll talk a bit about why in a minute. 
So what I suggest you do is focus on some very basic security hygiene practices. Good passwords are the easy one. Luckily there are several good password management utilities, LastPass and 1Password. They are not hard to use. I hesitated a minute there: LastPass is a little hard to use, but it has some good features, so it's kind of worth it. You can make a decision there about UX versus features and decide which one you like. Training your staff to use a password manager will dramatically level up your organization's security posture. Password reuse, that is, using the same password on one site and another, so that when site A gets compromised attackers can use it on site B, is an incredibly common exploit vector. There was an interesting breach a number of years ago of a company, what was their name? They were a MongoDB-as-a-service provider, and the way that they were compromised was that a password used by a staff member was also used on Adobe's website, and Adobe was compromised. Then, with the Mongo provider compromised, a shared password used by a user of that service was used to compromise yet another service, a continuous integration service. You have this chain of the attacker moving from platform to platform, harvesting passwords and trying them across other systems. Training out password reuse through the use of technology is a really good way to cut that down. Multi-factor auth is a thing, it works, and you can train your staff how to use it. Basic training in customer privacy procedures is something that you should probably be spending some time writing down and helping your staff understand. This will differ from organization to organization depending on who your customers are and what privacy means to them. This is worth the investment. Let's talk a bit about phishing, because that's the biggest threat you'll probably face to this sort of general population of your staff.
Phishing. This is from Verizon's yearly Data Breach Investigations Report, where they compile data on security breaches from hundreds of organizations and do a bunch of analysis and grouping of the types of vulnerabilities. They find that more than two-thirds of incidents that follow this pattern of trying to steal data feature phishing. Most attacks start with either a targeted or an untargeted phishing email. What's really scary, if you're in the security field, is that almost a quarter of recipients open phishing messages and about 10% of them click on attachments. Which means that just 10 emails gives you better than a 90% chance of success. That's pretty scary. That means as an attacker, I only need to send your company 10 emails to have a fairly good chance of successfully phishing someone. What can we do about this? This is a big threat, and it's really hard to address because it's people. There are some technology tools: good email filtering helps, Gmail's great. Being able to store and archive all of your organization's email, so that you can determine the scope of a phishing attack if one occurs, is another great technological tool. But really, training is the main thing you have to do here. The same study, the DBIR, also found that the best early warning system for phishing attacks is your own staff. With a properly trained staff, the average time to report a phishing attack was 20 minutes. So if you have a staff that knows what phishing is and understands how to report it to you, how to tell you that something's up, you have a better than average chance of catching an attack early on. You can do this yourself. You can also pay for it. PhishMe is really good. They'll run phishing attacks against your staff for you. You give them your staff email lists and they run targeted phishing attacks of different types. And anyone who falls for one gets taken to a special customized training specifically for that style of phishing attack.
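To make the arithmetic behind that claim concrete, here's a quick back-of-the-envelope sketch. It's a simple binomial calculation using the DBIR's roughly 23% open rate; the exact rates vary by year and by campaign, so treat the numbers as illustrative:

```python
def phish_success_probability(open_rate: float, n_emails: int) -> float:
    """Chance that at least one of n_emails gets opened.

    Assumes each recipient opens independently with the same
    probability (a simplification, but good enough for a gut check).
    """
    return 1 - (1 - open_rate) ** n_emails

# With the DBIR's ~23% open rate, 10 emails already gives
# better than 90% odds that someone opens one:
print(round(phish_success_probability(0.23, 10), 2))  # ~0.93
```

The same function shows why training pays off quickly: cut the open rate to 5% and the ten-email success chance drops to roughly 40%.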
So it's a good service. I can recommend them if throwing money at the problem is something you can do. The other part of this is writing code. We're at a programming conference, so I'd be remiss if I didn't talk about code a little bit. Who do you think should be responsible for writing secure code? Whose job is it to write secure code? I heard someone say everyone. And it's the same question as: who's responsible for writing tests? We used to have this idea about software testing that you had test engineers and they would write test code. And then the other engineers would write the code code, and then they'd, I don't know, meet with pistols at dawn or something. Safe to say that didn't work. One of the main innovations of test-driven development isn't necessarily the acronyms and the styles and the functions, but just the idea that testing and coding are this inextricably linked cycle. And it's the same way with secure development. Parisa Tabriz, who runs Chrome security, has said that one of the key factors in her success at Google has been to push decisions around security as far down the chain as possible. And this is kind of counterintuitive, because you might think: these are company-wide risks, we need the director of security making all the security decisions, because then we'll be really secure. It doesn't really work that way. The further you get from the hands on keyboard, the people writing the code, the less context and information you have and the less able you are to really assess a risk. So if you're, like me, someone in management, your main job should be to empower people and push those decisions as far down the stack as possible. Sure, in the middle of an incident you need command and control, right? You need top-down crisis-mode leadership in the middle of an incident. But for the day-to-day bread and butter of writing code, it's got to be average developers writing code day to day.
And so there's some good news here, which is that writing secure code is actually easy. Now, there's this idea that security is really, really hard, so we need to leave it to the experts. And I want to push back on that. Yes, there are parts of security that are hard. Cryptography is impossible. I barely understand how prime-factorization crypto works, and now we've got this elliptic curve stuff. I mean, you left me behind a long time ago. But most coding does not involve that, right? Most coding is day-to-day, fairly easy, basic security hygiene. And basic security hygiene can get you surprisingly far. When you look into breach reports, the breaches that have happened because of vulnerable software are almost always basic stuff: SQL injection, cross-site scripting, cross-site request forgery, the basics, the stuff that's in the OWASP Top 10. It's rare for a truly novel and hard security vulnerability to actually lead to a real-world compromise. So by expending a small amount of effort on basic training, you can get really, really far. And there's a lot of good resources out there. These are four of my favorites; there are probably quite a few others. OWASP maintains its top ten list of security risks, with information on how to address them, what they look like, and pointers for different languages. Mozilla's secure coding guidelines are probably the best publicly available application security guidelines. They're somewhat language-agnostic, although Mozilla writes a lot of their code in Python and Django, so they'll play well to this crowd. Microsoft's guidelines are a little more focused on compiled software, as are Apple's. So depending on what environment you're writing for and what type of software you're writing, one or more of these may make a good secure coding guide. And your company's secure coding guide could literally be: go read the OWASP Top 10. That would already put you well above average. All right, so Monday is complete.
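Since SQL injection tops that basics list, it's worth seeing just how small the fix is. Here's a minimal illustrative sketch using Python's standard sqlite3 module; the table and data are made up for the example. Django's ORM does the parameterization for you, but the underlying principle is the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_unsafe(name: str):
    # VULNERABLE: string interpolation lets input change the query.
    # With name = "' OR '1'='1", the WHERE clause matches every row.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str):
    # SAFE: a parameterized query treats the input purely as data,
    # never as SQL, no matter what characters it contains.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

injection = "' OR '1'='1"
print(len(find_user_unsafe(injection)))  # 1 -- every row leaks
print(len(find_user_safe(injection)))    # 0 -- treated as a literal name
```

The secure version is no longer or harder to write than the vulnerable one, which is the point: basic hygiene is cheap.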
You've developed a basic security awareness training program covering phishing attacks, multi-factor auth, and how to use password managers, and you've picked a secure coding guide. Maybe you've customized it a little bit. Maybe you've just taken one off the shelf and put some resources out to your staff. So Tuesday, we're gonna build an SDL. Okay, it's a buzzword, I'm sorry. SDL stands for Secure Development Lifecycle, and depending on the level of formality of your organization, this could involve lots of fancy flowcharts and diagrams that look like they're off a government slide. But really, an SDL is the answer to a very basic question: okay, we've told people how to write secure code. How do we make sure it happens? We know what the best practices are. We know that our staff is smart and wants to follow them. What is our mechanism for making sure that they do? For me, the heart of an SDL is figuring out this virtuous cycle: we take the things that we know and translate them into best practices for our organization; we follow those best practices in development; things happen, either successes or failures; we analyze them, and that builds more knowledge. How do we build this program so that we are continually feeding back in the things we learn about security as we develop software? So for me, the minimum SDL needs to answer three basic questions. When do we do security? At what point during our software development lifecycle do we think about security? Who is doing that thinking? And what does doing security even mean? You could answer these questions in several ways. I have a suggestion; this is how I've answered them. I think doing security throughout development, as much as possible, is the best way to go. We have an internal security mailing list. We have a chat channel. We use GitHub comments.
There are multiple ways at Heroku for staff to get in touch with us and ask us questions, ranging from the very simple, hey, what library do we use for OAuth again?, to, I'm designing an entirely new crypto widget and I need a lot of help. Whatever the size of the engagement, we're there. So: have experts available for questions, and build a culture where it's cool to ask that stuff and where people help each other out with it as much as possible. It's probably a good idea to have an explicit security step when you're planning and building a new product, and probably again a review just before you ship. These are hard when it comes to agile, because we don't have as many explicit design-up-front steps, and shipping might be something we do tens or hundreds of times a day. So I don't have a great answer here. This is still one of those really hard problems: figuring out how to integrate security and agile is an ongoing problem and something that's pretty tricky. But I still think you can identify moments, touch points. If you're about to launch a new feature, if someone's going to take the time to write a blog post about it, then yeah, maybe at the same time you might want to take some time to do an explicit security review. So who does this work? Again, you should be pushing security decisions down as far as possible. So we really focus on giving engineers tools and documentation and authority to make decisions, and on taking a default position of trust. Our basic assumption is that everyone is reasonably competent at their job and trying to do it well, and that the decisions they make are more informed than the decisions that we would make, so we should start from a default of trusting what staff do. If you're lucky enough to have a dedicated security team, I think the best position for that team is a consulting role, right?
You're not necessarily saying this is the architecture, or yes, you may build that, or no, you may not. You're a consultant: you're asking questions, you're answering questions, you're giving expert feedback. We have a thing we talk about on the team: we try not to say no, we try to say yes, if. Security's role shouldn't be you can't do that; it should be there are risks, how do you plan to address them? And the decision about how far up to escalate those risk decisions needs to be based on some fairly good understanding of what risk means for your organization. There is such a thing as acceptable risk, right? You're going to come up against a situation where you've got a known problem, but if you don't ship tomorrow, it's going to cost your company $40,000. And so you have to make a decision: is this security risk so bad that we need to pay the money, or do we have a plan to remediate it in a reasonable enough period of time that it's worth the risk? And the greater the risk on either side, the higher up you probably want to push that decision. A company-wide risk is a decision that probably needs to be made company-wide. We build tools. This is one that we have; we refer to it as the Twine game. It's like a choose-your-own-adventure, a point-and-click way to figure out what level of risk a particular project is going to carry, and to self-serve the decision about whether you might want to involve our team or not. So what does doing security mean? This is the last part here. What are we talking about when we say doing security? I think checklists are the greatest invention since sliced bread. Probably better; I would give up sliced bread if I got to keep checklists. The best introduction to checklists is Atul Gawande's book, The Checklist Manifesto. Great book. He's an amazing writer, really good to read.
And one example he gives: there's a doctor at Johns Hopkins named Peter Pronovost, who was more or less the inventor of using checklists in medicine. They were having a big problem with central line infections. So he designed a checklist; I think it had five items on it. It was really simple: do you have clean drapes? Is the needle clean? Have you washed your hands? Really, really basic stuff. They gave this to all the doctors and nurses doing central lines and monitored the results. The ten-day infection rate went from 11%, one in ten patients getting infected, to zero. And in fact, they were so surprised by this result that they thought they were doing something wrong, so they measured it for another 18 months because they thought they had messed something up. All in all, in that two and a half years, there were two central line infections after the introduction of the checklist. Down from one in ten, to two over two and a half years. Gawande notes that there are three kinds of problems in the world. There are simple problems, like baking a cake. Once you know how to bake a cake, you can do it repeatedly, over and over again. You can give a recipe to someone who's never baked a cake and they can probably do an okay job of it. There are complicated problems, like sending a rocket to the moon. The recipe is much, much longer, but there's still a recipe. We know more or less what all the steps are. We get it wrong more often because it's more complicated, but we could write a checklist for sending a rocket to the moon. It would be a lot longer than the cake one, but it would be a thing. Then there are complex problems, like raising a child. There's no one way to raise a child. You can't give someone a laminated checklist on how to raise a child and expect that to work repeatedly, every time, or even any time. And the key observation is that we are besieged by simple problems.
Life is full of fairly easy things that we just don't know how to do, or that we just don't do consistently. And checklists are how we solve simple problems: we hand someone a checklist that reminds them how to do the simple thing. So these are a couple of ours. This first one is an initial project-level assessment, a self-service checklist that a developer would walk through when they're getting ready to write a new component, or reviewing one. We have a checklist for vulnerability management. That's for when we find out that there's a vulnerability in OpenSSL, just hypothetically, and have to decide what we're going to do about it. We have a lot of these; we think they're great. If you want to dive into the checklist world, there is, yes, a checklist for writing checklists that will tell you how to write a good checklist. And of course there's Gawande's book, which is pretty fantastic. So, Tuesday: you've created an SDL, you've documented your virtuous cycle. When do we do security, who does security, and what is doing security? And if you take only one idea away from this talk, please let it be checklists. They are great. They are an incredibly lightweight way of introducing something you can call a process, without the overhead and the business-y bullshit that you normally associate with words like process and policy. Okay. So, you know your staff is trained. You know how to write secure software. Now it's time to start thinking about when things go wrong. As Bob Dylan said, everybody must get owned. I think I have that right. The fact is, though, that this is unfortunately true. Bruce Schneier observed that we're starting to view breaches as a fact of life. And this is depressing, because it shows just how bad we are at our jobs and how much we need to level up here to stay ahead of the black hats. But there's a silver lining to that for people involved in security.
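As a sketch of how lightweight this can be, here's a checklist kept as plain data in Python. The vulnerability-response items below are illustrative examples I've invented for the sketch, not Heroku's actual checklist:

```python
# An invented vulnerability-response checklist, kept as plain data so
# it can live in version control next to the code it protects.
VULN_RESPONSE_CHECKLIST = [
    "Confirm the vulnerability affects a version we actually run",
    "Identify every service that uses the affected component",
    "Decide severity and response SLA",
    "Patch or mitigate, starting with internet-facing systems",
    "Rotate any credentials that may have been exposed",
    "Write up what happened and file long-term remediation tasks",
]

def run_checklist(items, done):
    """Return the items still outstanding, given the set already completed."""
    return [item for item in items if item not in done]

# After completing only the first step, five items remain:
remaining = run_checklist(VULN_RESPONSE_CHECKLIST,
                          done={VULN_RESPONSE_CHECKLIST[0]})
print(len(remaining))  # 5
```

The point isn't the code; it's that a checklist is cheap enough to write in an afternoon and still forces every step to happen.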
Because it means that more and more, we're judged on our response: our time to respond, our ability to contain an attack, our transparency, our security practices. Really interesting point, this is a tangent, but it just happened: a circuit court just ruled that the FTC has the authority to fine companies for violating security practices, under the idea that it's a deceptive trade practice to offer your customers privacy and not follow basic security practices. This is a really interesting decision, because it implies that courts and regulators are getting into the business not just of regulating PCI and HIPAA and those sorts of things, but of asking: do you hash your passwords? You don't hash your passwords? That's an accepted best practice; we're gonna fine you for not following it. And so suddenly I think we're reaching a point where companies' security practices are being critiqued, certainly in the court of public opinion, as we've seen with Sony and Ashley Madison. But I think we're shortly going to start seeing company security practices critiqued in the court of courts. And I think that's nothing but a good thing, right? I think that will drive much more adherence to the things that we already know are best practices. So what this means for us is that we need to get our house in order before anything happens. If your incident response planning starts when you get that phone call telling you that something's wrong, or that there are attackers on the network, or that there was a login from someone who left the company a year and a half ago, it's just gonna be disastrous. I know one company that was breached last year where I spoke to a person who introduced themselves as their CISO, their director of security. I found out later that they didn't have a security department.
And when the breach began, the CEO called this person and said, I'm promoting you to chief security officer. Deal with this. Yeah, don't do that. So, it's hard for me to be as prescriptive here as I've been at previous points, because I think the details of an incident response plan are going to be very specific to your organization and your risk profile and your regulatory exposure and your customer base and your product and so on. So I just want to give you some questions to think about and a framework to structure your incident response plan around. The work of writing an incident response plan is answering these questions. I break IR down into five steps. The first one is initiating a response. How does someone report a breach? How do we track incidents? Do you have a bug tracker you use? Do you have a whiteboard? Do you use Trello? Where do you track that stuff? And another good question: what happens if the thing that you use to track incidents is the thing that's been compromised? What are the roles and responsibilities during an incident? As you move into actually managing the incident, you need to understand how communication is going to happen. Who communicates? Where does it happen? Who's involved? How often do you send situation updates? An important question for people at the senior or management level is: at what point do I need to wake up the CEO, the executive team? How severe does it need to get before I bring in lawyers? Those sorts of questions. We then need to figure out what's even going wrong, how we're going to collect that information, and who's going to follow up. This can be fairly lightweight. Our preferred tool for assessment during a breach is just a Google Doc. We open a Google Doc and everyone just writes in it. And by the end of the incident, there might be 100 pages of random notes and output from firewall logs and just random shit in there.
But now we've got a nice, complete, time-stamped log of everything that we did during the response. So this doesn't need to be heavyweight stuff. But knowing in advance that that's what you're going to do saves you those five minutes of arguing about where you're going to track your work. Once we've figured out how severe a problem is, we need to know what our response SLA needs to be. The reality is that not every incident is everything's-on-fire, all-hands-on-deck, stop-the-presses. Some things are, yeah, this thing is bad, but worse would be waking up the team responsible; let's get them in in the morning to fix it. And you should have some system for determining that, so it's not a seat-of-the-pants decision. Once you fix things, a really common problem is the knee-jerk fix of the one thing that happened in that situation, even though maybe that's not the root cause, or maybe the incident exposes a lot of other long-term issues. So how are you going to ensure that any long-term remediation tasks are actually followed through on? And for people with customer notification requirements, it's important to know what your legal, your ethical, your moral requirements are around notifying your customers. There are likely going to be legal requirements, and then there are, I hope, also ethical requirements around when you tell your customers that something happened. And finally, all of this is useless if you don't learn something from it, so you should understand how you're going to reflect back on the work and explore the causes. And what do we need to collect? What sort of information do we need to know? Again, there may be legal reasons for this.
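One way to keep that severity decision from being seat-of-the-pants is to write the mapping down ahead of time, even as code. This is a hypothetical sketch: the tiers, triage questions, and SLAs here are invented examples that a real organization would replace with its own:

```python
# Invented severity tiers and response SLAs; every organization should
# tune these to its own risk profile. The point is that the mapping is
# written down before the incident, not improvised during it.
RESPONSE_SLAS = {
    "critical": "page on-call now, all hands, wake the executive team",
    "high":     "page on-call now, responsible team works until fixed",
    "medium":   "notify responsible team, fix first thing in the morning",
    "low":      "file a ticket, fix within the normal sprint cycle",
}

def response_sla(customer_data_exposed: bool,
                 actively_exploited: bool,
                 internet_facing: bool) -> str:
    """Pick a response SLA from a few yes/no triage questions."""
    if customer_data_exposed or actively_exploited:
        return RESPONSE_SLAS["critical"]
    if internet_facing:
        return RESPONSE_SLAS["high"]
    return RESPONSE_SLAS["medium"]

# A bug in an internal-only service, not exploited, no data exposed:
print(response_sla(False, False, False))
```

Even three questions like these are enough to turn "how bad is this?" from an argument into a lookup.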
We have some requirements around the information we have to collect about incidents so we can talk about them to our customers, but you may also want to collect metrics on root cause, so you can go back in five years and say, hey, five years ago half of our vulnerabilities were network-related and now only a quarter of them are, we should focus elsewhere, or whatever it is. So, a lot more reading; I'll give you a few pointers. We wrote about incident response at Heroku, the pattern that we use for handling incidents. This blog post in particular is about production outages, when a service goes down, but we use the same system for managing security incidents. It's based on the incident command system from search and rescue teams. And then this last one: Magoo, Ryan, the last name is escaping me. He worked in security at Facebook, he ran security at Coinbase, he's got a pretty good pedigree, and he wrote kind of an oh-shit-you've-been-owned guide, and you could do a lot worse than just starting with that as your IR guide. So Wednesday, that's hump day, was the hardest part of your week: creating your incident response plan. Thursday: governance, risk, and compliance. This is the awesome part. There is an absolute alphabet soup of governance and compliance regimes out there for companies and for organizations. These are just the ones I know of in the US; I'm sure there are 37 million more across the world. And for the vast majority of small organizations, at least, none of these are worth your time. Now, this may not be true for you: if you're a health information startup, that HIPAA spec, all 800 pages of it, is your best friend. If you're taking credit card payments, you'd better know PCI. If you wanna sell things to people in Europe, Safe Harbor's gonna be pretty important. But for most smaller organizations, you can skip right over these things.
But you ignore formal risk programs at your peril, because sooner or later you will want to take credit card payments, you will wanna get into the health information market, you will wanna sell to people in Europe. And it's gonna be very important when that happens for you not to have shot yourself in the foot in the early days by completely ignoring this work. So you can save yourself a ton of effort by laying some very easy, simple groundwork right now for a formal risk program. And really, what we mean when we talk about GRC is documentation, right? We're at a Django conference; I don't have to sell you on documentation. We know it's a good thing, so document your security work. Have you made a decision about company policy? Write it down. Really, really easy stuff, but surprisingly, a lot of company policy decisions stay in email. Hey everyone, from today forward, you must use multi-factor auth with your GitHub account. And then you don't actually track that anywhere. So when an auditor asks four years later, what's your policy around multi-factor auth for GitHub, you can't produce anything to show them. Whereas if you had just taken that email and put it on a wiki somewhere, now you've got a policy. You don't need to worry about formal language. There's this idea in compliance that you need to use very stilted, legalistic business language. The audience here are not judges and lawyers. Policies can be very informal. Our official password policy requires that you use at least two of: letters, numbers, uppercase, lowercase, and emoji. And nearly every auditor we've shown that to highlights the little emoji line and then gives us a big thumbs up about it. The other part of this is just tracking as much as you can. So someone asks you for access to a GitHub repo. Reply back with an email: yes, I'm confirming, I'm giving you access to this repo.
It seems a little formal and weird, but it ensures that you have a paper trail. And again, when you get to the point of being audited, a paper trail of what you've done is exactly what an auditor is going to look for. Even better, most of us are engineers or work with them: write a system to track access control and access requests. We wrote one; it's great; auditors love it. I'd also suggest that you write three documents to become the skeleton of your risk program. The first is a data classification guide: what data you have, where it's stored, who has access to it, what controls are around it, what category it is. Is it PII, personally identifiable information? Is it payment data? Is it customer data? How do you classify and think about your data and control access to it? The second is checklists for access control. Think onboarding and offboarding, right? When someone starts at or leaves your organization, you need to make sure their accounts are turned on and turned off. This is important for formal audits, but it's also a common way for people to get breached: someone who used to work there three years ago still has access to GitHub for some reason, and their email account gets taken over, and you can do the math from there. So have a checklist of who gets access to what, when, and how, and track it, and then you can go through and uncheck items one by one as the person offboards. And the third thing is a weird thing to document when you don't have much process already, but I think it's one of the most important things you can have: what is your exception process? There are always going to be exceptions.
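Here's a minimal sketch of what tracking access as data might look like, so that offboarding becomes walking a list rather than trying to remember. The systems named here are illustrative examples, not a real inventory, and any real system would also record who granted what and when:

```python
from dataclasses import dataclass, field

@dataclass
class AccessRecord:
    """Per-person record of which systems they currently have access to."""
    person: str
    grants: dict = field(default_factory=dict)  # system name -> granted?

    def grant(self, system: str):
        self.grants[system] = True

    def revoke(self, system: str):
        self.grants[system] = False

    def outstanding(self):
        """Systems still granted: the remaining offboarding checklist."""
        return [s for s, granted in self.grants.items() if granted]

record = AccessRecord("departing@example.com")
for system in ("github", "aws", "email", "vpn"):
    record.grant(system)

# Offboarding in progress: two accounts turned off so far.
record.revoke("github")
record.revoke("email")
print(record.outstanding())  # ['aws', 'vpn'] still need attention
```

Even this much, kept in a repo or a spreadsheet, gives an auditor the paper trail they're looking for and gives you a list to walk on someone's last day.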
There are always going to be situations where you need to break a rule, and it's so much better to know how to break rules than to pretend the rules never get broken. Because if you just assume that everyone always follows the policy, you never know when someone isn't, and it comes back to bite you later. But if you spend the time to decide who approves exceptions, at what level they need to be signed off, and how they're tracked, those sorts of basic things, auditors love it, and it's going to level up your company's security posture. So document everything, write some basic policies. So you've done most of the work; now it's time to tell people about it. The fact is, if you actually work through this checklist, you will be better off than most of your peers. I mean, look, we only have to look back over the history of data breaches over the last few years to see that people are getting owned through some pretty basic stuff. So if you've taken the time to address the basic stuff, you are doing a really good job. I know this stuff is scary, and I know security seems like a battle we can't win, and maybe it is, but we can win it most of the time, against most of the attacks, and if you've addressed the basics you're in pretty good shape. You should feel fairly good about this foundation, and you should be comfortable and happy bragging to your customers about the work that you've done. So I'd suggest three things. You do need a privacy policy. This is a legal requirement if you're taking any sort of personally identifiable information, and it should live at your site slash privacy. I'd suggest a security page as well that talks about what you do about security and documents some of the stuff we've covered earlier. And you should maintain a security knowledge base. I'll talk a bit later about whether that should be public or private; that's an interesting question.
So the privacy policy is necessary. It's a legal requirement, and a lot of companies just won't even do business with you without one. Again, I work for Salesforce: when I want to buy a thing, the form I fill out to start the initial legal review has a field where I put in a link to the vendor's privacy policy, and if they don't have one, I can't even get the request started. Our procurement team, our legal review team, won't even think about buying from you if you don't have a privacy policy. It's just a non-starter. If you have lawyers, they'll write one for you. If you don't, Automattic has published a couple of templates that are worth starting from. Weirdly, Automattic and WordPress have different privacy policies even though they're the same company; I haven't figured that out. Maybe they're just two different versions of the same template, but they're both good, and they're published under share-alike terms, so you can take them, attribute them, and use them. Your security page: you should summarize your security practices. This can be less formal. The best way to think about it is that this is where you tell your security narrative, where you explain, at a high level, what your security program is trying to accomplish, what you do, what your program looks like. You can brag a little about how you have a risk program and a documented incident response plan and well-documented privacy policies; you can talk about all of this in somewhat braggy language. If you have any formal attestations, if you've done PCI or HIPAA et cetera, you should list them here. And the most important thing on this page: tell people how to report vulnerabilities. You should probably have a security mailing list.
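On that reporting point, one concrete, machine-readable way to publish the contact details is the security.txt convention (RFC 9116, a standard that came along after this talk): a small text file served at `/.well-known/security.txt`. A sketch with placeholder values:

```
Contact: mailto:security@example.com
Encryption: https://example.com/pgp-key.txt
Policy: https://example.com/security
Expires: 2026-12-31T23:59:59.000Z
```

The `Contact` line is your security mailing list, and `Encryption` points at the PGP key discussed next.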
You should probably have a PGP key. That's kind of like a brown M&M test; look it up on Wikipedia if you don't get the reference. The idea is it's a sniff test, a way for people to tell that you're serious. I'll tell you, in two and a half years at Heroku, I think one person has sent us an encrypted email. So there you go. But if you tell people how to get in touch with your security team, they'll be much more likely to actually do that, and not publish something publicly about how they tried to report a vulnerability and you didn't listen. So the last one is the security FAQ. The way I think about this: every time a customer, or an internal person in a non-technical role, a product manager, a salesperson, a marketing person, asks you a question about security, write down the answer. Over time you'll discover some natural groupings. At Heroku, of course, there are a lot of questions about containerization, like how do we separate one dyno from another dyno? This comes up a lot, so there are a lot of questions in there that we've grouped under container security. You'll notice that we don't publish ours publicly, and this is an interesting point. Transparency is a really important value, but there are also some good reasons to limit this information. There may be confidential information in there, there may be things that disclose information about other customers that you might not want disclosed, and there may be information about the level of your security readiness that you want to be transparent about with customers but probably don't want to share with non-customers, because that's a much broader group of people and they're not under NDA. So my litmus test for publishing security information is: is transparency going to make my customers safer? If it is, publish it, even if it hurts.
If it's not, if it's going to make them less safe, don't publish it even if it hurts. So that's your day five, privacy policy, security page and a security FAQ. So to recap, your minimum viable security program is train your staff, develop an SDL, a virtuous cycle to ensure that you continually develop and learn from your software development practices, have a plan for incident response when something goes wrong, be ready to do something about it, lay the foundations for a formal risk program and tell the world, good job, you've built a security program, thank you. So I'll be around in the halls in the rest of the week and there's contact info there so you can ask me questions in any of those formats but I won't be taking them here. Thanks y'all.