Hey everybody, thank you for coming today to the talk "Security Champions: the What, Why, and How." I'm Ann Marie Fred. I'm a Senior Principal Software Engineer at Red Hat, and I have a little bit of information about myself to start. I have more than 20 years of software development experience and more than 10 years working in the DevOps field. I was a DevOpsDays Raleigh conference co-organizer for three years. I'm also active in the Linux Foundation and the CD Foundation, a couple of open source projects. I was an HR manager for three years, and for the last four years I've been acting as a security focal. You'll also see I was at IBM until a year ago, so some of the stories are from there. Brief disclaimer: these are my experiences, not the official position of my current or former employer. So, according to former FBI Director Robert Mueller, there are only two types of companies: those that have been hacked and those that will be. In fact, according to the Sonatype DevSecOps Community Survey, 24% of organizations either suspected or had verified a security breach in the past 12 months. Most of those breaches target data, and data breaches are very expensive. For one, GDPR and other privacy regulations hold companies liable for security breaches. (Sorry, the microphone broke. Is that better? Okay.) Ignorance is not a defense from liability, and a breach can cause major damage to a company's reputation; think about the high-profile data breaches at places like Facebook, Equifax, LinkedIn, and Anthem. Was anybody here affected by any of those data breaches? A couple, yeah. I was too. Really obnoxious. In fact, according to IBM's 2021 Cost of a Data Breach Report, the average data breach costs $4.24 million to repair. And in 2021, there were over 18,000 CVE vulnerabilities disclosed and at least 66 zero-day vulnerabilities; that's more than one zero-day every week, on average.
More bad news: the time between a vulnerability being announced and exploits appearing in the wild used to be 45 days, and that has compressed by 93% over the last decade to only three days, according to research by Sonatype and Derek Weeks. Which means it's a race, and the only way to win it is continuous security. And what if we're cloud native? Well, in cloud-native computing, generally you're going to expose your services via APIs as soon as possible. But that means we need to secure them from day one, because if your service is on the internet, it's exposed to attackers as well. A big part of securing things from day one is knowing how to scan the software, interpret the results of the scans, and then fix the findings, using static application security testing, dynamic application security testing, and so on. For example, cross-site scripting vulnerabilities are very common. They're easily detected by scanning tools, and they're easy to fix if you know what to look for. Another thing about CI/CD is that it moves the bottleneck from the release schedule to the developers: if you can release a fix within a couple of days, now it's the developers who are slowing you down all of a sudden. And what if we're using open source software? Open source is great because it accelerates development, but it also brings with it a host of publicly disclosed known vulnerabilities. Security researchers are using their automated scanning and detection tools to scan those open source packages and report the vulnerabilities, but bad actors have the same tools, and they can find the vulnerabilities as well, maybe before they're fixed. In fact, comparing 2020 to 2019, which is the last time this research came out, there were 50% more vulnerabilities that year than the prior year, and we've seen this trend continue. The Log4j "Log4Shell" vulnerability is a great example, and a typical example, of an open source vulnerability.
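To make the cross-site scripting point concrete, here's a minimal sketch in Python (the talk doesn't show any code; the function names and toy HTML template here are purely illustrative). The fix is exactly the kind of thing a trained champion can spot: escape untrusted input before it reaches HTML output.

```python
import html

def render_greeting_unsafe(name: str) -> str:
    # Vulnerable: user input is interpolated into HTML verbatim,
    # so a name like "<script>...</script>" runs in the victim's browser.
    return f"<p>Hello, {name}!</p>"

def render_greeting_safe(name: str) -> str:
    # Fixed: HTML-escape user input, turning markup into inert text.
    return f"<p>Hello, {html.escape(name)}!</p>"

payload = "<script>alert('xss')</script>"
print(render_greeting_unsafe(payload))  # the script tag survives intact
print(render_greeting_safe(payload))    # &lt;script&gt;... renders as plain text
```

In a real application you'd rely on a templating engine that escapes by default, but the underlying fix a scanner points you toward is the same.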
So Log4j was a trusted open source library; nobody had any reason to suspect it would be a problem. It's used by most Java code, in hundreds of millions of devices. In 60% of the software projects that use it, it's used indirectly, so if you look at your package list, you won't see Log4j there: it'll be a dependency of one of your dependencies. And the Log4Shell vulnerability allows arbitrary code execution, exploitable over the internet without authentication, which makes it the holy grail of exploits. So how have we done with fixing this? Well, if you look at that graph, you'll see some blue lines that dropped off quickly. Those are the companies that were on top of it and fixed things fast. But if you see the red chunk there, 33% of Maven downloads are still vulnerable versions of Log4j. That's horrible. So 33% of things that are being actively built and deployed are still vulnerable. How does this happen? Well, according to Tony Irvine and Secure Delivery, for every 100 developers there's only one security professional. Because of this, security is often neglected; it's treated as a last-minute gate. Maybe two months before you try to ship something for the first time, you'll do some threat models and some manual penetration testing. The software development teams will have a couple of weeks to fix vulnerabilities, and they probably won't fix all of them. And then, because you have a committed delivery date, they ship it. That's what happens. So the fact is that a tiny cybersecurity team can't find and fix vulnerabilities within three days. They need help. To move faster while being more secure, we have to up our game. We have to make our technical teams more self-sufficient with respect to cybersecurity, and that means we need to scale up the number of application security subject matter experts from about 1% to about 10% of our technical population.
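The "dependency of a dependency" point can be sketched in a few lines. This toy Python example (the package names and the dependency graph are invented for illustration; real tools like `mvn dependency:tree` do this for Maven projects) walks a dependency graph to show how a vulnerable library reaches your application even though it never appears in your direct package list:

```python
# Hypothetical dependency graph: package -> its direct dependencies.
DEPS = {
    "my-app": ["web-framework", "json-utils"],
    "web-framework": ["logging-lib"],
    "json-utils": [],
    "logging-lib": [],  # imagine this is the vulnerable package
}

def find_paths_to(root, target, graph):
    """Return every dependency chain from root down to target."""
    paths = []

    def walk(node, path):
        path = path + [node]
        if node == target:
            paths.append(path)
        for dep in graph.get(node, []):
            walk(dep, path)

    walk(root, [])
    return paths

print(find_paths_to("my-app", "logging-lib", DEPS))
# -> [['my-app', 'web-framework', 'logging-lib']]
# "logging-lib" is not a direct dependency of my-app, but the chain
# my-app -> web-framework -> logging-lib still exposes it.
```

This is why a flat list of your declared dependencies isn't enough: you need tooling that resolves the whole transitive tree.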
And these are people who are going to be qualified to dig into the details of security reports and security tool findings. So what do I mean by security champions? I mean one management-designated person per small team or squad. It's roughly a 25% time commitment: some weeks it might be an hour or two; some weeks, like during threat modeling, it might be full time. It's at least one-tenth of your development population, and there I'm including developers, architects, testers, and SREs; you want to have champions across these disciplines. And we're going to give them maybe 10 or 15 hours of additional application security training. That's enough to be a local subject matter expert for their own team. Why do we need them? One professional per 100 developers leaves those cybersecurity professionals spread too thin, and the people closest to the code are the ones best equipped to keep it secure if they have some basic security training. It's classic shift-left. It's also a great way to promote a DevSecOps culture, where the developers have control of and responsibility for their own application security. There's a playbook for how to start a security champions program. First of all, you're going to need a small working group, maybe three to five people, just a handful. You can ask for volunteers: people who are excited about learning more and bringing more security knowledge into the organization. You'll have to create and publish a program description that explains what the program is, why you're doing it, what the expectations are of everybody involved, and what the time commitment should be. Then you can use that description to shop the program around to management and executives and get their buy-in, so they know what they're signing up for. Your group will also need to choose a training plan, pay for it, and set a due date for the training.
And then you'll have the managers appoint a designated security champion for each of your small teams or squads. Your working group can also set up communications channels, like instant messaging, email, and meetings. And finally, kick off the program. I have more details in backup slides, by the way; this is the short version. But yeah, it's about 10 to 15 hours of training that we want. We're going to make sure we've covered cybersecurity basics, which really everyone in the company should be getting, like how to choose good passwords and how to store them securely; security and privacy for developers 101; and security and privacy by design. You'll definitely want to include the OWASP Top 10 and the SANS Top 25: maybe four or five hours of training on how to recognize, find, and fix those vulnerabilities. I also like to include an hour or two on the basics of threat modeling. That's not enough for these people to go run threat modeling exercises the way a qualified security architect would, but it's enough for them to help a security architect do a better job, and enough for them to help keep the threat model up to date over time. I have some examples of good training courses in the backup slides as well. It's also important to have an interlock meeting. We set up a secure engineering guild, and we expect our security champions and security focals to attend, or at least watch the recording if they can't make it. Obviously you want to have a good agenda and minutes. Some topics we've included recently: status of corporate initiatives and security alerts; security tools, how people are doing with adoption, and tips and tricks for them; pen testing and remediation and how that's going; interesting security stories in the news; and, when we can, five or ten minutes of education on something very timely. For example, right now my group is filling out new security surveys, so we've walked through those surveys together so people could ask questions.
So what does it look like when this works? Well, at IBM Digital, after 12 months of this program we did have 10% of our developers finish the training program and become subject matter experts. We also saw that critical security alerts, which we called CISO overrides, were often handled within one to three days. That's measured from the time corporate cybersecurity sends the alert to the security focals: they fan it out to the champions, who go to their applications and see if they can find the problem; they report back; they fix it if there's a patch available; and they report the status all the way back up the chain. Accomplishing that within one to three days is pretty great. Importantly to me, I knew that no teams were falling through the cracks. In my organization of a dozen-ish teams, we had about 130 applications and two dozen security champions, and I knew that when I sent out an alert, all of those were covered. We also had far fewer security reports marked as "false alert" or "could not reproduce," which is developer-speak for "I don't know what to do with this." Our developers also uncovered new vulnerabilities on their own: as they went through the training classes, they'd think, "Oh, I think my application might have this use case," go see how we implemented it in our code, and fix things themselves. We saw broader adoption of threat modeling: our teams are reaching out to the security architects asking to run threat models, and teams are thinking about security every week. And just three months in at Red Hat Developer Tools, we already have the secure engineering guild meeting every week. We have a dozen security champions covering 16 small teams, and we're working on pulling in the other half. We have our training funded, and we're about to kick it off. The champions have already done five threat models, with two more in progress and several more scheduled in the next month.
And they've updated dozens of security assessments and data privacy assessments in the last two months. So, key takeaways. A security champions program is straightforward and repeatable; you can do it. But you're going to need management support, a few dedicated, excited people, and a few months to get the program started. You'll need funding for the online training; or, if you already have a corporate training program, you might already have modules in there that you can chain together into a plan. That's what I was able to do at IBM, and at Red Hat we already had a program; I just had to get more funding for another year. Of course you'll need your management to designate that one security champion per team. And you'll need one or two secure engineering guild leads per org, usually more like a security architect or a security focal, somebody to keep that cadence going with the team. Thank you. It's a lightning talk, very fast. But if you'd like, you can reach out to me on Twitter or LinkedIn here. And I also have a moment for questions if anybody wants to ask one. Okay, question? Yeah. [Audience question, inaudible.] I think everybody can know enough about security to handle their own sphere, right? A small team is usually what, five to ten developers? And they know their application really well, so they just need enough security training to recognize the potential holes in their own code and fix them. [Audience question:] To become a security expert, does it take a special mindset, a special background or education, or what do you think? I think it's mostly curiosity. In fact, we usually don't pick the technical leads, necessarily, because those people get pulled into meetings all the time, and sometimes they just want to write code. So ideally it's somebody who volunteers from the team, and it can actually be a more junior member of the team, as long as they're interested in doing that training program and learning about it.
We also find that some percentage of the people will do more than that 10 or 15 hours of training, and they'll keep going. [Audience comment, partially inaudible:] Right, so you're creating more of that one in 100, right? I think, so, in our org we had about 75 developers at a time, and we had 10 to 20 people identified as security champions. Of those, maybe four or five will be super excited, but that's enough to keep the program alive and functioning. So, thank you.