So, next we have security champions. If you haven't heard of it, you will soon. Anne-Marie Fred is going to be talking about that, so you'll get to see why it's important. All right, thank you. Is my mic on? Yeah, there we go. All right, thanks everybody for coming to this talk today. My name is Anne-Marie Fred. I'm a senior principal software engineer at Red Hat. And the talk is about security champions: what they are, why you need them, and how to get a program started. A little bit about myself first. I was a full-time software developer for more than 20 years, of which the last 10 years or so were in a DevOps environment. I was a DevOpsDays conference co-organizer in Raleigh, North Carolina, for three years. I'm currently also active in the Linux Foundation and the CD Foundation and a couple of their open source projects. I was an HR manager for three years and then went back to being an individual contributor. For the last four years, I've been acting as a security focal for my organizations. Most of my career was at IBM, and then last year I moved to Red Hat. Quick disclaimer: my experience is not the official position of either of those companies. So according to former FBI director Robert Mueller, there are only two types of companies: those that have been hacked and those that will be. In fact, in Sonatype's 2020 DevSecOps Community Survey, 24% of the respondents said that in just the previous 12 months their company either had been hacked or they suspected it had been successfully breached. Most of those breaches target the data that your company has. And data breaches can be amazingly expensive. GDPR and other privacy regulations hold companies liable for those breaches. And it's not just the fines from the regulators that are expensive, but also the cost of repairing the damage that was done. And ignorance of a vulnerability, or of a breach in progress, is not a defense from your liability.
This can cause major damage to a company's reputation. Think about high-profile breaches at places like Facebook, Equifax, LinkedIn, Anthem: how many of you were affected by at least one of those breaches? Yeah. How many were affected by most of them? According to IBM's 2021 Cost of a Data Breach Report, the average cost of repairing one of these is $4.24 million. In 2021, there were over 18,000 CVE disclosures. How many is that per day? I don't know, somebody do the math. And also last year, there were at least 66 zero-day vulnerabilities reported. So a couple of times per week we're having to deal with one of those now. So what does this all mean if you're using open source software? Well, of course, open source accelerates your development, and most people are using it. But it does bring with it a host of publicly disclosed known vulnerabilities. The good news is that security researchers are out there using automated scanning and detection tools to scan open source projects, find vulnerabilities, and report them to the owners. So it's happening faster and in greater volume than ever before. The bad news is that bad actors have access to the exact same tools, and they may not tell you about the vulnerabilities that they find. In fact, in WhiteSource's State of Open Source Vulnerabilities report from last year, there were 50% more vulnerabilities in 2020 versus 2019, and we see this trend continuing. Here's some more bad news. According to Derek Weeks, the time between a vulnerability being announced and exploits appearing in the wild used to be 45 days. That has compressed by 93% over the last decade, to three days. There's just no time. And what if you're cloud native? What does that mean? Usually, as a cloud native organization, it means that you're going to very quickly get your new APIs, websites, whatever, out onto the internet and accessible. That means you need to secure them from day one.
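The speaker leaves the per-day arithmetic open; a quick back-of-the-envelope check using the figures quoted above works out to roughly 50 new CVEs per day and more than one zero-day per week:

```python
# Back-of-the-envelope check on the figures quoted above.
cve_disclosures_2021 = 18_000   # "over 18,000 CVE disclosures"
zero_days_2021 = 66             # "at least 66 zero-day vulnerabilities"

cves_per_day = cve_disclosures_2021 / 365
zero_days_per_week = zero_days_2021 / 52

print(f"~{cves_per_day:.0f} new CVEs per day")          # roughly 49
print(f"~{zero_days_per_week:.1f} zero-days per week")  # roughly 1.3
```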
So when I was looking at our WAF traffic last year on IBM's website, we would see people running security testing tools like Burp Suite against our website, people who are not us, every single week. And a big part of knowing how to secure your APIs and your websites is understanding how to scan your software using static testing and dynamic testing, and then how to interpret the results and actually fix the problems. For example, cross-site scripting vulnerabilities are very easy to detect. You can scan your site or your APIs and see them. They're also very easy to fix, but only if you know what to do about them. And CI/CD is another aspect of cloud native, of course. Now the good news is that we don't have so much of a bottleneck at the release process. It's quite possible for us to fix software and deploy the fix within a day, or a couple of hours even. But now we've just moved the bottleneck to the development team. So it's a race. The only way to beat the bad actors here is continuous security. Here's more bad news, though. According to Toby Irvine's research, and in my own personal experience asking around this also holds: for every 100 developers, there's only one professional cybersecurity person, somebody working in it full-time. That means cybersecurity professionals are a scarce resource, and that means security often gets neglected in our transformations. Security is still very often treated as a last-minute gate. So a month or two before people try to release new software into production, they'll do a quick threat modeling exercise, they'll run some manual penetration testing, they'll fix vulnerabilities for a couple of weeks, but maybe they won't have time to fix all of them. And then, since there's already a committed date to ship it, they ship it, right? The fact of the matter is, a tiny cybersecurity team just can't find and fix vulnerabilities within three days without a lot of help.
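As a sketch of the "easy to fix, if you know what to do" point: reflected cross-site scripting usually comes from echoing user input into HTML unescaped, and the standard fix is to escape on output. A minimal Python illustration using the standard library's `html.escape`; the two render functions are hypothetical, made up just for this example:

```python
import html

def render_greeting_unsafe(name: str) -> str:
    # Vulnerable: user input is interpolated into HTML verbatim,
    # so a <script> payload executes in the victim's browser.
    return f"<p>Hello, {name}!</p>"

def render_greeting_safe(name: str) -> str:
    # Fixed: escape on output, so markup arrives as inert text.
    return f"<p>Hello, {html.escape(name)}!</p>"

payload = "<script>alert('xss')</script>"
print(render_greeting_unsafe(payload))  # script tag survives: exploitable
print(render_greeting_safe(payload))    # &lt;script&gt;...: harmless text
```

The same escape-on-output discipline (via the templating engine's auto-escaping, in practice) is what a scanner finding points you toward once you know how to read it.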
So if we want to move faster while being more secure, we have to up our game. And how do we do that? We have to make our technical teams more self-sufficient with respect to their own cybersecurity. We need many more application security subject matter experts. We need to scale up from about 1% to about 10% of our technical population. And we need more people who are qualified to dig into the details of security reports and security tool findings. Now, it's always hard to make any sort of change, so I'm going to talk about some things that motivated my teams recently. At IBM Digital in 2019, we did internal DevOps surveys across our teams, and we identified security work as one of the top three pain points for our developers and an area where they were asking for help. At Red Hat, we just went through restructuring our product security programs in response to the May 2021 executive order on improving the nation's cybersecurity. Red Hat is an important part of the supply chain for a lot of companies, so we need to make sure that our security practices are really industry-leading, state-of-the-art. And for a lot of companies, a big motivator was the Log4j (Log4Shell) vulnerability at the end of 2021, and the pain of dealing with that. Now, Log4j is a typical example of an open source software package. It's very common, it's used in most Java code, and it's a widely trusted software library. People had no reason to believe it would be a problem. It's used by hundreds of millions of devices. In 60% of the Java projects that use it, it's used indirectly: if you look at the dependencies of your own software, you won't see Log4j listed, but if you look at the dependencies of your dependencies, that's where you find it. That makes it a little bit harder to find. You need tooling, software composition analysis, to help you find it. And this vulnerability allows arbitrary code execution, which is the holy grail of exploits, and it can be exploited over the internet without authentication.
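The "dependencies of your dependencies" point is exactly why composition analysis tools walk the full transitive dependency graph rather than just your own manifest. A toy sketch of that idea; the graph and all the package names below are made up for illustration:

```python
# Toy sketch: find a package (here, a stand-in for log4j) anywhere in the
# transitive dependency graph, not just in the direct dependency list.
# This graph is hypothetical, for illustration only.
DEPENDENCIES = {
    "my-app":         ["web-framework", "json-lib"],
    "web-framework":  ["logging-facade"],
    "logging-facade": ["log4j-core"],   # indirect: two hops from my-app
    "json-lib":       [],
    "log4j-core":     [],
}

def find_paths(root: str, target: str, graph: dict) -> list:
    """Depth-first search returning every dependency path from root to target."""
    paths = []
    def walk(node, path):
        if node == target:
            paths.append(path + [node])
            return
        for dep in graph.get(node, []):
            if dep not in path:          # guard against dependency cycles
                walk(dep, path + [node])
    walk(root, [])
    return paths

print(find_paths("my-app", "log4j-core", DEPENDENCIES))
# [['my-app', 'web-framework', 'logging-facade', 'log4j-core']]
```

Real tools (e.g. `mvn dependency:tree` for Java builds) do this walk against your actual build metadata, which is why "we don't use log4j directly" was never a safe answer.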
In fact, within days of this vulnerability being published, thousands of companies had been attacked. Within a week, it was tens of thousands. So with all this, why do we need security champions in particular? Like I said before, only one cybersecurity professional to every 100 developers is not enough; they're spread too thin. And furthermore, developers, the people closest to the code, are the ones who are best equipped to keep their own code secure, but they need some basic application security training to do that. A security champions program is also an excellent way to promote a DevSecOps culture where developers have control of, and responsibility for, their own application security. So what do I mean when I say security champions? Well, to me, it's when you have one management-designated person per small team or squad. And this should be at least one-tenth of your development population; in that, I'm including developers, architects, testers, SREs, et cetera. And they should have about 10 or 15 hours of application security training, which is enough to be a local subject matter expert for their own team. A couple of terms I use, and everybody uses different terms, so this is how I define them. A security focal is somebody more like myself. It's maybe a 50% up to 100% time commitment, advising across five to 15, maybe a couple dozen, teams or squads. Security focals are going to work closely with the other security focals as well as the cybersecurity organization for your company, and they're working across a division or business unit. They're the primary point of contact with corporate cybersecurity experts, and they're responsible for ensuring that the security champions are reacting quickly and reporting back status on urgent and important security work. A security champion, on the other hand, is more like a 25% time commitment. In some weeks, like when threat modeling is happening, or penetration testing, or maybe audits, it can be a full-time job.
But in other weeks, it might only be an hour or two. And they only need to advise their own small team or squad. They're going to work closely with the other security champions in their own organization and with their own security focal, and they're responsible for monitoring communications from their security focal, getting the work done within their own team, and then reporting back. So there's a playbook for how you can get this set up; we've done it at a few companies now. If you want to start your own security champions program, first you're going to need a small working group. Just a few people is enough, three to five; you usually just go out and ask for volunteers: who wants to help us start this up? You need to create and publish a program description explaining what you're doing and why, and what the expectations are. And then you're going to take that program description and shop it around to your management team and executive team to get their buy-in, because you're asking them to commit resources in terms of time and money. Then you're going to choose a training program, which, depending on what you have available, might require some time to go actually watch some training modules and see what you like. You're going to choose a training program, you're going to pay for it, and you're going to set a due date for when you want your champions to have it done. Then you're also going to ask your managers to appoint the designated security champion from each of the small teams or squads, and you want to keep a list of those. Your working group will set up the communications channels that you need and then kick it off. So I mentioned that you need a program description. In the backup slides I have more details about what I put in it, but there are some highlights that I wanted to call out. Of course you're going to want to have introductory information: what you're doing and why.
It's important to lay out the expectations for the security champions and their teams in terms of time commitment, the training that you're expecting them to complete, the communications that you want to have in place, and everybody's responsibilities. Do this in advance so the managers know what they're signing people up for. I like to call out that these are highly valued subject matter experts. This is really good on a resume, so go ahead and call that out. It's also not necessarily the technical lead or the most senior person on the team. Honestly, technical leads get stuck in a lot of meetings; they might not want this role. But if you can find somebody else, like a more junior developer who's really excited to learn more about security, that can be a great person as well. And you have to call out that these are not the only team members who are going to be responsible for security work. A lot of the work of security can be tedious and time-consuming, and you need to make sure that it's going to be shared across the entire team. Here are some things to think about when choosing your training plan. Like I said before, we're going for maybe 10 or 15 hours. And I like to give people time: if I'm doing a 10-hour training program, I'll give them 10 weeks to complete it. So if they want to do an hour a week, that's fine, or if they want to cram it all into a couple of Fridays, also fine. Cybersecurity basics is something that hopefully everybody in the company is getting, and that's an hour or two on things like how to recognize phishing attempts, how to secure your passwords, and how and when to report a security incident. Then security and privacy for developers is going to be three or four hours that hopefully all technical people in the company are taking. This is things like how to validate your input and why, how to escape your output correctly, and how to handle personal, private, and confidential data.
It also covers your company's own cybersecurity standards, processes, and people: who to reach out to with questions, and how to report a software risk or vulnerability if they find one. Then what we're adding on top of that is maybe three to five hours about security and privacy by design practices: things like applications that are secure by default, how to properly configure your applications, security in your CI/CD pipelines, and how to run code reviews with a security mindset. Importantly, we also want to make sure that we're covering the OWASP Top 10 or the SANS Top 25, the most common vulnerabilities in software. And yeah, four to five hours on this is probably good. This is teaching the developers how to recognize, find, and fix those vulnerabilities. I also like to include an hour or two on the basics of threat modeling: why it's done, what you're hoping to do, how you should prepare for it, what kind of architecture diagrams you need in advance, and what kind of documents you're going to produce. This is not enough for them to be great leaders of threat modeling exercises, but it's enough for them to be a lot of help to a security architect who comes in and runs those threat modeling exercises with them. And it should actually be enough for them to keep the threat models up to date over time as the application changes. And I have some examples of training plans that I like in the backup slides, so I'll put those up at the end of the talk. As a nice side effect, according to Sonatype's 2020 DevSecOps Community Survey, developers who receive training on how to code securely are five times more likely to enjoy their work. Awesome. It's also important to have an interlock meeting, and we do this in the context of a secure engineering guild. I say that attendance is expected for the security focals and security champions. If they can't make it live, then I want them to watch the recording.
We do want to make sure that we have a recording, or at least a really good agenda with detailed minutes and notes. Some topics that we've covered in the recent past are things like checking on the status of time-sensitive work; any new corporate initiatives or security alerts that we want people to be aware of that week; and security tools and tool adoption details. With a lot of these security tools that you get integrated into your pipelines, you may find that there are tips and tricks that only apply to your own organization, right? Like, where do I get the value for this variable? Or how do I sign up for this software in my org? So we're sharing that information with each other. We also do pair programming: somebody who's figured something out will sit with somebody else who is trying to do it at the time. We talk about penetration testing and remediation when that's going on. It's also fun to include security in the news, a couple of recent articles. And as time permits, five or ten minutes of education on a topic that's relevant that week. So for example, at Red Hat, a lot of people are going through security assessments right now, so one of our upcoming topics is just going to be a walkthrough of an assessment where people can ask their own questions about it. A little more about other communications channels that are helpful. My favorite is actually instant messaging like Slack. It's great for real-time conversations, and if I have a private channel with just the security champions in it, I don't have to worry about doing the @channel if I need to ping everybody really quickly. We also have email lists for the security champions and focals and their managers. For bug tracking, you need a system set up to handle security bugs; they might require embargoes, and they definitely require auditability. And then there's just your normal work tracking system for your squads, to make sure that the time-sensitive work gets done.
Like, okay, let's make sure we all get our security assessments done on time. It's not without challenges, though. One that I've run into a lot is dealing with global teams. How many people in here have globally distributed development teams across multiple time zones? Yeah, more than half. As you know, finding a time for any kind of group meeting is difficult, and everyone tries to schedule their recurring meetings during the exact same time slot: four to five, Europe time, first thing in the morning in North America, right? So my recommendation is to either choose a meeting time that works for your most engaged people, the ones who are actually showing up, or have two meeting times about 12 hours apart. Either one can work. I also recommend encouraging and supporting asynchronous communication: instant messaging, shared documents, et cetera. Another challenge I ran into at Red Hat recently was the training program funding. We chose one that cost about $250 per user, which is pretty good. But by the time we got the budget approval, there were only three months left on the contract before it ran out, so they weren't taking new users, and we had to wait for the security team to negotiate a contract for the next year. So my recommendation is to go ahead and choose the online training program and request the funding early, because that could become the long pole; it could take a while. Also, if you already have a good training program available to you, just start with that. At IBM, I was lucky: we had a really good internal learning system, and all we had to do was find the security modules that we liked and evaluate them. We shared the work, each person watched a couple of hours, we strung them together into a training plan, and we didn't have to pay for anything. Another thing I ran into: how many people here have compliance fatigue? You're sick of patching things and filling out forms? Yeah.
Our developers at IBM were already spending up to about 25% of their time on things like data subject requests, privacy and security reviews, internal audits, and patching software every week. And to them, this felt like just more compliance work piling on top. Too much tedious work like this leads to a type of burnout; people really get sick of it. Frankly, it's not actually more work overall, but it does concentrate more of it in one person on the team. So one of the goals we chose for the secure engineering guild, to address this, was to automate as much of the toil as possible. We had people doing little side projects to help get things automated, and they would share those back with the team, and again, pair programming with each other to put these things into practice. We also partnered with our CSO organization to pilot new tools that they were considering buying. That way we made sure that the tools that were available to us were the ones that were going to work well for us. So what do we get for all this work? Twelve months after starting the program at IBM Digital, we did in fact have 10% of our developers who were subject matter experts: they completed their training programs, and they stuck with it as security champions. We were able to handle critical alerts, which we called CSO overrides, often within one to three days. And by handled, I mean that's from the time that the vulnerability was reported: the cybersecurity org handed it out to the focals, the focals handed it out to the champions, the champions evaluated their software, they saw if they had the vulnerability, they reported back to cybersecurity if needed, they fixed the problem if there was a patch available, and they put it in production. Within one to three days. Importantly to me as a focal, I knew that no teams were falling through the cracks. We were responsible for about 130 applications in my group.
And I knew that by talking to my dozen or so security champions, those were all going to get covered; I just needed to hear back from those dozen people. We also saw far fewer of our security reports, whether they were from pen testing or security tools or from HackerOne, marked as "false alert" or "could not reproduce." "False alert" or "could not reproduce" is often developer code for "I don't know what this is." By giving people that extra education, they actually recognize what the vulnerability is, and they have a path to figure out how to fix it. In fact, about halfway through that year, our security team said that that was no longer a valid answer, and you were going to have to provide some sort of fix or mitigation for everything. We also uncovered new vulnerabilities. As our developers were doing the training, they were thinking in the back of their heads, oh, I might want to look in my own software for that particular vulnerability. And they would come back to the guild meetings and tell us, hey, I found this vulnerability in my software, maybe you should check yours as well. And we saw broader adoption of threat modeling. We were used to doing threat models right before pen testing or right before shipping a product. With this program, people took their threat modeling training and thought, oh, this sounds cool, and they reached out to our security architects and asked to do them early. And most importantly to me, we had people who were thinking about security every week. So a few key takeaways. If you're interested, know that a security champions program is straightforward and repeatable; you can do it. But you're going to need management support. You're going to need a few people, a handful, and one to three months to get the program started. You'll need funding for the online training. You need to identify one security champion per team. And you're going to need one or two secure engineering guild leads per org.
And let's work together. I would love to talk to you one-on-one after this. Have you tried something similar? What worked? What didn't work? What did you learn that might help others in the future? So thank you very much. You can reach out to me on Twitter or LinkedIn.