Welcome to "Exit Stage Left," replacing theater with chaos. I'm Kelly Shortridge. I'm VP of Product Management and Product Strategy at Capsule8, a startup based in New York City that provides infrastructure monitoring and protection for production systems. In my spare time, and maybe how you know me already, I research the intersection of information security and behavioral economics. So what's the problem we're exploring today? Well, it's that InfoSec often sees itself as the under-appreciated star of the IT stage. It often laments that it feels like no one else cares about the security of our data or of our systems. What I would certainly agree with, and probably some of you would too, is that InfoSec is often a prima donna about things. It doesn't always state its concerns calmly or constructively, and it's pretty boisterous in its displeasure with developers, operators, and end users. What I've seen way too many people in information security consistently miss is that developers and operators do actually want to build secure software. The problem is that they're judged more on the delivery part than the security part. InfoSec, of course, in classic Shakespearean tradition, bites its thumb at them in response and leaps onto the security theater stage, to all of our detriment. So what can we do to remove the drama and the pain from security theater and actually start delivering safe software and systems? This talk will explore one way to start. In Act 1, we will welcome you to the security theater. In Act 2, we'll explore the fisticuffs between security theater and security chaos engineering. So, Act 1: welcome to security theater. To spoof the classic song: welcome to the theater, to the magic, to the fun, where snake oil tools and roadblocks grow and blame reigns over everyone. So what do we actually mean by security theater?
Similar to the term's origins in physical security, security theater involves any effort toward producing the perception of improved security. Unfortunately, creating the perception of improved security is often at odds with actually creating meaningful and valuable security outcomes. The end result is that you produce a whole lot of drama: histrionics theatrical enough for the entire audience to hear. Security theater is also particularly obsessed with bad apples, which refers to humans who do something malicious or accidentally careless. These humans are incredibly rare, but that doesn't change the focus. The problem is that security theater then involves policies that apply to everyone, because you have to be hyper-vigilant for those bad apples. As Bogna said, there might be someone who can't be trusted. The strategy seems to be preventative control on everybody instead of damage control on those few. The philosophy basically sucks for everyone but the actors on the security theater stage. In InfoSec, this is why we often see the "department of no": they say no to all requests, just in case someone is a bad apple. And this is in part what fuels the high-conflict relationship we so often see between InfoSec and engineering. Jez Humble did some work on risk management theater and noted that any sort of theater is a command-and-control apparatus imposed in a top-down way, which makes life painful for the innocent but can be circumvented by the guilty. Now, think about something like shift left. The goal is really for a developer or an architect on each team to understand things like threat modeling, to help build systems more securely by design. But instead, what we see with shifting left is "catch the OWASP Top 10 and whatever else our tools say is important during the build phase rather than the deployment phase."
There's maybe some benefit to shifting that friction earlier, but to be real, it's minimal, and it certainly isn't fulfilling the stated goal. A lot of security tools with poor ROI are also hurting and punishing the innocent while still being bypassed by the guilty. Relatedly, even though I still think the buzzword is useless at best, DevSecOps kind of marches onto the stage and says, "I'm not a regular security theater, I'm a cool security theater." In an ideal world, DevSecOps should be about similarly unifying accountability and responsibility in the realm of security, just as DevOps was a unification of responsibility and accountability around operations. In the real world, though, DevSecOps is basically just security theater with a fresh makeover. It's deploying code and image scanners, pairing them with firewalls and antivirus engines, and adding the word "automation" in as many places as possible. The problem is it can't be just us versus them, right? If security can't trust engineering teams, that's a culture problem and almost assuredly a process problem, but it's not a technology problem. As we'll discuss in the next section, localizing change approvals is actually a much better way to thrive. So when you see DevSecOps, it's like, if DevOps is a building, then security is smashing through the building like it's the Kool-Aid Man. And why stop at DevSecOps? Why not DevSecTestOps? Why not DevSecTestDBAOps, right? If we're actually delivering secure software, then security is a seamless part of DevOps processes, and its insertion into the buzzword is totally extraneous. That's the thing: there's actually an incentive on the security side for it not to be seamless and to still treat security as distinct but equal. I discussed this in a recent blog post about YOLOsec versus FOMOsec, which I recommend you all check out. FOMOsec can be thought of as a security strategy that's driven by fear of missing out and all of the related thought patterns around it.
By gaining the spotlight, FOMOsec is escaping the feeling of inadequacy. You want to regain a sense that security is in control and isn't irrelevant, even if that's at the expense of security outcomes. What's interesting is that envy is actually at the heart of FOMOsec, and for the context of this talk, the most relevant target of InfoSec's envy is software engineers. Dev and Ops actually possess meaningful goals that are part of meaningful work, all of which is measurable and allows them to answer the question: did you deliver software customers are actually going to buy and use? InfoSec's goals are largely nebulous, and its success is often abstractly shaped and bittersweet at best. So when security wants to cover every last inch of attack surface, when all vulnerabilities are treated as mission-critical, when security approvals are demanded on every last bit of new code, all of this is part of FOMOsec's melodrama on stage at the security theater. Ironically, though, InfoSec's FOMOsec-driven lust for the idea of being treated as equal to Dev and Ops at the big kids' business table makes it lose sight of organizational priorities. It's so intent on maintaining control and feeling important that it actually architects a suboptimal strategy that keeps it from earning its seat at the table. FOMOsec's histrionics around desperately desiring a feeling of control make InfoSec quite like someone spastically gripping a steering wheel, even though the wheel isn't attached to anything. All that fake steering maybe makes you feel like you're doing something, but you're actually staying stagnant. And if you're staying stagnant, there's no way you're actually promoting safety or security in your systems. As shown in Dr.
Nicole Forsgren's State of DevOps research, the elite organizations who have the fastest deploy frequency and shortest lead time for changes also exhibit the lowest change failure rates and shortest time to restore service; that is, speed and stability are positively correlated. And especially when you consider the typical ratio of security engineers to software engineers, it's basically impossible to scale any sort of heavy approval process. In fact, the organizations that have the most restrictions in place, which are in theory the most careful, aren't actually the ones that are most stable, because they experience the highest change failure rates. These seemingly careful and cautious organizations actually see 46% to 60% of their deployed changes result in either degraded service or something that requires subsequent remediation. Ultimately, the lesson here is that the more hoops through which you jump, the more opportunities there are to fall on your face. For instance, if there's a nasty remote code execution vulnerability in a library that's critical for one of your production systems, you want to fix that quickly, right? A heavy and inconvenient change management process will actually make it harder to deploy a patch, which means your production systems will languish with both security and stability at risk. But if your process is speedy, you can push out patches and fixes and other sorts of enhancements much faster, which allows for continuous improvement of your systems' stability and security instead. Ultimately, a fortress of approvals and red tape and nebulous policy doesn't help your security program as much as ensuring that you can adapt your security program as your systems and environments evolve. If you can test and fix things quickly and easily, you can actually bolster security far more than any sort of extra-strict policy. Okay, so we can probably agree security theater is not a great time.
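To make the delivery metrics in this argument concrete, here's a minimal, hypothetical sketch in Python of computing lead time and change failure rate from deploy records, and flagging when a heavier approval process correlates with regressions. Every field name, threshold, and number here is invented for illustration; none of it comes from the State of DevOps data itself.

```python
from statistics import mean

# Toy deployment records (hypothetical). Each record holds the lead
# time in hours and whether the change failed, i.e., caused degraded
# service or needed subsequent remediation.
before_policy = [
    {"lead_time_h": 6, "failed": False},
    {"lead_time_h": 8, "failed": False},
    {"lead_time_h": 5, "failed": True},
    {"lead_time_h": 7, "failed": False},
]
after_policy = [
    {"lead_time_h": 30, "failed": True},
    {"lead_time_h": 26, "failed": False},
    {"lead_time_h": 34, "failed": True},
    {"lead_time_h": 28, "failed": False},
]

def delivery_metrics(deploys):
    """Summarize mean lead time and change failure rate for a set of deploys."""
    return {
        "mean_lead_time_h": mean(d["lead_time_h"] for d in deploys),
        "change_failure_rate": sum(d["failed"] for d in deploys) / len(deploys),
    }

def friction_flags(before, after, lead_time_slowdown=1.5):
    """Flag regressions worth investigating after a process change.

    A flag is a prompt for a conversation, not proof of causation.
    """
    b, a = delivery_metrics(before), delivery_metrics(after)
    flags = []
    if a["mean_lead_time_h"] > b["mean_lead_time_h"] * lead_time_slowdown:
        flags.append("lead time regressed")
    if a["change_failure_rate"] > b["change_failure_rate"]:
        flags.append("change failure rate regressed")
    return flags

print(friction_flags(before_policy, after_policy))
# → ['lead time regressed', 'change failure rate regressed']
```

In this toy run, the heavier process is correlated with both slower delivery and a higher change failure rate, which is exactly the pattern the research describes; the point of tracking it is to trigger investigation, not to assign blame.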
So how do we spot security theater's red flags, and is there a better way ahead? Yes, there is. This brings us to Act 2: theater and chaos and fisticuffs. There's conflict born from how we should treat security failure and where accountability for security rests. On one side we have security theater, and on the other stands security chaos engineering. We'll explore these tensions presently. The TL;DR on security chaos engineering is that it seeks to leverage failure as a learning opportunity and as a chance to build knowledge of systems-level dynamics toward continuously improving systems resilience. I'm actually releasing a report on this topic with my co-author Aaron Rinehart, published by O'Reilly; it might actually be out by the time this airs. So check that out if you want a deeper dive into the topic. Security theater's vibe, again, isn't nearly as chill; it would rather go absolutely harsh than mellow. The security team that acts in the security theater troupe wants to avoid failure by any means necessary and punishes chosen humans involved when incidents occur. Obviously, these are at odds. This brings us to scene one of Act 2: the duel. Let's do some comparing. Teams who adopt security chaos engineering are very calm about things. They radically accept that failure is a part of any sort of complex system that continuously changes. They don't expect to know all the ways things can fail, so they instead pursue and incentivize feedback loops and experimentation to uncover evidence of how systems behave. Teams who adopt security theater, though, are pretty stubborn. They lament that if only humans followed all of the InfoSec laws of the kingdom perfectly, incidents wouldn't happen. Users are lazy, stupid, or flat-out bad apples, so they deserve to be ruled with an iron fist, despite the fact that policies are never going to be globally applied given the reality of contextual nuance.
Another comparison: security programs following security chaos engineering are designed to minimize the impact of incidents by making incident recovery as efficient and graceful as possible. Teams know that pointing fingers is useless and that conducting experiments to power feedback loops instead is incredibly valuable. Security programs following a security theater approach, though, use technology and policy to eliminate the possibility of failure if they can, which they can't, and allow for easier blame when incidents happen. No one can enumerate all possible failures in any sort of complex system, nor keep every failure from happening with controls. So, you know, good luck with that. Teams pursuing security chaos engineering want to unify accountability and responsibility by handing off acceptance of risk to the engineering teams that are actually performing the work, which is what we refer to as localized change approvals. The security team instead operates as an advisor, sharing knowledge as widely as possible. In the O'Reilly report, we also talk about the idea of security champions, which is very closely related. Teams performing security theater, in contrast, are really thirsty for bottlenecks and silos. The security team is generally in charge of approving or denying changes and releases, despite the fact that they're divorced from the systems and the work itself. Importantly, this bottleneck and this requirement for security approvals is seen as providing security in itself, even though neither of those is actually a security outcome. As far as culture, security chaos engineering is all about a culture of curiosity that's always down for a healthy retrospective and an open discussion of oofs and big oofs. Teams feel super safe coming forward about security issues because they know they won't be ridiculed or blamed, and they know that problems can actually be solved collaboratively.
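As a concrete illustration of that experiment-driven feedback loop, here's a minimal toy sketch in Python of the inject-observe-restore cycle. The in-memory "environment," the `watchdog` detector, and the `Experiment` shape are all hypothetical stand-ins I've invented for this sketch; a real security chaos experiment would target a staging environment and a real monitoring pipeline.

```python
from dataclasses import dataclass
from typing import Callable

# Toy in-memory environment: a firewall config and an alert log.
rules = {"allow_all_egress": False}
alerts = []

def watchdog():
    """Stand-in for a detection pipeline: flags egress policy drift."""
    if rules["allow_all_egress"]:
        alerts.append("egress policy drift detected")

@dataclass
class Experiment:
    hypothesis: str
    inject: Callable[[], None]    # introduce the failure condition
    detect: Callable[[], bool]    # did monitoring actually catch it?
    rollback: Callable[[], None]  # restore steady state afterward

def run(exp: Experiment) -> dict:
    exp.inject()
    try:
        return {"hypothesis": exp.hypothesis, "detected": exp.detect()}
    finally:
        exp.rollback()  # always restore steady state, even on error

def detect_drift() -> bool:
    watchdog()
    return bool(alerts)

exp = Experiment(
    hypothesis="If an egress rule is accidentally opened, monitoring alerts",
    inject=lambda: rules.update(allow_all_egress=True),
    detect=detect_drift,
    rollback=lambda: rules.update(allow_all_egress=False),
)
result = run(exp)
print(result["detected"])  # → True: the hypothesis held in this toy run
```

The value isn't in this particular toy check; it's the loop itself. Whether the hypothesis holds or fails, you've produced evidence about how the system actually behaves, and the rollback in `finally` keeps recovery graceful, mirroring the "minimize impact, learn from failure" posture described above.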
Security theater, of course, is the opposite: because of its fear of change, it also has a fear of learning. Teams would rather keep security issues quiet and hidden so they don't face retribution or disruption. Honestly, it's all-around bad. And finally, security teams with the security chaos engineering approach are all about providing a guiding light so that unfamiliar situations can be approached constructively. They welcome audience participation and challenging assumptions about security strategy, because it's accepted that security strategy has to evolve as your systems and your environmental context evolve. Security teams with the security theater approach, instead, are sticklers for rules, and they find the status quo very comforting. They would much rather copy-paste policies over from the world they're familiar with to this new, scary world, which is why you see things like container firewalls, which don't make a lot of sense. So ultimately, do you want to thrive in the real world, or do you want to try to survive in security theater's fantasy world? Security chaos engineering teams are ultimately accepting reality, and especially the reality that security just isn't the most important priority to most organizations. It's really about building a customer base, fueling revenue growth, improving profitability, and so forth. With security theater, it's almost like teams are method acting so hard that they start to believe in this fantasy land; they start to feel that security actually is the top priority and that there's this elaborate song and dance required to fulfill it. It's just not true. Those who adopt a security chaos engineering approach want to produce meaningful outcomes. They don't care about doing something just for appearance's sake.
Again, going back to that blog post I mentioned on YOLOsec and FOMOsec, security chaos engineering also doesn't encourage caring about feeling important or buying the latest security shiny just because everyone else is doing it. That's really important. The other thing is that most engineering teams already understand that bugs that could disrupt performance are inevitable, so they try to continually test and refine their systems to get the results they want. Security chaos engineering thinks super similarly to this, which means it's actually aligned with how developers and operators already operate. Security bugs exist, misconfigurations are a fact of life, and keys are accidentally going to be pasted into code. It's quite silly to think that just because some security checks passed once, everything is perfect. On the flip side, joining the security theater actually results in a dangerous, self-fulfilling prophecy. When everyone is treated like a bad apple, they're more likely to hide and keep secrets, which ironically turns the innocent into bad apples. That's terrible. Now they're going to start covering up mistakes, and they're going to be too scared or resentful to raise concerns, which totally dismantles your feedback loop. Honestly, it also erodes the security team's political capital, which, as much as we might like it not to exist, is a fact of most organizational realities. And without this feedback loop, how can you actually be certain that your security program is working? This brings us to scene two, about judgment. Security chaos engineering welcomes objective judgment, where security theater would rather bribe the judges. A classic red flag of any sort of security theater is that it's nearly impossible to directly tie costs to benefits; there's often a lot of hand-waving and purposely vague math involved.
And it's sometimes alarming, at least in my experience, to discover, or often rediscover, how few InfoSec leaders and professionals can actually articulate the ROI on the dollars spent on security programs. I think we can all understand that not being able to articulate how dollars spent translate into organizational benefits is super problematic. Moving toward security chaos engineering is actually a movement toward creating success criteria, so you can keep track of the efficacy of your security program in a quantifiable way. It's obvious, but you either want to prove that you avoided a monetary loss, which is more common in InfoSec, or that you're helping your organization make more money, which is definitely rarer in InfoSec, but I think we can get there. Importantly, this calculation also needs to consider the effects that your security program has on other parts of the organization; in the economics literature, this is what's referred to as negative externalities. Drawing on an example from the physical security domain, one study by Blalock and colleagues suggested that road fatalities actually increased as a result of airport security theater, because more people decided to drive rather than deal with all of that pain at the airport. Security metrics, as a result of all this, are often way too tone-deaf to what the rest of the organization is doing. No one cares about your number of phishing link clicks, just like no one cares about lines of code written on the engineering side. It's a huge, huge mistake not to include these negative externalities in your measurement, as it keeps you from measuring security efficacy holistically. Any measurement of InfoSec efficacy needs to look at other teams' measurements to track whether there's increased friction or some sort of inconvenience being engendered. An increase, for instance, in CVE coverage by the security program could actually be a great thing.
But it isn't actually great if critical software delivery metrics like lead time or deploy frequency are being hurt as a result. Now, it's obviously ridiculous to assume that any change in an engineering metric is the direct result of the security program, right? The point is that these metrics open up the opportunity for investigation. You want to investigate security's impact on software delivery and ensure there aren't negative consequences to the policies and programs you're instituting, because otherwise those might appear to be security improvements when they're actually not. As is likely obvious, this is going to require collaboration between security and software engineering. The mutual benefits are ripe for the picking, as is explored in that O'Reilly report I mentioned on security chaos engineering. So now we reach the grand finale. Security theater prioritizes gatekeeping. It prefers to punish humans for supposedly causing failures, even if it leads to a fear of learning that stifles the continuous improvement we all actually want. When security teams serve as this kind of authoritarian external approver, bottlenecks are inevitable. And this friction in turn leads to longer deploy times and larger deployment sizes, both of which seriously jeopardize stability and thus security. So again, continuous improvement is our goal for systems performance, and it should be for systems stability and security as well, and security chaos engineering is the path to get there. Plus, we constantly hear about how attackers are constantly evolving their methods and their software to stay ahead. Shouldn't our defenders and our defenses continually be evolving too? Makes sense, right? Security teams, as we discuss in the report, can be freed from the burden of heavy-handed control and can instead become advisors, allowing them to focus on more strategic work.
Product and engineering teams are actually the ones that start to accept accountability for their security changes and, importantly, for cleaning up any resulting impact of those changes, which makes for healthier security for everyone. So I hope today all of you are motivated to close the curtain on security theater and let speed and stability instead flow in concert through the security chaos engineering approach. I always like to close with a quote, and I'll invoke Chuck Palahniuk, in the classic monologue tradition of the theater: "People don't want their lives fixed. Nobody wants their problems solved. Their dramas. Their distractions. Their stories resolved. Their messes cleaned up. Because what would they have left? Just the big scary unknown." Again, this was only a surface dive into security chaos engineering, which we're really excited about. So check out the O'Reilly report I wrote with my co-author Aaron Rinehart. It is available for free and should be at the link by the time this airs. So download it and dive into it, and we can close the curtain on security theater. With that, thank you all very much.