Now, to continue on to our next speaker, Joni Klippert is CEO and founder of StackHawk, a software-as-a-service company built to help developers find and fix AppSec bugs in their code before they hit production. Joni has extensive experience building software for developers. She previously served as VP of Product at VictorOps. It's great to be here at GitLab Commit 2020. Welcome to DevOps Works. So why hasn't security kept up? I'm Joni Klippert, CEO and co-founder of StackHawk. I've been building software for software developers since I graduated from my MBA program in 2010. The first technical company I worked for built software that allowed users to deploy and manage more than 100 different open source applications on any cloud provider with just a couple of clicks, back when we had many cloud providers. We managed updates to those applications programmatically and could even move apps across cloud providers in the case of downtime. So that's really where I cut my teeth in technology. After that, I was the VP of Product at a company called VictorOps from seed stage through acquisition by Splunk in 2018. VictorOps was an incident management platform that helped disrupt the traditional NOC-and-operations model by making it easy to deliver alerts to the engineers who were closest to the code, which really helped reduce mean time to know and mean time to resolve. In 2019, I founded StackHawk to bring application security into the DevOps workflow by helping software engineers find and fix security bugs in their code before deploying to production. My co-founders and I saw an incredible opportunity to reimagine application security through a software engineering lens. So in this talk, I'm gonna share which key DevOps tenets have been missing in the realm of application security that have kept us from shifting AppSec left into the developer workflow, and what we need to do to modernize the practice so we can rapidly deliver more secure software.
So DevOps is pervasive, and it works. And what I mean by DevOps works is it has unlocked the ability for companies to deliver value to their customers faster. We've seen major shifts in tooling and processes that enable you, our software engineers, to go fast and to deliver that value to our customers. So Agile unlocked the ability for us to quickly plan, develop, and deliver software to customers. Virtualization, cloud, and infrastructure as code removed the sysadmin bottleneck as we empowered engineers to deploy and manage their own resources. Then we've seen massive efficiencies with automation throughout the integration and deployment pipeline. Then we further unlocked speed by orienting toward really small releases, breaking up the monolith and investing in microservices architecture and containerization. Then we improved our uptime by creating the ability for the engineers who wrote the code to also manage it in production. The next most important area of investment is empowering our engineering teams when it comes to application security. When we do this, we unlock not only the ability to deliver software quickly but to deliver more secure software quickly, and our customers are expecting this of us. So whether you're part of a younger company that was born with the technical and cultural tenets of DevOps, or you're part of a legacy enterprise that's undergoing digital transformation, we invest in these DevOps principles to quickly respond to customer needs, deliver high-quality software, and ultimately remain competitive as a business. So why hasn't AppSec followed the rest of the product and engineering organization in shifting left? As I was doing research for my company StackHawk, I wanted to understand why security hadn't yet made it into the developer workflow. I interviewed more than 50 CISOs, security professionals, CTOs, VPs of Engineering, you name it, in a matter of a couple of weeks.
And we found that there were a lot of technical barriers in current solutions, which we're going to talk about, but also some pretty meaningful cultural barriers that needed to be addressed for this change to happen. The most glaring thing that stood out to me was that the relationship between security and engineering was deeply strained. It's not too dissimilar from the early days of the more traditional IT ops versus devs type of strain, but at least those teams were really aligned in delivering and maintaining software for our customers. So this might honestly be a little worse in terms of the type of tension that exists between these groups. But it's somewhat easy to understand when we consider that these groups have very different KPIs and operational objectives. Whereas DevOps is about orienting the team to focus on business objectives, security teams are aligned to reducing risk, oftentimes in a way that can come off as at the expense of the business objective. And this has fostered some pretty meaningful cultural issues. The first one is lack of trust and empathy. When talking to security folks about engineering being more involved in AppSec, I would commonly hear, "You can't get engineers to care about security," to which I totally call BS, because security is quality, and engineers care a lot about the quality of the software that they deliver. But then when you talk to engineering about the security team, we've all heard these anecdotes: it's referred to as the department of no, or as being historically just so difficult to work with that you try to avoid working with them at all costs. The next piece is transparency and observability. We know that both of these are core tenets of DevOps, and they're missing in the security discipline today. Developers aren't in the security software the vast majority of the time, right?
There's little trust that developers should be able to see or act on issues until security tells them to or proves it, right? Which is super inefficient, because software engineers are responsible for ultimately solving the problems anyway. And lastly, empowerment. My co-founder, Scott Gerlach, was formerly the CISO of SendGrid, and he talks a lot about how we want developers to care about security, but we don't trust them to take a first pass at assessment and resolution of issues. So he comments on the tendency for security teams to buy these expensive, huge AppSec platforms, but then lock them down so tight, with approval gates everywhere, that it's impossible for a software engineer to engage with or use the products, so they abandon them. And then we also have several technical challenges when it comes to modernizing AppSec. First, most tools on the market are built to run in production by a security team. There are many challenges with this, which I call the production bias, and we're gonna double-click on that here in just a minute. Next, these tools are really hard to configure and deploy. Many, particularly in the land of DAST (Dynamic Application Security Testing, the area we operate in the most at StackHawk), scan the running application, and that's really difficult to get working in a repeatable manner, making automation super hard. These tools just weren't built for it. Most AppSec tools were built with the idea that a human is going to be manually running tests against an application, and the good news is this is really starting to change. Next, a lot of these tools generate a ton of noise. Output is super verbose, and it's also written in very security-specific language rather than something that's easy for an engineer to quickly grok and act on. And we all know that a key component of DevOps tooling is to mitigate noise and ensure that a developer's attention is only drawn to what's truly important.
And lastly, these tools just don't fit in with existing developer tooling and processes. On the previous slide, we talked about security folks saying that devs don't care about security. But when we look at this list of challenges, my question is: how can they afford to? In a world where software engineers are largely measured on delivering business value, the cost of caring about AppSec today is often just way too high. So, the production bias. We discussed that many AppSec tools on the market are built to run against code that's already been deployed to production, and they're designed to be operated by a security professional. This orientation toward production, and the lack of involvement by the development team at the right time and in the right context, makes it impossible to modernize AppSec. The first piece of the production bias is the people. Today, AppSec tools are commonly run in production by two groups. One is the security team, because production is where they know the application the best. The other is a pen tester, because production is their point of access; with black box testing, we expect them to test from the outside in. So in this context, it makes perfect sense that these tools have to date been designed for these two groups. However, it's highly inefficient. Both of these groups struggle to instrument their tests because they're less familiar with how the application was built to work, so setting up an engagement is a pretty heavy lift in order to actually get a really good assessment. From there, the primary value delivered by these groups tends to center on the finding of things. There's a lot more emphasis on the number of things found versus finding and fixing the right things. And this has repercussions, right? It's super inefficient, because the finders of security bugs are not the fixers of security bugs. And it reinforces that adversarial relationship we opened this talk with, right?
Your product and engineering teams are working hard to deliver features and business value, and some number of months later, an outside party breaks your things and says, look, I broke your stuff; you need to go fix all of these bugs. We've long moved on to other tasks, so that involves a lot of context switching. I wanna be super clear when we talk about people: I am not advocating that people should not get pen tests. I am suggesting that that method of learning about security bugs should not be your only introduction or access to this type of information. The next most critical part of the production bias is timing. As companies are rapidly shipping code to production, security is not baked into this workflow. Either you're not rapidly shipping, in which case AppSec processes act as a blocker for getting code to production, or the security team is constantly playing catch-up. And nine times out of 10, it's the latter situation. But it gets worse, because when AppSec tools favor running in production, the elephant in the room is that the bugs are already in production. I try not to laugh hysterically when I read this slide, because it seems crazy to me, right? We wanna find these things before we deploy. Now, there are going to be times when you intentionally ship security bugs to production, just like you would a defect, right? But the intentionality is the important thing here. This should be done eyes wide open, as a risk-based decision. You might know that exposure is super limited, or you might know that you're planning on fixing that security bug in the next sprint. But at the end of the day, production should not be the first place that you're checking whether you have security bugs. And the third piece of the production bias is context. When we check for security bugs in production, some period after a release, it's super inefficient.
Engineers have moved on to other sprint tasks, and they're no longer in the context of their code. So fixing involves context switching, which we know is both inefficient and expensive. When scanning for AppSec bugs in production, the tools that are on the market today are typically scanning an FQDN, a fully qualified domain name, so something like www.yourcompany.com or app.yourcompany.com. And the result is a list of bugs that exist somewhere in the app, right? Under that parent domain name. And that makes it really difficult to identify the specific part of the app or the service that's affected. It also lacks the context of the specific data handled by that service. So you end up with a security team creating these tickets and doing a lot of ticket shuffling, trying to figure out which service was affected, then the team that owns that service, and who is going to be available to fix it. And then we talked about an inherent focus on the number of bugs being found, and this is really problematic. You hear a lot of stories about employing these security tools that find just tons and tons of security bugs, real or not, right? And then the relationship between security and development reduces to a goal like percentage of bugs fixed over time as the primary driver of value, which is arbitrary. It ignores the business context of the findings and the trade-off decisions around risk and business value generation. Instead we should be engaging in discussions like: how important is this app to the business? Should we be fixing all of the bugs on an internal application, or should we just be going fast on that? And how should we think about our apps and the data that they handle? So the production bias results in the wrong team, and by that I mean the team who isn't ultimately going to fix the bug, finding vulnerabilities at the wrong time, because they're in production, without appropriate context.
So here's how test-driven security should work, and for this slide, I'm just gonna read it, right? When a team writes code, they know the syntax is wrong when it won't compile. When a team merges code, they know there's a problem when it doesn't merge. When they run unit tests, they know the code is wrong when it fails the unit tests. When they run integration tests, they know the code is wrong when the product doesn't work as it was designed. So when a team introduces a security vulnerability, they should know because it fails a security test. At this point I want to emphasize that test-driven security is something that engineering teams should instrument themselves. Even if they don't have a security team or function at their company, this is something that engineering can and should lead within their organizations. So what does the right team look like when it comes to AppSec? We think it's really about developers and informed stakeholders. Ultimately, our developers fix security issues, so let's make them aware of those issues as they're writing the code. And in this world, we reimagine the role of the security person or team as a coach whose responsibility is to enable their team, the dev team or teams, to be successful. And there is value in optimizing for the developer experience that really plays into what security professionals are looking for anyway. It's super efficient: engineers fix bugs in flight. It democratizes security information across the organization, and when we do that, security becomes a standard discussion in the building of software. It affords collaboration among teams at the right level and provides an opportunity for targeted education. And when all of this happens, it allows our security teams to truly scale. So this last tip is, again, absolutely leverage pen testers for security reviews and tough business logic findings, but let's make sure we have access to this information with every single release.
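To make that "fails a security test" idea concrete, here's a toy sketch in Python. Everything in it is hypothetical: the query builder and its naive character check are made-up stand-ins (real code should use the database driver's bound parameters, and real scanners do far more), but it shows security living in the test suite exactly like any other unit test.

```python
# Toy sketch of test-driven security: a hypothetical query builder is
# checked by the test suite the same way any unit test checks logic.
# NOTE: the character blocklist below is a made-up illustration, not a
# real sanitizer; production code should use parameterized queries.
def build_user_query(username: str) -> str:
    if any(token in username for token in ("'", ";", "--")):
        raise ValueError("rejected suspicious input")
    return f"SELECT * FROM users WHERE name = '{username}'"

def test_rejects_sql_injection() -> bool:
    # The test passes only if the injection payload is refused.
    try:
        build_user_query("alice'; DROP TABLE users; --")
    except ValueError:
        return True
    return False

print("security test passed" if test_rejects_sql_injection() else "security test FAILED")
```

If someone later "simplifies" the input handling and the check disappears, this test fails the build, which is exactly the feedback loop the slide describes.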
So the right time to be checking for AppSec bugs, if it isn't obvious already, is pre-production. Instrumenting security tests into CI/CD gives our engineers feedback immediately. Many of our customers at StackHawk are checking for AppSec bugs on every single merge, and they do this with an awesome GitLab integration. This is ideal, because if a security test fails, you know you introduced a security bug in your latest change. When we then add the ability to also test locally, engineers have the capability to quickly iterate on the fix-test loop if they find a new bug. So for AppSec to modernize, engineers need to be able to test while writing code and test while building code, and security tools should play really well in these phases of development. Next is context. The right context for finding security bugs is in your code as you're writing it. When this happens, engineers can fix egregious security bugs in flight, and this results in fewer bugs making it into production and less rework later. The next piece is the context of your app, and this comes through with real-time security testing in the context of the application that you're actually working on. Teams can then leverage their microservices architecture to instrument security checks on smaller bits of code, which makes it easier to isolate and fix those issues. Teams should also be able to make better judgment calls on these smaller bits of code, in the context of the app, because they know the job of the app that they're working on and the kind of data that it handles. And then lastly, the context of the business. We need to empower engineers to fix the most important things, not all of the things, while they're in context of their code. The security team should collaborate with engineering on assessing business risk and work across product and engineering teams to triage lesser-severity issues based on resources at hand, business objectives, and risk.
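As one illustration of the merge-time check described above, a GitLab CI job could run a DAST scan against a review deployment on every merge request. This is a hedged sketch, not a definitive setup: the stage layout, the `REVIEW_APP_URL` placeholder, and the choice of the open source OWASP ZAP baseline scanner are all assumptions standing in for whatever scanner and review-app arrangement your team actually uses.

```yaml
# Hypothetical .gitlab-ci.yml fragment: scan the running review app on
# every merge request, and fail the pipeline if the scan reports issues.
stages:
  - build
  - test
  - dast

dast_scan:
  stage: dast
  image: ghcr.io/zaproxy/zaproxy:stable
  variables:
    REVIEW_APP_URL: "https://review.example.com"  # placeholder URL
  script:
    # zap-baseline.py runs a passive baseline scan; a nonzero exit code
    # fails the job, which blocks the merge.
    - zap-baseline.py -t "$REVIEW_APP_URL"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
```

The important property is the one from the talk: a failing scan surfaces in the merge request itself, while the engineer is still in the context of the change.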
So when the right people are doing the right jobs, there's a nice bridge built between the security and development organizations. The goal of the security team becomes enabling the business to go faster, safer. They lay the right foundations, they focus on scaling security within the organization by empowering their developers, and they serve to help educate around risk. When this happens, the development organization can build more secure software, and also faster, right? When we automate the job of finding AppSec bugs and then empower engineers to fix issues in flight, security becomes a natural part of the development process. Okay, so perhaps you're thinking, that sounds awesome, but I'm not quite sure where to start. So I've included two great types of AppSec tools that every company should be using. SCA, or software composition analysis, helps teams identify vulnerabilities in open source libraries or containers. These tools compare the versions of the libraries and third-party components that you're using against known vulnerabilities, and they alert you when it's time to update; some even create a merge request for you. DAST, or Dynamic Application Security Testing, scans your running app and APIs for vulnerabilities. It's an active scanner, in that it's actively attacking your app with inputs and seeking outputs that indicate a present vulnerability. The nice thing about DAST is that since it reports successful attacks, there tend to be fewer false positives. GitLab provides both solutions and really great integrations with several of the tools that I've included on this list, which spans both open source and commercial options, so you should totally check them out. To get started, if you're a developer, it's great because you get to skip step one: you know your app and how it was built, and you know your pipeline. So in that case, simply choose an app or a service that you want to start evaluating for security bugs, right?
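To make the SCA idea above concrete before you pick a tool, here's a toy sketch of what that class of tool does under the hood. The package name and advisory entry are entirely fabricated for illustration; real SCA tools (and utilities like `npm audit` or `pip-audit`) consult live vulnerability databases rather than a hard-coded dict.

```python
# Toy sketch of an SCA (software composition analysis) check: compare the
# dependency versions you ship against known-vulnerable versions.
# The advisory data below is made up for illustration only.
KNOWN_VULNERABLE = {
    ("example-http-lib", "1.0.0"): "ADVISORY-0001: request smuggling (fixed in 1.0.1)",
}

def sca_check(dependencies):
    """Return a finding for each pinned dependency with a known advisory."""
    findings = []
    for name, version in sorted(dependencies.items()):
        advisory = KNOWN_VULNERABLE.get((name, version))
        if advisory:
            findings.append(f"{name}=={version}: {advisory}")
    return findings

print(sca_check({"example-http-lib": "1.0.0", "safe-lib": "2.3.1"}))
```

Because the check is just a lookup over your manifest, it's cheap enough to run on every pipeline, which is why SCA is such an easy first step.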
Pick a technology (either of these two is great), just start scanning in your pipeline, and then as you get comfortable with security-driven testing, you get to layer in additional applications and technologies as you go. So that's it for me. Feel free to email me with any questions or anything you want to talk about at joni@stackhawk.com, or visit stackhawk.com to learn more about how we're putting AppSec in the hands of developers. Enjoy the rest of your conference. Thank you so much.