So yeah, I didn't plan that I'd be following Victoria's talk about testing with something touching on testing. Thanks, organizers. Sorry — that's cool. You can follow me on Twitter. I've been in the software business for about 20 years in various roles, and I've spent the last eight and a half at Veracode doing application security stuff. My bio says Grammy award winner — that's this one. In fact, I was on the recording that won a Grammy a couple of years ago. It's an orchestral performance recording, and I was in the chorus, so you can draw your own conclusions from that. My bio also says bacon number of three — that's the degrees-of-separation-from-Kevin-Bacon thing, for folks who don't know. But just to prove that I don't lie in my conference bios.

I'm going to start by talking a little bit about culture clash, which I think is the topic that's closest to most people's hearts at DevOps conferences and in implementing DevOps, and then broaden it a little bit beyond the dev-and-ops part of the culture clash. After that we'll talk a little bit about the part of this world that I deal with, which is application security — a little bit about why we think it's relevant — and then talk about: okay, if you care about application security, how do you think about bringing it into what you're doing?

This is the culture clash that most people think about when they think about DevOps, right? The classic clash between development — "I need to get stuff delivered so I can deliver value for my business" — and the job of production, which is to keep the big thing stable and safe. We've all solved that, right? Everybody's really happy about how that's going, so we can all go home. In reality, any culture clash, and working it out, is a work in progress, right?
What I'm here to say is that from where we sit in security, there's an equally big culture clash between the culture of DevOps — which is, you know, happy rainbow land, we're deploying in minutes and everything's wonderful — and the culture of security, which says: really, you're going to send that out to production without checking to make sure nothing bad can happen to it? Really? And this is the reaction a lot of people in security have to the whole DevOps mindset, even now. This is how a lot of people in security think about DevOps: oh, it's wonderful, rainbows and unicorns. But in reality there's a hidden cost to moving fast without considering what the security impact is, and a lot of people in security — myself included, frankly, I'm going to own this one — are a little bit snarky about it, right?

But the point is that security is like anything else — like infrastructure failures and configuration issues, like issues with other parts of your application. If you don't own making sure that you're solid as part of your overall end-to-end process, you're just pushing that cost downstream into production, where it can be really big and hit you later.

I love that you brought up the OpenSSL example, which is a great and ongoing story. We first heard about that with Heartbleed a couple of years ago. Take the pain of something like a Heartbleed — which a lot of the time is in a component resident on your server, a library — and then take it into something that's actually inside the application. Back in November, somebody published an exploit for Java deserialization vulnerabilities. Basically, you send a Java object over the wire that gets deserialized in such a way that it causes the host application — the host JVM — to run malicious code on your behalf. And a lot of Java applications are designed to take Java objects over the wire.
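The mechanics of that attack are easier to see in a runnable sketch. The talk's example is Java, but Python's `pickle` has the same property, so here's a minimal, deliberately harmless illustration — the `Payload` class and the expression it evaluates are invented for this demo:

```python
import pickle

# Sketch of why deserializing untrusted data is dangerous: pickle
# invokes __reduce__ while loading, so a crafted payload makes the
# victim run a callable of the attacker's choosing -- the same class
# of problem as the Java deserialization exploit described above.

class Payload:
    def __reduce__(self):
        # A real attacker would return something like
        # (os.system, ("malicious command",)); we evaluate a
        # harmless expression instead.
        return (eval, ("2 + 2",))

wire_bytes = pickle.dumps(Payload())  # what travels "over the wire"
result = pickle.loads(wire_bytes)     # victim deserializes -> code runs
print(result)  # -> 4
```

The victim never calls `eval` explicitly; merely deserializing the bytes is enough, which is exactly why "takes serialized objects over the wire" is such a dangerous design default.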
And it affected everything. It affected the actual web application servers, like Tomcat and JBoss; it affected Jenkins. By our numbers it affected about 25% of all Java applications in existence — for one vulnerable version of one affected Java library alone, right? And that's just open source vulnerabilities — forget about your developer having coded something that breaks the world in its own special and unique snowflake way.

So if you're not in a place that cares about security for security's sake — and most organizations aren't — I would argue that your motivation needs to be thinking about this as quality. A lot of organizations care about the quality of the application they deliver, in terms of its being able to deliver value to customers the way it's anticipated to do, and advertised to do, right? Security at its simplest level can be thought of as taking quality from the perspective of the availability of the application, the integrity of the information the application provides, or the confidentiality of that information, and just ensuring that those are protected — kept whole, right? So if you need a motivation for how to think about security in DevOps terms, this is not a bad one, and it's probably more approachable than most of the ways people talk about it.

If you want to talk about the pain of not having security, this is as good an illustration as I've been able to find. Information is Beautiful has a fantastic visualization of data breach activity: size of bubble is number of records leaked, color of bubble is what caused the breach to occur. You can go to informationisbeautiful — I think it's .net — and if you look for the Information is Beautiful data breach visualization you'll get this. So what I did was take a look at the breach activity up to September or whenever it was when I grabbed this, and then I turned off all the bubbles that weren't caused by something like a configuration failure that could lead to a security problem, that weren't caused by hacking, or weren't caused by some other information-security-related cause, right? And I think the practical point is that if you care about your customers' data safety, then security should be on your radar.

Let's motivate this into applications specifically, because a lot of people don't necessarily make this connection. We spend a lot of time wondering, you know, are we being attacked by China or Russia or script kiddies or whoever else. Attribution is kind of a pointless exercise, because a lot of it ultimately boils down to code. It's an error in the code that allows an attacker in, and those errors are more widespread than you think. This is data from our forthcoming State of Software Security report, which is an annual report we do. In the applications that we tested, we see 35% of them having some sort of hard-coded credential. We see 32% of them having some sort of SQL injection vulnerability that will allow an attacker with the appropriate string to go in and steal data. Open redirects, cross-site scripting — it goes on and on. Something like seven out of ten applications that we test have one of the top ten most prevalent web application vulnerabilities in them the first time that we see them. And that's a number that's stayed relatively constant over the last eight years. So it's a pervasive problem.
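To make the SQL injection category concrete, here's a minimal self-contained sketch using `sqlite3` — the table, data, and payload are invented for illustration, contrasting the vulnerable string-concatenation pattern with a parameterized query:

```python
import sqlite3

# Toy database standing in for an application's data store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

def lookup_vulnerable(name):
    # String concatenation lets attacker-controlled input rewrite the query.
    return conn.execute(
        "SELECT secret FROM users WHERE name = '" + name + "'").fetchall()

def lookup_safe(name):
    # Parameterized queries keep the input as data, never as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "nobody' OR '1'='1"
leaked = lookup_vulnerable(payload)  # returns every secret in the table
safe = lookup_safe(payload)          # returns nothing: no such user
print(leaked, safe)  # -> [('s3cr3t',)] []
```

The "appropriate string" the talk mentions is exactly `payload` here: the concatenated version turns it into a `WHERE` clause that is always true, while the parameterized version treats it as an (unmatched) literal name.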
So, okay: security is a problem. There's a culture clash between the desire to make things secure and the desire to ship things quickly. Wouldn't it be great if you could get into alignment around making those things happen together?

I think we've made the point, but the last thing is: if you're in a company that handles credit card information, if your customers care about security, if you have a desire not to put your customers through the windshield when you're deploying software very quickly, then you actually want to avoid this scenario of being stopped at the last minute — where you care about security, and you find out that your application is insecure and you can't deploy, right? So our motivation for taking application security and making it part of the DevOps happy family of functions that work together is to pull that Batman bitch-slap moment back further into the development process, and make it less painful by catching the security problems closer to when they're introduced — so that you do not get into a scenario where you are choosing between shipping late, not shipping at all, or shipping something that you know is vulnerable.

So let's cooperate. We've been talking to customers about securing fast-moving development cycles for a long time, under various names. The cycles get faster; the names get different. Recently, DevOps is what customers are talking to us about — in enterprises, in small software development shops, and in other places. And what we're starting to come to is a set of principles that we're coalescing around, and that's what I want to spend basically the rest of the talk on: how do you do this?
And these five principles are pretty common sense, but there are some nuances in them that we'll explore, which maybe aren't obvious if you don't spend a lot of your time obsessing about application security. That's what I'm going to try to bring a little bit of color around for you today: automation, integration, avoiding false alarms, security championship, and operational visibility.

The idea with this is: I think one of the fundamental errors we make when we look at DevOps as non-DevOps practitioners, trying to figure out how we play, is that we spend all of our thinking on CI/CD pipelines. Which is fine, and important, and necessary — but I don't think it's the whole story for how you securely practice delivering software at speed. I think you have to think about what happens before the pipeline, and how you change the culture of the people who are developing the software and how it's developed. And I think you have to think about what happens after the pipeline, in production. So let me take you through that.

When we're talking about automated security testing, I think this is the part that's maybe the most obvious. It's congruent with all the other sorts of testing that go into DevOps, right? We just spent a lot of time talking about automated testing to make sure that your infrastructure is what you think it should be. There's automated unit testing; there are other types of automated testing that get run. In the security world, what you're talking about for applications — as distinct from testing for vulnerabilities in the infrastructure layer — is one of a couple of automated technologies.
There's static application security testing. Basically, you take either the source code or — in the case of what my company does — the compiled code of the application, create a model of it, look for conditions in that model that would indicate a security flaw, and then trace back and make sure that an attacker could actually get to that point in the code — there's no point in flagging something that's dead code, right? So static automated security testing covers the whole application, runs quickly generally speaking, and gives you a view of what's going on in what the developer actually wrote.

There's dynamic application security testing, running against web applications — or, with some flavors of it, against mobile applications — looking at the application at runtime. It has the ability to look for things in the infrastructure as well as in the application.

And there's also software composition analysis, which is one of those technologies that kind of crosses over into what people in DevOps are already doing. It's basically concerned with the fact that you're building applications using code you've gotten from other people: assemblies and DLLs in the Microsoft world, Ruby gems, JAR files and other frameworks and packages in the Java world. You likely got that third-party library because your developer found it, or you found it; you decided it met your needs; you brought it into your application; and then you forgot about it. And then you probably only upgraded it the next time you needed some functionality that was in the next version of that library. The problem with that approach turns out to be that software does not age like fine wine. From a security perspective it ages a little bit closer to something like milk: the longer the software is sitting around, the more likely that nasty actors are going to come in and find something to exploit in it.
I'm going to credit Josh Corman for some form of that analogy, by the way — not mine.

So with software composition analysis, you're just getting a bill of materials of the third-party components in the application and figuring out: what is that application vulnerable to today? And then you keep the bill of materials so that, if tomorrow there's a new vulnerability in OpenSSL, you can figure out that you've got a problem in that application.

So: at least three ways of doing automated security testing. There are others out there, but we'll stop there. The other point about automation is that you want to be able to actually automate the testing. It's no good being "automated" if you require a person with a tool to sit down, configure the tool, and run it by hand — which was version one of application security, from about 2000 to about 2006, right? You need APIs so you can script it; you need to be able to run it from your pipeline. And the good news is that there are a lot of options you can take to do that. So that's automation of security.

The second point is failing early in the process, and there are a couple of pieces that are important to think about here. One is that there are a lot of choices about where you put security testing in a development process. We've talked with customers who want to do some level of security testing as a check on applications after they've passed all the other functional tests — which is not a bad way to think about it. There are others who want to bring it up further, as something like a pre-commit test for very serious issues. We think there are other things you can consider as well, like giving developers security testing tools that are actually in their IDEs, so that you can test while the code is being written. Bring it closer to the point of failure, so that you can figure out that you've got a security problem as early as you possibly can — that's the general principle here.
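Tying the bill-of-materials idea to the fail-early principle, here's a toy sketch of a pipeline gate that checks an application's components against an advisory list. All package names, versions, and advisory ids are invented for illustration (the Commons Collections entry echoes the deserialization story from earlier):

```python
# Toy software-composition-analysis gate: compare an application's
# bill of materials against known-vulnerable components and flag
# matches so a pipeline can fail the build.

KNOWN_VULNERABLE = {
    # (package, version) -> advisory id; all entries hypothetical
    ("commons-collections", "3.2.1"): "ADV-2015-DESER",
    ("examplelib", "1.0.0"): "ADV-2016-0001",
}

def check_bom(bill_of_materials):
    """Return (package, version, advisory) for each vulnerable component."""
    return [
        (pkg, ver, KNOWN_VULNERABLE[(pkg, ver)])
        for pkg, ver in bill_of_materials
        if (pkg, ver) in KNOWN_VULNERABLE
    ]

bom = [("commons-collections", "3.2.1"), ("safe-lib", "2.4.0")]
findings = check_bom(bom)
for pkg, ver, adv in findings:
    print(f"VULNERABLE: {pkg} {ver} ({adv})")
# A real gate would exit nonzero here when findings is non-empty,
# stopping the pipeline.
```

The point of keeping the bill of materials around is that the advisory side of this lookup changes over time: the same BOM can be re-checked against tomorrow's advisories long after the application has shipped.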
And then the third problem that you run into, with security tools in particular, is what we call the false positive problem. Security tools are written for security people, by and large, and the mentality with which most security tools are designed is: it's much more important to find everything that could possibly be wrong with this application than to worry about having a few findings that aren't actually quite correct, right? And the algorithms you write to figure out whether there's a security vulnerability have to do things like consider data flow across applications with hundreds of millions of control points mapped into one big in-memory model. They have to follow this enormous graph; they have to perform compute tasks that don't actually complete, but they have to complete, right? So there are trade-offs that get made between having the tests complete in time and having them not return a lot of noise. Generation-one tools generally erred on the side of including everything — making sure that you find everything — and so they have really high false positive rates.

That's okay if you've got a security professional running the tool. It's less good if you're plugging the tool into your CI/CD pipeline. The best outcome in that case is that you take the output from the tool, push it out of the pipeline into something like a defect tracking system, and go back and deal with it later. The worst outcome is that you start stopping the pipeline for things that don't turn out to be real security issues — and after you do that five or six times, you turn the tool off, right?
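That "turn the tool off" failure mode is why triage matters. One standard mitigation — sketched below with invented field names — is a baseline of findings a human has marked as false positives, so later scans can suppress them instead of re-breaking the build:

```python
import hashlib

# Sketch of a false-positive baseline: fingerprint each finding by
# stable attributes, record fingerprints a human has triaged as false
# alarms, and suppress those on later scans. Field names are invented.

def fingerprint(finding):
    key = f"{finding['rule']}|{finding['file']}|{finding['function']}"
    return hashlib.sha256(key.encode()).hexdigest()

def suppress(findings, baseline):
    """Drop findings whose fingerprint is in the triaged baseline."""
    return [f for f in findings if fingerprint(f) not in baseline]

scan = [
    {"rule": "sql-injection", "file": "app.py", "function": "lookup"},
    {"rule": "xss", "file": "views.py", "function": "render_comment"},
]

# The team reviewed the XSS hit and marked it a false positive.
baseline = {fingerprint(scan[1])}

actionable = suppress(scan, baseline)  # only the SQL injection remains
```

Fingerprinting by rule, file, and function — rather than by line number — is a deliberate choice: the suppression survives unrelated edits that move code around.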
And then you're back to where you started. So we think there are two things you want to consider here, in how you view security testing in the pipeline. One is: start with something that has a lower false positive rate to begin with — people don't necessarily think about that when they're looking at this problem. And the other is: if you actually do have a false positive problem, you need a way to tell the tool that, in this application, that finding is a false positive, and then have it remember that the next time it scans the application. Not rocket science, but it is something we run into when we talk to customers about this.

I want to talk about something other than the pipeline for a second, because this is kind of the hobby horse that I get on about DevOps. I think it's very easy to make shipping software quickly just a technology problem. But I don't think it is — and I don't think a lot of people in this room believe it is either — but I kind of have to go through this spiel for folks who are in corporate development shops and such, because they haven't necessarily gone through that same thought journey. If what you're trying to do with DevOps is shorten feedback cycles — take lessons that you're learning in the process of building software and bring them earlier in the process so you don't repeat the same mistakes over and over again — you cannot ignore what is actually happening in the development of the software: how the software is actually being built, and how security issues are getting in in the first place.

I'm here to tell you that there's not a developer I know who deliberately creates security issues, or who creates them because they're dumb — developers aren't dumb, right? The issue, generally speaking, is: number one, you didn't know that a particular attack category was possible; number two, you didn't know how to defend against a particular attack category; or number three, we're going so fast that you made a mistake, right? It's like anything else.

So we can't do a lot about number three, except to test, catch those mistakes, and tell you about them when they happen. What we can do about numbers one and two is get developers a little bit of coaching and education, so that they learn about security principles and about what types of attacks are out there, and so that they have that knowledge from the time they design and write software. And we can do some other things as well, like baking security into the way software is being built from a process perspective — and I don't mean capital-P Process. I mean things like: hey, once I've got a security-educated person on the team, I might actually have a task for a story — part of the definition of that story — that says we're going to have a security review of the design before the feature gets coded; and then we're going to go back and maybe do a security code review if we want to, or else we're just going to let the tools catch it. One way or the other, getting security into the definition of done, if you're doing agile stories, is not a bad way to think about this from a cultural perspective.

When it comes to the training side of it, there are lots of different options you can choose from, and all of them do something. Computer-based training is a scalable way to get an introduction to basic concepts. It moves the needle a little bit, and I think it's important; I also don't think it's enough. We see from our data that organizations that do computer-based training on secure development principles have about a 30% higher vulnerability fix rate than those that don't — which is good, it's a good start. What actually turns out to really move the needle is teaching developers about secure coding principles and remediation strategies in their own application, when there's a security finding they don't understand how to fix. We call that remediation coaching. We've measured the effect of that, and it turns out that your fix rate for vulnerabilities when you go through a coaching session with somebody who's actually done this before is more like 150% higher — so it's a little bit more than double the vulnerabilities you would fix without the benefit of that sort of capability.

The big takeaway I take from this is that there are a couple of different ways you can take knowledge about application security, get it to your development teams, bake it into how they're making software, bake it into the skill set they're already bringing to the table — and it actually has an effect on how quickly vulnerabilities get addressed, how many vulnerabilities get addressed, and, over time, we think — this is the part that's harder to measure — how many vulnerabilities get introduced in the first place. Which is what you're trying to do with these feedback cycles in DevOps.

The last piece of this puzzle is the ops side of things, and this is the part that gets up my nose the most when I hear people talking about security and DevOps and they're only talking about the pipeline. I'm like: oh good, we've made sure that what we're shipping is as tested as it can be before we push it into production, or before we have a release candidate that we'll later push to production — so we're not going to do anything about it in production? Which seems kind of odd, right? So, from a couple of different perspectives, I think there are some things you want to think about in production with applications, from a security perspective as well.

The first thing we think about is: number one, what happens if there is an attack, right? I want to know that that's happened.
I want it logged; I want it showing up in bright blinking letters on my dashboard. And I want visibility into it, so that I can take that feedback and act quickly on it — first in an incident-response way, and second in doing root cause analysis with the team: figuring out why did that attack succeed, or why did that attack happen in the first place, and what can we do about it, right? The other thing I'd really like is for the attack to have been attempted and failed, because we had some sort of protective technology.

In the security industry there have been two approaches tried for this. One is network-layer web application firewalls. Anybody use a web application firewall? And do you like them? I don't have anything against them — I think they're great for DDoS. I think the operational problem with web application firewalls has been that, to get them to actually protect against application-layer attacks, you have to know where the vulnerable points are in your application. You have to know what URLs, what form fields, and so on you're looking for, because the raw pattern matching that a WAF gives you, without any sort of training or rules, is not enough to block these attacks. And the problem is that you then have to maintain those rules, along with everything else you're doing, as you ship the application. So if I'm writing rules to protect a vulnerable point in my application as part of my development process — why don't I just write code to fix the vulnerability in the first place? So WAFs are still useful; they're actually a compliance control for things like PCI.
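That raw-pattern-matching limitation is easy to demonstrate. Here's a deliberately naive sketch — the rule and payloads are illustrative, not real WAF signatures — where a blocklist regex catches the obvious SQL injection string but misses a trivially URL-encoded variant of the same payload:

```python
import re
from urllib.parse import unquote

# A naive application-layer blocklist rule, standing in for untrained
# WAF pattern matching.
NAIVE_RULE = re.compile(r"'\s*or\s+'1'\s*=\s*'1", re.IGNORECASE)

def naive_waf_blocks(raw_value):
    return bool(NAIVE_RULE.search(raw_value))

obvious = "nobody' OR '1'='1"
encoded = "nobody%27%20OR%20%271%27%3D%271"  # same payload, URL-encoded

blocked_obvious = naive_waf_blocks(obvious)  # True: the rule fires
blocked_encoded = naive_waf_blocks(encoded)  # False: no quote characters

# The application itself decodes the parameter before using it, so the
# encoded attack still reaches any vulnerable query underneath.
decoded = unquote(encoded)
print(blocked_obvious, blocked_encoded)  # -> True False
```

Closing gaps like this means writing and maintaining rules that track your application's own inputs and decodings — which is exactly the maintenance burden the talk describes.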
I'm not saying don't do them. I'm saying: be aware of what they can and can't do, and what you have to do to make them successful.

The other technology, which is emergent, is something called runtime application self-protection. The concept here — and there are a couple of different ways this gets implemented in the industry — is that you have an agent that lives inside the runtime of the application. If you've used New Relic or something like that: in the Java world, it's an instrumentation agent that sits on top of the Java instrumentation API and can see inside the running application. It can look at the places that are vulnerable; it can see when execution is about to touch those; and if it recognizes that something is an attack — and it does have to have the intelligence to recognize an attack — it can actually rewrite the execution of what's about to happen so that the attack gets neutralized, as well as logging out to a server. So: cool technology, very early stages. Something to keep an eye on, and be aware that it's out there.

The last thing on the operational side that I want to talk about is something your security team can still do, even if they're a separate organization: execute tests of the entire perimeter to make sure there aren't any applications popping up that they didn't know about — that aren't going through your DevOps teams' processes, that aren't going through all these controls we've already talked about — and then just make sure they understand the vulnerability state of those. So that's a different thing that can help from an operational perspective.

So when I go and talk to insurance companies, banks, aircraft manufacturers, ISVs, whoever, about how they're taking steps into DevOps and what they're wrestling with, and they ask me, "Okay, so where should we put the testing tools?" —
I say: I think it's a slightly bigger problem than that. If you're thinking about how to secure DevOps, yes, you want to do testing of the application as part of your integrated pipeline. But you also want to think about what's actually happening on the development side of it, and think about how you take feedback about application security and make your developers better. And I think you want to think about the operational side as well, and look at what's happening when the application is running and under attack. I think that if you broaden the aperture of how you think about security with applications, and keep in mind some of these things that are possible to do now that go way beyond testing, you actually start to have a fighting chance at keeping applications safe against attackers. And that's something we'd all be happy to see. So with that, I'll take any questions.