All right, well, thanks very much for being here, and thanks to Stuart for having me. I really appreciate it. So, the title of the talk: Incorporating a Security Development Lifecycle and Static Code Analysis into Our Everyday Development Lives: An Overview of Theory and Techniques. And I apologize in advance for using Windows on my laptop. I'm sorry. Thanks to HP for supporting my travel here, and also to my family for allowing me to be away and come. A little bit about me real quick: I'm a consultant with Hewlett-Packard Enterprise Services. I work mainly for US public sector entities. Unfortunately, I don't get to develop open source as my full-time gig, but I like to dabble in my spare time, and I do a little security research on the side. So I really hope I'm not preaching to the choir about this topic, and that everybody's already got a security mindset for their project and a set of tools they're using for code analysis and processes in general. But the picture here from my rental car, reminding me to keep left, makes the point: sometimes it's good to talk about obvious things just to make sure something's not forgotten. As usual, I have more slides than I have time for, so some of this I'll be moving through kind of quickly. I want to start by giving a few good resources on these topics. The best one I've seen that's just out on the internet is the Open Web Application Security Project, or OWASP. Check that out for sure. It gives you a lot of good introductory material on developing secure software, implementing a security development lifecycle in general, and a list of tools and that kind of thing that you can use. The books mentioned there are pretty good. If you're interested in the internals of static code analysis, the book by Brian Chess and Jacob West from HP is pretty good. And then I'd also like to mention Coverity. They have a cloud scan service for open source that a lot of open source projects are already using.
It's really nice of them to do, and it's great. And I have to give a plug to HP as well, with their Fortify on Demand service. So let's face it: it's been a bad year for security and open source. I'm not here to pick on any particular projects or point a finger; I certainly make mistakes as well, so don't take offense if you're part of any of these projects. But here are a couple of headlines we're all aware of from this past year: obviously the OpenSSL Heartbleed bug, some problems found in Drupal with SQL injection, and of course the Bash vulnerability, Shellshock. So we really can't sit here and shake our finger at the closed-source giants like Microsoft anymore without also pointing it at ourselves and asking, hey, what can we do better in terms of improving the security of the stuff we're putting out there? I was stuck in the Hong Kong airport for 26 hours because I missed a flight, and I saw this sign, so I figured I needed to incorporate it into my slides somehow. Hopefully in 2015 we're not going to be sheep, and we're going to make things better. Some of the traditional assumptions about open source are just proving to not work out that well. "Many eyes make all bugs shallow," for instance. It's really valid to a large extent, having source code out there available for everyone to see. But it kind of begs the question: who's reviewing it? Is anybody reviewing it? Certainly there's some of that going on, but the best people to review this code are the developers who are writing it and the people who are really familiar with the project themselves. I've got a quote there, right from Wikipedia: at one time, this open source firewall toolkit had 2,000 sites using it, but only 10 people gave them feedback. And then "old code is stable" is another one; with the Bash vulnerability, that problem had been there for years.
And then "a firewall will protect us from our bad coding practices" is not true for zero-days, which we're seeing, of course, more and more of. And "component-based software development and frameworks are more secure": they definitely help improve the situation quite a bit by removing some common problems, but they introduce a whole other set of problems, where bugs are perpetuated across a large set of systems a lot quicker. And with component-based design, it's sometimes hard to keep up to date with all the vulnerabilities in the components that are used to build your code. So who's to blame, and who can fix it? I think all of us; we're all on the hook to make things better. And something I'm kind of passionate about: I think it's a failure to educate, to a large extent, too. I'm not trying to pick on the local university here; this is true for every software engineering class you can probably think of. But if you look at the syllabus, you don't see anything about security engineering, and that's really a failure that needs to be addressed. I'd like to see students graded not only on whether the code compiles, but on whether it has glaring security problems, especially when you get into web development and such. So how can we fight back? As I already said, security-oriented developer education is probably number one. There's been a lot of push, especially on the corporate side, for a security development lifecycle. Applying that to open source has its own set of challenges, but it's something I think every project needs to consider how to do. Organize strong security teams, and be ready for incident response, because incidents are going to happen. I mentioned Drupal earlier in my list of vulnerabilities for the year, but their security team did a great job of getting the fix out the door quickly and making everybody aware of it the best they could. And that's kind of key.
And make your updates as easy as possible. Encourage the use of tools to manage dependencies, and judicious use of automated analysis through static code analysis and human code inspection. That's definitely key: empower the developer, and include security in everything we do. So if we're not examining the code, who is? There are definitely other entities looking at source code, and they have their own motivations. Hackers: obviously, some of them are out there just to be able to say they did it. I recall a talk at DEF CON a couple of years ago where a guy broke the Google TV firmware update process, and he was calling the engineers a bunch of idiots. And I was thinking, it's a lot easier to destroy than to create, so that's not really fair; but you don't want to be the guy being called an idiot, so you want to try to be ahead of these guys. Security research companies ultimately need to monetize what they're doing; they need to feed their findings into firewall products or application containers and that kind of thing. Academia has a publish-or-perish mentality, so they might not have your interests in mind. And your friendly local intelligence community: they don't need back doors. Everybody says, oh, they're putting back doors in the software. It's a lot harder to get back doors into software than it is to simply exploit problems that are already out there for everybody to see in the code. You might not have time to do a thorough security review, but you can bet that these guys are doing one, and that should give you a warm fuzzy. So who should be doing it? As I already said, the developers writing the code are really the best defense, but if they don't have the knowledge and the tools, they're going to miss things. You need an overall system architect who really knows what he's doing in terms of security, because tools are going to find bugs in code, but they're going to miss overall architecture flaws.
And again, don't just depend on your project's security team; take the initiative, if you're a developer, to make sure your code's secure. Oh, I'm sorry, who else should? Companies, I believe, have an obligation. They're using open source, putting it into their products, and making money off of it. I think they have an obligation to try to make the security better, maybe even to impose their will onto the community to an extent, to enforce some security standards. And then, if you're offshoring development, offshore developers might not be up to date on some security stuff, so you definitely want to be reviewing that code. A quick example, shown here: there was a talk at Black Hat and DEF CON this past year by MITRE, a UEFI firmware exploit where, if you're running Windows 8, which hopefully none of you are, it can basically flash your firmware with a malicious agent that is persistent because it's in firmware. The way they figured out how to do it is they looked at the UEFI EDK code that's out there. Intel published it, put it out for everybody to use and review. All the companies, HP included, used it. They didn't look at it. You can see right here, there are some comments: "make more checks later." I think companies that leverage this code really do have an obligation to kind of look at it for themselves, too. So there are a lot of inherent difficulties. I probably don't have to say too much about this. I mean, if you look at a project, and there's been a lot of talk here about OpenStack, you have thousands of people contributing to it. You can't really enforce, easily enforce, I should say, a true security development lifecycle. So you've really got to try to catch things before you put out official builds and such. But if you're working on small projects, or on projects for your company, then you really do have a chance to implement a good process. So there are a number of trends that are kind of helping and hurting.
So we're using more frameworks and APIs, and these are removing common security problems. But again, lazy admins and the time delay in implementing patches are problems. Component-based design, as I already mentioned, can be a problem with version management for affected versions. Rapid development lifecycles are great for getting things out to customers, but they might not leave time for adequate security testing. There are a lot of good resources out there, but not all developers have equal knowledge. There are security researchers who are looking really hard at open source, but they might not give it enough time. And tools like static code analysis are being used on an increasing basis, but they don't cover everything. And a lot of times they're not used until the end, when it would be really nice if they were in the hands of the developers working on the code, so they can check it themselves before they submit code. More trends: identifying security issues after the fact is a costly way of doing things. In the race to market, IP gets stale quickly, we all know that, so we want to get cool things out quickly; security is in a future release, et cetera. And then there's the changing threat landscape, things like nation-state, advanced persistent threats. And embedded systems are just a mess when you think about all the things that are out there, like phones, that companies have no motivation to create updates for, but that are still out there connected to the network. So advanced persistent threats are, as we mentioned, the changing threat landscape that we always hear about. We need to look harder at the traditional defenses of the past in order to try to, in some way, counteract this threat. And a good example there at the end is a quote from Owen O'Malley, one of the Hadoop designers. He was saying that the security design really had nothing to do with actual hackers, because it was going to be behind firewalls and such.
And that assumption is just not true anymore, because with APTs, they'll find a way around firewalls and whatnot. The last level of defense is the code base, and you might as well do the best you can to try to keep them out. Coverity published a pretty good report; I'd recommend reading it, it's from last year. They're scanning a number of open source projects, and here are a couple of highlights: for the first year, open source had a lower defect density than the proprietary code they've scanned. They also had a kind of interesting one. They've been scanning the Linux kernel for a number of years, and you can see that in 2013 more defects were fixed than were introduced, which shows that backlogs are getting worked down. These numbers are kind of funny, too, because what they've identified as defects, the developers might have decided aren't defects, or they have a risk mitigation strategy. So, on to the SDLC, the Security Development Lifecycle, or Secure Software Development Lifecycle. There are a lot of formal approaches, but they all have kind of the same goal in mind: basically, try to get security into each stage of your development process. Who should contribute, and where? Everybody has a role, from the systems engineers to developers, testers, and a security team. It's kind of critical that everybody have a security mindset. So here it is in a nutshell, in graphical format at that. No one likes to do it, but you really need requirements that also include security. You know, that initial phase of trying to figure that out is often skipped when you have an organic kind of open source collaboration, and you might even need to go back and try to retrofit it, but it's definitely a good first step, because that leads into your design phase, where you're trying to get your data flows and figure out how things are going to interact.
And if you don't have that figured out, you can't implement the next kind of principle, which is threat modeling. And then of course there's the coding: having developers who are trained in good coding practices and using static code analysis during development, or at least at the end of it. And testing that includes vulnerability testing. If you have good requirements and design, you can trace your security requirements and your threat model to tests, so you have a traceability matrix, and do fuzz testing on top of that. Testing might also be the place where you're running your final static code analysis, generating a report, and reviewing it. Then there's deployment: making sure you get good documentation out that explains the proper way to configure the software if you want it to be secure. And there's no reason not to publish the threat model, risk model, or whatever other documentation you have, so somebody can make a good independent analysis of what they're getting into. The image on the bottom is Microsoft's version of this. It's pretty similar; they threw in a training step as well, and a response phase, so you definitely want to have an incident response plan as part of this, too. So there are a couple of different methods of threat modeling, but the long and short of it is: you need to have your data flows figured out, your user interactions, and your security requirements, and then you kind of try to reason about it. STRIDE is a way to do that, from Microsoft, and DREAD is a threat ranking system they have, and they're actually pretty good ways to figure it out. But the general process is: you decompose the application, determine the various threats, and then you determine whether you're going to fix each one somehow in code via countermeasures, or mitigate it via some other mechanism, like documenting the threat so that somebody knows it's there.
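The decompose / determine / decide process just described can be captured even in a very lightweight structured form. Here is a minimal sketch in Python (the component names, threats, and dispositions are invented purely for illustration, not taken from any real threat model) of recording STRIDE-categorized threats along with the fix-versus-document decision:

```python
# Hypothetical, minimal threat-model record keeping: one entry per threat,
# tagged with its STRIDE category and how the team decided to handle it.
STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service",
          "Elevation of privilege"]

def make_threat(category, component, description, disposition):
    # disposition: fix in code ("countermeasure"), reduce elsewhere
    # ("mitigate"), or just write it down so people know ("document").
    assert category in STRIDE, "unknown STRIDE category"
    assert disposition in ("countermeasure", "mitigate", "document")
    return {"category": category, "component": component,
            "description": description, "disposition": disposition}

# Example entries for an imaginary "vote submission" component:
threats = [
    make_threat("Spoofing", "vote form",
                "no user accounts; votes tied only to a cookie",
                "document"),
    make_threat("Tampering", "vote form",
                "unvalidated POST parameters accepted",
                "countermeasure"),
]

# Anything not fixed in code should at least be written down somewhere.
for t in threats:
    if t["disposition"] != "countermeasure":
        print("DOCUMENT:", t["category"], "in", t["component"],
              "->", t["description"])
```

Even a table this simple gives testers and reviewers something to trace against later, which is the traceability-matrix point above.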
So here's a data flow diagram where they've added, I guess, security boundary layers, the dotted red lines. And then here's STRIDE. That stands for spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege. Basically, you go through each of these categories and think of ways that your particular code could be susceptible to these various types of attacks. I don't have time to go into this, but here's an example of what I did for Hadoop for another project; you essentially go through each item and try to figure out the weaknesses in the design. Another thing you might want to do is document attack paths and the root causes of the threats that are there, and then also document what you're going to do to counteract them. And your use cases you can turn into use and abuse cases, if you bothered to create them. This is a good example of why it's good to do all this documentation work, because it makes it easier to see the various ways that a hacker, shown here, could try to abuse the use cases of the system. You want to try to rank the threats in terms of their likelihood. DREAD is an example of how to do that; it stands for damage, reproducibility, exploitability, affected users, and discoverability. The idea, which you can read about, is that you assign a numeric value to each, and then you come up with essentially a priority for what you're going to address, and what you might not address if you don't have the time. And don't dismiss the small stuff; something that might seem small when you're working this out might not be. I have a silly little example here, and I'll explain what's going on. My three-year-old is in a preschool program at a small Catholic school, and the local utility company had a contest where the schools created some sort of video about not electrocuting yourself on a downed power line and whatnot.
And they had a site where you could vote for the best one, and the winner becomes eligible to win $10,000 for the school. I hate stuff like this, because to me it's a popularity contest; obviously the small school is at a disadvantage to the big school. And it says they limit you to three votes per day per IP address. Well, when my wife voted and she told me to vote, I thought, I shouldn't be able to vote, because if it's by IP address and we're behind a NAT, we have one IP address. Of course, that wasn't the case. All they're doing is using a cookie and accepting unvalidated input. All you had to do was get a valid session, and then it simply made an Ajax POST to a form to submit your vote. So, a little wget script and thousands of votes later, they're in the finals. It's an example where they probably figured nobody was going to bother doing that, or they didn't know what they were doing. Really, the only way to implement a voting system like that properly is to have user accounts, and they probably didn't want to do that. So they ranked that threat low, and their countermeasure was having a review process after the votes were submitted to actually determine the winner. A small, stupid example. But testing is definitely a golden opportunity: if you don't catch it anywhere else, you've got to try to catch it there. You definitely want to have security-oriented tests as part of your unit tests, and do fuzz testing when you can. And if there's a separate test team, make sure they know the threat model and have access to all the documentation and that kind of thing. Check out the OWASP link; their testing guide is pretty good. What about Agile?
Well, the key point: Agile makes things a little bit harder, but it also maybe makes things a little bit easier to some extent. You've still got to try to shoehorn the security in there, whether it's during major design reviews, and definitely for any final release that's going out. There's a lot of research out there about how to get this fitted in, so if you're doing Agile, be sure to read that. We definitely don't have time to talk about this, but be aware of secure coding practices and static analysis. Man, I'm just about out of time. So, static code analysis is something everybody can do, via the free cloud service that I mentioned; or, if you're working for a corporation, there are a number of closed-source solutions, including HP's, that are really good, and you can integrate them into various places in your development process, either as part of an IDE or as part of your CI server; you can run it as part of Jenkins or something like that. True static code analysis is an undecidable problem. If you're a computer science buff, you've probably heard of the halting problem; it's the same kind of problem. So these tools get around that with heuristics, pattern matching, and basically coding rules. A static analyzer works in a similar way to a compiler, but it does some extra work to figure out where tainted data could be flowing through the program, null pointers, buffer overflows, and such. So it's a bit more work than the compiler does, but it kind of works in conjunction with it. So where to use it? Ideally, use it in your IDE, if you use an IDE, at build time. If you have a huge project that you're compiling, that's not really practical; it sometimes takes a while to run. So you can run it as part of your continuous integration, or you can script it. And the output, which I'm going to show you if you guys bear with me a second, is a report that you can then review. Use it early and often. Here are some various options for you.
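The taint-flow idea just described, following untrusted data from its sources toward sensitive operations, can be illustrated with a toy propagation pass. This is only a sketch of the concept (the tiny "program" below is invented for illustration); real analyzers build far richer models of control and data flow:

```python
# Toy taint propagation over a tiny straight-line "program".
# Each statement is (target, sources): the target becomes tainted
# if any of its sources is already tainted.
program = [
    ("req",  ["NETWORK_INPUT"]),   # data arrives from the network
    ("size", ["req"]),             # length field copied out of the request
    ("buf",  ["size", "heap"]),    # buffer built using the tainted size
]

tainted = {"NETWORK_INPUT"}
for target, sources in program:
    if any(s in tainted for s in sources):
        tainted.add(target)

# A memcpy-like sink fed a tainted length is exactly the kind of
# pattern a scanner flags (compare the Heartbleed memcpy later on).
sink_args = ["buf", "size"]
findings = [arg for arg in sink_args if arg in tainted]
print("tainted sink arguments:", findings)  # -> ['buf', 'size']
```

The heuristics and coding rules mentioned above are essentially large, carefully tuned catalogs of which functions count as sources, which count as sinks, and which calls sanitize data along the way.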
There are a couple of open source options you can take a look at; I don't have any experience with these. If you care about the Gartner Magic Quadrant, which I don't really, you can see the players there. As for limitations: it's not going to find everything, and I'll give you a couple of quick examples of things it missed. It can only find problems in the code, not the architecture, of course. And some things just aren't obvious to the analysis tool, because there are convoluted input paths and such. And it doesn't read your comments. One nice thing is that when new things come out, like the OpenSSL Heartbleed vulnerability, they figured out a new pattern-matching algorithm to detect similar types of problems, which was essentially a buffer over-read kind of problem. So, thanks; I'm going to show you a couple of quick examples, if you don't mind. Sorry, this is kind of awkward for me to see at this resolution. So, I've run a report on my homework from years ago, when I was learning how to do PHP. What you can see is that it breaks things down into different categories of problems, critical, high, et cetera, and then it takes you right to the problem in the code. Here, for example, it's taking unvalidated input right from, I guess, an input field, and just displaying it back. And it'll give you some details about why that's a problem, and some recommendations on how to fix it. So these are simple inputs, simple problems that it can easily find. I'll quickly show you this integrated with Eclipse; this is kind of how I like to use it. Fortify 4.5's Eclipse integration is really only good for Java right now; everything else you've kind of got to run outside and then bring back in. But it gives you a nice little toolbar, and you can analyze the project and bring up your audit results there.
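The unvalidated-input finding just shown, untrusted input echoed straight back into the page, is the classic reflected cross-site-scripting pattern. Here is a minimal sketch (hypothetical code, not the actual PHP homework) of the flaw and the kind of fix the tool recommends, escaping output before reflecting it:

```python
import html

def render_greeting_unsafe(user_input):
    # Flaw: untrusted input echoed straight back into the page markup.
    return "<p>Hello, " + user_input + "!</p>"

def render_greeting_safe(user_input):
    # Fix: escape before reflecting, so any markup becomes inert text.
    return "<p>Hello, " + html.escape(user_input) + "!</p>"

payload = "<script>alert(1)</script>"
print(render_greeting_unsafe(payload))  # script tag survives intact
print(render_greeting_safe(payload))    # &lt;script&gt;... rendered harmless
```

This is exactly the kind of simple source-to-sink path a scanner finds reliably; the harder cases come later, where the path from input to output is convoluted.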
And then, again, you can switch right back and forth between your code and the results of your scan. In this case, my homework from a few years ago was pretty good, except it took me to an issue with the encryption I was using. If I were a code reviewer looking at this, it's actually not a low finding; this is high by default, because it's just SHA-1 hashing a value and calling that encryption, when that's not really encryption. It should at least be adding some sort of salt value to it. Anyway, let me show you one other quick one. This is OpenSSL, an old version of it, one of the vulnerable versions, 1.0.1. In all fairness, Fortify 4.5 would not have found this original bug; as I said, it does now find it. And you can see it right here. Yep, I just lost it. There you are. Okay, that's the line right there, the memcpy line. If you actually look at what's going on here, you can see why it was kind of hard for the tool to figure out, because it's not obvious that that value came from a network source, but they figured out some way to do it. And this is kind of cool: it gives you a little diagram that shows exactly where the taint came from, taint from the network, as you can see there. If you actually understood the code, you could get some good insights from this. Another thing I found interesting was the number of other critical vulnerabilities it found. That's why you need a manual review on top of this, because I can pretty much guarantee that most of these are probably nothing, false positives. But who knows? Until you actually go through and look at them, there could be a number of other vulnerabilities out there waiting for somebody to look at them. And one last look, at a failure: this is Drupal, version seven.
Now, this should be a version vulnerable to the SQL injection vulnerability, and you can see that it found zero critical vulnerabilities. It did not find it. And this shows you, too, that the Drupal team does use static code analysis as part of their final build processes, so they pretty much make sure it gets a clean analysis back. That's good. But, again, this is one of the ones where I've lost my place; if you look at the function, it's not obvious that the input is coming from a tainted source. So that kind of shows you where, sometimes, you do need to do manual review on top of it. That's all my examples. So I'll take any questions, and I'm completely out of time, but thanks for your attention. Got some time for questions? Okay. Do you see a lot of difference compared to something like the LLVM Clang static analysis tools? Is that getting to be a pretty good, easy thing to go to now that it's open, or do we still get a large advantage from a proprietary one? Oh, okay. Well, I don't have a lot of experience with the open source security-oriented ones. If you're referring to, like, the ones that are kind of built in, like the... Well, not just security, just correctness as well. Correctness, right. Yeah, so the security stuff does a lot on top of what the correctness tools do. The comparison I'll use is that a correctness tool is like a spell checker: it'll help you if you're usually a good speller but occasionally mistype something, but it doesn't do the deep analysis that some of these other tools do. My understanding is that the open source one, the first one I had listed here, I forget what it's called, VisualCodeGrepper, is getting pretty good. And then, as I said, the Coverity thing is nice, where you can create a project on their site, submit your code, and get a report back.
It's not as ideal as being able to integrate it into your own development environment, but, you know, it's better than nothing. And then, like I said, Hadoop, for instance, is on there, so you can sign up. There are project contributors who can really do stuff with the scan, but you can also sign up as a reviewer, and anybody can do that. So you can review their scan results without having to pay anything or what have you. That's really nice. I've been on the receiving end of a customer mailing Coverity reports and wanting answers on why these weren't fixed. Right, and, you know, again, you probably see a lot of them and you're like, well, this is nothing, right? But it can help, especially to guide somebody who's not a code reviewer through the code, to pinpoint things that might have been missed. And if you're using something like Fortify, you can have it in a collaborative environment, and everybody can give their feedback and say, okay, well, this is nothing, or this is that, or this is mitigated by XYZ. If you have a huge existing project that you're going back at, like OpenSSL, that's going to be a huge task. But if you're building something new, it's a really good opportunity to get this stuff in early, and the principle of "introduce no new bugs," at least no new SCA-detectable bugs, is a good step. Any other questions around the room? All right, thank you. Thank you very much.