Welcome, everybody, to my talk. Thanks for coming. I hope everyone's having a good conference. I know I am. Is everybody learning a lot? Excellent. I try to leave a few minutes when I do talks, because I learn so much at conferences that I want to talk about the stuff I'm learning in other people's talks more than what I came to talk about. So forgive me if I get on a side note about something I heard in the last talk, which was phenomenal. But anyway, I hope you get a lot out of my talk today. Before I say anything else, let me get this disclaimer out of the way. I work for the US Naval Research Laboratory, but my talk today is my opinions, based on my professional experience and my personal experience. My opinions don't represent those of the US Navy, the US government, anything like that. As a matter of fact, if you do any research, you'll probably find there are a lot of people in the government who disagree with me on a lot of things. Also, another disclaimer: I say "we" a lot when I talk, because I have a really close-knit team, and it's an awesome team. We argue about stuff; we don't always agree. But when I say "we," I'm not talking about Big Brother or all the developers I work with. I'm just subconsciously referring to the fact that we try to make as many decisions as we can as a team. So I apologize in advance when I say "we." Enough about that. A little bit about me: I consider myself a good programmer. Not a great programmer, but a good programmer. And I like to keep things simple. I study a martial art called Aikido, and in Aikido we have a lot of sayings. One of them is that an advanced technique is just a simple technique done better. I like to apply that not just in martial arts but in all aspects of my life, and programming is no exception. So in everything I do and everything I talk about, the underlying theme is: keep things as simple as you possibly can. So, just a little bit about this Naval Research Lab thing.
It was started in 1923 by Congress on the recommendation of this guy, Thomas Edison, who said we needed a Naval Research Lab. And so we have one. The group I work in, the systems group, has come up with some pretty cool technology you may have used. Most notably, Tor, the Onion Router, came out of NRL. A lot of the foundational technologies in virtual private networking were developed by Cathy Meadows and Ran Atkinson, two doctors at NRL. The Vanguard satellite program came out of NRL, which was America's first satellite program; of course, Sputnik was first, out of the Soviet Union. And there's a great paper from 1987 called Reasoning About Security Models, written by Dr. John McLean, who's my boss's boss's boss's boss's boss's boss. It's a great paper. It talks about System Z, and if you're into academics, it's a really cool theory about security. All that said, my talk is not about anything military-related. It's not academia. It's not buzzword bingo. I had a really cool buzzword bingo slide, but I took it out because CC's was way better. So anyway, what am I going to be talking about? Well, I want to spend some time unpacking what I mean by "security critical." Like we just heard in the last talk, people throw phrases around, and they mean different things to different people, so I want to unpack what I mean by it. Sorry about that. I also want to work through a use case. Now, this isn't an actual use case; it's a composite of experiences I've had. It borrows from systems I've worked on and developed, but it's not actually representative of any system we've ever built. But the main reason I'm here is this last point: next steps. We've got a lot of initiatives we're interested in pursuing to improve our ability to use Ruby in security-critical applications. Some of them we know how to do well. Others we have an idea how to do, but we probably wouldn't do them well.
And others we know we can't do. So if anything you see on my next-steps slides rings a bell with you, please come talk to me after the talk, because we're interested in getting help from people who want to do cool stuff with security in Ruby. Anyway, there's a great talk I saw that influenced my thinking about this subject with Ruby. Back in 2012 I was at a conference called Software Craftsmanship North America. I really recommend you go sometime if you have the chance; it's a great conference. Uncle Bob gave this talk called Reasonable Expectations of a CTO. You probably haven't seen it; it's on Vimeo. If you haven't seen it, look it up. I'm not going to summarize it for you, but watch it, and as you watch it, just add security to the list of problems that systems have. It's very applicable to the security problem as well, and it rings even more true today than when he gave the talk in 2012. So when we talk about computer security, one of the things we talk about a lot is assurance. "Assure," used as a verb, is something that I do to assure you that everything is going to be okay, that there's no problem. Well, when I talk about assurance, I'm not talking about telling you everything is going to be okay. Because what's the first thing you think when I tell you everything's going to be okay? Something's wrong. So I don't want to assure you of anything. What I want to do is talk about giving you assurances that allow you to make a decision of your own. Even if you don't like the assurances you get when you do a security analysis on something, at least you know where you stand, and that's really useful. So when I talk about assurances, I'm not trying to tell you everything's going to be okay. I'm talking about evidence. We've all seen this chart before. Whether you're trying to make money, make the world a better place, or solve a security problem, this chart is not avoidable, to my knowledge.
And when we go about solving a security problem, we bump into it too. We look at it and go, well, we've got a few choices. We can do something really clever that's going to outsmart the attackers. We could go buy a really cool library that's going to provide all this super awesome security and solve all our problems. Or we could hire some outside consultant who's going to assure us that everything is going to be okay. Well, don't do any of that. Because attackers are really, really clever. They're more clever than me; they're more clever than you. What's more, there are lots of them, and they have lots of time. You build a feature, and it's on to the next feature. They are out there hammering on your stuff day after day, sometimes teams of them, if you're unlucky enough to be a target; most of you aren't. But we're going to make mistakes in our code. It's just a fact of life. There are going to be bugs. There are going to be security bugs. So I'm going to talk about what we can do to defend ourselves. A key point I want to make today is that a security-critical system should have the right security controls, in the right places, with the right assurances. I'll say that again: a security-critical system should have the right security controls, in the right places, with the right assurances. Now, I like to do that with architecture. We construct architecture, and a lot of times when we're building code, the principles that make code awesome are the same principles that make code secure. We want to reduce complexity. We want to localize functionality. We want to improve test coverage, things like that. But we also want to make sure we have the right controls in the right places. A firewall at the front door isn't going to keep a guy with a gun out of your server room, just like a guy with a gun in your server room isn't going to keep hackers out of your server.
So you've got to not only consider the architecture of your code, your design, and your test coverage, but also think about what controls you're using, where and how, and more specifically, how we layer those controls in our system. Some of these acronyms you may not recognize; I'll explain them later. But these are really the security control layers you should consider, at a minimum, in your application. You have your operating system. You have your security framework. And then you have your application framework. These are where we're going to layer in our assurances. But what are these assurances? Are they something squishy that we can't measure? Well, kind of. But we do have the ability to talk about them in a semi-structured way, and I like to talk about them in terms of this NEAT principle. NEAT stands for non-bypassable, evaluatable, always invoked, and tamper evident. The more you can measure your security controls and answer these questions nodding your head instead of shaking your head, the more security you're going to have in your controls. So I just want to go through these real quick. Non-bypassable is pretty easy to describe. If you've got a circuit breaker, it keeps your electronics from getting fried when too much electricity is going over the wire; it trips the breaker and keeps the electricity from flowing. But if there's a wire going around the circuit breaker, directly from the power grid to your laptop, it's not going to do you any good, even if it does trip, because you're going to get fried. So for a good security control to work, it has to be the only way from point A to point B. Evaluatable is a little harder to talk about. There are a lot of things, like symbolic execution engines and static analysis tools, that we can use to measure and evaluate code security. But for most of you here, I think a great thing to do, if you haven't done it, is follow the instructions on the screen.
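I can't reproduce the slide here, but the instructions amount to running the flog gem over your code and looking at the per-method scores. As a hedged illustration, with made-up names, here's the kind of change that drives a flog score down: a branchy method rewritten as a lookup table, with identical behavior.

```ruby
# Hypothetical example: every elsif in the branchy version adds
# complexity that a tool like flog has to account for.
def risk_label_branchy(level)
  if level >= 9
    "critical"
  elsif level >= 6
    "high"
  elsif level >= 3
    "medium"
  else
    "low"
  end
end

# The table-driven version does the same thing with one branch point,
# which is easier to evaluate by eye and scores lower.
RISK_TABLE = [[9, "critical"], [6, "high"], [3, "medium"]].freeze

def risk_label(level)
  _, label = RISK_TABLE.find { |threshold, _| level >= threshold }
  label || "low"
end
```

Same inputs, same outputs, fewer branches to reason about when you're reading the security-enforcing path.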
And you can get a good idea of how readable, how evaluatable, your code is. If your score is, say, less than 20... the lower the better, but if it's code that needs to be really secure, you should definitely be below 20. So keeping things small, reducing branches, and not using things like eval or call are good things to do when you're in a piece of code you consider security-enforcing in your application. Always invoked: I think the HTML sanitizer in Action View is a great example. When it first came out, it was something you could call if you wanted, but you could also forget to call it really easily. At some point they brought it into Action View, I think, and made it the default, so you'd have to go out of your way not to call it. I haven't used Rails in a while; I'm one of those weird Ruby people that doesn't use Rails very much. But this is a C example, actually, and I like having things like this littered in the headers, because it makes the compiler insult people who do dumb things. Not to be judgmental: it's a learning experience for all of us, and we all get a good laugh out of it. Type this in and see what your compiler says to you. And then tamper evident. This is another one that's a little tricky to describe. This guy here is a coal miner, and he's got a little canary with him. Back in the day, coal miners used to bring these canaries into the coal mine with them, because when toxic gas leaked out of the rocks, it would kill the canary well before it would kill them. While it's kind of gruesome to think about, it was a good way for this guy to get home to his family with the technology he had. We do something similar in binaries every day. We put these little cookies on the stack, so if there's a buffer overrun in your application, the A's, the NOP sled, whatever it is, is going to crush that cookie if the attacker is not careful, and we can exit the program safely.
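Jumping back to always invoked for a second: Ruby gives you a cheap version of that compiler trick. A minimal sketch, with made-up class and method names, using Module#prepend so the sanitizing step runs even when a caller goes straight to save:

```ruby
# Prepending puts the sanitizer ahead of Store#save in the method
# lookup chain, so there is no way to call save without it.
module AlwaysSanitized
  def save(text)
    super(text.delete(";"))  # strip semicolons, then run the real save
  end
end

class Store
  prepend AlwaysSanitized

  attr_reader :data

  def save(text)
    @data = text
  end
end
```

So `Store.new.save("a;b")` stores "ab"; nobody has to remember to sanitize. The stack cookie does the same job at the binary level: a check the attacker can't route around without tripping it.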
It's a way oversimplification, and it's not bulletproof, but if you're interested in more, I've got a little link on my slide here. If you just search for "stack canaries" or "stack smashing protection," you can learn a lot more about the ways we protect binaries. So I've got my checklist: some controls I want to talk about today, and some assurances I want to apply to those controls to see how we're doing. We're going to use this checklist as we go through the rest of this brief. Like I said, the use case I'm going through is one example in three parts, and it's not a system I've actually built; it's little pieces here and there from different projects I've worked on that I think represent good explanations of these security principles. So, at the base of your system are your operating system controls. No matter how secure your code is, if your operating system is not configured properly, you're screwed. And the main security feature in your operating system is access control. Have the right security control: the security geeks talk about mandatory access controls, and it can get complex, but they're actually pretty simple. It just means something that the administrator sets up at boot, and it can't be changed. The neat thing about mandatory access controls is that they're nice and reliable. They don't change. They also have a really pretty static code base supporting them, because they're not changing; they get set at boot and they don't change, so it's easy to make sure that code works well. So at the base of your application, of your system, it's good to use your operating system's access control mechanisms, preferably in a mandatory way as opposed to a discretionary way. It keeps your system simple.
So a use case might be: you've got multiple databases, and you want to be really sure that people on different networks can only read from the database they're authorized for, and you want to be really, really careful about what gets into those databases. There are all sorts of examples of why we'd want to do that. But rather than trusting our code, which does the very best it can to make sure there's no SQL in our POSTs, we basically give our applications read-only access to some of these databases. That way, no matter how bad our network application is, there's no way it's going to be able to read from the databases it's not allowed to see, and it's not going to be able to write to any of the databases. Then we simply implement a piece of glue, a little router, and we can do that very securely. All it does is make sure that the write requests go to the right places. This way, we can ensure that our information flows are set up in a very secure manner. Let's validate that. Non-bypassable: the fact that these databases have read-only permissions means attackers are basically going to have to own your box to get around that. So you have reasonable assurance that any write is going to have to go through the data-flow pipeline I've created. Evaluatable: well, the security-critical piece of code here is just the simple little router that takes write requests to the database owners, so I can keep that pretty small, pretty evaluatable, and I can use a type-safe language like Rust or something like that. Always invoked: every single file system call you make has to go through the kernel. That's pretty reliably always invoked. And then I tried to come up with a good example for tamper evident, but making sure your operating system isn't being tampered with is kind of outside the scope of this talk. If you're interested, let me know, but I skipped it because I didn't want to bore you all to death with it. So, some takeaways.
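To make the takeaways concrete, here's a rough sketch of what that little router glue could look like. The networks, database names, and policy table are all invented for illustration, and the real write protection still comes from the database-level read-only grants, not from this Ruby:

```ruby
# Hypothetical glue: the only component holding write credentials.
# Network-facing apps get read-only handles; every write must come
# through route_write, which enforces the network-to-database policy.
class WriteRouter
  POLICY = { "red_net" => "red_db", "blue_net" => "blue_db" }.freeze

  def initialize(databases)
    @databases = databases  # name => writable store handle
  end

  def route_write(network, record)
    db = POLICY.fetch(network) { raise "no write path for #{network}" }
    @databases.fetch(db) << record
  end
end
```

The point is how little code there is: a frozen policy table and one method, small enough to keep evaluatable, or to port to a type-safe language if you want.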
Use the access control mechanisms in the operating system if you can, and wrap them directly into your application, maybe, say, with FFI. But don't stop there, because it's a pain in the butt to develop your application with these things in place, and, more to the point, if you screw up, you can crash your development box. You don't want to do that. So do your day-to-day development with a stub. We ended up in a situation where we had a third party that was going to help us write our application. We needed their help, and they didn't have our MAC infrastructure, so we just gave them this stub. They wrote a really cool app that we couldn't write, then they sent us back the code, and it was relatively easy for us to take the stub out and integrate their code in with our application. And then finally, it's only mandatory access control if the application doesn't have the ability to change the policy. So, if you can, avoid giving your application system privileges. If you look at Stagefright on Android, it's a really cool library. It does lots of awesome stuff. I could never have written it, and I don't blame them in the least for making a small little mistake; it's inevitable. But if it hadn't had system privileges, and maybe it had to, I don't know, I haven't researched it that well, it wouldn't have been as catastrophic as it was had it only had user privileges. So I'm not saying never give your software system privileges, but think really hard about it, because if you make a mistake, it's no longer your box, it's their box. One of the things we want to do is make it easier to make test doubles for our file system objects, because we use the operating system so much in security. I've talked to some of the guys from Test Double, and I don't know if it's such a good idea, but it's something we've been playing with, and if you want to convince me it's a bad idea, or you want to help, let me know.
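In miniature, the stub idea might look like this. Everything here is illustrative: the real enforcer would wrap the OS MAC calls via FFI, while the stub just mimics its interface so day-to-day development never touches a MAC-enforcing kernel:

```ruby
# The real class: its readable? would call down into the OS MAC
# layer (omitted here; names are made up for this sketch).
class MacEnforcer
  def readable?(label, path)
    raise NotImplementedError, "FFI call into the OS MAC layer goes here"
  end
end

# The stub: same interface, pure Ruby, safe to develop against.
class StubMacEnforcer
  def initialize(grants)
    @grants = grants  # e.g. { "web" => ["/data/public"] }
  end

  def readable?(label, path)
    Array(@grants[label]).any? { |prefix| path.start_with?(prefix) }
  end
end
```

At boot you pick one, say off an environment variable, and the rest of the application never knows the difference; that's also what made it easy to hand the stub to the third party.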
The other thing is, like I said, we're looking at using Rust more for our type-safe, security-critical code and our performance-critical code, and everything I've learned about Rust-Ruby integration I learned from this blog entry I have here. I'm not very good at it, so if there are more resources any of you know of, please let me know. Huh? Good to know. So, moving on through the layers of the onion of our use case is what I'm calling our services security framework. I didn't have a really good name for this, but basically, if we're going to separate our application into a bunch of processes, we're going to have to integrate them together in some way, and those integration points are great places for attackers to break your system. Things like inter-process communication or database access: these are great places where attackers are able to get in and do things like CSRF, internationalization attacks, and SQL injection, and a lot of the time it's hard for us to get our paying customers to understand the sorts of things that can happen if we don't do a really good job with this. I don't know, have any of you ever read Ender's Shadow? I know a lot of people have read Ender's Game, but Ender's Shadow is a little less well read. There's a great scene in there where Bean's talking to his boss, and he basically points out that as your attack surface grows, defense becomes impossible. With these sorts of systems that we're building, our attack surface is growing. Fortunately not as big as the scope of the aliens in Ender's Game and Ender's Shadow, so it's not hopeless, but it's bad, and I don't have enough confidence in my ability and my team's ability, even though we've been doing this for a long time, to cover every nook and cranny such that our code can't be changed in 10 years to allow this stuff through.
So we stick with this principle of separate, isolate, and integrate. Essentially, every time a process component that's been separated ingests data, it uses some sort of domain-specific language to enforce the security policies, such that it's protected; and then when it sends data out, it also tries to protect the data and protect the next process. That doesn't make much sense; I tried to come up with a better way to explain it, but I think I'm just going to have to use an example. So let's take a really, really oversimplified example and say that we want to make sure no semicolons make it into storage. Now, there are a lot of web attacks that require semicolons, so, whether or not you'd actually want to do this exact thing, it might be a useful kind of policy for protecting against a lot of web attacks. The example I'm about to give does not take internationalization considerations into account, so don't just use it as-is. Internationalization is important for apps, but it's also important for security, so I just wanted to throw that in there: keep internationalization in mind when you're building your app, especially with regard to how it impacts security. So let's look at some of this application-layer pre- and post-processing we do. What's this code doing? Well, it's not entirely clear. It looks like it's doing some sort of escaping to turn the semicolons into something else, because the semicolons might be natural, and then it's doing some sort of resolution to make sure the semicolon escape token doesn't show up in there, and then it sends it off. And when it goes to render the data, it resolves it back to what it was. Do I trust this code? Kind of. But it's also kind of ugly, and this is just one policy. Imagine an application with five or six hundred policies you have to apply; this is going to get kind of ugly. Let's look at the other side, the storage side. I trust this code a lot more.
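The slides aren't reproduced here, but a hedged reconstruction of the two halves, the application-layer escaping and the storage-side check, might look like this. The helper names and the escape token are invented:

```ruby
ESCAPE_TOKEN = "%3B"  # stand-in for whatever escape the real code used

# Application layer: reject input that already contains the escape
# token, then escape natural semicolons so legitimate ones survive.
def prepare_for_storage(text)
  raise ArgumentError, "escape token in input" if text.include?(ESCAPE_TOKEN)
  text.gsub(";", ESCAPE_TOKEN)
end

# Render path: resolve the escapes back on the way out.
def render(text)
  text.gsub(ESCAPE_TOKEN, ";")
end

# Storage layer: the small, security-critical check. It does not hand
# back a return code for the caller to ignore; the talk's version dies
# outright, a raise just keeps this sketch testable.
def store!(storage, text)
  raise SecurityError, "semicolon reached storage" if text.include?(";")
  storage << text
end
```

Notice the asymmetry: the escaping side is the fiddly part, and the storage side stays a one-line check that can fail hard on its own.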
Its job is very simple: it looks for semicolons. There are not supposed to be any semicolons in the data by that point. What's more, if you look at line nine, it doesn't trust the caller to check the return code before it moves on. If code is security critical, you don't want to trust that the caller is going to check your return code, because maybe you're checking it now, but maybe someone's going to introduce a problem in two years that breaks the check. So if it's actually security critical, don't rely on the caller to check your return code; handle it right there. Die might seem a little extreme, but like I said, the application was supposed to have gotten rid of all the semicolons, so if there's a semicolon there, either there's a horrible problem or somebody's taken over another application. This is an example of how you can do tamper evidence without using fancy things like stack canaries or anything like that. Ruby's really awesome at monkey patching code into classes, so there are all sorts of ways you can trigger these hooks so that they automatically get called; we learned in the last talk how refinements could even be used to make sure these things get called, which was really cool. And again, the point is the check was very small. The normalization of the data in preparation for storage may have been complex, but it allowed for a very simple check at the time of storage. That brings up a point I want to get to in just a minute, but first I wanted to mention some other cool, related technologies. I don't know if any of you have used ANTLR or Parslet or anything like that. Parsers are cool, but it's always hard to figure out what to do with the parser once you've got that AST. Tools like ANTLR and Treetop and Parslet and others make it really easy to hook behaviors into your code when the parser hits certain things, so if you need to do content validation or content parsing, you should take a look at those projects.
Another really cool tool is checksec.sh. It's literally just a bash script, and it analyzes your binaries to see what sorts of exploit mitigations have been compiled in. We use this all the time, and not only that: if you go to their website, there are all sorts of links to the security wonk stuff, and if you're interested, you can learn a tremendous amount about binary exploitation just following the links on that site. And then PoC||GTFO. Has anyone read PoC||GTFO? Think of it as _why the lucky stiff, but for security geeks. It's really funny. Sometimes it's hard to follow because it gets pretty technical, but it's really funny. And finally, thespanner.co.uk: a really neat blog where he talks about ways he breaks web applications. I haven't found any better than that one. So anyway, a couple of things. I don't know if any of you have ever worked with SELinux or XACML, but those are really complex policy languages that can do everything. They're very powerful, they're very good, and people do great things with them, but I have trouble keeping all the state in my head when I'm trying to write policies. So I try to keep things simple and use DSLs that are custom-oriented towards the problem I'm trying to solve. I think that's a very Ruby way to look at it, and it's a good way to look at it. The other thing is to keep those checks as simple as you can at enforcement time, not just for the evaluatable thing, but because there's this other class of bugs called time-of-check, time-of-use bugs. They're kind of obscure, but basically it means you do a check, you do some other stuff, and then you write to the database, and the data can change in the meantime. They're really, really hard to detect. Really good hackers are great at finding them and causing them to occur. Your unit tests will almost never run into them, and if they do, you'll just assume it was a glitch and skip them. So if you keep your check simple, you'll avoid this whole really, really ugly class of security bug.
And then there's a really good example; I have a link here. There's a guy named Matt Honan who works for Wired magazine. In 2012 there was this terrible hack, or a great hack depending on your perspective, where they did a bunch of little things and then chained all of those hacks together. So if you ever hear the argument "well, they'd have to break this, and then they'd have to break this, and then they'd have to break this": well, that happened to this guy. These things do happen, and it's a really interesting story. So, next steps. There was a talk Tom Stuart gave in Barcelona in 2014 called Refactoring Ruby with Monads, and right now I like the idea of monads more than I like the practice, because I'm not particularly good at them, but I do believe we can use monads to wrap the content we ingest from untrusted sources and ensure it's properly validated before we store it. There was a good talk on Hamster; we've been looking at it because immutability also provides security properties, not just performance and code quality. So there are a lot of things we can do to improve the mechanization in our code to enforce that data is properly validated. The other thing is, we spend a lot of time writing security rule sets, and it gets rather mundane. If you take something like the SMTP specification, it takes a tremendous amount of time, and it's very boring, to go through and write those rules. So we're looking to build out our tool set to automatically generate our rule sets from those specifications. Anyway, enough on that. Now we're at the part that affects most of you most of the time, which is writing applications. Unfortunately, there are a lot of security decisions that have to be made in the app. They can't happen at the services and integration layer, they can't happen in the OS; they're in the app. A great use case is XML. I try to avoid XML when I can, but sometimes it's unavoidable, and XML processing is very complicated. So how do we build a high-assurance, secure XML processor?
well we don't it's really complex if you've looked at all of the different XML libraries out there some of them are really great but they are complex we're not possibly going to be able to get them to meet that evaluatable construct at the very least so how do we do it? well we use the same strategy I've been talking about the whole time we break our goal into smaller pieces and we separate them and then we integrate them with well understood mechanisms that the OS can enforce another thing we're introducing here is what I call binary diversity there's a lot of different forms of binary diversity it's a great new research area but the simple act of using different libraries for different functions makes attackers job much harder so if you can do the separation and use different libraries it gives you some level of protection again it's not bulletproof but it's very good and like Justin Searle was talking about by breaking down your functionality into smaller units it's easier to test them and this brings up another good point which is you can have really secure code but you might be using some library like Psyche that is a good library but it has some obscure vulnerability in the underlying C-native code and you're screwed when it breaks so these things are going to happen we can do all the code analysis we want but your application is going to break so make sure that you've got fault isolation built into your system so how are we doing with let's see and it's not okay there we go how do we do well we've got that great non-bipassable pipeline the only way to the data storage is through or the next step in the application is through my pipeline so we've got non-bipassability evaluatable we've got a big code base there's really nothing we can do except for do our best to keep our flog scores low make sure our unit test coverage is good we've got good pen testers whatever we want to do there's only so much we can do to evaluate our applications you can do a lot with 
Ruby to make sure that your code is always invoked for example if you're using Rails you could instrument your checks into As XML so that they're automatically called and those little brick walls I had in the last slide they weren't just for decoration there's a really cool tool called Set Comp that we use a lot and so Set Comp think of it as like a firewall for your operating system every time you go to read a file your process calls down into the operating system which turns that into a bunch of op codes and things that very few of us understand very well and there's a few of those we really all need but there's a bunch of them you should never be using in production like high performance instrumentation to see how you're doing at the microsecond level or P-trace which is used for GDB these are the system calls that you probably don't even know are there but the attackers sure do and that's where most of the kernel vulnerabilities lie so you can use tools like Set Comp to protect your application so that if there is a security failure it can't be used to attack the operating system as a whole even more controversially is this GR Security tool it's a patch and you have to apply the patch to recompile the operating system but it provides security controls that protect against classes of bugs it's very controversial in the Linux community but there was a really good article in the Washington Post on November 5th that gave a reasonable explanation of Spender's perspective with GR Security versus Linus's perspective so if you look it up in the Washington Post from November 5th it's great also if you're interested in the internet of things there's a lot of tools open embedded, Yachto but we really like BuildRoot in my shop BuildRoot's a great tool for building your own Linux distributions and it makes it really easy to select the things you want Ruby is provided in BuildRoot so some of the things we wanna do moving forward is we wanna make it easier for other people to 
use Set Comp I wanna build a gem that makes it easy for people to block the system calls that they're not going to need in production this will greatly reduce the attack surface of any application that uses this gem or uses Linux obviously it's not gonna work on Windows but it does provide real protection but it needs to be easier to use than it is certainly in the instance that I use in my day job again the importance of even in your application separating things into separate processes isolating and then integrating with assured security controls and just being relentless in making sure every little piece of code that you have is well tested is well designed so like I said I wanna do a better job of making Set Comp available to users and another really cool technology that's come out recently is this Robusta I don't know too much about it I've read the paper but I haven't actually downloaded and tried to use it but basically Robusta is a container that lives inside the Java virtual machine and if there's a security failure in a native extension, a Java native extension Robusta actually isolates it so they can't break out and take over your whole JVM which is kinda cool and if you look at most of the vulnerabilities that happen in Ruby it's usually not in Ruby itself it's in some sort of native extension that we all use and love it's some gem, it's buried we don't even know we're using it so this could be a real winner for a lot of people in the Ruby community another thing is, we like M-Ruby and we're trying to learn more about it unfortunately there's birds of a feather talk going on on M-Ruby right now that I couldn't go to which is kind of a bummer but M-Ruby allows us to put better weaponized sorry I shouldn't use words like that I work for the Navy the more robust security controls into your binaries to make it much, much harder for attackers to break them and like I said, when you learn about GCC and Clang there's all these little compiler flags you can use 
that make your binaries stronger, and they're really, really awesome.

So, I could put a picture of my cat up. I always like it when I see Zach in Corey's briefs. I don't know if you've ever sat through one of Corey's talks, but he's a great presenter, so this is kind of an homage to him: my picture of Zach.

I'm a little ahead of schedule, so I want to take a brief non sequitur into security penetration testing. Like I said, I do that sometimes when called upon. It's not my primary duty, but it is a duty that I do, and there's a lot of mystery around penetration testing for people who don't work in this. I don't know if anyone recognizes this picture, but this is the little grate at Helm's Deep from The Lord of the Rings, where the bad guys brought the bomb and blew the whole thing up. The obvious lesson is that you don't want to have this little hole in your outer wall, but there's another lesson that a lot of people don't know about the way castles are designed: that outer wall is just there to make it kind of a pain in the butt for people to get through. It's not really a defensive mechanism; it's just a way to make attacking much harder. The defenders put all their eggs into guarding that outer perimeter, and the bad guys blew it up because it had a water drain. That was really the mistake they made. They should have been guarding the keep, which had a two-by-two entrance that the orcs, no matter how strong they were, would have had to come through, and they could have fought them all off. But anyway, enough geeking out about security, and sorry about that.

So, I don't want to tell you whether you should buy penetration testing or not. Often it's money well spent; sometimes it's not. But if you're going to buy penetration testing services, give information to them. If you make them find the information, they will find it; it's just going to take them longer and maybe annoy them. If you give them information, you're going to get more value for your money, and
along those lines, build relationships with your pen testers, because you're going to write these things called rules of engagement: what they can and can't do. Well, there's always ambiguity in that, and the better the relationship you have, the more you're going to be able to work with them to reach a more granular understanding of what those rules of engagement are.

And don't just test from the outside. In the Ruby community we know intuitively to write unit tests for all of our classes, no matter how deeply embedded in the application they are, not just the ones exposed to the outside. In fact, a lot of times it's those core libraries we rely on the most that we put the most work into testing. If we make our pen testers come in from outside the firewall, really they're testing your firewall much more than your app. That's not a bad place to start, but maybe give them script console access and see if they can get around the access control mechanisms in your application. So just like I had those controls at the different layers, have your pen testers test from the different layers. Obviously not on your production network, but in a lab or something like that.

So with that, I want to thank you for coming to my talk. I hope it was modestly entertaining. A little link I have here: Kim Cameron's Seven Laws of Identity. Who's read that?
Wow. This was written in 2005, and it was a treatise on what the identity management software community should do to protect the rights of consumers. I really recommend it; it's very relevant today, and it just seems to be coming up a lot in the talks. It's a good read, and it's timeless. So with that, thank you for coming to my talk. I have 10 minutes for questions, if anybody has any questions.

Oh, so the question is, what's my opinion on getting third-party penetration testers in versus just doing your own automated vulnerability scanning? Well, it depends on what your goals are, and that's a really lame answer, and I'm sorry for giving it, but I have to. I would recommend using an automated scanner. Just like you have Travis or TeamCity (we use TeamCity in my shop), just like you do continuous integration, you should pretty regularly have someone point an automated vulnerability scanner at your application, both in production and in the lab. It just makes sense. The cool thing about pen testers is that they're humans, not automated tools, and if you get ones that know what they're doing, they'll know what to look for that's not in that tool suite. But a lot of times the usual suspects are the problem, and you can get a lot of mileage just out of using automated tools yourself. Did that answer the question?

Well, that's a really good point. So the question, if I got it right (and jump up and down if I didn't), is that now I'm using two libraries instead of one, and the attack surface just got a lot bigger. In a way of thinking, that's absolutely true. But if you look at the example that I gave, all REXML was doing was checking for well-formedness, so it was serving a very specific purpose, and we evaluated that REXML was going to do that well. Whereas Nokogiri could assume up front that the input was already well formed, so mass assignment bugs would be a lot less likely to be applicable to it, and it's going to do a lot more deep diving on the content. The math term for this is Floyd-Hoare precondition/postcondition analysis. Basically, when the data comes into REXML, your precondition is nothing and your postcondition is that it's well formed. Then with Nokogiri, you've got a precondition that it's well-formed data, and maybe that the comments have been sanitized or something like that. Using Floyd-Hoare reasoning, it's easy to compose a secure system out of preconditions and postconditions. That probably wasn't a great answer, but that's kind of our take on it. Any other questions?

All right, cool. Well, thank you all for coming.
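[Editor's note] The two-stage REXML-then-Nokogiri pipeline from that last answer can be sketched roughly like this. This is a minimal sketch of the precondition/postcondition idea using only REXML from the standard library; in the system the speaker describes, the second stage would be Nokogiri doing the heavy content work, so the `extract_entries` method and the `<report>`/`<entry>` document shape here are illustrative placeholders, not the real system.

```ruby
require "rexml/document"

# Stage 1. Precondition: none. Postcondition: the input is well-formed XML.
# REXML raises REXML::ParseException on malformed input, so returning from
# this method establishes the postcondition the next stage relies on.
def assert_well_formed!(xml)
  REXML::Document.new(xml)
  xml
end

# Stage 2 placeholder. Precondition: well-formed XML (guaranteed by stage 1).
# In the real pipeline this would be Nokogiri digging into the content; here
# we just pull out entry text with REXML to keep the sketch stdlib-only.
def extract_entries(xml)
  doc = REXML::Document.new(xml)
  doc.get_elements("//entry").map(&:text)
end

good = "<report><entry>ok</entry><entry>fine</entry></report>"
bad  = "<report><entry>ok</report>" # mismatched close tag

# Well-formed input flows through both stages.
entries = extract_entries(assert_well_formed!(good))

# Malformed input is rejected at stage 1 and never reaches stage 2.
rejected = begin
  assert_well_formed!(bad)
  false
rescue REXML::ParseException
  true
end
```

The point of the composition is that each stage's postcondition is the next stage's precondition, so the deep-parsing code never has to defend against input the gatekeeper already ruled out.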