I'm glad so many of you came out of church to be here this morning. I know it's difficult. This morning I want to talk about something fairly serious, security testing, because it's pretty much generally, worldwide, done completely piss-poor. So put on your game faces, wake up a little bit. I may even point at some of you and ask you questions. And if you're not awake or you don't respond, I'll ask you to leave. It's that important. My name's Pete Herzog. I'm the managing director of ISECOM, which is an open-source security research, collaborative community. Basically I manage all the projects. I began by being the creator of the OSSTMM, and I'm still its main developer. Yes, the OSSTMM, the Open Source Security Testing Methodology Manual, which the British started pronouncing "Austom." Works for me. So yeah, I run ISECOM as the managing director, mostly the volunteers and the groups, not the operations. We have the OSSTMM, we have Hacker Highschool, we have business integrity testing. We're even doing open trusted computing as part of the trusted computing consortium here in Europe with HP, IBM, SUSE, a big group trying to bring transparency to trusted computing, which is actually very, very closed right now. We even have groups like Infineon, who make the trusted platform module, as part of it. Goals and ambitions? Well, I'll be talking throughout this talk about my lofty goals and ideals, which are basically to make security better, to make it better how we test security. And that's really important to understand, because we are all dependent on somebody else doing their job right. No matter what we do, whether we drive a car, we have to make sure that the guy who put the tires on the car wasn't asleep that day. It's so dependent on what everybody else does that my job, as I see it, my mission, my goal, is to make sure that security is done right, and that it's not my fault for not telling people how to do it right.
I love the eureka moment. I love it when an idea works, when things come together. And I hate malicious ignorance. I really hate when people say, well, I didn't do it because I didn't know. If it's your job, then it's your job to know. Unfortunately, there's nothing like malpractice in the security field. So as far as I see it, in security, any type of ignorance, if you're a professional, is malicious. A quick overview of today: security testing is complicated. Verification is always something you should do yourself. If you rely on tools, you're probably a fool. And there is a trick to quickly sizing up security, and I'll teach that to you if you're nice to me. Make it cool. The problem is that how we test security is innate, and so is how we protect. Whatever we build... Oh, sure, now the speaker quits. Okay. As developers, we build things. But when we build things, we build them so that they work the way we want them to. We have an idea of how they're going to work, and generally any failure in how they work is a problem, an issue. And even though we like to think of security as threats, people attacking, it's also as simple as building a wall so that the wall doesn't... Okay, we'll try this again. So when we build a wall, the last thing we want to do is test it by pushing it over. Because what if we're wrong, and the wall really is weak? While we build that wall, we are thinking that wall has to stay up, because we don't want the consequences of what happens when it falls down. That's how builders work. We have a problem with testing our own stuff, our own things, and that's part of the problem. Other issues: interviews are biased. People's knowledge of security is very subjective, tied to certain rules of the game. And then there's the problem of tool analysis, because there you find that the tools will discard responses they receive because they weren't expecting them, based on experience.
If you're in security, you will be constantly, consistently humbled. You cannot be arrogant and be in security; it just doesn't work. Okay? You build a better mousetrap, and it makes a better mouse. You make it foolproof, and it makes a better fool. That's just how it works. Now a lot of people like to say the main issue of security is risk. Risk is not security. Security is protection. Risk is a statistics game. You can play that game if you want or not, but it's very subjective. This one amuses me. There's a new thing in medicine called evidence-based medicine. It came out in the 90s, and it's growing in popularity. The whole point is having the doctors actually use science to diagnose people. Yeah, this is new. You'll actually see it starting to come out in more and more magazines. It's a big deal. And the reason why is because obviously we know that certain medicines affect you. They have side effects. They may even cause kidney or liver damage in addition to healing you. So now with this evidence-based approach, they're looking at maximizing your lifespan and quality of life. So for instance, they may not give you a drug because your problem, your cancer, your disease may not have an immediate effect. For example, if you're 70 years old, there's a chance you'll probably die before something like prostate cancer, which has a long-term effect, actually kills you. So they start looking at these things and your quality of life. And I like to think of this as evidence-based security, where you're actually looking at the big picture. The only problem is it doesn't work in emergencies. It doesn't apply to everyone. And the biggest flaw when actually using science for security is the CYA problem: cover your ass. People don't buy it. People would rather buy a big brand so they can say, hey, I did what I could. I bought HP. I bought Ernst & Young. They came in. They did their job. I'm clean. Okay?
So the only people who actually buy into the scientific method, who buy into ISECOM, who come around, are the people who actually need security and not compliance. If you actually need to secure something, and not just have a paper that says you're secure, you're going to be looking for this methodology. Quick quiz. You've seen in the sky, I assume here in Belgium you can see the sky, geese or ducks fly overhead, and they fly in this V pattern. You've all seen that? Hands? Wake up. You've seen it? Yes, no? Okay. And one line of ducks is always longer than the other. You know why that is? The answer is: there are more ducks in it. We think we know what we're doing because we are subjective. We need to test more. We don't know why there are more ducks in it. Okay? But we do know that there are more ducks. And yet we answer the question of why one line is longer with: because there are more ducks in it. And this is a problem with analysis in security, and we're going to talk about that, hopefully, if we have time. Basically, as human beings, we are subjective. Our minds work in a certain way where we apply weight to different risks depending on how important they are to us. We're definitely not objective about it. So we needed to look at something new in the way we perceive and define security. And we did this in the OSSTMM, which is under copyleft. And the methodology is under the OML, the Open Methodology License, something we had to create because a methodology cannot be copyrighted. A methodology is considered a trade secret. So we designed a license for an open trade secret. Basically, the idea is that somebody else can't say it's their trade secret. So what did we do new in the OSSTMM 3? Well, we wanted to categorize security into calculable components, clear definitions, security metrics. We started adding things about test errors, test types, vulnerability classifications, blah, blah, blah. We wanted it to be practical for the auditor as well as the developer.
So the developer would be able to quickly size up security in what they're developing, while they're developing it, without needing to resort to some huge manual or checklist, which is often dated or full of wrong practices. The OSSTMM 3 is completely new research. It's actually been researched, which is why it's taking so long. And for all those people who say, hey, why does it take four years for the OSSTMM to come out? I want to ask you: who else has security metrics that actually work? Yeah, and they've been working on it much longer than us. Okay, let's look at the security test defined. What is a security test? Well, the first thing we want to do is measure operations. We want to know how something actually works. If we don't know how it works, we can't tell you how it's not working right. So it's important to actually understand how something works. Then we're able to apply security measures which are actually a good fit. From there, once we actually know how it works and what security measures are there, then we can start our risk analysis. But we completely keep bias and risk out of the OSSTMM. There's only one place where it even shows up a little bit, and that's where you self-assess your own errors. And we do that for two reasons. One, because it helps you learn more. And two, it helps your client, or whoever you are testing or auditing, understand the difficulty level of the job you just did. It's not an insult to you if you have a lot of errors, because it could be that the network you were testing produced a lot of errors. It could be an uncooperative sysadmin. And the beauty of it is that you actually get a report that you fill out at the end where you explain these things. So you not only say what you tested, but most importantly what you did not test. So later, when another company comes along and does an audit, you can actually compare the two for thoroughness.
This testing, we found, was done wrong on a fundamental level in a lot of places. We actually had the community push us to make a certification. And the reason why was that way they could hire people that they knew could hit the ground running, who could work immediately and would know what to do. So we ended up making two certifications for that, in testing and analysis: the OPST and the OPSA. And just to give you a little bit of an idea of how they work. For something like the OPST, you'll have to work out things like: where's the physical location of a web server over the internet? If you have three different services on the server returning three different TTLs during your port scan, what's actually happening? What kind of additional data do you need to collect in order to come up with a better analysis? And even little things like: you do a port scan, and port 80 comes back open, but then when you telnet to it, you don't get a response. Anybody want to explain why? Go ahead. Transparent proxy. Brilliant. Good idea. That's one way to do it. How would you check? What other tests would you need to do? When would you consider yourself done, though? How many more tests would you run before you decided that you've tested enough? What's the magic number? No, it could be as simple as looking at the TTLs. You're going to get a different TTL, because the proxy is closer to you. That's one definitive test. And yes, you could make more tests and confirm what's happening, that it isn't a coincidence. But that's how you do it. And this is what we show: how you come up with new tests on the fly, because you're always going to have new technologies. You're always going to have new issues while you're testing. And we want people who can actually think for themselves, and not just run a tool, to be able to come up with new tests as the need comes up. So we had VPNs, then we had SSL VPNs.
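The TTL check just described can be sketched as a tiny heuristic. This is only my illustration of the idea, not anything from the OSSTMM or the OPST itself; the function name and the sample TTL values are made up, and in a real test you would collect the TTLs from actual responses.

```python
# Heuristic from the talk: if the response from an "open" port arrives with
# a noticeably different TTL than the host's other services, the reply
# likely came from a device a different number of hops away -- for example,
# a transparent proxy sitting closer to you than the real server.

def likely_transparent_proxy(ttls_by_port, suspect_port, margin=5):
    """Return True if the suspect port's response TTL differs from the
    average of the other ports' TTLs by more than `margin` hops."""
    others = [ttl for port, ttl in ttls_by_port.items() if port != suspect_port]
    if not others or suspect_port not in ttls_by_port:
        return False
    baseline = sum(others) / len(others)
    return abs(ttls_by_port[suspect_port] - baseline) > margin

# Example: ports 22 and 25 answer with TTL 52, but port 80 answers with
# TTL 61 -- nine hops closer. Something in between is answering for port 80.
observed = {22: 52, 25: 52, 80: 61}
print(likely_transparent_proxy(observed, 80))  # True
```

As the talk says, one definitive differential test like this beats rerunning the same scan and hoping for a pattern.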
And I know a very large company who was looking for somebody who could security test SSL VPNs for procurement. The problem was that the answer they always got back was: nobody's ever really tested SSL VPNs before. These security experts weren't defining new tests. They were stuck with a new technology. So this is what we try to bring to people. One last thing, by the way: telnet is not the protocol for the web, so that could also be your problem. In analysis, we do the same thing. We want people to be able to hit the ground running. We want, more importantly, the people you do business with to actually understand what the heck you're talking about. So we try to teach analysis; the OPSA is actually a harder class than the OPST, so we always recommend taking it second. But we look at things like the famous traceroute. You've all done a traceroute, I assume. Well, a couple of developers probably have. When you do a traceroute, sometimes you get back the little stars. Sir, why do the stars come back? Yes, you. You see three packets? That's just what you put on the report. What was the first part? The router is not returning it? Come on, let me give you the real answer. All you get is a star. The only thing you know is that you did not receive a response. You don't know why. See, we have this innate thing where we think we can assume based on our experiences, which is why we'll all be taken over by robots someday. Let's give some background information. Let me define for you what security is. It's something which protects an asset from a threat. Okay? Basically, you're separating the threat from the asset. There are three ways of doing that. You move away from the threat. Okay? You're afraid of lightning? Your threat is the lightning, so you move inside a mountain. Your threat is terrorists, so you close all borders. Okay? Or you eliminate the threat.
Some countries go after the threat. They decide they'll destroy the whole threat, and that answers the problem. I'm not going to name names; I'm sure you can come up with an example. Or you convert the threat, which was very popular back in the early days of Spain with the Spanish Inquisition. And we can see that any way we apply security, it's usually fairly drastic. Safety, on the other hand, is where you learn to live with the threat. Basically, you want to control access, process, interactivity, or the impact of the threat. So that's where you get things like authentication, people traveling with IDs, all sorts of things you do to lessen the threat. Of course, if your fear is lightning, maybe you walk around in a Faraday cage. Okay? Or you cut down the trees around your house so that the lightning can't knock over a tree that falls through your roof and kills your family. Okay? So now that we understand the difference between security and safety, which apparently the dictionaries don't seem to understand, we can actually address our concerns with how security works. Okay? What this means is that we don't have to care what they have for security solutions. We care how it works. Okay, everyone here might be familiar with the PCI data security requirements for credit cards. Some of you might know that. Basically, you have these checklists. Anybody who works in government will know these checklists. Or if you work with BS7799, you will know the checklists. And they ask you things like: are they running antivirus? When the actual question is: does antivirus even work? And I'm not asking whether they update every day. I'm talking about the fact that they've got a whole slew of blacklist technologies that they're running. Blacklist technologies. And they're calling that security. By the way, antivirus is not security.
Which, now that I've said it and this goes on the web and all of you know it, means we're never going to get special funding from Symantec. But I'll still sit down with them and say it to their faces. So basically, we divide what we call operational security, OpSec in the biz, into visibility, access, and trust. Visibility is your number of targets in the scope based on an index. The index is how you count them: by IP address, by MAC address, by person, by street address. There are different ways you can count. And from a vector: from the inside, from the outside, from one vantage point versus another. Basically, your visibility is opportunity. If you know something is there, you can attack it. Access is your unique interaction points. Basically, any place where you can have interactivity. It could be a service. It could be a kernel response, like for ICMP. Or there are other ways of seeing it, like in a physical security audit: you have doors and windows which actually open. Those are interactivity points, as opposed to the ones that are sealed shut, which are not. And then you have trust, where the targets actually interact with each other in an open manner. Like a web server and a database server, or an internal help desk that will not identify the people who call if the call comes from an internal number. And then you have your safety, your controls. Just take a seat. You have your Class A and your Class B. We got really clever with our labeling, so just go with me. The problem is that any time you label anything, you make the people who don't know a damn thing get very angry about vocabulary, because people actually like to argue. Yeah, it's human nature. We like to argue. So of course, if you label something, they're going to argue the semantics or the definition of what you labeled. And this is where I take a step back: just go with the definition, and I don't care what you call it. So here we have the interactive controls. We're going to go through them.
Your interactive controls are authentication: blacklists, whitelists, tokens, passwords. Indemnification, which is covering risk: basically, take out insurance on everything. So you have an asset. You take insurance out on that asset. If somebody steals it, you get big money. You don't care. Even better in data security: you have an asset, somebody makes an illegal copy of said asset, you still have the asset. You still sell it. You get big money. Resilience: you want to be able to control how something fails so that it fails securely. It fails closed. So you go to a bank, you shoot the security guard, and magically he falls against the door, and you can't move him. He's so big that the door stays shut. He gets hit, falls over, and everything stays closed. Subjugation: basically, the controls are handled by the people who provide the controls. You've all seen this. You go someplace, the guard gives you the paper to sign yourself in. He looks at your ID, gives it back to you. You finish signing yourself in. That fails subjugation, which means he should have signed you in, because now you could have signed anything you wanted. You think he's going to remember every card he looks at? And you see that everywhere. You also see it in client-side input filters in browsers. It was huge with JavaScript for a long time. Probably still is. And then, of course, when you mention it, people go: oh, right, validate it server-side. Continuity: basically, there's no interruption in the interaction. So at the grocery store, you want to check out, and the counters are full, full, full, but they keep putting on new cashiers. They can handle any volume, and they keep adding cashiers as needed. A door is blocked? They have a secondary door. You can still come in and out. The business service doesn't stop. We also know this as survivability, load balancing, redundancy. We want to avoid the single point of failure. This is actually a big deal.
As you may notice, it's actually at odds with resilience, because here you want the services to keep going, and on the other side you want them to fail securely. Non-repudiation: you want to make sure you know all the actors' roles within an interaction so that they can't deny it. Confidentiality: this is the big one. Basically, anything that's displayed or exchanged between parties is seen only by them and not by outsiders. And yet we still call that privacy, when privacy is actually that the asset is displayed or exchanged between parties in a way that's only known to them. Okay? So that means, and you've seen this in pretty much every TV movie that deals with bad guys and drug dealers and such, they know the drug deal is going down, but they have to catch them doing it in the act, and they don't know where the deal is going down. So of course, all of a sudden, the famous Miami drug dealer shows up in town. Okay? The famous Belgian drug dealer shows up in town. All of a sudden they both walk away, and one person buys a Lamborghini and the other one, you know, buys a yacht. So the thing is privacy: they didn't actually see the interaction happening, so they couldn't prove that it happened. Hence, business you do behind closed doors. Integrity is your control over any undisclosed changes. The only place where I actually see this working really well is with any kind of signed patching and file-hashing sort of thing, or PKI. Beyond that, it's not used realistically. And then, of course, alarm: any kind of notification that safety has failed or been circumvented. So you have to wonder about things like IDS alerts. Is it an alarm or is it not? Server logs: if nobody reads them, if they get ignored, are they really an alarm? Okay? And then of course you have something different, like home security systems, where something enters a door and there's a little passive infrared sensor.
And so there you can actually cover your access point with an alarm, so that when it's opened in any way, whether it's the lock that's broken, or the door that's broken down, or they come in through a window, the alarm still sounds. Then there's a third part of this, called limitations. And here you have your vulnerabilities, weaknesses, concerns, exposures, and anomalies. Yes, we classify five ways. You'll see that the CVEs basically only use vulnerabilities and exposures. We have mapped pretty much one-to-one with the commonly accepted way of classifying things, but we do it a little differently, because the normal way of classifying vulnerabilities actually doesn't work. If you've noticed over the years, people keep rehashing old ideas; there's nothing from any new research in there. Our new research has shown that there are better ways to classify vulnerabilities, yet in order for it to be accepted, it still has to look sort of like the old way so that people don't get confused. And whether you want to call it a vulnerability, a weakness, or a CVE, we don't care. Just come back to the definition; that's what a vulnerability is. So basically: something gives access, denies access, or hides your assets. That's a vulnerability. Very simple; pretty much anybody could use this to figure it out. The risk involved in that happening is a completely different issue, not our problem. We're just doing a metric for security. A weakness is a flaw in the Class A controls; a concern is a flaw in the Class B controls; exposures deal with visibility; and anomalies are anything unidentified or unknown that can't be accounted for. Now, the beauty behind this is that we don't give them multipliers. We don't say this one is worth five times as much as that one, or this one is only worth 1x versus that. That's obscene, biased, old-school thinking, like eating four eggs and a heap of bacon every morning.
That belongs back in the 50s, and that's still what they're selling. Instead, what we said was: based on the controls and the operational security that you actually have in place right now, that's what determines the values that go here. So this is actually dynamic. How much one vulnerability is worth depends on how many controls there are and how the controls work, because you can actually have a vulnerability in a control itself, or in the security that you have in place. So yeah, believe it or not, the value of a limitation actually depends on the security and controls you have. What an amazing concept. That's right, I'm the genius that came up with it. I can't believe somebody else didn't come up with it. I think somebody did; they worked for a corporation and got fired. And basically we have this beautiful sheet. You can download it, you can use it yourself, so you don't have to do math. You just have to count, and most of us can do that. Counting whole integers, nothing fancy. Basically, you come up with your visibility, your access, your trust, and then your controls. And when you count these up, you count them up dynamically: the values here will deteriorate. Okay? And all of a sudden here we see that there's not nearly as much weakness. With many other controls in place, you can have a weakness or a concern or even an exposure count for more than a vulnerability, because it's dynamic. It's rare, and you have to have some pretty screwed-up security to get it, but it happens. So basically we look at visibility: say we found 250 systems, 412 access points, and 14 trusts. From this method we can tell; we can also count up the rest of the controls. Okay? Now the beauty of it here is that ten controls nullify one hole. All ten controls combined are basically the same as having that hole closed. So say you have an open service, telnet, with no controls except authentication.
So basically, the value of that hole is 0.9 out of 1. It's still pretty wide open. SSH offers encryption for confidentiality, plus integrity and authentication, so you're already down to 0.7. And it gets even better. Each limitation's value depends upon the OpSec and control values. So if you add more controls, it changes the values there, in which case your final delta, the change, is influenced. Your delta tells you how your security changed. So for example, if this was one network out of two, two companies combining, and you already had the other network's value, you could actually combine the two values and say: this is the change that would happen when we add their network to our network. We could predict the security change before it happens, and there I just talked ahead of my own slide. Okay. So for some fun and games, we could ask ourselves, and answer theoretically before we try it: is one network firewall better than a host firewall on each system? Or does two-factor authentication work better? So we can look at this and say, all right, here we have minus 9 with no network firewall, because each system has a host-based firewall on it. So we've got minus 9 for the change in security. A network firewall adds one additional visibility, because it's an additional system, and a single point of failure, so it has a new weakness, and we've got minus 15. What do you think people are doing out there in the corporate world? We know what they do. They go with the single firewall, because it's easier to administer, which isn't actually true if the administrators actually did their jobs. How many administrators here? How many? How many of you are lazy? Keep your hands up. All right. Just watch the clock. Okay.
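The arithmetic behind "ten controls nullify one hole" is simple enough to sketch. To be clear, this is my own toy reduction of what the talk describes, not the actual rav calculation from the OSSTMM spreadsheet, which handles much more (porosity, limitations, deterioration); the control names are the ten types listed in the talk.

```python
# Each open "hole" (an access point) starts fully open at 1.0, and each of
# the ten control types present on it covers one tenth of it. All ten
# together are the same as having the hole closed.

CONTROL_TYPES = {
    "authentication", "indemnification", "resilience", "subjugation",
    "continuity", "non-repudiation", "confidentiality", "privacy",
    "integrity", "alarm",
}

def hole_value(controls_present):
    """How open a single access point still is: 0.0 (all ten controls
    present) up to 1.0 (no controls at all)."""
    present = set(controls_present) & CONTROL_TYPES
    return round(1.0 - len(present) / 10, 1)

# Telnet with nothing but a password prompt: only authentication.
print(hole_value({"authentication"}))  # 0.9
# SSH: confidentiality (encryption), integrity, and authentication.
print(hole_value({"confidentiality", "integrity", "authentication"}))  # 0.7
```

Comparing two configurations of the same hole this way is the seed of the delta idea: add or remove controls, recompute, and the difference is your predicted security change.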
One-factor authentication: you have four or five systems, each with one service open, and only one of those services has authentication, so we've got minus 7.9. If you add an additional authentication factor to that service, you're at minus 7.2. So it makes a slight bit of difference, but two-factor authentication also means a huge cost, because you're talking about tokens and all sorts of new tricks. So you can then actually see if it justifies the cost. I'm almost done here, people. Now your party tricks, because you were also nice to me. These are fun. You go to a party, attract a guest, and explain to them which room is more secure, the living room or the bedroom, and why. Come on, you can do it in your head: you can calculate the visibility, access, and trust. Just count the doors and windows and how they open. It's a great trick. Some of you are going to try it, I know. You explain to the host exactly how secure the silverware cabinet is, and then you test it. Huh? Tricks for your friends. Then when the police come to the house, you can explain to the officers, citing the security metrics, that just because the house was robbed doesn't mean it wasn't safe. And they'll just go with it, because they can't do science except the forensics kind.
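The party trick amounts to counting porosity. Here's a minimal sketch with made-up numbers for the two rooms; in the OSSTMM the real calculation also folds in controls and limitations, which this toy version deliberately ignores.

```python
# Porosity is just visibility + access + trust: the fewer ways in, and the
# fewer things that blindly trust each other, the lower the number.

def porosity(visibility, access, trust):
    """Count the holes: targets you can see, points you can interact
    with, and open trusts between them."""
    return visibility + access + trust

# Living room (example numbers): visible from the street through two
# windows, a door plus two windows that open, and a pet door anything
# can use (a trust).
living_room = porosity(visibility=2, access=3, trust=1)
# Bedroom: one window that doesn't open, and one door.
bedroom = porosity(visibility=1, access=1, trust=0)
print(living_room, bedroom)   # 6 2
print(bedroom < living_room)  # True: the bedroom is "more secure"
```

That's the whole trick: you don't weigh anything, you just count, then let the controls on each hole adjust the picture.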
Okay, to wrap it up, some keys to proper security testing for the analyst. Use the scientific method. Transparency: tell what you didn't do as much as what you did do. Don't rely on pattern matching, meaning: just because your experience says the router didn't respond to that ICMP packet doesn't mean that's always the case. Your intuition is a weakness in a scientific methodology, meaning what you think works, your gut feeling, is probably statistically wrong, and you won't know that unless you actually try it and do the test. Unless you're consistently right all the time, which some developers sometimes are, but they're like one gene away from being a robot anyway. You want to be very formal in your verification process. That means you actually test formally and don't just cut corners. You want to recognize when something is in a hostile environment, meaning you're going to have a lot of interactivity that you did not cause. Yes, when you test over the internet, other people on the internet might be hitting those servers at the same time. That's right, it's dynamic, so you must not forget that. And Common Criteria is a good place to start when you're trying to figure out things about systems, but since people get to make up their own criteria before it goes in, it's not always correct. It does give you some good ideas, though. For example, we know Microsoft is not designed for hostile environments, and they know that, and they admit that. Yet legally, legally, they can sell a server and package internet utilities, internet applications, into a product that they knowingly know cannot be in a hostile environment. And I say Microsoft, but it also includes a lot of Linuxes too; they're not designed for it either. A quick thing about certification, if you're interested. Basically, these are the walk-the-walk certifications. Even if you don't want to take them yourselves, you might want to consider hiring somebody who has, if you're looking for somebody who knows security.
It's pretty much a standard. If you want to hire somebody, you want to make sure that they actually know what they say they know. This is a good fundamental if you're looking for a good security person who can actually think and develop tests on the job. No robots. So we have the analyst, we have the tester. Any questions, any information? Does the CVSS fit into the OSSTMM? How does it work? Sure, anything can be integrated into the OSSTMM in that way. I don't particularly think it works, but that's my opinion. I haven't done thorough testing on it, but I think it's very biased. And in a security test, where you try to say something about operations in a factual manner, if you integrate something biased into it without specifically saying in the report, or to the client, that it's biased, you are lying. Next question. Sorry, let me repeat the question for the video: what happens with the strength of a password, or the existence of a password, in an authentication scheme? Do we just count, or is there any kind of weighting to it? The point is that there is no weighting. You just test it, and if you find vulnerabilities or weaknesses or concerns in the authentication scheme itself, then that also counts against it. That's where you find limitations within the authentication scheme. Does that make sense? You're still testing the controls. For instance, you can have antivirus, but antivirus is just a blacklist, and your tests are going to find that it has lots of weaknesses. It depends on how good the password is and how bad the token is. That's why you test it. I think we're almost out of time here. Anything else? Anybody else? He says no hard questions now, only easy questions. Favorite color? Favorite animal? Unicorn. Nobody? Can I explain why I don't talk about risk?
Risk is biased, and it's different for everybody. So for example, maybe the thrill of jumping out of an airplane is worth the 10,000-foot plunge to your death because you really like it. It's a different thrill for me. It's very biased, so we would each have a different risk weighting toward something like that. The OSSTMM itself is a platform for risk. You do the metrics, you actually have a foundation that's solid, then you add your risk. Then you can say, well, this vulnerability is used a lot, or we see this in the wild, or any other opinion you want. That goes in your summary or your report. But at least have the foundation of knowing what's even there first. Somebody else? Yes, as a matter of fact, I do know that certain governments use it. We find it in Switzerland, Germany, Mexico; the United States Treasury Department recognizes it. I guess you could say we train a lot of people, a lot of departments. Not that many do I hear from; they don't have to tell me if they use it, so I only hear second hand. So I can only speak of the ones who talked to me or who told me. For example, my biggest one, which I was talking about yesterday, was that I heard from NASA Ames Research, and for me that's like the geek mecca. You know, those are rocket scientists. So for all those people who hate me, that's okay, because I got NASA. But yeah, they don't have to tell me. It's nice if they do, but I don't know. I think it is pretty prevalent, though; I keep hearing from places. That's it. Okay, I gotta catch a cab and catch a flight back to Barcelona, where it's warm.