So, I assume everyone can hear me fine. If anyone wants to move a bit closer to the front, feel free, we've got loads of space, and it might be a bit easier for me to hear if there are any questions. So yeah, this talk: automating myself out of a job. I hope the apostrophe is in the right place, I think that's right. I'm a pentester, and we're going to be talking about left-shifting security testing. There are some contact details down the bottom if you want to try and reach me. Okay, so who can relate to this kind of thing, right? Even security people see themselves as the bad guys, unfortunately. We know that security hasn't kept up with modern development practices. This is something that we are aware of and want to try and address. Even today, we've heard people talking about security and how security needs to do better, and I 100% agree with that. So just by show of hands, can I see who fits in which place? Who's a developer in this room? Okay, cool. We've got project managers, awesome. QA testers, UAT testers, okay, a few. And then DevOps, okay, yeah, obviously that makes sense, right? And the question was asked earlier, but who's in security? Only security? Okay, a few people, awesome. That just helps me understand what kind of language to use in the talk. So why did I want to do this talk? Well, I really do believe that pentesting is of quite limited value to a lot of organizations. We still have vulnerable software, right? I mean, the Equifax thing happened recently, I had to mention it. Security testing isn't finding as many issues as it could. In the UK, we have an ISP called TalkTalk. Again, quite a similar thing happened: lots of data was stolen, and it was via something quite simple, a SQL injection vulnerability I think it was. Probably just some kid with sqlmap or similar, finding what should be a well understood issue.
So yeah, pentesting can be of limited value. This talk isn't really going to be "use these tools and be secure". We'll be looking more at some concepts. To be honest, I'm sure you guys probably know a bit more about the tools than I do. I'm not a DevOps person, I'm a pentester, a security consultant. But I do feel that it takes effort from the InfoSec community to help shift testing left. So yes, these are some of the things I hear about pentesting. Even pentesters say that pentesting sucks. I hear a lot of people saying it's quite boring to test a lot of the applications; once you've tested a few web applications you've seen most of what you're going to see. We have to report low risk issues. A lot of people we speak to think we're padding the report like this. No, the truth is, if we've identified it as a risk, we need to report it. And we know no one fixes the issues. That's actually kind of demoralizing. We spend a lot of time and effort doing a pentest and writing a report, and we know that people aren't going to fix it. If we're doing a test where there are loads of issues, especially loads of low risk issues, we know that security wasn't considered during development, and therefore no one's going to fix the issues we find. Then these are the kinds of things that developers tend to say. Again, padding the report with low risk issues. We don't understand the context of the vulnerability, so we rate issues in a way that doesn't always make sense. A lot of ego in pentests, I don't know if anyone agrees. Has anyone had to deal with pentesters before? Yeah, man, we think we're the greatest thing, right? We roll in, we're probably on site for a week or two, hand over a report thinking that we've hacked something awesome, and leave. Okay, we're not too easy to talk to. And then of course, we're stopping development. People want to be able to release applications quickly.
And then the pentester comes along and says, actually, we have these issues, and that stops development. The release should really come first, right? That seems to be the most important thing: actually having a product out there. So yeah, who am I? My name's Jamil Harris, most people call me Jay, that's fine. I'm a pentester and security researcher at Digital Interruption. I'm quite interested in mobile security, radio, and reverse engineering, and there are some Twitter handles there. I run a group in the UK, Manchester Greyhats. Feel free to follow us on Twitter, because we're going to start recording the workshops. We give security workshops, we run CTFs, and we join in CTFs. I know CTFs were mentioned earlier, so if that's something you're interested in, feel free to join our group. Oh, we have a Slack channel as well. That's it from me. So, just to quickly define what I mean by pentesting: even those of us in the industry don't always agree on terms, which can be quite confusing. Really, I'm going to use pentesting to mean all security testing. Any questions? Okay, cool. So, traditional development. You've all seen this kind of thing, right? The waterfall methodology: gather requirements, design, implement, test, deploy. Quite an old way of doing things. I'd say if you work like this, pentesting as it is probably works fine; you have other things you probably need to start fixing first. Pentesting this way is still going to be fairly expensive, but you can save a lot more money by fixing other things. Many organizations are trying to move to a more agile approach, where development is happening in quite short bursts of time. I've tried to pentest in this kind of environment before and it's really difficult. I was on a project where they wanted to release every two weeks, I think, and the project lasted for a couple of years.
And how do you pentest that when the product isn't even finished, when new features are constantly coming in? A lot of what we did was code review, which, whilst useful, was not the best use of our time. But that's how we were able to do some amount of security testing in an agile environment. What we have now doesn't quite work. And then obviously we're trying to move to continuous integration and continuous delivery. There were talks earlier that describe this better than I can, but the idea is that we want to constantly deploy, we want to automate our tests and automate our deployment. And in this kind of setup, where does security testing fit in? So looking back at the traditional approach, this tends to be what happens: security testing isn't prioritized, until it is. We do have testing, but security testing is often done by a different team, maybe from an external company. It's not really thought about until someone says, you need to get a pentest. Then everyone panics, brings in a pentest company, they do the test, and everyone's left with the results. From the pentester's point of view, we turn up, wait for some requirements, do some testing, wait for some more requirements, do some more testing. We leave, write a report, and that's it. After that, we're kind of done, which, again, isn't super useful. I see developers often feeling like this after we've left. Again, dealing with our ego: I was on site with one of my colleagues, a really experienced hacker, probably one of the best in the UK. We were doing some testing and he was reading through some source code, saw a vulnerability, and started laughing. The developer who wrote it was sitting right behind us. So how do you think he feels when he has this quite young guy on site mocking him for some code he's written?
Often, as I said before, the report is going to be full of low risk issues, and then a couple of medium or high ones that should hopefully be fixed. So organizations are just throwing money away, honestly. The pentesters had a great time, though. So what we're going to do in this talk: we're going to look at an application, look at some of the vulnerabilities I found whilst testing it, and see if we can figure out ways to automate some of that testing so that we could have found the issues before the pentester came in. This app is an Android and iOS app, and it was used to make voice calls. So again, this is a real application. There were the clients and then the web service. When we did the pentest, it was from multiple perspectives: from a lost device, what can an attacker do if they find the device? What if malware was installed on the phone? And of course, attacking the server. I tend to test most of my mobile apps from those three viewpoints. So these are the high-risk vulnerabilities that we found, just a subset of the issues, obviously. The mobile verification code could be brute forced. This was a code sent by email or to a phone number, which you use when you want to register. You could brute force it. You could view other users' messages without logging in. SSL validation was disabled, so you could intercept the network traffic quite easily. And then there was directory traversal on the web service. We have some medium issues too. Now, just by show of hands, who agrees that all these things should be medium? There's backups allowed in the Android app, right? You might say that's not even an issue. Well, the thing is, this allowed us to access some sensitive data that was in the application sandbox, so we raised it as a medium issue. Then a lack of permissions on the Android IPC endpoints, and SQL injection.
Those are medium because you needed to have malware on the device, which changes the risk. And then some low issues: logging, anti-debugging, lack of root detection. Some people wouldn't classify these as vulnerabilities, but there is some amount of risk there. It's not just us padding the report; it does amount to some risk in the app. So let's see how we can try and integrate security. But first: if this was your report, your app, who would go live knowing those issues were there? No one? Maybe one person. Okay, interesting. So you're saying almost everyone would just not go live with those issues. Super interesting, because they obviously did go live. I guess they planned to fix them in the future; I'm not sure if they have. So, the solution. We can't ignore security, obviously, but we need to put security in the hands of the developers, we need to shift it left. We need to make it so that testing is done earlier. Not only is it cheaper, but there will be new ways of testing applications. Developers are good at testing their code; they have things like unit tests. That's not going to be the way that I, as a pentester, test an application. So, the first stage: requirements gathering. We want developers and project managers to think like an attacker. There are going to be multiple ways of doing this. You can use threat modeling: try and understand the application, decompose it, determine the threats, determine the countermeasures and mitigations, and document all of this. Think of the assets that an attacker is trying to get to. Often we're not after a shell on the server, we're after a specific asset. Think about how an attacker might be able to access that. Look at previous pentest reports; if a pentest has happened, use it to understand the risk of your application.
And if you need to bring in someone external to help you understand the risk of your application, this is a good time to do it, not at the end when the development has already happened. Different applications will have different risk ratings. An internal brochure-ware application, who really cares if it's not secure, it's not the end of the world. But a fintech application, a payments application, is actually going to have a high-risk profile. Those are the ones where you really need to think about understanding the security requirements before the app is developed. So take our VoIP application and think about some of the attacks. Let's say an attacker wants to try and brute force the username and password. Here we have a list of basic requirements that we've decided on before the app even started to be developed. We know that if these requirements are part of the application, then we can say the application is going to be secure, or at least in line with the risk profile that we've decided our application has. This last one, again, some people don't really like: that the application shouldn't run on a rooted device. But we may decide this is an application with quite a high-risk profile. We don't want it to run on rooted devices because an attacker might be able to steal money, and malware can do bigger things there, so maybe we decide this is the case. Now, something I don't often see: abuse stories. In Scrum, you have the idea of user stories, right? I'm assuming everyone's familiar with those. I don't often see people thinking about them from an attacker's point of view. So, abuse stories. As an attacker, I want to log into the application without knowing the password. As an attacker, I want to read files in the application sandbox. Think about these, document these abuse stories. They might even be split further.
So: as an attacker, I want to log into the mobile application without knowing the password, by brute forcing, by launching the activity manually on Android, by SQL injection. Honestly, there could be any number of ways, but the more you document, the better idea you're going to have about the types of attacks your application may be vulnerable to. Maybe even document these in a diagram. And again, the point is to try and understand these before the application is even developed. If there are any questions, by the way, feel free to just shout them out. Okay, cool. So really, this helps us to find and address design issues. I really love this GIF, because this is how pentests feel sometimes. There are design issues that should have been found early on, and I think if people had just taken the time to document the types of threats to their applications, they would have found them. I like to think of the difference between a house and a prison. If you want to turn a house into a prison, it's actually kind of difficult, right? You've built a house, it has all these weaknesses, but the prison was created to be secure. Okay, so project management, requirements gathering, we can think about security there. If we do, we could probably even say, okay, we're done, we've understood everything. But we actually need to implement some of those requirements. So how do we embed security knowledge into the development teams? I think this is one of the most important steps. Training, obviously, is a big one. There are lots of training courses now that are gamified; they allow developers to learn about security through a game, with achievements and whatever. That's much better than bringing someone in to lecture them in a classroom, or some of those super lame CBT video type things.
Pairing: if you work in a development team and you're actually working together, you can check each other's code as you check it in, look for security issues, do code reviews. If you have a security SME in the team, speak to them. If you have a question about security, ask them about it, see if they can actually give you a response: I need to use this technology, what kind of things do I need to keep in mind? Security champions are quite a similar thing. The reason I have this in here is because I've worked in organizations before where there were people that were super interested in security, and then they left their development team to become pentesters or something else, which seems ridiculous. They have an interest in security, they're in the development team, so why not allow them to become the security champion for the team? Security code reviews, and then ChatOps. I worked at a place once with an internal pentest team, and no one on the team was on the Slack group for the company. I was like, why not, guys? I answered more security questions by being on Slack than I did by any other means. So having your security experts using the same tools as you is, I think, a really good way of embedding security into development teams. Unit testing: I don't often see security tests done as unit tests. Does anyone even write security unit tests? One person, okay. So, developers know how to test their code. They have really good ways of testing their code, so try and test the security stuff as well. Let's take an example: here is a unit test for the brute force requirement. Again, this isn't a great test, it's probably not how you would actually write it, but the concept is there: we've defined a test for the requirement that we have. So let's say we've done those kinds of things. What kind of issues can we take out of this report?
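The slide with the unit test isn't reproduced here, but a rough sketch of the idea, with a toy stand-in service and invented names, might look like this: a test asserting the requirement "the verification code must not be brute-forceable" by checking that the account locks after too many failed attempts.

```python
# Hypothetical sketch of a security unit test. VerificationService is a
# toy stand-in for illustration; in a real codebase you would test your
# actual auth service against the same requirement.

class VerificationService:
    """Toy service: locks the account after too many bad codes."""
    MAX_ATTEMPTS = 5

    def __init__(self, correct_code: str):
        self._code = correct_code
        self._failures = 0
        self.locked = False

    def check_code(self, attempt: str) -> bool:
        if self.locked:
            return False
        if attempt == self._code:
            return True
        self._failures += 1
        if self._failures >= self.MAX_ATTEMPTS:
            self.locked = True
        return False

def test_verification_code_cannot_be_brute_forced():
    svc = VerificationService(correct_code="482913")
    # Fire more guesses than the lockout threshold allows.
    for guess in range(10):
        svc.check_code(f"{guess:06d}")
    # The account should now be locked, so even the right code fails.
    assert svc.locked
    assert not svc.check_code("482913")

test_verification_code_cannot_be_brute_forced()
```

The point is not the implementation, it's that the security requirement from the design phase becomes something the build runs on every check-in.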
So yeah, we have a unit test for the mobile verification code being brute forceable, cool. We can write a unit test for directory traversal. SQL injection, that's a really easy one to test, depending on how your code base is written, obviously. Not all of these would be super easy to unit test, but the point is, we've tested some of them already. That's now testing that the pentesters don't need to look into. Security tooling: I see so many security tools written by hackers, for hackers. That's so wrong. Hackers need to be writing security tools for developers. I don't know if anyone feels the same way. Does anyone use Burp Suite? Yeah, an intercepting proxy. Someone said to me the other day, Burp is really complicated, and I was like, no, it's super easy. And then I remembered back to when I first started using Burp Suite and how complicated it actually was. These are not tools that we should expect developers to be using. We need to write tools that will actually integrate into the dev pipeline. So anyone here who is writing security tools, think about that. This was my quite naive approach: I took a framework for Android application testing called Drozer and I integrated it with Jenkins. It wasn't a good integration; I think I basically just wrapped it up in some shell scripts and things. But it was integrated, which meant that every time I did a build of the app, I was able to automatically check whether these security vulnerabilities were present, and if they were, fail the build. So we should be writing tools that provide feedback to the developers and integrate with the build system. And I don't think the InfoSec community really understands this yet.
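The "fail the build on findings" idea from above can be sketched in a few lines. This is not the actual shell-script wrapper described in the talk; it's a hypothetical gate that parses an invented JSON report format and decides whether the build should fail, which you would adapt to whatever your scanner actually emits.

```python
# Hypothetical CI security gate: parse a scanner's JSON report and fail
# the build when anything at or above a chosen severity is present.
# The report format here is invented for illustration.
import json

SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2}

def should_fail_build(report_json: str, threshold: str = "medium") -> bool:
    """Return True if any finding meets or exceeds the threshold."""
    findings = json.loads(report_json).get("findings", [])
    limit = SEVERITY_RANK[threshold]
    return any(SEVERITY_RANK[f["severity"]] >= limit for f in findings)

def gate(report_path: str) -> int:
    """Exit-code-style wrapper a Jenkins job could call."""
    with open(report_path) as f:
        if should_fail_build(f.read()):
            print("Security gate: blocking findings present, failing build")
            return 1
    print("Security gate: passed")
    return 0
```

The build step then just runs the scanner, passes its report to the gate, and uses the return code to pass or fail the job.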
So basically, as I said, we want a way for the build to automatically test using the tools that are out there, but written for developer use cases, and then notify the developers. So, the tools that do exist for this kind of thing. Who uses any SAST tools, static application security testing? A few people, okay, cool. Basically, these scan source code. Imagine everything that you've written gets scanned for vulnerabilities. They use things like pattern matching and taint analysis, so the tool is able to look at the data going into a function, the source, and then where that data ends up being used, the sink. And it's able to say, okay, this thing is being sent to the database and it's not being sanitized, so this is likely to be vulnerable to SQL injection. There are lots of SAST tools available, so I won't talk about them here, but a guy called Nick Jones did quite a good talk at DevSecCon, I think last year, on how SAST tools work. If you're interested, take a look. Then we have DAST tools, the more dynamic stuff. This is tooling that actually runs against the application, which can be slightly harder than static testing because the app needs to be built, it needs to be running, it needs to receive requests, versus source code scanning where you just scan the code. The tools do exist, but to be super useful they need to be aware of the application. You need to tell them what kinds of vulnerabilities to look for. You might need to give them user credentials. You might need to say, don't click on this button, because that will delete everything from the database. Another reason it can be difficult, and we're dealing with this at the moment: our DAST tool is offsite, and we're trying to run tests against applications with it, but the test environment is internal. So we need to open firewall rules, or have some other way to allow it into the network. So yeah, that can be a bit more difficult to set up.
But they can be slightly better for design flaws. And then there's IAST, basically DAST with instrumentation. This is the new kid on the block, really. What it does is hook the runtime and monitor threats whilst you're firing test cases at the application, so it's arguably a bit more useful than plain DAST. So I'm going to say, if you do this kind of testing, you can get rid of pretty much all of these high-risk vulnerabilities and most of these medium-risk ones. Some things, like sensitive information stored in the sandbox, the tools maybe won't understand what the sensitive information is. Maybe you can train them, but out of the box they won't. Weak authentication would be very difficult for some of these tools to pick up. And then some of these low ones, version banner disclosure for example, would be quite easy to test for automatically. So what about the infrastructure side of things? Infrastructure as code is kind of the key to DevOps. This was going to be a bit of a lengthy section, but I don't really think it's needed. If you're doing infrastructure as code, you can use SAST tools to scan it. You can say, these are our config files, let's compare them to some kind of policy and flag everything that is outside of that policy, anything that deviates from what we want as an organization. If you can't move to infrastructure as code, use some of the tools that we as hackers use; again, write wrappers around them. Use things like Nmap or Nessus. Okay, it's not great, it's not going to be very DevOps, but it will give you some understanding of the vulnerabilities that are present before you go live. So, we've had the project managers thinking about security in the way they develop their requirements, and we've had developers thinking about security. How about QA testers? Can we get them to think about security as well?
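The "compare the config to a policy" idea can be sketched simply. The policy keys and values here are invented examples, not any particular tool's schema; real infrastructure-as-code scanners do the same thing against parsed Terraform, Ansible, or cloud configs.

```python
# Hypothetical policy check: flag anything in an infrastructure config
# that deviates from what the organization allows. Keys are invented.

POLICY = {
    "ssh_password_auth": False,   # keys only, no passwords
    "tls_min_version": "1.2",
    "open_ports": {22, 443},
}

def policy_violations(config: dict) -> list[str]:
    """Return human-readable descriptions of policy deviations."""
    issues = []
    if config.get("ssh_password_auth", True):
        issues.append("SSH password authentication enabled")
    # Simple string comparison works for versions like "1.0" vs "1.2".
    if config.get("tls_min_version", "1.0") < POLICY["tls_min_version"]:
        issues.append("TLS minimum version below policy")
    extra = set(config.get("open_ports", [])) - POLICY["open_ports"]
    if extra:
        issues.append(f"unexpected open ports: {sorted(extra)}")
    return issues
```

A clean config returns an empty list; anything else can be flagged, or fed into the same build gate as the application findings.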
How many testers have actually tried to automate security tests? I've spoken to a lot of testers about this and they say, security isn't my job. Well, honestly, the way I see it is that a security bug is a bug, right? It doesn't matter whether it's security-related or not, it's a bug, so it falls into the general remit of testing. I think the issue is that the tooling doesn't really exist to look for security issues without a lot of background knowledge, which is a shame. So, we looked at this earlier. Imagine if we have a big list of requirements. It would actually be fairly easy to give that to the test team and say, make sure that these things either work or don't work. We can't really do that unless we have the list of requirements. So, okay, back to pentesting. These are the vulnerabilities that were left over, the things that we couldn't automate away. So yeah, okay, I thought we were automating security? I don't think we can automate all security. Maybe 70%, and that's a number I just made up, maybe 70% of the vulnerabilities can be found before the pentest. These will be things like cross-site scripting or SQL injection, maybe cookie flags not set correctly. A lot of things that I spend a lot of time reporting we can automate away, and they should be, really. I would say that once we're able to do this, we can actually say that checklist pentests are okay. That's something that we hate doing as pentesters; we hate going in knowing that a test is only being done because someone's asked for it. But if you can automate 70% of the stuff away, you can probably hire cheaper pentesters, someone who will literally go along and manually verify what you've done, and I think that would actually be okay. The point is that you get to decide what gets tested. You get to decide the risk of your application. And maybe you say, this is a low-risk application, we're happy with 70% of the issues being found.
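As one example of a requirement a QA team could automate without deep security knowledge: checking that the session cookie carries the flags the requirements call for. The parsing helper and cookie name below are invented for illustration; the flag names (`Secure`, `HttpOnly`, `SameSite`) are the standard ones.

```python
# Sketch of a QA-automatable security check: given a response's
# Set-Cookie header, assert the session cookie has the required flags.

def cookie_flags(set_cookie_header: str) -> set[str]:
    """Return the lower-cased attribute names present on a cookie."""
    parts = [p.strip() for p in set_cookie_header.split(";")[1:]]
    return {p.split("=")[0].lower() for p in parts if p}

def test_session_cookie_is_protected():
    # In a real suite this header would come from an actual HTTP
    # response in the test environment.
    header = "session=abc123; Secure; HttpOnly; SameSite=Strict"
    flags = cookie_flags(header)
    assert "secure" in flags
    assert "httponly" in flags

test_session_cookie_is_protected()
```

This is exactly the kind of "cookie flags not set correctly" finding that fills pentest reports and can be caught in CI instead.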
Maybe you say, okay, this is a high-risk application, we actually want the manual pentester to come in. I would say pentests are probably still needed for some applications. I don't think we're ever going to get to completely automating security, at least in the near future. So what I would like to move towards, as a pentester, is a kind of continuous red teaming or bug bounty system, where as the pentester I'm constantly looking at an organization's applications and reporting back with the issues that I find. This is where you would find those weird logic issues, or even new vulnerabilities, things that have just been released that are not always going to be picked up by the tools or be obvious. But this will only really work if you can feed back into the development lifecycle, and this is how I think pentesting can work in an agile environment. Having someone come on board and do a five-day pentest at some point during development doesn't really work. Having a continuous red team actually would, as long as there's a way to feed the information about the vulnerabilities back to the developers. Or you can do something like this: you can say, after a change to a key area, we're just going to do a bit of a checklist pentest. We're going to say, okay, this thing has changed, let's check authentication, let's check injection, let's check, et cetera. Again, I think that is another good way to do pentesting in this kind of environment. We need to be able to capture the results, though. What would be great to see is output from our SAST tools, from our DAST tools, from QA testing, and from red teaming, all going into a central database of issues. At the moment, I think a lot of these tools and teams report in different ways. There have been so many times when I've given a PDF to someone in an organization and then it just kind of disappears.
I was speaking to one lady at a place where I'd done a small contract, and I said to her, have you seen the last pentest I did? She said no. I think she had taken over from the person I'd done the test for, and there was no handover of the pentest issues. So these need to be fed into a central repository, and then the application, as it's being built, can check this database and say, okay, we have these high-risk issues, we're going to stop the build, or maybe we won't. We understand the risk of the application now, so we can make that decision. We can also do things like regression testing. Oh, one thing I forgot to mention on the previous slide. Who's had to read a pentest report before? Yeah. I mean, how useful do you find them, really? Kind of useful? Wouldn't it be awesome if you actually got a test case instead? That seems to make way more sense than a PDF: some code that you can run that says, this is the vulnerability and this is how you can fix it. So when you do fix it, you actually have something that will run to show you that it's been fixed. So, we actually have ten minutes left, but I finished a little early. Are there any questions, or does anyone want to have a bit of a discussion about some of these things? Yeah, go. Yeah, so for what kind of thing? Infrastructure or application, web, mobile? Yeah, I can speak to you afterwards if you want. But for a web app, I use things like Burp Suite, which, as I said, is an intercepting proxy. I do lots of things manually with it, or I have my own set of tools that automate the things I tend to do. There are loads of tools, but again, these tools aren't really written for developers, they're written for hackers. There's sqlmap, a really awesome tool for finding SQL injection, but it has quite a horrible CLI interface and is just kind of nasty.
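The "test case instead of a PDF" idea from a moment ago could look something like this: the pentester hands over a failing test that replays the directory traversal finding, and it goes green once the fix lands. `resolve_download_path` is a hypothetical function under test; a minimal fixed version is included here so the sketch runs.

```python
# Hypothetical pentest finding delivered as a regression test. The
# function name and base directory are invented for illustration.
import posixpath

BASE_DIR = "/srv/app/files"

def resolve_download_path(user_path: str) -> str:
    """Resolve a user-supplied path, rejecting traversal attempts."""
    resolved = posixpath.normpath(posixpath.join(BASE_DIR, user_path))
    if resolved != BASE_DIR and not resolved.startswith(BASE_DIR + "/"):
        raise ValueError("directory traversal attempt")
    return resolved

def test_directory_traversal_is_blocked():
    # The exact payload from the report, replayed as a test case.
    try:
        resolve_download_path("../../etc/passwd")
        assert False, "traversal was not blocked"
    except ValueError:
        pass
    # Legitimate paths still work after the fix.
    assert resolve_download_path("reports/q1.pdf") == "/srv/app/files/reports/q1.pdf"

test_directory_traversal_is_blocked()
```

Before the fix this test fails, documenting the vulnerability; after the fix it passes and stays in the suite as regression coverage.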
I worked at a place where they had this open source tool I mentioned before, Drozer. It's a really awesome application for testing Android apps. They tried to sell a pro version of it, and it wasn't very successful. What they did is they just threw a GUI on top of it and assumed developers would want to use it like that. They didn't, and no one bought it. So it's open source; if you do mobile stuff, take a look at it, because it will find a lot of vulnerabilities in your applications. But it needs help, I think, from developers and from the DevOps community to turn these tools into things that work for you guys. They work for us, but I don't think they work for you. Anything else? Yeah? Well, in Android, do you mean, in this case? That can be a hard one. Obviously encryption is the big thing, but the question then is, where do you put the keys? I've seen lots of times, especially in thick clients, that the data is all encrypted and saved on disk, but the key is within the application. So you reverse the application and you get the key, or it's a key that's shared across all installations. That's why you do need to bring in a security expert, I think, right at the beginning, because they'll say to you, okay, yeah, encryption, do that, but these are the things you need to consider: the key shouldn't be stored here, and so on. There are lots of different approaches; it really depends on the way the application works. It might be that storing the key within the app is fine if you have enough obfuscation and you say, we accept that someone might be able to recover this key, but it's going to take them six months. Maybe you're okay with that. So there's a lot to consider. It's very rarely a case of one answer for the best way to do it; it depends on a lot of things. I've also seen things like tokens stored inside applications.
And this is one I actually get asked a lot: what do we do with these tokens if we don't want to store them in the application? The best answer I can usually give is, rather than storing the token in the application, use a web service. That web service holds the token, which is then used for whatever service it needs to contact, and you can have your own authentication methods on that web service. But I've only ever seen one person do that properly; everyone else just hardcodes everything. So, I'll repeat the question because I like it: what's the best use of, I guess, the consultant's time, right? Finding all the low hanging fruit is definitely a good thing to do, because they're the kind of things that will get picked up by tooling. They're the kind of things an attacker will see and think, okay, they've done these things wrong, there are going to be other issues. But like everything, it depends. If it's a particularly high risk application, you probably want to spend a bit more time looking for the really complex stuff. But it doesn't really make sense to do that unless you've dealt with the low hanging fruit, which is why we need to work out ways to automate things more, so that it isn't there. When I do come in to do a pentest, I shouldn't be spending my time looking for the low hanging fruit, because if I'm running the test for five days and I spend those five days on cross-site scripting, SQL injection, and banners shown, that's not a useful way to do the pentest. Instead, it should be five days of me pulling my hair out trying to find something awesome. And if I'm not doing that, I would probably say don't even bother with the pentest at that point; you need to step back and think of other ways to start securing applications. Oh, good question. So, what I would love to see is, I guess, just more communication.
I think a lot of the time the InfoSec community kind of looks down on developers, and the developers kind of look down on InfoSec, which makes no sense. When I'm on site, I really think of myself as being part of the team for the week or two that I'm there. I'm not there to humiliate anyone. I'm there to be the security part of their project, because I know I'm a terrible developer, so I shouldn't expect developers to be experts in security. Instead, what I need to do is find a way to impart my knowledge in a way that they can use without being experts themselves. So I think communication between everyone is key, and that's the point of DevOps, right? It's about communication. I did this whole talk and didn't even mention DevSecOps, which I was quite pleased with, because I don't want this to be a DevSecOps thing. This is security in development. It's part of DevOps. It's about bringing everything together, as was mentioned in the talk earlier. Anything else? I think we still have some time. We've got time for one more question. Okay, cool. No, nothing? Okay, thanks very much.