Okay. Ha! Right, let's get started. First of all, thank you very much, all. Thank you very much to the organisers of AppSec Israel, not just for organising this great conference, but also for deciding that all talks should be in English, because that's making my life a lot easier today. So that's great stuff. I'll give it a try. That's really what this talk is aimed at today. The idea of this talk came a couple of years back, when I worked on an application security test. Now, this test... it wasn't a regular test. We started off, we sat down with the developers, we understood, okay, what is this application? How does it work? How do the different modules link together? How does everything fit in? Where are the security controls? We really understood that in depth. Once we'd done that, we had various ideas of, okay, security improvements.
But if you're a defender, if you're a builder, there aren't those resources available. There isn't that information really available on how to get more out of the process, and that's what I wanted to try and achieve with this. So, a quick background about me, in case you missed the beginning: my name is Josh Ros, I've worked for the last ten years in IT risk and IT security, also a bit of software development, and these days I'm at COMSEC. This is a talk about ideas, and I didn't want branding to distract from that. Having said that, that's obviously the company that's given me my experience with those ideas, and that's obviously been a big benefit, so I want to note that up front. So what do I want to do today? Ideally, there's something for everyone here. If you're a builder or a defender, hopefully some of the ideas I talk about today will be relevant to you: some that you can take home and use, or take back to the office and use. If you're a breaker, I suppose the main question is: are you ready to deliver tests with this level of quality, this level of thought? If you've got any questions or statements, save them for the end, because I'm a little bit pressed for time, but come and talk to me afterwards; I'll put my contact details up at the end of the talk. So, the way I see it, you're doing application security testing already. If you're a big company, then you've got regulations that say you have to do this. Maybe you're subject to PCI, maybe you're subject to banking regulations; they say you have to do application security testing.
If you're a small company, you've got customers who say, okay, here's our list of security requirements for your product, for your service, and one of the items on that list is going to be: do you do security testing of your application, of your system? So it's something you're already doing. Hopefully, even if you weren't required to do it, you'd do it anyway. So if you're already doing it, let's try and do it with the best possible value. Let's try and do it with as much value and as much quality as possible. If it's worth doing, it's worth doing right. I had a client a few years back now, and they were very, very aggressive about fees. They didn't want to pay consultant fees; they always wanted the lowest possible rates. And we ended up doing some work for them, and as part of the work we had to review various documents, various reports they'd received, and we saw a security report prepared for them by a different consultant. And this report was in English, and it was diabolical. It was a really upsetting report to look at. It was clear they'd gone for the lowest possible rate, the lowest possible option, and you could tell by the quality. And that sort of takes us on to the next step, the next, I suppose, underlying principle here: you're going to get out what you put in. If you put good meat into the sausage, you're going to get a good quality sausage. If you put the effort in, put the time into making sure that the test delivers more value, you're going to get that extra quality test at the end. And as well as that, ultimately every test is going to be based on: here's the period of time, here's the number of hours you have to do this test. You want to get the most value out of those hours as you possibly can, and make sure that your testers are spending them in the best possible way. Another thing to bear in mind is that not every test is the same, not every company is the same, not every situation is the same.
That goes for the test itself: make sure the test is customised to the particular situation. It also goes for this talk, because some of the ideas are going to be relevant to some companies, some of them will be relevant to other companies, some of them will be relevant to specific applications. But hopefully, like I said, there's something in here for everyone. So when can we do this? Where can we do this in the process? I'm going to talk about three opportunities, three parts of the process where we can start to think about these ideas. The scoping stage: when we're talking about what we're going to test, how we're going to test it, who's going to do the testing, what's going to be included. The preparation stage: okay, so now we know there's going to be a test and we've signed all the documents, and we're in the run-up to delivering the test. And the reporting stage: that goes for reporting during the test, that goes for the report that comes out of the test, and that goes for the processes after the test finishes. So those are the three opportunities, five ideas for each opportunity. Let's dig in. So the first point. Okay, this is a little bit 101, I guess, in that you've heard it lots of times: start security as early as possible. If you're still at the earliest stages of the development process, you're still at the design stage, you're still at the architecture stage, you want to start thinking about it right now, because anything you can cover off then will reduce the number of findings, the number of issues you come across when you get to the testing stage. We had a big client, a very, very high profile application, very tight timescales. They brought us in to do the security testing. We were sat in a war room: us, the developers, the QA teams, the designers, the architects, the project managers, everyone sat together trying to get this application signed off.
And there were a lot of findings, a lot of issues we came across in terms of security. And there was a lot of back and forth between us and the developers, saying we need to fix this, we need to fix this, we need to fix this. At one point, a developer came up to me in frustration and said, why didn't you tell us this before? Why didn't this come up before? I didn't say it to him, but I think the reason was because we were only brought in at the testing stage; we weren't brought in at the design stage, when the key decisions were made. A lot of these issues might have been covered off then, and then we wouldn't have been left, at the testing stage, suddenly finding and reporting all these issues so late. So, yeah, push it to the left. Next: the whitest white box, or more likely the most transparent box. A client came to us and said, okay, we want you to test this little marketing site we stood up. We had a look, it was a WordPress site, and they asked us for a quote. So we said, okay, you can do this in two ways. We can do this black box: we just throw all the normal testing at the site, we don't really know what's going on behind the scenes except that it's WordPress, and we'll see what happens, and that'll take X time. Or we can do this white box: you give us the credentials to your WordPress install, we'll go in and look at the configuration, we'll look at the customisations you've done, we can then target ourselves, we can then focus on the specific high-risk areas, and that'll take 75% of X. It'll take less time because we'll be more focused. They came back and said, yeah, we'll take the black box, we don't want to mess around with the credentials and the white box. And that's very much not what we're looking for, very much not what we're striving for. We want as much information as possible.
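As a rough illustration of what that information buys you (this sketch is mine, not from the talk): with read access to the source, a tester can jump straight to the risky sinks instead of fuzzing blindly. The patterns below are hypothetical examples, not an exhaustive list.

```python
import re
from pathlib import Path

# Hypothetical "sink" patterns a tester might search for first when
# given read-only access to the source (illustrative, not exhaustive).
SINK_PATTERNS = {
    "possible command injection": re.compile(r"os\.system\(|shell=True"),
    "possible SQL injection": re.compile(r"execute\([^)]*(\+|%s)"),
    "possible XSS sink": re.compile(r"innerHTML\s*=|document\.write\("),
}

def find_sinks(root: str) -> list:
    """Walk a source tree and flag lines that match any sink pattern."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in {".py", ".js", ".php", ".java"}:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for label, pattern in SINK_PATTERNS.items():
                if pattern.search(line):
                    hits.append((str(path), lineno, label))
    return hits
```

Black box, the tester has to guess where these sinks are; white box, ten lines of script surface the candidates in seconds, and the testing hours go on confirming and exploiting them instead.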
I think there's an occasional misconception that we're looking for a realistic test, that we want to realistically simulate an attack. If you want a realistic attack, then insult Kim Jong-un and North Korea will give you a realistic attack. But that's not what we're going for here. What we want is value. We want to get value out of the process, we want to get the most benefit possible from the process. That means the most coverage of the application, the highest quality testing, in the shortest time possible. We had a client who gave us read-only access to their GitHub repository. Now, say we're going through that application and I think a function might have a vulnerability. If I'm doing this black box, I have to start throwing all sorts of different payloads, doing all sorts of permutations, trying to figure out what's going to trigger this vulnerability. If I've got access to the code, I can dig into that function, start looking at how the function works, figure out exactly what I need to do to exploit it, and then I'm done. I don't have to start throwing mud against the wall and waiting to see what sticks. Ultimately, what we're talking about is an improvement process. It's not an exam; we're not trying to test anyone to see who knows the most, who can do the most with the least possible information. We just want to be improving the application. Next is a bit of a weird concept. Some of you might recognise it, some of you might think it's actually crazy. If you go to your boss and you say, we need to fix this issue, we have this security issue, we need to get it fixed because our users will be at risk, maybe they'll listen to you, maybe they won't listen to you. But if you go to your boss with a nice, shiny consultant report that says here's the issue that was found, here's the risk, here's the remediation, you can say, look, this is what's in this report.
Odds are, depending on the culture of the organisation, the boss might say, well, that's come from outside, that's come from an expert, so we have to give more weight to it, and that means we're going to be more likely to deal with it. Again, it very much depends on the culture of the organisation, but if you're in that sort of situation, then you, as the person receiving the test, can use that to your advantage. Right up front at the scoping stage, you want to be talking to the test provider: these are our concerns, these are the issues that I personally, as the maintainer and development manager of this application, am concerned about, and things that I think need to appear in the report. And then, when they appear in the report, you go to your boss with the report and say, look, I raised these concerns, and the consultants have raised these concerns as well. Suddenly you're getting more buy-in to get the issues fixed. So that's a way you can benefit from the consultant's independent voice and ultimately get a better result. Next: a question I get from time to time, should we have the same person testing each time, or should we have a new person testing each time? Yes and no. The way I see it, you get benefit from having the same person testing for a few cycles. The same person is going to understand the application; they'll have tested it before, they'll understand how it works, and you don't have to teach the tester again how the application works, what everything does, who all the different users are, what all the different roles are. And therefore you get some efficiency, because you don't have to re-teach them each time; they can spend more time focusing on actual testing. But hopefully, after a few cycles of this testing, you're seeing fewer findings, less severe findings, and at that point maybe you want to get a fresh pair of eyes: maybe a new person from the same organisation, maybe a completely fresh person.
Someone who hasn't necessarily seen the application before, doesn't know it already. You're going to have to explain to them again how it works, but they may come with a fresh perspective, different ideas, no preconceptions. So you can benefit from both an experienced person who's seen the application lots of times, and also from a fresh person who hasn't seen it before. It's something worth thinking about. This here is, I suppose, a very, very minimalist way of how you might see a security test happen. Start off with scoping, understanding what needs to be tested; move on to a basic overview, where the tester discovers, okay, what is this? How am I going to test it? What sort of information do I need? Then on to a testing stage, then a report gets delivered, and something happens after that: maybe there's some follow-up, maybe there's a retest. To my mind, this isn't a full project. We should be aiming for something more comprehensive, that covers off a lot more considerations and includes a lot more thought. Where I see a full project is more like this. Start off with scoping as before. Then the developer, sorry, the tester comes in and has a much more in-depth discussion, like I described at the beginning, to understand, okay, what is going on in this application? How is it built up? What are the different components doing? How does data flow through the system? They get a much more in-depth understanding, maybe not a full design review, but enough to make sure they're not going in blind. Next is the testing, maybe supported by the source code, like I talked about previously, so that, again, they can test in a more informed way. Then, delivering the report, which can already be of a higher quality, more comprehensive, with more information in there, because the tester knew more when he was doing the testing. But that's not the end, and this is where it starts getting important.
You want to make sure that there's a formal process where the tester sits down with the internal security team or with the developers to make sure that the findings are fully understood. A lot of times, a company might receive a report, understand the most critical elements, and not necessarily understand the rest. So it's important to have that stage, making sure that the recipient of the report really understands all the different findings and what the real issue is. Then move on to the next stage, actually fixing things: helping the developers find a solution, and I'll talk more about that later on. The point is to use the tester's knowledge to help the developer figure out what actually needs to be done. Finally, do a retest. Hopefully, if you follow these steps, then all the findings will be fixed as planned. Then think about what's next. Do you want to wait a period of time and then retest? Do you want to test when the next iteration comes out? Do you want to look at specific aspects of the application? Something worth thinking about here is based on an idea called the zero-day card; I think it came from a talk back in 2011. The idea here is: we're testing a web application, so test it as a regular external web application from the standard perspective, and then say, okay, now what happens if a zero-day vulnerability is discovered in this application? Maybe a vulnerability in the application itself, or more likely in the framework the application is built on. Suddenly, the attacker has got access to the web server; they're now sat internally on the network, on the web server. What happens then? What access do they have? This is very relevant because we saw Equifax. That's exactly what happened with Equifax. Equifax had their application sitting externally on the internet, a vulnerability was found in the framework the application was built on, and the attacker broke in.
Now the attacker is sat on their internal network, pulling hundreds of millions of records of data out. So it's about saying, okay, we're going to do all this process, but we're also going to add an extra stage: let's stick our tester on the server, give them the kind of access an attacker would have on that server, and see where they go from there. Are they blocked off by a DMZ? Are they stuck inside the segment of the web server? Or can they reach the entire company's internal network? It's an additional element to consider, especially relevant given the recent Equifax news. So those are the first five ideas. Let's move on to the next stage now, which is preparation. Maybe this isn't all in the immediate run-up to the test, but it's definitely something to think about. There's a lot of low hanging fruit, a lot of basic issues, that the tester shouldn't have to report on. If you can catch those early on, before the tester gets there, then it's going to save the tester the time it takes to find these issues, write them up and deliver them over. There are scanning tools that exist. OWASP ZAP is a web application scanner, and it's free. Burp Suite has a web application scanner, and it's dirt cheap. You can use those, find the basic, simple vulnerabilities, and get them fixed before you get to the testing stage. And you can have some fun with it as well, if you enjoy trying to break stuff. On the other hand, it does require some time. Maybe the developers do it, but actually developers are massively busy. Maybe you want the QA team to do it. Maybe you can integrate it into the CI process. It does require some time, but you can definitely tick off some low hanging fruit and therefore not have to wait until the security test to get those fixed. So, I know plenty of people I've spoken to say their product backlog looks like this. And inside that backlog, it may well be you already have known vulnerabilities, known issues.
You know that there's this particular security issue; you've had a previous test, or maybe someone mentioned it, and it's just gone onto the backlog and hasn't been dealt with yet. If you know those issues exist, tell the tester right at the beginning. Make sure that's clear from the start: we know that these particular issues exist. You don't want the tester to go in, start testing the application, independently spend time discovering something you already knew about, and then spend more time writing it up in the report. You want to push that out as early as possible and let them know about it. Another thing to think about: you may have areas of the application you're particularly concerned about. We have clients who have said, well, this part of the application was developed ages ago and it's quite old; it may not have all the security controls that the newer parts of the application have, so maybe take a look over there. There may be areas where you've had to implement something in a non-standard way, something customised, something unusual. That might be another area you want to flag to the tester: take a look there, because there may well be issues there. Ultimately it's not a competition, who can find this, who can't find this, who found that vulnerability, who didn't. It's about efficiency. It's about getting the most out of the testing. The more you tell the tester up front, the more they know when they come in to do the test. Next, I've coined this phrase of security by non-testability. There are lots of great technologies out there that you can stick in front of your application and they make the attacker's life more difficult. Maybe it's a WAF, a web application firewall, that's looking at the content of the requests and blocking any that it doesn't like the look of, any that look like they might be malicious.
Maybe it's something that blocks automated attacks. Maybe it's some system that randomises parameter names to make the attacker's life more difficult. These are really great tools, really great things that can make a real attacker's life more difficult. But if you're getting a test done, make sure they're disabled. You don't want a tester going through these tools; you want a tester hitting your application directly. Otherwise, ultimately, they're testing your security tools and not testing your application. If you like, at the end, you can say, here we go, we've got this list of findings; now, what do these findings look like if we go through the security technologies? That's a possibility, but it can also make a mess. We had this a couple of months ago. We had a client where we'd done the test against their application, and they said at the end, okay, can you test this XSS vulnerability via our WAF, because we want to see whether the WAF can mitigate it. So, we configured the test to go via the WAF, prepared the payload, and sent the payload to the application. The payload went straight through and succeeded. I went back to the client and said, okay, well, look, it worked. They went back to the WAF vendor. The WAF vendor came back to us and said, oh yeah, it's a bug in the WAF. It turns out, if you put the payload inside an array, inside a JSON object, inside another array, inside another JSON object within a request, it doesn't find it. But we've fixed that now; we've put up a staging version of the WAF, here, test via that instead. So, we said okay, fine, and sent the same payload via the staging version of the WAF. And the WAF blocked it. Okay, that's great. Now, instead of sending one object in this payload, I'm going to send 100 objects, a much bigger request. I sent that bigger request via the staging version of the WAF. It went straight through. It executed. I said, look, I made a minor change and it still works. They went back to the WAF vendor.
The WAF vendor came back to us and said, oh yeah, our WAF doesn't look at requests over a certain size; it just ignores them. So, okay, they said, we'll fix that. So they fixed it, and I took the big request and sent it to the WAF. The WAF crashed. I was just getting 500 errors back. I asked someone in a different office, on a different IP address, can you try and access via this WAF? They also couldn't access it. This wasn't a block, this was a crash, and it lasted maybe 5 or 10 minutes. So, I went back to the client and explained what happened. They went back to the WAF vendor. The WAF vendor came back and said, yeah, but it blocked the attack. So, it can lead to more complications. You really want to make sure that you're focusing on fixing the issues in the application itself. The technologies can help, but you want to be fixing the vulnerabilities in the application. Next: the environment we get to test in. Anyone who tells you that their testing will be 100% safe, that nothing bad will happen, is either not telling you the truth or they're barely tapping the application with a feather. Things happen, even unintentionally, and suddenly it might impact the application. We had a client over the summer where we sent a relatively innocuous, relatively tame piece of JavaScript to the application. It wasn't intended to do anything specific, but it came across a bug in the application and basically brought down the reporting module. This is a multi-user, multi-tenant application, and suddenly every user, every tenant, could no longer access the reporting module of this application. Now, luckily we were testing in QA. Obviously, if that had been in production, that would have been a big issue. Again, this wasn't something that was trying to cause a denial of service; this was simply a piece of JavaScript that just somehow came across a bug in the application.
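The two bypasses in that WAF story are easy to picture in code. A minimal sketch, assuming a JSON API and a hypothetical `data` field (the real parameter names weren't given): first wrap the payload in nested arrays and objects, then inflate the request past the size the WAF inspects.

```python
import json

def nest(payload: str, depth: int = 2) -> str:
    """Wrap a payload in alternating arrays and objects, mimicking the
    array-inside-JSON-inside-array trick that slipped past the WAF."""
    body = payload
    for _ in range(depth):
        body = [{"data": body}]  # "data" is a hypothetical field name
    return json.dumps(body)

def inflate(payload: str, copies: int = 100) -> str:
    """Repeat the same object many times so the request grows past the
    size limit the WAF was willing to inspect."""
    return json.dumps([{"data": payload} for _ in range(copies)])

xss = "<script>alert(1)</script>"
nested_body = nest(xss)   # beats the original nesting bug
big_body = inflate(xss)   # beats the size-limit bug
```

Neither request is being sent anywhere here, of course; the point is that trivial transformations like these get past inspection layers, so the application itself, not the WAF, has to handle whatever finally arrives.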
Because of this, my personal preference is: give us a dedicated instance to play with. Let's stand up a dedicated instance based on the code base of the production instance and test on that. Then we're not impacting your customers, we're not impacting your QA staff or anyone else. That's, I suppose, the best case scenario. If that's not possible, then let's test on QA, and maybe back up the database first, because odds are we're going to make a mess of the database with lots of dummy data. And let's not test on production, just in case. Another point here: let us use our own testing laptops. If we can come in and use our laptops, we've got our environments set up, we've got our tools installed, we've got our workflow set up, and we can arrive on the first day of the test and start testing. Or even better, if we're testing remotely, we just get started and away we go. If instead we have to start trying to get admin access, to install the tools, it just takes up a lot of time on the first day of testing, and that's not using the time efficiently. Now, I know a lot of you may come from organisations where that's just not possible, where external hardware is completely forbidden, and that's it. If that's the case, maybe try and have some dedicated machines that are your hardware but have the tools installed, have some sort of environment set up, to try and speed things up. The worst case scenario is what I saw a couple of weeks ago, where I came in to a big company to do a couple of days of testing. I had to get all the tools sanitised first and get everything installed on the first day of the testing. I finished off the test at the end of the second day and told them I'm finished, and they said, we're going to format that machine now. This is a big company; they're going to have lots of different tests going on, and every single time, someone's going to have to come in and re-sanitise the tools and reinstall the tools again. You're killing the efficiency at the beginning of every test.
Related to this: the tester loses the most time if things aren't ready, if the environment isn't ready. You want to agree a date up front with the tester: the testing is going to start on this date, in two weeks' time. You, as the recipient, want to get from the tester a written list: this is exactly what we need. The tester needs users, the tester needs links, the tester needs specific data. Get that in writing from them in advance, make sure it's ready by the agreed date, and test it as well. Because again, if we're spending the first day figuring out which users work and which users don't, and discovering that a particular environment doesn't fully work, that some of the modules don't work because there's some sort of error in the deployment, then again we're eating up time at the beginning. Being ready is the best way to make sure the testing starts straight off and straight away, testing at full efficiency. Okay, so those are the preparation elements. Let's talk about reporting now, and by that I mean reporting during the testing, the report that comes out at the end of the testing, and what goes on after that. First, progress reports. We see requirements sometimes: we want a daily report, we want to hear about findings every single day, we want to hear about progress every single day. I've done a lot of this, and this is what I've seen. There are two types of reports here: there are status reports, how it's going, what we're doing; and there are findings reports, we found this, we found that. As a general rule of thumb, the way I've seen it work best: if there's a problem, an issue that's stopping the testing, stopping the tester from testing, you want to hear about that ASAP and get it escalated immediately, because again, it's reducing the efficiency, slowing down the test, stopping the ability to test efficiently. If there's a critical finding, and a critical finding means remote code execution on your web server, or suddenly your
entire database is exposed, or suddenly a very sensitive operation can be performed by anyone on the internet. If your app is already in production, you want to know about that ASAP as well, because you need to rectify it as soon as possible. Any other findings, leave them till later. Even if they're high risk or medium risk, it doesn't matter, leave them till later, because ultimately the findings together show a picture of the application at that point in time. A finding on its own might be considered low risk, but put together with two or three other findings, suddenly they're all at much higher risk, because together they create a scenario that's much riskier, a much greater vulnerability. So because of that, it's better to wait, because if we're trying to trickle the findings through day by day, you're not going to get that context, you're not going to get that overall oversight of how the findings relate together, and potentially there are going to be corrections that have to be made further down the line. The other issue is that it leads to a duplication of work: we interrupt testing to get the finding written up, that then has to be reviewed, that then has to get sent to the recipient, there may have to be refinements to the finding as extra information is discovered, and then finally it has to go into the final report, and the final report has to get reviewed. It's a duplication of effort, working against how we should be spending those hours. So ideally: problems and critical findings ASAP, everything else, let's wait until the end, until the report comes out. Having said that, communication during the course of the testing is important. Not the formal daily reports, but just ongoing discussions between the tester and the recipients. I came across this, is it supposed to work that way? Is that function being used correctly? Should I be able to see that? Does it hurt when I press this? Are you seeing this in the
logs? That sort of communication is good, and it's important to keep that going, by email, by phone; but what I was talking about just now is a formalised reporting process. Next, again a little bit 101: every report should come with an executive summary, the first few pages of the report that say here's what we did, here's what we found, here's the risk, here's what we need to do about it, a few pages written in a way that someone non-technical can understand. You want to make sure that looks good, you want to make sure that looks professional, because you may want to give it to your customers, to your clients, and say, look, we did this security testing of our application and here are the results. You also want to make sure that business impact is front and centre, because whoever is reading this isn't going to be interested in "there's a malicious script executing in a browser" or "there's an HTTP header missing". They want to know: data is being stolen, an attacker is sitting inside our network, unauthorised functions are being performed. That needs to be front and centre of the executive summary, so that when you need to take it to someone more senior to get approval to fix the issues, it comes through. As well as the executive summary, the findings themselves need to be detailed enough that the developer or the security team can fully understand: this is exactly what's happening here, this is what needs to be done. Obviously, it needs to be understandable; the recipient of the test needs to be able to understand it, and if it's not clear, it needs to be clarified. There needs to be full information about how to reproduce the finding. Ideally, you as a recipient should be able to reproduce it, so that once you get around to fixing it, you can test internally whether it's actually fixed. If it's something like a missing header, then maybe it's going to be a very simple example, but I've seen vulnerabilities where there are 3, 4, 5 stages to explain exactly what's going on, and that needs to be clearly set out
and stepped by step so that it's clear to understand XSS for example is that the only finding of XSS in the entire application and there are no other findings there are no other examples of XSS in the application oh there are 20 more but there's just one demonstrated in the details but here's a list of the other 20 or maybe the test has to come back to this look we found one example of XSS we think there might be more but in a time available we couldn't find the one in that last case you want to push back to the test for us at any level how can we find other examples how can we go back to our code base so that maybe you didn't come up with some of the test one but we can then try and kick off from the code side and ultimately that should be clear for the report how many instances of this there were where this is the only instance that should be clear there and again recommendations you don't always want copy and paste recommendations recommendations have to consider the specific case again XSS you fix that in several ways depending on how it appears you may need to sanitise your output sorry encode your output if you need to content type there you may need to use some sort of input sanitisation because you need to keep the HTML code there you can't just encode it away and the test should be aware of which of those recommendations is appropriate and that should be what's there you need to push to make sure that's clear from the findings you've written so this step I think maybe gets missed occasionally you've received a report the report has risk ratings so you want to get the testers and someone from R&D to sit down together and figure out how what time scales are these going to be fixed the testers they're presenting on what's the severity, what's the risk of this issue the R&D teams are saying this is how complicated it is to fix this is how difficult it is to fix maybe it's a very simple one line change maybe the fix requires months of effort or it can only be 
done at a later date. That's the case where you've got a high-risk issue with a complicated fix: maybe you want to go back to the testers and say, what can we do short term? What can we do as a sticking plaster, as a short-term fix, to make sure that issue is at least mitigated in the short term, so that in the long term we can try and fix it properly? Again, push the tester for that: give us a short-term fix, give us a long-term fix, and then we can add that into the plan of what happens when.

Next, getting assistance with the fixes. The developer is going to know the code base, exactly where in the code the issue is, but they may not know exactly what the fix should be. On the other hand, the testers will likely know what the fix needs to be, but may not know where exactly it needs to be done. You can get real value from sitting down together and figuring out where, and how, exactly to fix it. That reduces the likelihood of a misunderstanding, and makes sure you're not going to end up back at the same stage, with the same finding coming out again.

We saw this literally a couple of weeks ago. A year or so ago, we did a test on an application and CSRF came up. We reported it and gave them a recommendation; at their request, we then provided a more detailed document talking through: here's what CSRF is, here's how you fix it in different cases, here's how we think you should fix it in your case, taking into account your application, taking into account your frameworks. We said: you've got this document, do you want us to sit down with you and talk it through? No, no, it's fine, we've got it covered. So we came back to test that application a couple of weeks ago. They had added a CSRF token to the application, and it's now being sent on each sensitive request, but it wasn't being checked on the server side. You could send the wrong token and it would still be accepted; you could send no token and it would still be accepted. So that finding has gone, exactly as it was, straight back into this year's report, and a lot of time has been wasted. It's a shame; had we been able to sit down with them, we could have made sure they understood exactly how to fix it. So that's definitely another way of making sure the fix in the report actually solves the problem.

Alright, that went by very fast, so here are the 15 ideas again; let me talk through them quickly. Starting with scoping: anything you can do at the design stage, anything you can do at the architecture stage, do it up front, because things caught earlier are easier to fix. Disclose as much information as possible about how the application works and what's going on behind the scenes. Leverage the report if you want to get buy-in for a particular issue you know you have. Use an old hand, someone who has seen the application a few times, for the first few cycles, and then a fresh start, someone new to look at it, once the findings start reducing. And have a more comprehensive project: demand from the testing company, we want something more comprehensive, we want a full cycle of testing, not just the bare minimum.

Going on to preparation: hack yourself first; disclose any known vulnerabilities up front, any known issues, any concerns; as a rule, disable the fancy security tools and technologies that make attacks harder; give us a dedicated setup, somewhere that if something goes wrong we're not going to ruin anything for your customers, and let us use our own environment to test; and be ready, making sure that everything is ready for the start date of the test.

Then, during the test: not too heavy on the progress reporting; report critical issues and anything blocking the tester straight away, and leave the rest to the end. Finally, the reporting: an executive summary that's clear, looks good, and shows the
business impact. Make the findings clear, with good recommendations; show how to reproduce them and explain how many instances there are. A prioritised action plan: when are we going to fix this, short-term fixes, long-term fixes. And finally, use the tester to get some assistance in making sure the right fix is done.

So that's the 15. I suppose I'll end with some key takeaways. I've got 15 ideas there; if you can apply one of them, if you can apply two of them, if you can apply five of them, you're going to get incrementally extra value. It's all going to help; every little helps. I've said it about a thousand times and I'll say it a thousand more: efficiency, efficiency, efficiency. Maximise those hours; use them as best you possibly can. And finally, build a dialogue: make sure that discussion is there right from the very beginning, between the tester and the recipients, and at the stages afterwards, once the report has been delivered. If you've got questions, there are maybe a couple of minutes now, or my contact details are here. Thank you very much for listening, and thanks for your time.
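[Editor's illustration of the CSRF fix described in the talk, where a token was generated and sent but never checked. This is a minimal sketch of the server-side validation that was missing; the function names and the in-memory token store are illustrative assumptions, not taken from the talk.]

```python
import hmac
import secrets

# Illustrative in-memory store; a real application would keep the
# expected token in the server-side session state.
SESSION_TOKENS = {}

def issue_csrf_token(session_id):
    """Generate a fresh random token and remember it for this session."""
    token = secrets.token_urlsafe(32)
    SESSION_TOKENS[session_id] = token
    return token

def is_csrf_token_valid(session_id, submitted_token):
    """The server-side check that was missing in the story above.

    A request with no token, or with a token that does not match the one
    issued for this session, must be rejected.
    """
    expected = SESSION_TOKENS.get(session_id)
    if expected is None or submitted_token is None:
        return False  # "no token sent" must not be accepted
    # Constant-time comparison, so the token can't be probed via timing.
    return hmac.compare_digest(expected, submitted_token)
```

Both failure modes from the story, a wrong token and a missing token, come back as rejected here; generating and sending the token achieves nothing if a check like this never runs on the server.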