Let's start a little bit early. Okay, me here again, to introduce to you our next speaker, Joe Schottman. He's an application security professional with experience in pen testing, purple team exercises, incident response, threat hunting, DevOps, web application development, you name it. He's currently working as a security analyst at BB&T, and he's going to talk about purple team strategies for application security. Give him a round of applause, please. Like the title says, I'm here to talk about purple team strategies. I do a lot of that at work, and I searched really hard to find a word that would mean increasing purple, and there actually is one, but it's a really terrible word, so it made for a kind of clunky title and I got rid of it. About me: my name's Joe Schottman, you can reach me on Twitter at JoeSchottman, or a couple of other ways if you want to find them. I'm a senior security analyst focused on application security. I have done pretty much everything at some point in both IT and security, so when we started doing purple team engagements at work, that's why I got pulled in. I'm pretty enterprise-focused, so I work with many, many applications and many, many servers; users are measured in the tens of thousands, up to millions of customers. So not everything I say may be directly applicable to what you're doing, especially if you're in more of a consulting role than working inside a company, but hopefully you'll get a couple of takeaways from this. The obligatory disclaimer: I'm not speaking on behalf of BB&T, or Truist, which is the company that we're about to become, or any other entity; all opinions expressed are my own. So if you have absolutely no idea who this person is, why he's screaming at you on the slide: this is Eugene Hütz. He is the lead singer of a band best known for its song Start Wearing Purple.
And given my natural tendencies, I would make a slide deck that is absolutely plain black and white text, and people would get really, really bored. I've been advised to add more cats, but in this case I've decided to add more Eugene. And the band, if you're not familiar, is Gogol Bordello. It's kind of a mix of punk and various ethnic music, mostly from Eastern Europe but other places as well. So the agenda of what I'm going to tell you: I'm going to give a brief introduction to purple teams for people who aren't familiar with them. I'm going to talk a little bit about DAST; if you're not familiar with that, it's one of the really standard types of testing done on applications. Then I'm going to talk about the problems of applying purple team strategies to DAST testing. And it's not just DAST; you can also apply this to more manual testing as well. And then some general strategies for bringing the offensive and defensive teams together to work better on application security. And I will send a copy of the slides to the village and they can stick it up on the website, so if you want a copy, you'll be able to download it. So, purple team in a nutshell. The core of it is that you want the offensive and defensive teams working together. And a big part of this, I've found, is that the corporation, or the powers that be, need to realign the incentives and goals of the teams, because very often they find themselves working in opposition. The red team's upper management may have a goal of finding lots of bad things. The blue team's management may be incentivized to close a lot of tickets. That doesn't necessarily increase security. So it's bringing together the goals for both sides so that they're working together, so that you've got a holistic approach to security, not just one side trying to defend and one side trying to attack. Because we really want to avoid this.
And having been on the blue team side, very often the experience is that you're getting punched in the face, especially when you're talking about not just AppSec but the full domain experience. So the red team gets in, that's pretty easy to do. They start pivoting around, they get domain admin. They grab a bunch of credentials, they start getting into sensitive systems. And all of a sudden their boss is getting an email saying, hey, why didn't you stop the red team? And when test after test happens like that, it really damages morale. So bringing the teams together to align better reduces this conflict. So, a little more definition of purple stuff. It's about finding, confirming, and closing gaps. You want to find both your visibility gaps and your operational gaps. It's about trying to speed up the time to detection; the further left you can push on the ATT&CK matrix or the cyber kill chain, the better off you are. It's especially about looking for pivoting. Again, especially in large companies, you have to assume that they're going to get in, so you're really trying to find when they start trying to go from that initial breach location to others. And it's about going through and finding everything on systems that doesn't need to be there (living off the land is one of the big terms in red teaming right now) and getting it off, so that's that much harder. The blue team and red team both have visibility into how that interoperates, so getting them talking to each other can help with that. And it's about increasing the education and situational awareness of both sides. The blue team knows where a lot of the bodies are buried. They see the vulnerable systems; they know where real-world attackers have been proven to try to get in. The offensive security team doesn't necessarily know that. So get the information sharing going from the blue team to the red team; it's not just the red team telling the blue team what to do. It's both sides working together.
And it's about delivering a better return on investment. You can have a very expensive test from a top-name company that finds next to nothing. And if your only goal is to say that you had a top-tier company come and do a risk assessment as part of your PCI compliance or what have you, that's fine. But I'm lucky enough to work for an organization that truly cares about increasing security. We want to not be in that next headline, similar to recent events. So we try very hard to actually deliver on goals and objectives in our testing. And as part of that, getting the blue team up to speed is a big help. Finding vulnerabilities doesn't actually make you more secure. I can sit and test and test and test, and I can find RCE after RCE. It doesn't matter; that in and of itself doesn't make the company more secure. The hard work is taking that information and getting it remediated, whether it's at the code level or soft patching with WAFs. Whatever it is, getting the things actually closed is what makes the company secure. As we get faster and more complex, as DevOps and infrastructure as a service and everything as a service get bigger and bigger, not everything we have in production will actually have been tested by the time it gets there. We're doing deployments (we as an industry, not we as my company) in some cases dozens of times a day for new code. There's no way you can do a thorough security test on that. And so to get your defensive posture working, you have to have a defensive team that knows how to spot actual issues, go through the logs, go through the alerts, do the threat hunting, and find things in systems that you never got a chance to test or that you might have missed.
Every single test where you succeed in your engagement objectives, whatever they were, whether it was getting data or just getting access to a system: every time you don't sit down with the blue team afterwards and make them better as a blue team, you've missed a big opportunity. Like I said, it damages morale and it doesn't actually close those gaps. So, like this background, purple is a spectrum. You do not have to do everything to have a purple team. You do not have to work everything into your SDLC. You can take a few things from this and make yourself better. So don't say there's no way I can accomplish all of this; in purple teaming in general, there's no way most people can accomplish everything, but take what works best for your environment. So, a brief walkthrough of a simulated purple team engagement using Eugene and Sergey, who is also a member of the band. Sergey here is playing our red team, and he's used a zero day, and he communicates to the blue team and says: I have used the zero day, I got access to these three boxes, were you able to see it? Eugene, playing our blue team here, responds that he did not see that attack initially. So he's done research on what happened and figured out rules he can add to the SIEM. So now they've got visibility. But then he also says: I've seen something similar to this where attackers were able to bypass the IDS/IPS by using this type of encoding, have you given that a try? So you get the circle going back and forth, where now the red team takes the information the blue team provided and tries it again, and it turns out Eugene has enabled a detection for it. By working together, they've not only addressed the initial issue (well, not patched it, but created an alert that will help detect it) but also validated an additional form of the attack that the red team wasn't aware of and didn't test initially. So that's the core concept of going purple.
So you work together, and then everyone gets the big bow at the end of the show. So, DAST in a nutshell. How many people are familiar with DAST as a term? It's dynamic application security testing. It's the very basic type of test that most companies do, and it's low cost with reasonably good results. Basically it's a black box test. You've got a number of tools: OWASP ZAP is the dominant open source one, Burp Suite Pro is used by many people, and there are options from everyone from HP to IBM to many smaller companies that just focus on these tools. And basically you throw a lot of garbage. You're fuzzing the applications: you're throwing things that are cross-site scripting at them, you're throwing things that are SQL injection, and you're looking at what comes back and trying to guess whether the attack was successful. There are also other forms of passive fingerprinting you can do as part of this, so it will do things like look at the cookie settings: is there a session cookie coming back without the Secure and HttpOnly flags? On a moderately large site, these tests can generate anywhere from tens of thousands to hundreds of thousands of requests. It is very noisy, it fills up the logs with junk, and it's easy to spot. When you start as a blue team analyst, you'll see people running Burp scans, and it's just page after page after page of malicious activity coming from this one IP, again and again and again, unless you're blocking it. So why does DAST need improvement? DAST tests typically run one to two days, depending on the size of the application. They're mostly if not entirely automated, and they're testing for a single issue at a time. They're asking: does this type of cross-site scripting attack work here, yes or no? It goes attack by attack, variable by variable. Attackers, on the other hand, have as long as they want.
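The passive cookie check mentioned above is simple enough to sketch. This is a minimal illustration of what a DAST tool does with a Set-Cookie header, not any particular scanner's implementation; the function name and the simplified attribute parsing are my own:

```python
# Minimal sketch of a passive DAST check: flag a Set-Cookie header
# that's missing the Secure or HttpOnly attribute. Real scanners also
# check SameSite, cookie scope, expiry, and more.
def audit_set_cookie(header_value):
    """Return a list of missing security attributes for one Set-Cookie header."""
    # Everything after the first ';' is an attribute; compare case-insensitively.
    attrs = {part.strip().lower() for part in header_value.split(";")[1:]}
    missing = []
    if "secure" not in attrs:
        missing.append("Secure")
    if "httponly" not in attrs:
        missing.append("HttpOnly")
    return missing

print(audit_set_cookie("SESSIONID=abc123; Path=/"))                    # both missing
print(audit_set_cookie("SESSIONID=abc123; Path=/; Secure; HttpOnly"))  # clean
```

Checks like this are "passive" because they never send an attack payload; they only inspect responses the application was going to send anyway.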
So if you're a high-value target, they can spend weeks, months, hypothetically years figuring out how to get into your applications. They're combining both automated and manual testing. Again, they've got the economic incentive in some cases to go after these applications very hard, so they can spend time digging through obscure encodings your developers might have used on a variable, things your automated tool doesn't know how to handle, like a variable that's been base64-encoded twice. They decode it, put attack code in there, and get successful SQL injection, and your DAST tool is not going to find that. And then they chain vulnerabilities together to make something that's actionable. Very often you're not going to find a single issue that gets you root, that gets you shell. It's figuring out how you can combine a cookie attack to get access to the administrator page, to get access to a SQL injection where you can drop a file on the system, and that gets you a web shell. No DAST tool can do that, but attackers do. And so part of what I try to push for is figuring out ways we can do DAST tests that are still reasonably fast and reasonably inexpensive, yet get better results. So, you know, everyone says test smarter, not harder. An example: DirBuster. Do people use DirBuster here? It's one of the dominant tools for going through and guessing common file names and directory names, to find things that aren't linked from the main site when you spider it. The standalone tool has been deprecated, but there's an almost identical version integrated into ZAP, and many other tools offer the same functionality. But you can also be smarter. If you're working with a blue team, they can look at the file system and tell you every single file that exists there.
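The double-encoding case above is worth making concrete. This is a hypothetical sketch, not from the talk's slides: a parameter the application base64-encodes twice will look like noise to a generic fuzzer, but a human who spots the encoding can wrap a payload in it so the server actually sees the injection:

```python
import base64

def encode_payload_twice(payload):
    """Wrap a payload the way a hypothetical app expects: base64, twice."""
    once = base64.b64encode(payload.encode())
    return base64.b64encode(once).decode()

def decode_twice(value):
    """What the server does on its side before using the value."""
    return base64.b64decode(base64.b64decode(value)).decode()

# A classic SQL injection probe, hidden inside the app's own encoding.
wire_value = encode_payload_twice("' OR 1=1 --")
print(wire_value)               # opaque blob; a naive fuzzer mutates this blindly
print(decode_twice(wire_value)) # the payload the vulnerable query actually receives
```

A stock DAST tool fuzzing the raw parameter just corrupts the base64 and gets decode errors; matching the application's encoding is exactly the kind of manual insight the blue team or developers can hand the testers.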
They can look at the web logs and tell you not just every file that exists, but every path that exists, because in some cases you may have mappings that don't go directly to a file. They can also go through and pull out every GET variable and every POST variable, things that may not be obvious to you. Especially in complex applications where there are multiple roles, you might be doing a DAST test, or even a manual test, on two or three out of seven or ten different roles. They may not think to give you the highest level of access, or one that has access to certain administrative functions. So this gets you a listing of everything you need to go through and test as part of your DAST. But I'm not just advocating for gray box testing. Taking that information and feeding it to the tools falls more under gray box testing than purple team testing. I'm pushing further beyond that: I want to increase the amount of collaboration between the blue team and red team. It's all about trying to speed up the teams, make them more effective, and cut down on the advantage the other team has. They've got time; you've got the insider knowledge. So let's take as much advantage of it as we can. So I've got five main ways it's hard to make DAST purple, the five main pain points. The shot here I loved; it was labeled "my favorite picture I've taken of Eugene" by a guy who's taken quite a few photos of him. So the first big problem, like I said: DAST is extremely noisy. It's very common when you're doing a test like this that your IDS rules and your WAF rules are either bypassed or silenced, so things that would normally be telling the blue team something's going on aren't going off at all. A way to combat this: at the end, re-enable the rules, take just the things that were successful, just the things you want to make sure get alerted on, and replay those attacks.
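The log-mining idea above can be sketched in a few lines. This is a simplified illustration, assuming request lines in the common "METHOD /path?query" shape; real access logs need format-specific parsing:

```python
from urllib.parse import parse_qsl, urlsplit

def mine_targets(request_lines):
    """Collect every distinct path and query parameter name from log request lines,
    so a DAST scan can cover endpoints and variables the spider never found."""
    paths, params = set(), set()
    for line in request_lines:
        url = line.split()[1]            # "GET /admin?id=1" -> "/admin?id=1"
        parts = urlsplit(url)
        paths.add(parts.path)
        params.update(name for name, _ in parse_qsl(parts.query))
    return sorted(paths), sorted(params)

log = [
    "GET /index.html",
    "GET /admin/report?user=7&format=csv",
    "POST /api/transfer?acct=42",
]
print(mine_targets(log))
```

Even this toy version surfaces `/admin/report` and the `acct` parameter, things a spider following links from the home page might never touch; that's the blue team's insider knowledge turned directly into scan coverage.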
You can do this manually, you can do it with curl, with exports from many tools, or you can do it with Selenium. I love the idea of getting Selenium scripts to the blue team, because not only can they replay the test with them, they can give them to the devs and say, make sure that when you revise the code it can pass this test. Obviously in that case they could write code that just passes that specific test, but hopefully you've got buy-in from the developers so that they actually fix the problem once they understand it. The next problem: it's often tested in non-production environments. We try to avoid outages as much as possible, and we've got a lot of applications to test, so almost all of our testing is done in test environments, either UAT, SIT, or a special environment stood up just for that particular test. Many SIEMs charge by the volume you input, so often the test systems aren't sending any information whatsoever to the SIEM. Or you may not have a WAF in front of your test environment, so you're not getting a full example of what an attacker would be doing. Very similar to the previous one: when you have something that works, take it and rerun it in the production environment. If it's a dangerous exploit, you want to be really careful; make sure you've got the proper sign-off on this, and/or neuter it in some way. All you want to do is make sure your WAF triggers on it, or your IDS/IPS triggers on it. Don't even send it to an actual server in production: stand up a box that just sits there and listens on port 80, send it the bad stuff, and make sure the WAF and the IDS/IPS trigger. A large part of why I started speaking is that I kept having people in our SOC come to us and say, are we vulnerable to the 1=1 attack? Very often they're junior analysts, and it's their first job in InfoSec. And as an industry, we don't do a great job of treating tier-one SOC analysts very well.
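The replay-in-production step can be sketched too. This is a hypothetical illustration of re-targeting a captured request at a sacrificial listener so only the WAF/IDS should fire; the hostnames are made up, and real replays would also carry headers and bodies:

```python
# Hypothetical canary box that just listens on port 80 and serves nothing real,
# so the payload crosses the WAF/IDS but never reaches an actual application.
LISTENER = "waf-canary.internal:80"

def retarget(request_line):
    """Point a captured request line at the canary listener, keeping the
    malicious path and query intact so signatures still match."""
    method, url, version = request_line.split(" ", 2)
    path_and_query = url.split("/", 3)[-1]   # drop the scheme://host prefix
    return f"{method} http://{LISTENER}/{path_and_query} {version}"

captured = "GET http://app.internal/search?q=%27%20OR%201%3D1%20-- HTTP/1.1"
print(retarget(captured))
```

The point is that the attack traffic is real enough to validate alerting end to end, while the destination is a box where a "successful" exploit can't do anything.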
They don't get great training. They're kind of thrown into an expensive environment with lots and lots of tools that send alerts all the time, and they don't have the context to understand that 1=1 is part of a SQL injection attack. But going deeper than that: what's the most dangerous type of response on a SQL injection attack, if you see an alert for it? A 500, a 400, or a 200? Why? A 200 indicates that it was successful. A 500 generally, though not always, means the attack caused a problem at the code level and the server stopped executing it. This is information I know as a developer, because I spent a lot of time with SQL; I spent a lot of time generating inadvertent 500s. The people working in our SOC don't have that experience, and we weren't giving them training, so they were unaware that a 200 on a SQL injection alert is something that needs a lot of investigation, whereas a 500, of which there may be a couple thousand, is probably not important unless there's that corresponding 200. Another problem is that the app test team may actually be testing the infrastructure rather than the application. An example: if you're testing for remote file inclusion and there's a firewall rule on that local box preventing outgoing requests, you're not going to get a successful result using a DAST tool for the remote file inclusion. It doesn't mean the application isn't vulnerable; it just means that, as currently configured, that server is not vulnerable. And I want to find everything, not just rely on the firewall to block it. So if I'm working with the blue team, and they know I did a test at 2 p.m., they can look at the logs, see those outgoing firewall drops, and tell me: hey, there was something going on, unusual outgoing requests. So again, it's not just us telling them things; it's them telling us things. Another problem: the blue team may not receive the results. Some companies are very siloed as far as who gets what information.
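The status-code triage above can be captured in a small sketch. This is an illustration, not a real SIEM query; the alert dictionary shape is an assumption:

```python
from collections import Counter

def triage_sqli_alerts(alerts):
    """Group SQL injection alerts by HTTP status and surface the 200s first:
    a 200 means the injected request was served successfully (investigate now),
    while a pile of 500s usually just means the payload broke the query."""
    counts = Counter(a["status"] for a in alerts)
    urgent = [a for a in alerts if a["status"] == 200]
    return counts, urgent

alerts = [
    {"src": "10.0.0.5", "status": 500},
    {"src": "10.0.0.5", "status": 500},
    {"src": "10.0.0.5", "status": 200},   # the one that matters
]
counts, urgent = triage_sqli_alerts(alerts)
print(counts[500], "errors vs", len(urgent), "likely success")
```

Handing a junior analyst a rule of thumb like this ("sort SQLi alerts by status, 200s to the top") is exactly the kind of developer context the talk argues the red team should be feeding the SOC.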
And so getting the reports, especially the really actionable things, into the hands of the blue team, so they can learn what to look for, is really important, because developers cut and paste and reuse code. If you found that vulnerability in application A, it may also be in applications B and C that you haven't received authorization to test yet; but if the blue team knows there's a similar code base, they can start looking ahead of time. Part of the pushback you'll get from management on this is: if you give out all this information on vulnerabilities and there's an insider threat, you're handing them a laundry list of things. You need different controls for that; if you're relying on the blue team not being evil, you've got bigger problems. So, stepping up collaboration. If at all possible, before testing, have everyone sit together and talk about things. The blue team knows what the current high-risk applications are. Again, they know where the bodies are buried, they see where the traffic is coming from, and they may know that one system moves millions of dollars a day in transactions while something that looks very similar only moves a couple thousand. So they often have better information than the red team does. They know what's going on in the real world and what may need to be investigated. They know the topology of the network, so they can say, you need to connect in this fashion to bypass the WAF, because again, you want to be testing the application, not just your security infrastructure. They know where the coverage gaps are, so they can tell you ahead of time: look, I know you're going to find this; let's do something really fast we can show management, there's a gap, let's get it fixed. So you don't have to spend two days doing a test when you could just do a proof of concept that works in five minutes.
They also know the compensating controls you may need to bypass, again things like that outgoing firewall rule, to make sure you get an effective test. The red team can say, we're going to do this as our testing strategy, and the blue team may have feedback on whether that's a good strategy or not. And the red team can explain the tactics, techniques, and procedures, the TTPs, they'll be using to bypass defenses, because the defenders need to be learning how real-world attackers do encodings to get past WAFs and that sort of thing. During the testing, if you're talking back and forth and the blue team comes to you and says, hey, is this a true or false positive, you can give them feedback right then, so they start learning and understanding. You can give them indicators of what they should be finding as a result of your test, and you can feed them information about what is and isn't working, giving them the context of: this looks bad but it really isn't, or this doesn't look bad but here's why it's dangerous. The blue team can be providing information about what's going on with things like the firewall, the antivirus, and so on. And there may be other environmental factors: a lot of test systems are under-provisioned, so you may be spiking the CPU to oblivion and getting a lot of dropped packets as a result, and that can throw off some of the DAST tools. After testing, the application security team can give feedback on what they thought worked and didn't. They can redo the testing if it needs to be done in the production environment, or sometimes just take the meaningful logs they generated, the things that actually worked, and feed them into the SIEM, so that the SIEM has examples of things that are bad if you're doing machine learning.
You can do a blue-team-oriented read-through. You don't have to go through the entire test, but every week or two you can sit down and say: here are the tests we did this past week or two, here's what's meaningful, here's what you should have spotted or what we think is impactful to the company. You can help with the threat hunting: if they get really excited and think they've seen something similar to what you just did, you can give feedback on whether it's a true positive or a false positive, and set up a learning environment for your blue team. People learn how to hack apps because that's what gets them really engaged and excited, and it also starts to give them a path out of that junior role, so eventually they can move into a more advanced position and you keep that accumulated domain knowledge about your company. The blue team can validate the findings. Who here has ever been really confident about what turned out to be a false positive? I've found stuff I was sure was a true exploit, and the blue team pushed back and said, no it's not, here's why. That schooling coming back from the blue team, especially if the app team is willing to listen, is really important. They can sign off on the report, so that it goes to management as: the blue team and red team worked together to generate this, and here are the steps we've taken as a result. And if there is an instrumentation gap, they can start getting it plugged right away. Wrapping up: I wanted to say thanks to Ares, Liora, and Joe, as well as the other volunteers. This is my first time speaking at Vegas, so I'm very excited to have been accepted here, and I know they did a ton of work to pull this together as quickly as they did, so a big hand for them. We've got to get the next speaker up, so if you want, tweet questions at me, catch me in the hallway, and thank you for your time.