Thank you so much for waiting. Up next we have Kayla and Ali, who will be speaking about "Recon Like an Insider Threat for Best User Training Return on Investment." I'd like both of them to introduce themselves and the topic in detail.

Hey guys, it's a good thing we're in a room full of people who work in IT. We wanted to talk to you today about an exercise we did to see whether we could recon our users using the tactics of social engineering and change the way they behaved.

A little bit about us first. I'm Ali. I am a user training specialist. I've worked in a bunch of different fields trying to learn why people do what they do, how they think, how they connect and communicate, and then how we can use those things to hopefully change their behavior.

My name is Kayla. I am a security person for my day job and a troublemaker the rest of the time. Ali and I worked together for over a year on some research that we did, and the point of this talk is to discuss that research.

So like she said, we started working together a little over a year ago, and we wanted to take her hard skills and my soft skills and try to approach a security issue from a different perspective. The first one we were going to tackle was the fact that every single phishing campaign that came into our organization was a potential breach. How could we address that? How could we help lock it down?

Management had the first idea that they always have, which is spend the money and secure all the things, right? It's a good idea, and if you do it well, you can get some pretty good numbers. So, looking at how effective our technology was: we had an email gateway, and 14 million emails hit that gateway. Seven million emails were allowed through. We had 229,000 emails that were blocked as known malicious. We had 3,200 emails that got past the gateway. Our blue team, the incident response defense team, actually pulled back 2,200 of those emails.
And then the last 1,000 emails actually got through to users' inboxes. They were delivered. They were sufficiently sophisticated that nobody could pull them back; none of this security technology or architecture could detect them. At that point, our actual stats were 99.98% effective, which is as good as birth control. So I felt pretty good about the technology I had put in place.

But as good as the technology was, of those 1,000 that got through, 850 were clicked. So we could have a 99% effective tech rate, but our users had an 85% failure rate, which I think we can all agree is not a great starting point. That's what we were trying to tackle.

So we said, okay, let's come up with a better idea. The better idea was to create a feedback loop between user behavior, security analysis, and training material. Our goals were fairly simple. We wanted to take the tactics traditionally assumed to belong to the bad guys, the evil tactics, the recon, and use them hopefully for good. We also wanted to create a kind of business case: understand whether or not this was valuable and how we could express its value. And alongside that, not be creeps, and hopefully keep our jobs.

To do this, we created an insider threat game plan. We wanted to look at the network and all of its assets as an insider threat would, to see what was exploitable. We also wanted to find out: all right, there's some good data out there, so who has access to it, and how can we observe their behavior for weaknesses, to manipulate their behavior for the good of the entity as opposed to the good of the individual, which would have been way more fun.

All right, prerequisite legal stuff. You have to establish the right to monitor if you want to do a program like this. You know that login banner that says "we are going to watch everything you do"? Your legal department would be very grateful if you put that in place before going forward with this type of program.
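To make the funnel numbers concrete, here is a quick back-of-the-envelope reconstruction. The figures are the ones quoted in the talk; the variable names and the exact denominator behind the quoted 99.98% are my assumptions (here, gateway misses measured against total inbound volume):

```python
# Rough reconstruction of the email-funnel math from the talk.
# All counts are the ones quoted on stage; the formula behind
# "99.98% effective" is an assumption, not from the speakers.

total_inbound  = 14_000_000  # emails hitting the gateway
blocked_known  = 229_000     # blocked as known-malicious links
past_gateway   = 3_200       # malicious emails that got past the gateway
recalled_by_ir = 2_200       # pulled back by the blue team
delivered      = past_gateway - recalled_by_ir  # 1,000 reached inboxes
clicked        = 850

# One plausible reading of "99.98% effective": gateway misses
# relative to total inbound volume.
tech_effectiveness = 1 - past_gateway / total_inbound
print(f"tech effectiveness ~ {tech_effectiveness:.2%}")

# The user side: of the 1,000 delivered phish, 850 were clicked.
user_failure_rate = clicked / delivered
print(f"user failure rate  = {user_failure_rate:.0%}")
```

The point of the arithmetic is the asymmetry: a gateway that stops all but a fraction of a percent of mail still leaves the outcome dominated by what users do with the handful that gets through.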
So we broke it down into phases. Our first phase was just about figuring out where we stood. If we needed to express the value of this down the line, we had to know where we were starting from, and our baseline was pretty rough. We had that 85% failure rate. We had only about 15 emails per month, in a group of 6,000 employees, reported as potentially bad. And there was basically zero relationship between any of our coworkers and security, aside from an assumption that security is just the software police: you look the other way when they walk down the hall, you do your best not to interact with them, and they won't take things away from you. Which is definitely not our aim.

Yeah, so on my side, I was like, okay, this is fun, I get to go on a treasure hunt. Being on the inside means I could access all kinds of data. One of the first things I wanted to ask for was: hey, HR, give me all your data. An attacker is going to have what's on LinkedIn. That's going to be old, depending on whether someone's looking for a job or not. It's not going to be up to date, and it's not really going to tell you a lot, because title inflation occurs. You don't really know who actually has access to important data or could perform important functions; it takes quite a bit more recon from the outside. So I took a little shortcut and said, give me your data dump.

Additionally, application whitelisting data was extremely important to this process. It ended up being important twofold: you got to find out who was keeping important data where they shouldn't, in local data repositories as opposed to the secured ones they were supposed to store it in, and you got to find out who your habitual clickers were, the kind of people who would click OK on a pop-up and get a toolbar installed. So it was very easy to identify those who needed to be targeted with training.

Now, the next part was a little bit harder. We actually had to find out where all these people work.
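The twofold triage of whitelisting data can be sketched roughly as follows. The event records, event names, and threshold are hypothetical illustrations, not the actual tooling the speakers used:

```python
from collections import Counter

# Hypothetical application-whitelisting events: (user, event_type).
# "blocked_install" stands in for things like a toolbar trying to run;
# "local_store" for sensitive data found outside the secured repository.
events = [
    ("jdoe",   "blocked_install"),
    ("jdoe",   "blocked_install"),
    ("jdoe",   "blocked_install"),
    ("asmith", "local_store"),
    ("asmith", "blocked_install"),
    ("bwong",  "local_store"),
]

def triage(events, click_threshold=2):
    """Split users into the two groups the talk targets for training."""
    installs = Counter(u for u, e in events if e == "blocked_install")
    hoarders = {u for u, e in events if e == "local_store"}
    habitual_clickers = {u for u, n in installs.items() if n >= click_threshold}
    return habitual_clickers, hoarders

clickers, hoarders = triage(events)
print("habitual clickers:", sorted(clickers))
print("local data hoarders:", sorted(hoarders))
```

The output of something like this is exactly the target list the next phase needs: names, grouped by why they are risky.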
So great, I know John Doe is a risk to the entire enterprise. Where exactly is John's cubicle or office or courtroom? Wherever he is, I need to find him. And we had to build a target list. We had to go to management and say, these are the people who are the highest risk based on this treasure hunting I've been able to do: they either have access to data or they have extremely risky behavior that puts them on this list. And that list changed a great deal. The further we got in the process, the more people we talked to and the more we were able to look into the records, the more names and job roles we added to that list.

As far as completing the baseline, we also needed to understand what they had been trained with and what they were used to seeing. We had a fairly traditional training program in place already. It was once a year, it was an hour online, and people basically muted it and went on with their regularly scheduled programming. We did see a change in the clicking rate from our tools right after training, but it would briefly get better and then immediately drop back down. So we still maintained our baseline throughout the year. The training was effective but short-lived, and that was another thing we needed to tackle.

All right. So I made some observations about campaigns from the data I was able to pull off the email gateway. The Have I Been Pwned data did correlate to campaigns, but not at the time of breach; it's actually cyclical. Anybody whose information was in there was going to be involved in multiple campaigns, and overall they were going to be a higher risk than somebody who had never had data show up in sources like Have I Been Pwned. Additionally, the emails coming through were so much more sophisticated than the tests that were going out. There's a lot of false baselining out there that people sometimes do to make their programs look good.
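That correlation, breach exposure predicting repeat targeting rather than a one-time spike, can be sketched as a simple join between two datasets. The addresses, campaign dates, and tier rules below are invented for illustration; this is not the Have I Been Pwned API:

```python
# Hypothetical data: which addresses appear in public breach dumps, and
# which phishing campaigns (by month) each address was targeted by.
breached = {"carol@example.com", "dave@example.com"}
campaigns_seen = {
    "carol@example.com": ["2018-01", "2018-04", "2018-07"],  # cyclical hits
    "dave@example.com":  ["2018-02", "2018-05"],
    "erin@example.com":  ["2018-03"],
}

def risk_tier(addr):
    """Breached addresses that keep showing up in campaigns go to the top."""
    hits = len(campaigns_seen.get(addr, []))
    if addr in breached and hits >= 2:
        return "high"      # exposed AND repeatedly targeted
    if addr in breached or hits >= 2:
        return "medium"
    return "baseline"

for addr in sorted(campaigns_seen):
    print(addr, "->", risk_tier(addr))
```

The design choice mirrors the observation in the talk: exposure alone isn't the signal; it's exposure combined with recurring campaign appearances that marks a user as persistently higher risk.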
A lot of the tools come with cheesy templates, and no one's falling for a 419 scam. What people are falling for is actual sophisticated emails that say, hey, we need you to pay this invoice, and we look like a real vendor; our logo even has higher resolution than the one you usually get in the digital signature of the person who usually sends you this email. So that comparison was extremely important for our future endeavors.

All right, a little bit more about phase two: getting to know the users as an attacker would. I lovingly called this mixture "OSINT Plus": the normal OSINT sources of social media and Have I Been Pwned, looking at Maltego, and pulling up people's social media profiles. Instagram, Facebook, and even Pinterest turned out to be relevant. Mixing that with the internal jobs data was important to coming up with the right blend for being preemptive in our training and actually preventing user targeting.

And observations of the external data. One of the interesting things that came up was Pinterest boards, which you would never expect. But if somebody's posting a public board and putting recipes on there, recipe emails were extremely effective campaigns. It actually worked a lot. So we were able to go back to the users and say, hey, this happened to you, but it was also kind of your fault; let us show you how to lock down your Pinterest, and let us show you how not to have a Facebook or Instagram account that tells everybody everything about you. Feeding that back into the loop as training, as a sort of counseling, so people felt they could take part in the action, was extremely important.

And observations of internal user data: taking that external data and tying it to the internal data. The most important thing that bubbled to the top was that higher email volume meant a higher susceptibility to click.
The more emails you get, the more hectic your day, the more likely you are to make a mistake. I don't think anyone's surprised by that. But higher call volume also contributed to the same thing, a higher susceptibility to click. Another interesting one: just having work email on your cell phone, being the type of person who has work email on your phone, actually makes you a higher risk. You were more likely to click a link. Because of that, we knew those people should be targeted for special training. And additionally, anybody with a public-facing position, anybody remotely customer-service oriented, was going to be much easier to target. Overall, if I could name one lesson, it's that multitasking isn't real. Nobody is good at it, but we all try, and the more we try, the more mistakes we make.

So phase three was where my side of the recon came in. If the best kind of recon is in person, we had to get out of our offices. So we scheduled meetings, and we left, and we went to their desks and sat down with them. We went mainly to our target list, but we also focused on anyone who had a public-facing position, anyone who was very high in the organization, anyone who had elevated user privileges, which was a really big user group we wanted to talk to, and anyone who provided support, specifically IT support or administrative support. We went and sat down and talked to them and asked them questions.

We tried to go into these meetings with the same perspective that makes social engineering so successful, which is that you pay people respect. You make them feel important. You make them feel valued. We used that to our advantage. We went in and said, we know how busy you are; tell us more about your work process so we can help you. We tried to figure out their holdups and their blockages, and we went back to our desks and tried to clear them.
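The phase-two observations (email volume, call volume, work mail on the phone, public-facing role) lend themselves to a simple additive score for ordering the target list. The weights and thresholds below are invented for illustration; they are not measured coefficients from the study:

```python
def click_risk_score(user):
    """Toy additive risk score from the factors the recon surfaced.
    Weights and cutoffs are illustrative guesses, not measured values."""
    score = 0
    score += 2 if user["emails_per_day"] > 100 else 0  # hectic inbox
    score += 1 if user["calls_per_day"] > 20 else 0    # hectic phone
    score += 2 if user["mail_on_phone"] else 0         # work mail on cell
    score += 2 if user["public_facing"] else 0         # easy to target
    return score

clerk = {"emails_per_day": 150, "calls_per_day": 30,
         "mail_on_phone": True, "public_facing": True}
analyst = {"emails_per_day": 40, "calls_per_day": 5,
           "mail_on_phone": False, "public_facing": False}

print(click_risk_score(clerk))    # high score: near the top of the list
print(click_risk_score(analyst))  # low score: baseline training only
```

Even a crude score like this turns soft observations into a sortable list, which is what you need to decide who gets the in-person visit and the tailored training first.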
Kayla from the technical side and myself from the soft-skills side, I would reach out to other department heads and try to clear up those blocks for them and help things move forward. By going out on site and talking to them, and by doing those favors, we gained goodwill that completely changed the game whenever we did have to make changes down the line, whether to the tech on their workstations or to the training.

One of the biggest things we learned here was that we heard, over and over again, that reporting phishing emails is just too hard. It's too hard, we're not going to do it. "Our ticketing system is so ugly, we're not going to touch it," which is actually one complaint we heard. So we knew, based on that feedback, that we had to make it much easier for our end users to report email. Instead, they were just deleting the emails and doing their own triage rather than reporting them to anybody. That was one of the things we wanted to tackle going forward.

And yeah, she really did say we left her office. That was the thing. Getting a manager who was on board with this was really interesting, but if you think about it, why would you hire a cybersecurity trainer to do one hour of material and then sit in the office the rest of the time? It's why everybody gets, say, canned training from SANS. And we all know that canned training is not effective, as the numbers have shown. This was an interesting way to tackle it, and it really took an open-minded manager to say, yeah, sure, give it a try.

Some of the things we told him we were going to do, and hopefully pulled off: we were going to meet with that targeted list, which included a lot of VIPs and very important people, so it took a little political smoothing from the management people. And we discussed their duties and their issues with technology.
We wanted to make sure we could present a narrative to them that actually fit their workday and the way they looked at the world. That drives a lot more changes to your training than you would think, and it's definitely not something you're going to get from a SANS training. One of the real benefits of going out on site and talking to them was being able to see what was happening in their office in the moment. Were they checking Facebook while we were sitting there talking? Were they checking their work email on their cell phone? Did they not have a PIN or biometric lock on the personal cell phone that held work email? That one was super important. Did they have their password taped to their computer? That's a bad sign. All of those things were super valuable pieces of feedback that we were able to get just because we happened to be in the room.

So the next phase was about redesigning. We knew our users a little better now, we knew our threats a little better now, and we had all this recon in our back pocket. What do we do with it?

The first thing we redesigned was frequency. Once-a-year training just flat out doesn't work. You mute it, you go on with your work, and it doesn't really take effect, which we could see from our tools, because clicking would get better for about a day and then go back down. We needed something that would have a longer impact, so we pulled it down to a 90-day training cycle. Every three months you're going to hear from us, you're going to get new information, and we're going to make sure it's applicable to whatever's happening in the moment.

The next thing we changed was the material. If you have 6,000 users and you give them all the same course, how on earth is that going to be useful? We had to make sure the content they were seeing was applicable to what they were doing every day and to the level of access they had.
Someone who's logging in and can only get to their time sheet doesn't need to know the same things as someone who's logging in with an underscore-admin account. So we made sure the training was applicable to everything we learned when we went out and met them, and to everything Kayla learned doing all the back-end searches through the tools and finding out where our weak spots were.

And lastly, brevity. We had to keep it short. In the same way that we went out to those meetings to create a system of respect, we wanted to show that we respect your time: we're going to keep it to five to twenty minutes. We aimed for five and we never went over twenty. That helped reinforce the idea that we are not here to hurt you, we are here to help you, and it kept them in that system of respect and, hopefully, in the feedback loop.

So then, once the training is done, now we have to test. Where are we? Did anything we did over the last six months actually make a difference? We looked at a lot of different phishing simulation tools, and a lot of them are really terrible. They have templates that are very hokey, a lot of the "I'm a Nigerian prince, I'll give you a million dollars" templates, but we knew from our tools that wasn't what was coming in. We had enough research to know that the emails our users were going to see were going to be advanced, well crafted, and targeted. So we had to mimic that exactly. We had to spoof our own domain; we had to follow the real thing exactly. We created all of our simulated phishing templates modeled exactly after what we had seen. The emails that got through, that thousand that did land in our system, and especially the eight hundred and fifty that were clicked on: we sent those out again.
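Rebuilding simulation templates from real caught phish, rather than stock ones, can be sketched like this: take a captured message, swap every link for a tracking URL, and resend. The captured message, tracking domain, and campaign id below are all placeholders, and real simulation platforms do considerably more (headers, attachments, spoofed senders):

```python
import re
from email.message import EmailMessage

# A captured phish, simplified. In practice this would be parsed from
# the real .eml that made it past the gateway.
captured = EmailMessage()
captured["Subject"] = "Invoice 4471 past due"
captured.set_content(
    "Please review the attached invoice: https://vendor-portal.example/inv/4471"
)

# Hypothetical internal tracking endpoint for recording clicks.
TRACKING = "https://phish-sim.internal.example/t?cid=demo-campaign&u={user}"

def build_simulation(msg, user):
    """Clone a captured phish, replacing its links with our tracking URL."""
    sim = EmailMessage()
    sim["Subject"] = msg["Subject"]
    body = msg.get_content()
    # Swap every URL so a click lands on the simulator, not the attacker.
    body = re.sub(r"https?://\S+", TRACKING.format(user=user), body)
    sim.set_content(body)
    return sim

sim = build_simulation(captured, "jdoe")
print(sim.get_content())
```

The per-user parameter in the tracking URL is what lets the results feed back into the recon loop: you learn not just that a template worked, but on whom.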
We used the information we gathered in those meetings to target specific people and specific job roles, and to make the emails look exactly the way you would expect them to look when people would take action on them. And lastly, we never left out VIPs.

An important note about that: there was some really creepy targeting going on that, of course, the end users would never know about. As an example, we had a deaf services division whose only job was to serve the public: if anybody was deaf, this division was supposed to provide public services for them. That particular division of the company was targeted through their social media accounts. Their Twitter likes specifically included deaf kittens and deaf animals, so the attackers took a deaf kitten GIF, put malicious software into it, and sent it specifically to the deaf services department. That kind of specific feedback, where they know what they're being targeted with and their training reflects it, resulted in them having better privacy in their personal lives as well as better habits at work. So it was a two-fold effort. People really saw it as a personal value to them. In security, a lot of times we feel like we can't do good, or people avoid us, like, oh, the software police at work; they never think about how it impacts their personal life.

Yeah, and another reason to always be careful to include everybody whenever you're testing: one thing we learned was that somebody very high up in the organization clicked on one of the bad links we sent out, and then clicked on it again, and then clicked on it on their personal phone, and then clicked on it on their personal computer, and then replied to it with some personal information, because they just couldn't get the link open.
That's the kind of thing I want to receive. That's valuable recon for me, because I need to be able to counsel that person differently, and I don't want that click and that reply going to someone else. So this served so many purposes for us.

All right, and the last thing we did was create that loop by revisiting the tech. I'm sure you guys have all seen bumpers in bowling lanes at some point in your life, or at least had children who needed them. We're not all equally prepared to deal with the threats coming at us. Some of us are less cynical, and some of us are just very busy individuals. The mythology that users are not intelligent is not true; clearly doctors, lawyers, everybody gets phished. It'll happen to you eventually if you're busy enough.

So we took the people we knew were high risk and fed that back into a technology loop, to say: now that we know you and we have this personal relationship, we're going to do this service for you, so that when you make a mistake, you're not going to cause a very expensive breach that ends up on the news. If you don't want to be the person who causes that, we have tools for you, and we're here to help. Doing things like making sure that scripts, JPEGs, pictures, links, processes, everything that tries to spawn from outlook.exe, is killed, so the user has to go to the extra trouble of taking a URL and running it through their technology or their administrator before pasting it into a browser, is something you could train an individual to do but would never want to impose on an entire organization. So rather than waiting until the whole organization is breached and on the news and then rolling out some heavy-handed whitelisting program or technology solution across the entire enterprise, we could target specific individuals and know that they were going to see it as a service and not a punishment.
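In practice this control is usually enforced with endpoint tooling (for example, attack-surface-reduction rules that block Office applications from creating child processes). As a language-neutral sketch of just the decision logic, over hypothetical process-creation events:

```python
# Hypothetical process-creation events, as an EDR agent might report them.
events = [
    {"user": "jdoe",   "parent": "outlook.exe",  "child": "powershell.exe"},
    {"user": "jdoe",   "parent": "explorer.exe", "child": "winword.exe"},
    {"user": "asmith", "parent": "outlook.exe",  "child": "mshta.exe"},
]

# Only users the recon flagged as high risk are enrolled, so the policy
# lands as a service to individuals, not a blanket mandate.
HIGH_RISK_USERS = {"jdoe"}

def should_kill(event):
    """Kill any child process spawned by outlook.exe for enrolled users."""
    return event["user"] in HIGH_RISK_USERS and event["parent"] == "outlook.exe"

for ev in events:
    verdict = "KILL" if should_kill(ev) else "allow"
    print(f'{ev["user"]}: {ev["parent"]} -> {ev["child"]}: {verdict}')
```

The per-user scoping is the point of the sketch: the same rule applied enterprise-wide would be the heavy-handed rollout the talk warns against.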
And one of the other tech controls we put in place, based on the feedback from those meetings that reporting was too difficult, was a button in everybody's inbox to report a phish, send it directly to us, and remove it from their mailbox. That way we took out the requirement to interact with the super ugly, we-all-hate-it ticketing system, and we were able to use a free tool. It took a couple of man-hours and basically no money, it created a lot of goodwill, and it really increased our reporting.

So with all of our phases in place, we were six months in, and we wanted to see where we stood. As far as putting that button in place and increasing our communication with our users: we went from 15 emails a month reported to over 600. Obviously, in that 600 there are false positives, but those are useful as well. That's another kind of recon we can do, because we know what's coming in to our users, and we know what they think is suspicious or not suspicious. That's all more information we can feed back into the loop so we can continue building a better program.

And as far as our phishing risk, we went from that original 85 percent down to 22 percent. Twenty-two percent is not perfect by any means, but it's definitely better than where we started from, and it allowed us to say this six months was worth it. We know what we got from it, we can continue improving, and we can lower that number even more from there.
So of course management was like, you solved it, right? And we're like, no, it's not perfect; nothing's perfect. But we did have a 62 percent reduction in the probability of risk, which is excellent, and there was some nice math out there to quantify that. Thanks to the rise in breach reporting, we were able to pull down some information about the cost of what we were doing and put it into a formula created by CIS. The enterprise we were working for thought their risk was seven million dollars; CIS said it was 14 million. And I was like, even if we cut it in half, we're still heroes. So you guys are welcome, we'll do it again next year, thanks.

So that's everything we have. Thank you all so much.