Cool. How's everybody doing today? Great. Yeah. It's after lunch, the second day of WordCamp. I'm astonished any of you are in here, but I'm very glad. Actually, the room kind of filled up, and I'm excited about that. Apparently this is a talk some people are interested in, which is a good spot for me to be. And I'm glad you guys are here. So we're going to be talking about incident response with WordPress: how planning for the worst can help secure your site today. We're going to look at some steps you can take and some things you can keep in mind now, before something bad happens, to make it easier to recover when something bad does happen.

My name is Mikey. I'm a threat analyst at Wordfence. My job involves researching what the baddies are up to and building firewall rules to stop them from doing other bad things. I'm an ethical hacker, so I do penetration testing exercises; basically, I'm a hacker who uses their power for good. I'm also a locksport hobbyist, so I pick locks for fun. And according to at least two people, I am the world's greatest dad, but that is disputed by probably some people in this very room. You can follow me on Twitter at HeyItsMikeyV.

So the first thing I want to cover is the thought that "I don't need an incident response plan," and I want to illustrate it with the example of a fire escape plan. Why bother having a plan to get out of this building in the event of a fire, right? We don't need that kind of plan, because we don't start fires, do we? We know better. And we see that kind of response from developers and people building WordPress websites: we don't think too hard about what happens in the event of a security incident because we just don't think we're ever going to create one. Another reason we wouldn't need a fire escape plan is that we've got sprinklers in the building.
Any fire gets detected, some water rains down from the ceiling, everything's instantly better, nothing gets worse, right? You see that kind of attitude when we have malware scanners on our sites that are detecting things and automatically cleaning them up, and we think it's all good because the sprinklers activated and put out that one fire. But in reality: how did it get there? Did the scanner get everything? That sort of thing. We also don't need a fire escape plan because we had an inspection done. You see this with places where there's PCI compliance or something, where people have taken a look, checked all the boxes on their checklist, and are reasonably satisfied. The fire marshal came through and gave us a thumbs up, and I personally think that guarantees that no fire will ever occur in this building.

So yes, we do need a fire escape plan, because things can go wrong, and they often do, which is why companies like mine exist. In this talk, we're going to be covering guidelines from NIST, the National Institute of Standards and Technology. They're the ones publishing things like the rules for password strength used by the federal government. You could ignore this talk and go read the 79-page spec document if you really want; I don't think you do. It's actually not a terrible read, but this is probably a little more easily crunched. And I do have to move fast because there's a lot to cover, so I'm sorry if I'm rushing through stuff. I'll make the slides available.

Our goals with incident response, and with security in general, are to protect our CIA. CIA is an industry term for confidentiality, integrity, and availability, and these are sort of straightforward, sort of not. Confidentiality is keeping private data out of the hands of people who don't need to have it. This may be company secrets, or something as simple as your password, which you want to keep confidential.
This could also be user data or information about your clients or people visiting your site. Integrity refers to making sure that our systems and content are doing what we intend and haven't been tampered with. This is things like making sure the content on my website says what I want it to, that all the scripts my website loads for its visitors are doing what I want them to, and that nothing has damaged the integrity of that process. If a JavaScript injection has taken place and different code is now running on the front of my site, causing redirects or something like that, maybe they're not taking our users' data, but it is harming the integrity of my site, and that's something I want to avoid. And then availability is just: is it up? Are my services available for people to make use of or reliably access? If something like a DDoS event prevents my site from loading, that has damaged my site's availability. Everything in security comes down to preventing any negative impact to these three things. There are competing models, but some of them have as many as six or seven little elements that are all basically the same thing, so CIA is a good solid foundation if you're new to incident response.

The difference between an event and an incident is one that we want to keep in mind for the duration of this talk. The NIST definition of an event is an observable occurrence in a system or network. This is going to be things like one hit to your website, one email sent or received, or one blocked attack from your firewall. An incident is when one or more events combine and cause a negative impact to your CIA. So this is when a DDoS crashes a server, versus just a single web traffic event. When a vulnerability exploit works instead of being blocked, that is an incident.
If it's blocked, that's not an incident; that's business as usual. Or something like a successful phishing attempt: somebody sent an email to you or to somebody who uses your site, and it worked. That's a security incident.

Before we start the plan itself, we want to check a couple of boxes. We need to know who's in charge of this process. If it's just you, then congrats, that's you by default. But if you've got a team, you want somebody to be in charge of this process. We'll talk in a little bit about the value of going third party for the incident response process, but you still want somebody on your team to be the point of contact for that. Somebody needs to be wrangling all of this and coordinating it to make sure that everything goes smoothly and that you can facilitate communication between all relevant parties. They need to have management support. Again, if it's a one-person operation, that's super easy. But if not, you need to make sure that whoever's in charge of this has the authority to actually do what they need to. They need to be able to assign roles to other people in your organization. They need to be able to contact outside parties, be it your outside incident response team, law enforcement, your legal team, anything like that. The last thing you want is for this incident handler to have to ask for permission to save your life. They also need the authority to actually test these plans and run drills, because we don't know if these plans work until we've actually tested them. And then you want to agree with everybody involved on how strict this plan should be. Do we want a rigid policy where everything has to be exactly by the book, or can we get away with a looser set of guidelines, just some rules to follow without prescribing every little thing that needs to be done? And then, communication.
At all points during the incident response process, you need to be communicating: with your incident response team, with your users, with law enforcement if necessary. Keeping your mouth shut about this kind of thing is uniformly bad. I understand that there's sensitive data and there are company secrets; they happen, and you don't want to just rattle them off to everybody involved, but you want to be as open as you can be. That means informing your users if something has happened; there are regulations like GDPR that force you to do this sort of thing. Other organizations might want to know, too. People like my team at Wordfence: if your site was attacked by something we haven't seen before, I really want to know about it so I can stop other people from suffering that same fate. And then you want to make sure that you can clearly communicate with your incident response team before an incident actually happens. The last thing you want is to send my team an email, find out that you've got the wrong address, and have to go figure that out in the middle of an otherwise terrible incident.

This is NIST's actual diagram of the incident response life cycle, and it's the framework for the rest of this talk; we're going to go over these different steps. This is a cycle that you are always in. Right now, assuming nothing bad is actively happening to your site, every one of you is in the preparation stage, because we're learning; we're working on improving things. You are going to remain in preparation indefinitely, hopefully forever, until you detect something. When you detect something, you want to analyze it. Once you've analyzed it, we move to containment, eradication, and recovery, and that's a bit of a loop while we make sure that everything is done.
And then when it's over, we want some activity after the incident, sort of a post-mortem period if you will, before we loop back around to our preparation holding phase, which is the first one we're going to talk about. Yay!

The preparation stage is where we make sure that your team has all the tools they need to be successful. Tools in this case means literal software tools, like analytical tools and log aggregators, but also simple things like documentation. If your incident response team needs to ask you about every little thing, whether it needs to be there, why it's there, and how it works, that is going to markedly slow them down, when they could just be reading documentation to teach themselves all of this. You also want reporting tools for third parties to inform you of things. If a user sees that your website has been defaced, you want them to have a way to tell you about it. Sometimes a Twitter DM is all I end up having as a way to privately contact somebody when this kind of thing happens, so you want something a little more solid. This next one is less common for WordPress teams, because they're usually smaller and more distributed, but: a war room. If you have a semi-large organization and all of it is in a big open office floor plan, people need somewhere they can go to discuss sensitive details about the incident response process without having to worry about being overheard or eavesdropped upon. And then, of course, clean backups, which are always good to have; we'll talk more about those later. This is also where I want to point out that the easiest incident to handle is the one that never happened, right? We can minimize the number of incidents we have to deal with by securing our assets.
So, all the stuff from all my other talks: run risk assessments, patch your software, and get some sort of intrusion detection going, like a malware scanner or something more host-based. You want to really shore up your security in this phase so that, hopefully, the rest of this never has to come into play.

Step two is detection, and our goal with the detection stage is to reduce TTD: time to detection. That's the span from the point an incident actually happens, when the bad actor does the bad thing, to when I know about it. That is a nebulous period of time, and I want it to be as small as possible. That way I can minimize the impact and get everything back to normal that much faster. We're going to reduce TTD by identifying attack vectors, precursors, and indicators of compromise. An attack vector is simply any place where somebody could attack. They're going to be different for everybody, and the response to each varies based on its own scope. Your web services, like your actual WordPress site, are an attack vector. So are things like email, or removable media: if somebody slips a malware-ridden flash drive into your computer, that could impact your WordPress site if you connect to it. Impersonation, things like phishing, or people otherwise trying to trick you into doing something you don't want to. And improper usage is one that larger organizations end up dealing with: somebody downloaded some file-sharing tool onto their work computer and got malware through it. That is an attack vector, even though it's something they sort of did to themselves. A precursor is a sign that an attack is about to happen, or may be about to happen. Not all incidents have precursors, but these are things like server logs telling you that somebody has been running a vulnerability scanner like WPScan or sqlmap against your website.
If somebody is doing that without your permission, it could be a sign that they're trying to find some way in. If Wordfence or any other security body has issued a vulnerability warning about software you use, a plugin or the WordPress version you're on, that could be a sign that something is coming around the corner; and almost certainly it will if it's an actual vulnerability report. And of course, if you've received a threat, that's a pretty good precursor that something might be up. Lastly, indicators of compromise (IOCs) are signs that an attack has happened or may have already happened. IOCs vary based on the attack vector they're associated with, but this is stuff like unusual login activity. Somebody logged in with your username from a country you've never been to? That's questionable. At a larger scale, if somebody never logs in after 8 p.m. and now they're logged in at 3 a.m., that's weird. So you want some kind of information about these events as they come in. Other examples: malware detected on your server, which is obviously a bad indicator; files that have mysteriously changed; or user reports that came in through whatever method you laid out in step one. Indicators that a compromise has taken place need to be verified. Not every little event that we're curious about is actually a sign of something bad happening, which is why step three is analysis. This is where we're actually going through and vetting the information we've gathered. Some things make this easier, like knowing what baseline behavior looks like. It's really hard to know what's weird if you don't know what normal looks like. That means being familiar with your log activity, the amount of traffic you usually get, things like that. Log retention policies are important, too.
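As a concrete example of that kind of precursor check, here's a minimal sketch. The log path, log format, and signature list are all assumptions for illustration; adjust them for your own server.

```shell
# Minimal precursor check: look for known vulnerability-scanner user agents
# in a web access log. Paths, log format, and IPs are stand-ins.
mkdir -p /tmp/ir-demo
cat > /tmp/ir-demo/access.log <<'EOF'
203.0.113.7 - - [01/Mar/2020:10:00:00 +0000] "GET / HTTP/1.1" 200 "Mozilla/5.0"
203.0.113.9 - - [01/Mar/2020:10:01:00 +0000] "GET /wp-login.php HTTP/1.1" 200 "WPScan v3.7"
203.0.113.9 - - [01/Mar/2020:10:02:00 +0000] "GET /?id=1%27 HTTP/1.1" 500 "sqlmap/1.4"
EOF

# Case-insensitive match on a few well-known scanner names, then list the
# unique source IPs so you know who has been poking at the site.
grep -iE 'wpscan|sqlmap|nikto' /tmp/ir-demo/access.log \
  | awk '{print $1}' | sort -u > /tmp/ir-demo/scanner-ips.txt
cat /tmp/ir-demo/scanner-ips.txt
```

A cron job running something like this against yesterday's logs and mailing you the result is a cheap early-warning system; note that scanners can fake their user agent, so an empty result doesn't prove nobody is scanning.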
A lot of people are on shared hosting and have no control over how their logs are stored, for how long, or where they go when they're gone. You need to confirm that you have some amount of access to this over time, especially because incidents can happen long before they're detected. If something happened three months ago and you only have 30 days of log retention, you're fighting blind. And then online research: if you find a sketchy PHP file with some code in it you don't really understand, try googling it. It works a lot more often than you'd think.

If we've determined that an incident has taken place, then we have to start mobilizing for a more active response. We want to try to identify the intrusion vector; sometimes we can't right away, or even at all. We want to find out how our CIA has been impacted: our confidentiality, integrity, and availability. And we want to identify, early in this process, the recoverability: how soon can I get back to normal after this is done? You want at least estimates of all of these before you start contacting law enforcement, your incident response team, or your own leadership. Maybe HR needs to be involved if you think something happened internally. But from there... oh, I added this at the last second. When it comes to DFIR, digital forensics and incident response, it is a very specialized skill set. It uses tools, skills, knowledge, and data that your average sysadmin or IT person may not have fully mastered. A lot of the tools are relatively user-friendly, but they rely on hunches and knowledge that experts have and that somebody who isn't used to this may not. Even big organizations contact outside incident handlers from time to time; even hosting providers that have their own SOC, a Security Operations Center, where they have a team watching logs and events for things that are going wrong.
If they find an incident, they still call somebody who is an expert in forensics. If something mission-critical is on the line, it really may not be the best time to try to save a buck or two by keeping it in-house.

So, we've identified that something has happened and confirmed it's bad. Now we need to contain it. We need to stop it from getting worse, and stop the malicious actor from continuing what they're doing. We've got some questions to ask. Depending on what we've found, do we need to shut anything down? Do we need to take the site offline while we figure out what's going on? If so, how long do we have to do this for? Knowing that early helps you make decisions further down the process. What sort of evidence do we need to preserve? This is a big one that I want to point out early, because everybody loves to panic and delete the malware they just found. That makes forensics so much harder, because it's evidence. It's all evidence. It's like moving a body because it's gross. And what can we learn about the attacker's behavior from everything else we've gathered and looked at? If we think a legal follow-up is required, we have to be very careful with evidence. We need to take care of logs. We need to note attacker IPs, timestamps of everything, malware hashes, the contents of files that have been modified. Everything you can think of that is halfway relevant, your investigator is going to want. And you need to enforce a chain of custody with this data. If you're doing this third party, they're probably handling it. But in-house: who is touching this data? How is it being stored? If somebody moves something from one point to another, how is that being tracked? That way, if a discrepancy comes up later, you can pull that thread and figure out what happened.
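To make the hash-first, chain-of-custody idea concrete, here's a minimal sketch. The paths and the stand-in "malware" file are invented for illustration; the point is to record a hash and timestamp before anything gets moved, and never to delete the original.

```shell
# Sketch: preserve a suspicious file as evidence before doing anything else.
# Hash first, then copy, then lock the copy down. Paths are stand-ins.
mkdir -p /tmp/ir-evidence
printf '<?php @eval($_POST["x"]); ?>' > /tmp/ir-evidence-source.php  # stand-in "malware"

# 1. Record the hash and a UTC timestamp BEFORE touching anything.
sha256sum /tmp/ir-evidence-source.php >> /tmp/ir-evidence/hashes.txt
date -u '+%Y-%m-%dT%H:%M:%SZ collected ir-evidence-source.php' >> /tmp/ir-evidence/custody.log

# 2. Preserve a copy (-p keeps the original timestamps) and make it read-only.
cp -p /tmp/ir-evidence-source.php /tmp/ir-evidence/
chmod 400 /tmp/ir-evidence/ir-evidence-source.php

# 3. Verify the copy against the recorded hash.
orig=$(awk 'NR==1{print $1}' /tmp/ir-evidence/hashes.txt)
copy=$(sha256sum /tmp/ir-evidence/ir-evidence-source.php | awk '{print $1}')
[ "$orig" = "$copy" ] && echo "evidence copy verified"
```

Every later hand-off of that copy gets its own line in `custody.log`, which is the thread you pull if a discrepancy comes up.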
And it's finally step five of seven, the point where we can actually start deleting stuff. We're destroying the invaders now, right? Depending on the scope it can vary, and it can also be bundled in with step six, but this is where we're deleting malware, cleaning up injections in otherwise good files, and going through database contents and cleaning things up there. Just getting rid of the nasty stuff so that we can start to rebuild.

Which is step six. One of the biggest parts of step six is, in most cases, recovering from backup. Hopefully you have a clean backup. If you don't, shame on you; get one, get many. If you are positive that this backup is pristine, has not been compromised, and that the scope of the infection only covers what is in this backup, then restoring from it probably covers your step five. If not, then you obviously need to have done step five. This is also where we're going to fix any vulnerabilities that were used in this attack. Say we figured out that plugin XYZ was outdated and somebody walked in through that weird door; well, we need to fix that. And after any incident, it's always safe to update your secrets: your passwords, obviously, and any kind of keys you log in with, like the SSH keys you use to get into your server; sometimes it's a good idea to go ahead and switch those out. Also the salts used to salt your password hashes in your database. That's all in wp-config.php, and it's really easy to generate new ones. It prevents somebody who has already had access from finding a new way back in.

And then after the recovery, when we're back to business as usual: what did we learn? This whole step seven is taking everything we've learned from this terrible occurrence and processing it. What happened? When did it happen? Did we respond effectively?
What information or data could we have used sooner? How much quicker could we have gotten through all of these steps had we known blank? What did we do poorly? That's almost more important than talking about what we did well. And then just figuring out what could have made this whole process easier, or better yet, prevented it from happening in the first place. From there, we want to make some decisions about the retention of our evidence. Especially if we're going to have a legal follow-up: how long do we want to keep the evidence we have stored? NIST's guideline for actual federal entities says that following an incident, you need to retain evidence for three years. I'm not a federal agency, and I don't know if any of you are, so you can make your own rules here; but depending on what you're going to be doing next, you may want to keep this data for a while. And how are you going to retain it? Are you going to keep it in the cloud somewhere? Are you going to have off-site storage or some physical media? How much does any of this cost? You want to factor this in so it doesn't come as a surprise when it actually happens.

And I really rushed through all of this. Wow. So, kind of from the top down: we want to identify all of our resources before an incident occurs. This includes your off-site incident response team, if you're looking to have one. If you want Wordfence or some other security company to do the cleanup, you want to be in touch with them ahead of time, or at least have some kind of relationship there, so you're not shopping around at the worst possible moment. It'd be a really bad time, right? It takes your bargaining power away. Minimize future incidents by securing your assets, so you have to respond to fewer of them. Automate detection as much as you can. And I've got plenty of time, so I can talk about this.
So the biggest thing you want to look at, and unfortunately it's so case-specific that it's hard to just say "it's this," is being notified of anything quickly; anything that you want to be notified about, at least. Login events, big waves of blocked attacks, anything like that. The more you can be informed of as quickly as possible, the better you prevent something from flying under the radar, going unnoticed too long, and letting somebody really get a foothold in your website. You want to keep reliable backups, and you want to make them regularly. The more you make changes to your site, or the more data flows in and out of it, the more often you want to be backing up. And test those backups. My least favorite thing in incident response is when somebody has a backup that isn't good. Maybe it's old, or it doesn't contain everything they thought it did, or the backup process has been broken for six months and we just found out because we needed one. Preventing this is as easy as testing: spin up a test environment and try to recover from backup into it. If it works, awesome. If it doesn't, you really need to fix that right away. When an incident occurs, don't destroy evidence. I really don't want to have to say that out loud, but: don't do it. That includes deleting files that are signs of something bad happening. Rename them if you have to get them out of a workable state; if it's a PHP script, knock that .php extension off and it won't run anymore. But don't nuke it, because it may have references to other files that your incident handler might want, or any amount of other data that could be handy in the response process. Keep clear communication with everybody. This includes your users, and it includes the other people on your team. The last thing somebody wants is to be kept in the dark when their data is at stake. And learn from these incidents.
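The "knock the extension off" trick can be this simple; the directory and filename here are made-up stand-ins for a real suspect file.

```shell
# Sketch: defang a suspected backdoor without destroying evidence.
# Renaming it so it no longer ends in .php means the web server won't
# execute it, but the file, and everything forensics can learn from it, survives.
mkdir -p /tmp/ir-quarantine
printf '<?php @eval($_POST["x"]); ?>' > /tmp/ir-quarantine/backdoor.php  # stand-in

mv /tmp/ir-quarantine/backdoor.php /tmp/ir-quarantine/backdoor.php.quarantined
ls /tmp/ir-quarantine
```

The rename takes the file out of play immediately, while the hashing-and-copy routine from the containment step still has an intact original to work from.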
When you're doing that step-seven post-mortem, where we're having a meeting and talking with everybody about what happened and what lessons we learned, this is not an opportunity to publicly execute the person you feel is responsible. If somebody forgot to patch, or if it's a vulnerability somebody wrote, any money that has been lost has already been lost at this point. You can either take it as something bad, lampoon somebody in front of the rest of the class, and ensure that nobody ever cooperates with this process again; or we can call it some really unique and expensive training, move forward, and learn.

Thanks for stopping by. If you have any questions, ask away. Thank you. I'm on Twitter at HeyItsMikeyV, and you can check out our blog at wordfence.com/blog. Any questions? Yes?

Yes, I'll post the slides on Twitter as soon as I can.

Audience: Is Tripwire, or something like that, free?

Tripwire I'm not familiar with.

Audience: It just checks for file changes and tells you if something's changed without your permission.

Sounds good to me; I don't know anything about the product, though. But yeah, file change monitoring and stuff like that is very handy regardless of who's providing it. For file change monitoring I actually use Git: if you keep your site's content in a Git repository, it's really easy to tell if something's changed. Also, as far as file changes go, I'm more on the side of having some kind of malware detection in play. We deal with a lot of sites where files get uploaded a lot, or images and user content get moved around, so any updated file could very likely be a false positive. Something that's performing malware detection, or any kind of heuristic like that, at least alerts you to get your eyes on it with a little better signal-to-noise ratio. I'll have to look at that specific software, though. Yes?
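The Git approach mentioned in that answer can be sketched like this; `/tmp/ir-site` stands in for a web root, and the commit identity is a placeholder.

```shell
# Sketch: Git as a simple file-change monitor for a site's document root.
mkdir -p /tmp/ir-site && cd /tmp/ir-site
git init -q .
git config user.email placeholder@example.com
git config user.name "Placeholder"
echo '<?php // homepage ?>' > index.php
git add -A && git commit -qm "known-good baseline"

# ...time passes; an attacker (or anything else) drops a file...
echo '<?php @eval($_POST["x"]); ?>' > evil.php

# Any output at all here means files changed without you committing them.
git status --porcelain
```

A cron job that runs `git status --porcelain` and alerts on non-empty output gives you free change detection, with the caveat the answer raises: legitimate uploads will show up too, so it's noisier than a real malware scanner.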
Audience: I love how you laid out your entire process to address these incidents. I was just thinking, this must be very expensive to implement; how big of a company would this be most applicable to?

You really don't need to be a company. If you're in this room and use WordPress, at least some of this information is worth keeping in your pocket. If you're not spending a bunch of money on this, you can cobble together some bash scripts to check some stuff, or install, I mean, Wordfence has a free plugin that can handle some of the malware detection for you. There are ways to make a good, solid best effort on every one of these steps without breaking your budget. Especially when it comes to things like our site cleaning services, and I don't want to make a little pitch right now in front of everybody, but it's not crazy expensive, especially compared to outages and things like that, especially if any money is coming through that site. It's an easy call to make. Yes?

Audience: My site got taken down two weeks ago and it went to a redirect. My VA solved it, but he hadn't really told me much. I was just wondering, could vulnerabilities still be inside the site?

They could be.

Audience: Is that like a cleansing you do, or a look at everything?

You would want to look at all the software, like the plugins and things running on your site, and make sure all of them are up to date. Having everything up to date doesn't necessarily mean there are no vulnerabilities in it, but it's at least a good step to prevent known vulnerabilities from existing.

Audience: So I have started doing that. Does that also mean updating all the themes as well?

Yes. There can be a vulnerability in any piece of code that runs on your site, or even lives in the same directory as your site.
If you have an old copy of the site sitting in a folder somewhere that's not getting updated, it still exists. We see that a lot: a staging folder with an ancient website still kicking around in it, and there's malware in it or some other vulnerability.

Audience: So even if I had gone through redesigns, it could still be in there?

Yes. You want to make sure that the only files on your site are the ones that need to be there. Yes?

Audience: I'm a blogger and I'm fairly new at this, so I don't have a team; I'm the only person, so I get a little confused sometimes. In terms of backing up, I keep about a week's worth of backups. Is that enough?

That's a good question. It really depends. If you only have a week's worth of backups, that's fine, because a lot of people don't have any. But if you only have a week's worth of backups, you need to become aware of any incident within a week of it happening; if something bad happened eight days ago, all of your backups are corrupted now. A lot of the time people will have a rolling week of daily backups, but then they'll also keep one that's a month old, one that's two months old, something like that, just in case.

Audience: So maybe I'll just have at least one that's a month old.

And then make sure you're doing some kind of checking to confirm that everything is good within the range of your backup availability.

Audience: For somebody like me who's not a technical person, I use WP Security or something like that; how do I know? I'm not getting any alerts from them, so I just assume everything's fine.
That's kind of the thing: if you're not getting alerts, or even if you are getting alerts saying "hey, we blocked something," that's business as usual. Like we said, a blocked attack is just an event; it's not an incident. Sometimes those alerts are just notifications. But if your software does say "hey, red alert, this is a problem," then obviously you want to respond. I don't know specifically about that plugin; I know Wordfence has a free plugin that does very similar stuff, and of course I like ours better than anybody else's. But as long as you're remaining at least a little bit vigilant and trying, you're putting yourself that much higher than everybody else. Cool, anything else? Cool beans. They're in my bag; I'll be at the happiness bar. Well, thank you guys for coming out. I'm glad with the turnout I got here; I wasn't expecting so many people. Thanks, you guys!