and launch the recording and we'll get started. All good, beautiful, my screen's organized. Ben, I do have one poll loaded up, but I don't have a slide for it, so we'll decide at whatever point to launch it, and then I'll ask you to kick it off.

Okay, welcome to session nine of Become a Cybersecurity Ninja. I cannot believe we've made it this far. We started in January and we're now in May. To anyone attending who has been to every single one of these sessions: first of all, thank you so much, and I hope you continue to attend. I'll take that as a sign that you're getting good information out of these sessions and finding them worth your time, and we're certainly going to keep trying to make that true today.

Today we're talking about incident response planning: what to do when something goes sideways at your organization. Here's been our Ninja plan. We've covered threat modeling and risk assessment, network security basics, authentication and password managers, encryption, mobile security and working while traveling, phishing and social engineering, digital privacy, security tools, and now incident response. In two weeks we'll have a wrap-up, where we'll do a very quick summary of everything we've covered and some high points around best practices, and most of that time will be dedicated to your Cybersecurity Ninja quiz. We'll have prizes for anyone who aces the quiz, or, if no one aces it, for the top three scores, and anyone who gets over 80% will get a framed Cybersecurity Ninja certificate sent to them. We certainly look forward to that.

I am, of course, Joshua Peske, Vice President of Technology Strategy at Roundtable Technology. Roundtable is a team of dedicated tech professionals, and we provide all kinds of services to nonprofits and small businesses throughout New York City and Maine (and, in fact, the entire world, believe it or not, but mostly New York and Maine).

Credit where credit is due: when I was researching for this session, I really did not find anything I thought was a good-quality incident response planning guide, or even an article, aimed at small businesses and nonprofit organizations. Almost everything out there is either not very good or not very thorough, or it's so massively thorough, aimed at organizations with not just entire IT departments but entire incident response departments, that it isn't applicable. However, this guide from Digital Guardian, which is quite recent, was incredibly helpful. We have a link to it in the resources, and with the help of some colleagues I've adapted it for small businesses and nonprofit organizations. Hopefully this will now become the resource that makes sense.

So, asking the question: what's your plan? Ben, let's go ahead and launch the poll. I'm just curious to know, out of the folks attending today, does your organization have a documented incident response plan? If you're the person responsible, or you'd know, give us an answer. And if you have no idea, that's an option too.
We'll go ahead and close that up and show the results. We've got about 85 or 86% that either don't have one or don't know if they have one, and a smaller percentage saying yes, they do. All right, thank you, everybody, for that response; let's launch back in.

So, a response plan. "Someone figured out my password, and my password was the name of my dog, so now I'm going to rename my dog" would be an example of an incident response plan, but probably not the one we want.

Where do you begin in creating an incident response plan? In the Digital Guardian guide they have a somewhat different set of five steps; I've redefined some of these into what I think makes sense. And the timing of this webinar is actually pretty great, in the sense that we're, hopefully, on the tail end of the WannaCry outbreak, which began last Friday. Roundtable has had multiple iterations of our own incident response plan, with varying degrees of success really executing it when it's time to do so, and we got a pretty good fire drill this week. We'll be doing our lessons learned on Thursday morning, so a lot of this is really fresh in my mind, in the minds of a lot of Roundtablers, and perhaps many of you. We'll go through each of these steps.

Essentially, your response plan should first of all define what constitutes an incident. In other words, what are the criteria that cause you to invoke the plan and say, "this plan is now what we're doing"? Something like a printer jam is probably not enough to invoke your incident response plan, but something like a fire that melts your entire server room, or a malware outbreak that encrypts all of your organization's files, is almost certainly enough. There's a lot of territory between those two things, so you want some idea of what criteria you're using to define an incident.

Then you want a clear process for declaring: "hey, we are officially invoking this plan; this is now what we're doing." That gets to communicating and containing, and really, communicating wraps around this whole thing; if I had a chance to redo the slide, I would draw communication wrapping around all the other steps. You're communicating: "okay, everybody, if you're part of this team, your number-one priority is now this incident response, because we have declared an incident; something is happening and we have to deal with it." Communication happens throughout. And then there's learning from it, which I think is the step that's most easily missed: after it's done, asking what we learned, what we're going to do differently, if anything, and if so, how we're going to implement those changes.

In terms of defining the incident, here's a paraphrase that I actually copied and pasted from our own incident response plan: how we define an emergency within Roundtable, for our purposes. When we declare an emergency, it does cause some different things to happen within Roundtable, which I'll talk about a bit later. I've removed some of the information to make it a little more generic, but we have defined these three things as things that would cause an emergency.
One: a work stoppage for an entire organization or a critical department of the organization. If we, as Roundtable, have a work stoppage, something that causes us to be unable to work, that certainly constitutes an emergency for us. But we also consider it an emergency if one of our clients has a work stoppage, or if, say, their finance department is essentially unable to work. Because we are obviously a very client-focused organization, these criteria apply both to ourselves and to our clients; we actually use essentially the same process internally and externally. Two: a potential for significant financial loss. Three: significant reputational risk, either to us or to a client. Any one of those three is what we use. And obviously that's not 900 pages of defined terms. We just said: if one of these three things is happening, that's an emergency, and we are declaring an incident.
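Just to make that concrete, here's a minimal sketch of those three criteria written as a checklist in code. The field names and the dollar threshold are illustrative placeholders, not our actual internal values:

```python
# A minimal sketch: the three emergency criteria as a checklist.
# Field names and the dollar threshold are illustrative, not
# Roundtable's actual internal values.
from dataclasses import dataclass

@dataclass
class Situation:
    work_stoppage: bool        # an org, or a critical department, can't work
    est_financial_loss: float  # best-guess dollar exposure
    reputational_risk: bool    # significant reputational harm is plausible

SIGNIFICANT_LOSS = 5_000       # hypothetical threshold; pick your own

def should_declare_incident(s: Situation) -> bool:
    """Any one of the three criteria is enough to declare."""
    return (s.work_stoppage
            or s.est_financial_loss >= SIGNIFICANT_LOSS
            or s.reputational_risk)

# Example: ransomware encrypts a department's files -> work stoppage.
print(should_declare_incident(Situation(True, 0, False)))  # True
```

The point isn't the code; it's that the test is short enough that anyone on staff can apply it at two in the morning.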
And that leads you to the next question around declaring an incident: how are you going to know that something's happening? How do you become aware that an incident is underway? I think this is a real challenge for small businesses that don't have some of the sophisticated tools that might let you know something weird is going on on your network, or that an account has been breached, or, with tools like data loss prevention, that someone has attempted to remove sensitive information from your network (what's referred to as egress). The question for a smaller organization is: how will you know if something's going on? I'll move on so we don't have to keep staring at Whitney Houston there.

Here's where I very much adapted this, for organizations of all sizes. Well, if you're a one-person shop I suppose this isn't applicable, so we'll say a two-person shop or larger. Number one, and this sounds really minor and informal, but for those of you who have been attending these sessions, hopefully you've come to realize that it's not: encourage people to tell you if something's wrong. If you clicked on an email and something happened and you think, "oh gosh, I might have just clicked on a phishing email, I might have just triggered malware, I might have just given my credentials to somebody," then let someone know who can think about whether this is something we need to declare an incident for, or at least start investigating. And make sure that's communicated really clearly across the entire organization: we're not going to yell at you, we're not going to scream at you, but we want to know if something bad happens, or if you think something bad has happened. That is really, really important to us. Make sure you don't have what I've referred to before as a shoot-the-messenger mentality, where people are strongly chastised or told not to bother us when they bring forward problems. That's a huge risk unto itself. So that's one thing any sized organization can do: make sure people know to communicate when something bad is going on.

The second is that you can leverage notifications, and I'll show you what I mean in the next slide. A lot of the online services (Google, Office 365 from Microsoft, Salesforce, Dropbox) have, on the administrative side or even just on the user account side, notifications you can set up that can tell you about things that might indicate some hanky-panky going on: failed login attempts, login attempts from new devices, login attempts from different geographic locations (a super helpful one), or things happening to devices, like a device that rebooted or ran an update unexpectedly. These are all free to turn on and get notifications for. They may require a bit of signal-to-noise tweaking, meaning making sure you're not getting so many alerts that you just start ignoring them. I've actually gotten our Google notifications at Roundtable into a really good place, and it's turned out to be very helpful.

Larger organizations, and I'm going to define larger as 50 or more staff, probably for most purposes 100 or more, can start to consider SIEM tools; the acronym stands for security information and event management. They monitor your firewalls and your logs, detect anomalies, send you alerts, and help you manage security events. Splunk is probably the most popular one; there are a bunch of others out there. But that's really for larger organizations. Most of us smaller organizations wouldn't have the resources to even manage a tool like that.

Looking at this Google detection alert (Office 365 has very similar things): Google just updated this quite a bit, and it's really very helpful. Again, Dropbox and Salesforce have a lot of these you can configure, and none of it requires an entire IT department to manage. This is an actual email, with the identifying information redacted, that we got literally a week ago yesterday. It told us that someone from our domain had logged in from Sebring, Florida, which was a new location for us, and gave us the IP address and the account. I sent it on to my support director and said, can you look into this and see if we did in fact have anyone in Florida at that moment? We determined relatively quickly that we did, but also that there were some security settings we could change to make this less of a problem for us, all without invoking the entire incident response plan, which we would have done if we couldn't have tracked down that login. When I first set this up, I was getting something like ten notifications a day; I've since tuned out the events I don't consider terribly significant, and now I get perhaps one notification a week. And this is something you can do for free if you're using these cloud-based services.
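To give a feel for that signal-to-noise tweaking, here's a rough sketch of the kind of triage logic involved. In practice you configure this in each service's admin console rather than writing code, and the alert records and field names here are made-up placeholders:

```python
# Sketch of alert triage logic. The alert records and field names are
# hypothetical; real services (Google Workspace, Office 365, etc.)
# expose equivalent settings in their admin consoles.
KNOWN_CITIES = {"New York", "Portland"}  # where your staff actually work

def worth_a_human_look(alert: dict) -> bool:
    if alert["type"] == "failed_login" and alert["attempts"] < 5:
        return False   # a couple of typo'd passwords: ignore
    if alert["type"] == "new_device_login":
        return True    # new devices are always worth checking
    if alert["type"] == "login" and alert["city"] not in KNOWN_CITIES:
        return True    # e.g. the Sebring, Florida login in the story
    return False

alerts = [
    {"type": "failed_login", "attempts": 2, "city": "New York"},
    {"type": "login", "city": "Sebring"},
]
for alert in alerts:
    if worth_a_human_look(alert):
        print("Investigate:", alert)
```

The goal is the one-notification-a-week state described above: few enough alerts that each one actually gets read.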
That gets us into declaring. So we've talked about defining, about identifying that there is an incident; now we're declaring, now we're saying: this is an incident, this is happening. And that's pretty straightforward. Once we've identified that something's happening (we've been hit with a ransomware attack, our server is down, our internet is out, $5,000 has been wired out of our organization without authorization) and it satisfies the criteria we've defined, we're going to declare an incident. You're going to make it really clear that there's an incident, you're going to alert the team (we'll talk about the team next), and you're going to initiate the plan, the plan being all the things that are going to happen from this point forward. That's pretty straightforward, and communication is really the main part of it. I would also say the plan should probably indicate who in the organization is authorized to actually declare an incident. If you're really small, say a five-person organization, it might be that anyone is authorized to declare an incident; if you're a larger organization, you may want to limit that to a few people, so that someone escalates it up and asks: hey, do we want to officially declare an incident and invoke this plan, or are we just going to treat this as a problem and work at it like a normal thing?

Next: gathering your troops. Once you've declared the incident, you have an incident response team. The templates you'll see in the guide include tables for an incident response team: here are the people and their responsibilities. You certainly want someone managing it, someone whose fundamental job is communication and prioritization. They're communicating with everybody who needs to be communicated with about what's happening, what needs to happen, additional resources that may be needed, and the prioritization of actions. They're the person managing it, and it's a lot like a project manager role: a lot of communication, a lot of prioritization, a lot of managing resources to deal with the issue. Obviously you're also going to need your technicians and your security analysts. If you don't have these within your staff (and I'll talk about this in an upcoming slide), you want to make sure you at least know who they are. If you're outsourced to Roundtable, then it's clear: you contact Roundtable, and Roundtable would get the right people on the job for you. And you may want other folks involved as well, threat researchers or other people who have jobs to do. Gather the troops, declare the incident, get everybody rolling.
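One cheap thing you can do ahead of time is write that roster down somewhere everyone can find it. Even something this plain works; the names and contact methods below are hypothetical placeholders:

```python
# A hypothetical incident response roster. The value is in writing it
# down *before* an incident, not in the code itself.
RESPONSE_TEAM = {
    "incident manager": {
        "who": "A. Example (ops director)",
        "reach": "cell first, then Slack",
        "job": "communication and prioritization",
    },
    "technical lead": {
        "who": "in-house IT, or your outsourced provider",
        "reach": "support line",
        "job": "diagnosis and containment",
    },
    "external communications": {
        "who": "whoever owns PR / client messaging",
        "reach": "email and cell",
        "job": "clients, donors, press",
    },
}

for role, info in RESPONSE_TEAM.items():
    print(f"{role}: {info['who']} via {info['reach']} ({info['job']})")
```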
This is where it gets even trickier, and I don't mean to make this overly complex for folks, but depending on the nature of the incident, there may be involvement from technical and non-technical teams. Consider an incident where, let's say, your IT director turned out to be defrauding your organization, or died suddenly; or an employee is engaging in significantly malicious behavior and you think they're stealing information or money from the organization; or you have a data breach of highly sensitive information about donors or customers, and it's pretty clear you're going to need to tell those customers or donors that their information has been breached. Then you're potentially going to need to involve a PR department, and if you don't have one, whoever is in that role for external communications. You may have to involve your human resources. You may need legal help to determine what you can and can't do legally in the scenario. And you'd ideally not like to be scrambling to locate resources you've never worked with before at the very moment you're undergoing an emergency. You'd like to have some notion of who those resources will be, and confidence that when you call them and say "we are undergoing an emergency and we need your help right now," they'll be able to help you within some reasonable timeframe. You don't want a lawyer on retainer who's a single person, and on the day you call with your emergency they say: look, you only call us once every two years, you've done about $200 worth of business, and I'm totally booked up this week, so I really can't help you. Have that conversation before you're actually in the middle of an emergency that needs legal help. Hopefully that makes sense.

And it may take a village. The larger your organization and the more complex the issue, the more you may need: your executives; your board, if you're a nonprofit; human resources; public relations experts; and of course the technical resources. Everybody tends to focus on the technical resources, but all these other components may come into it, depending on the nature of the incident and whether it's going to have legal or PR repercussions based on the kind of breach you've had. So build those relationships; sorry, I covered this before, but build them before you're in the midst of an emergency. I really highly, highly recommend building those relationships ahead of time. For those of you who are clients of Roundtable, you've got the technical piece covered, but the other pieces we may or may not be able to help you with, and you'll want to think about who you'd go to if you're going to need help with those things.

So here's the cartoon, and I'll take this moment to talk about how this played out this week with WannaCry and Roundtable. We did not formally declare an emergency or an incident at Roundtable based on what was going on with WannaCry, and there wasn't a ton of conversation about that; it probably would have been my call to declare it. Based on what I was reading about how it was spreading and what it was doing, it seemed to me that our clients were at pretty low risk of being affected, and that the biggest problem we had was a communications problem: our clients were going to be reading about it in the news and would probably, on Monday morning, start asking us questions about it. That is in fact what happened. We thought it would be a good idea to get out ahead of that, so we sent out proactive communications very early on Monday and put information on our website to help both our clients and our technicians, so they could direct people to that information.
But there were some other things. Clients were asking us to check on certain machines, and we discovered some capacity limitations in our ability to validate really quickly that a particular machine was at a current patch level. We were able to do it, but not as quickly and easily as we would have liked, and there were some other challenges with our communications. So it did wind up being a bigger burden for our team, such that, in retrospect, we may have wanted to declare an incident. But it was really hard to tell, because it all happened over the weekend and we weren't sure how many calls we were going to get until Monday came around, and then a lot of those calls started coming in. So there was a degree to which we were all running around in circles going "what do we do? what do we do?", and a degree to which we were pretty good about being proactive. Again, we'll do our own lessons learned on Thursday, but it gives you an idea of this nebulous area. If I had been reading about this on Saturday, and on Saturday we'd had support tickets flooding in from people affected by WannaCry, and we could look at our queue and see twenty tickets from people either saying "we've been infected" or asking for information before Monday even rolled around, then it would have been no problem to declare an emergency: we clearly would have had reputational risk to Roundtable, all of those things. But that wasn't happening, so we had to make a call. That gives you an idea of the kind of thought process we go through.

I've got my own quote here. I had another quote in here and I didn't like it, so I thought I'd share a policy change we made instead. It's part of our incident response plan at Roundtable, it has been a very important change for us, and I'll explain what it means. We changed this, I want to say, about a year ago, around what happens when we have an emergency going on at a client. What we learned is that we'd have a technician working really intensely, sometimes multiple technicians working really intensely, on a problem. Let's say a network suddenly goes down: we determine the internet's fine, we determine the server's fine, but meanwhile you've got 50 people in the middle of a workday with no connection to their own network, and we have engineers frantically looking at the firewall and the switches, trying to figure out what on the network is causing it. That can sometimes take real effort, and they have to be very focused on it. Meanwhile the client, rightfully, is wondering: what's going on? What's our ETA? What's happening? Can someone please tell me what's happening?
And the engineers who are trying to focus on the problem cannot communicate effectively and also work effectively on the problem at the same time, and obviously we want them to work effectively on the problem. So the change we made is this: when we have an incident with a client, when we declare an emergency with a particular client, we immediately assign a project manager in the role of communications around that incident. That project manager talks to the engineers as little as they possibly can, just enough to get an update on the situation that allows them to explain to the client what's happened, what we've done so far, and what we are doing, and, without making any promise we can't keep, an ETA for when we think we'll have a resolution, or, if we have no idea when we'll have a resolution, an ETA for when we'll have an update on how bad or how solved the problem is. That has been a monumental change for us. Having dedicated communications in that role really helps everybody communicate better. I don't want to get too long-winded here, but there are so many problems where, in the lessons learned with the client after the incident, they've said something like: well, gosh, if I'd known we were spending four hours trying to fix this $300 device when we could have just bought a new one, I would have just had you buy the new one; I wish someone had talked to me or communicated options to me in the midst of this. And that's really hard when you're in the midst of an emergency, but executives really want full visibility into the options: spending money to solve the problem, allowing time to pass to solve the problem, or workarounds to solve the problem. They want all the options available. Engineers will sometimes just make assumptions: well, they're not going to want to spend $5,000 on a new server, they'd rather fix this one. And those assumptions can sometimes be very incorrect.

If it's a breach, if it is a security breach, if it's malware, if it's something like that, containment is really, really important. We want to make sure we keep the little rascals contained in whatever systems they're in. This one's pretty basic, but I think it gets missed a lot: shut down and disconnect any compromised systems. If you have a system with malware, get it off your network as soon as possible. Collect important data about the incident from whatever you can: if malware was detected, if you've got logs, make sure that data is available, along with any external intelligence. A lot of this is simply talking to people and learning: what did you click on, when did you click on it, when did you open the attachment, who was it from, all those kinds of things. Just talking to people about what happened, and reviewing the logs for those things.

It's also making sure that your backups and things like that are secure. If you're concerned that the malware is spreading through your network, you may want to actually disconnect backup drives from your network, to make sure the malware isn't able to go and overwrite those backups. And of course, collect logs of everything that's happened, and don't delete logs. A lot of systems, Windows servers and firewalls, are set to overwrite logs every 24 hours or every 72 hours, just so they don't fill up space and become unable to log. In an incident, you want to go check those log settings and stop them from overwriting until you're clear of the incident; then you can set them back to their normal state. That's something that's more for the technical folks, but it's a good note.
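For the technical folks, here's a rough sketch of what that looks like on a Windows machine, shelling out to the built-in wevtutil tool from Python. Run it from an administrator prompt, and treat the exact size and log list as placeholders to adjust:

```python
# Sketch: during an incident, stop Windows event logs from overwriting
# old entries so evidence isn't lost. Uses Windows' built-in wevtutil;
# requires an administrator prompt. The size and log list are placeholders.
import subprocess

LOGS = ["Application", "Security", "System"]

for log in LOGS:
    # /rt:true = retain old events instead of overwriting when the log fills
    # /ms:...  = raise the max log size so retention doesn't halt logging
    subprocess.run(
        ["wevtutil", "set-log", log, "/rt:true", "/ms:1073741824"],
        check=True,
    )
    print(f"{log}: retention enabled, max size raised")
# Once the incident is closed out, set these back to your normal policy.
```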
I realize I'm running a little bit long here, which is surprising because I'm by myself this week, but I am apparently being extra long-winded. Next: create your incident classification framework. There's a classification (category, type, severity) and then a taxonomy, and this all sounds super technical, so let me break it down for you based on our recent one with WannaCry, as if someone had gotten infected. The category is malware. The type is ransomware, which is a specific type of malware. The severity is high: if we had our files encrypted by this, we would say that's a pretty severe problem. The detection method was user notification, meaning someone clicked the email, a message came up saying all your files are encrypted and you have to pay to get them back ($300, in this case), and they told us. The attack vector is how this got in: the user told us they clicked on the email, so we'll say phishing email. The impact is file availability: we haven't lost data, because we have backups in this instance, but the files are temporarily unavailable because they've been encrypted. The intent is malicious, of a financial nature; someone is trying to extort money from us. The data exposed, to the best of our knowledge, is none: the data was encrypted, but we don't believe it was egressed or extracted from our network, and we have no reason to believe our data has been exposed. The root cause, or rather two root causes: user error, and lack of patch management.

And we can debate root cause. If we back up and do a five-whys exercise on the user clicking the phishing email, we could say: well, the user didn't have security awareness training, and wasn't given opportunities to learn how phishing emails work and why not to click on them. We could ask another why: well, the organization hasn't prioritized providing security awareness training, or the organization provided it but the staff person was too busy to attend and it wasn't properly prioritized. You can keep going backwards, and the same for patch management: why didn't we have a patch management system? Because it costs $3,000 a year and we didn't want to spend that. Why didn't we prioritize the budget? And so on and so forth. But with root cause, generally, you're just trying to identify the basics of how this happened.
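Written down, that WannaCry classification looks something like the record below. This is a minimal sketch; the field names follow the framework we just walked through, not any particular tool:

```python
# The WannaCry example above, captured as a classification record.
# A minimal sketch: fields follow the framework, not any particular tool.
from dataclasses import dataclass

@dataclass
class IncidentClassification:
    category: str
    incident_type: str
    severity: str
    detection_method: str
    attack_vector: str
    impact: str
    intent: str
    data_exposed: str
    root_causes: list

wannacry = IncidentClassification(
    category="malware",
    incident_type="ransomware",
    severity="high",
    detection_method="user notification",  # the user told us
    attack_vector="phishing email",
    impact="file availability (encrypted; restorable from backup)",
    intent="malicious, financial (extortion)",
    data_exposed="none known",
    root_causes=["user error", "lack of patch management"],
)
print(wannacry)
```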
This is Tim Bandos, the guy who wrote the Digital Guardian guide, and this is some more technical stuff he has in terms of tips. Number two I really love: never let a good incident go to waste, meaning always do a lessons learned and figure out, okay, what can we learn from this? What could we have done differently? Is this something totally anomalous, or something that could repeat? Be sure to reset credentials for anything critical any time this happens, meaning change administrative passwords and anything that may have been compromised; you want to assume all those things might be compromised. And communication within and across teams: I can't say enough about how important that is, and how hard it is. It is really hard, when you're in the middle of a major incident, to keep communication happening within and across teams. But it is so important. I have done, fortunately not a ton of these, but probably a dozen or so lessons learned following incidents, and I can't think of one where a lot of the talk wasn't about things that were not communicated well during the incident. A lot of the problems were: oh, I didn't know you were doing that. Oh, I didn't know that was an option. Oh, I didn't realize this person had that data in a different place. I didn't know this person was available to help us. There's always a huge amount of focus, in the aftermath of these things, on communication.

And here's our wrap-up on the learning step. Complete your incident report: I have a Google Form incident report template you can fill out, just five questions, I believe, and that's in the resources. Identify any preventative measures you want to add to your environment. Monitor post-incident, and make sure things are in fact okay. If there are changes you're going to make, make sure you get buy-in from your organization for why you're making them. And of course, update your incident response plan to reflect those things.

One last thing: understand your organization's priorities. Determine what matters most, and ensure that the response reflects those priorities. If Roundtable says our number-one priority is to make sure our customers are communicated with clearly, and that they feel confident we're being proactive, then let's ensure we're not spending a ton of time doing documentation behind the scenes, and that we're not off changing passwords and doing all this backend stuff without communicating to clients what we are and what we aren't doing, so they have that level of confidence. If instead we said the priority was to keep everything completely in the dark, and we don't want anyone to know what we're doing, then obviously that's a different priority. It differs across organizations.

Key success factors: plan; declare; stay calm (and having a plan helps with that); communicate, which, again, I would wrap around this whole thing; and learn. And here are our resources: the Incident Responder's Field Guide, which is the Digital Guardian link I showed you, and the Roundtable Incident Response Form. I'll go ahead and click on this just to show you what it looks like; it pops up as a Google Form. Oh, I have the wrong link in there, obviously. I'll fix that, sorry about that, everybody; we'll fix that link before it goes out. Let me see, I can find it very quickly... Incident Report. This is the Roundtable one, but the one I used is a sort of shared one. So here's what it looks like if you want to complete it: it's basically an issue summary, right? You state the impact, what's happened, the timeline, the root cause, resolution and recovery, corrective and preventive measures, and any additional comments.
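If you'd rather keep it in a plain document than a Google Form, the same fields boil down to a template like this sketch (the field names are paraphrased from the form; the live form may group them slightly differently):

```python
# Sketch: the incident report fields as a plain-text template, for
# anyone who'd rather not use a Google Form. Field names are
# paraphrased from the form walked through above.
REPORT_FIELDS = [
    "Issue summary",
    "Impact",
    "Timeline",
    "Root cause",
    "Resolution and recovery",
    "Corrective and preventive measures",
    "Additional comments",
]

def blank_report() -> str:
    """Return an empty report ready to fill in."""
    return "\n\n".join(f"{name}:\n(fill in)" for name in REPORT_FIELDS)

print(blank_report())
```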
This is something we have our personnel fill out when we have incidents, so that we have a clear record of the response documented within the organization.

And I think that is it. Two weeks from today: the wrap-up and the Cybersecurity Ninja Certification Quiz. Anyone who aces the quiz will get a prize worth at least 50 bucks; we'll have a set of things you can choose from, and maybe we'll do an Amazon gift certificate if you don't want any of those things. I had the list when we first set this up in January: I think it was a Canary, which is a sort of home security system; a YubiKey, which is a universal two-factor authentication token you can carry around with you; and a couple of other things, plus probably an Amazon gift certificate, which most of you will probably take, though the other items will be worth more than that. Scores of 80% or better will get a framed Cybersecurity Ninja certificate with your name on it. And if the scores are lower than expected, we'll go to a curve for both the prizes and the certification.

With that, if anyone has any questions, go ahead and enter them into your questions box. I don't see any that have come in yet. Thank you so much for coming, and I look forward to seeing folks back here in two weeks for the quiz and the wrap-up of this epic ten-part Ninja training. All right, I don't see any questions so far; I see a "thank you." I'll wait around for another minute or so, and if not, we'll wrap it up. I see Christian here from Roundtable. Christian, do you have any questions? I can probably unmute you if you want. Let's see... nope, he's good. All right, well, thank you, everybody. We're going to wrap, and I look forward to seeing everybody back here in two weeks. Bye-bye.