The least advanced yet most persistent threat that we're dealing with on a regular basis. Every time I go to one of these conferences, I can't tell you how many speakers end up talking about how the human element is the weakest link in our security apparatus. So I wanted to try to flesh that out a little bit.

I work for an organization called ideas42. We were born out of a number of research institutions back in 2008, founded by academics who wanted to take insights from behavioral economics and psychology and apply them to real-world problems. As you can imagine, this is as relevant for things like family planning, reproductive health, and energy as it is for cybersecurity. And I know what you must be wondering: why 42? I can tell you it's not because we only have 42 ideas. It's really because we're big Douglas Adams fans. So for those of you who love The Hitchhiker's Guide to the Galaxy: we love it too. We've been working over the past year to develop a deeper understanding of how behavioral science can actually be applied within cybersecurity, and we've developed a narrative piece that pays tribute to Deep Thought, called Deep Thought: A Cybersecurity Story. You'll see small cards on your seats advertising it. It's the culmination of our research in this space as we try to understand how we can bring behavioral science to bear on these significant problems.

Before I continue, I want to provide a bit of a framework for thinking about people's behavior and about the models we use to evaluate that behavior. The easiest way for me to do this is to compare the way we think about it to the way a classical economist might. A classical economist assumes that human beings function kind of like this guy: Spock. They're rational beings. They're unemotional. They maximize their utility. They weigh all the costs and benefits before they act, and when they act, they do it with pure intention. In this paradigm, we believe we can impute people's intentions from their actions.

But maybe we don't fully agree with this model. Maybe we think people act a little differently, more like this guy: Homer Simpson. Generally a lovable doof who enjoys donuts, maybe a little too much, and who probably needs a very strong helping hand to follow through on whatever intentions he puts forth.

I would argue that we actually think about people quite differently. It's neither fully Spock nor fully Homer. It's more like this guy: Brad Pitt with a really gross beard. It's sort of an odd choice, but I bring it up because I think it's really important. The success of any product, policy, or proposal really rests on the behaviors of end users, the people who are going to interact with those products and services. If we assume those end users are going to be like Spock, we'll design the thing one specific way. If we think they're going to be like Homer, we may give them too much support. In reality, we should be thinking about them like Brad Pitt with a weird beard: generally smart, but making some strange decisions sometimes, under certain circumstances. And that brings me to my other really important point: it's understanding the context in which these actors act that will lead us to better insights about how to design our solutions around them.
Because the context matters the most, I think. So if we understand that sometimes people have intentions and sometimes they don't follow through, and that this may have a lot to do with context, then we can start thinking about building better solutions for people.

What does that look like? I think it looks like this. Now, I didn't take this photograph, so I can't take credit for it, but I think it's a really great example of the idea. People seem to want to work out, but in this context they're going to take the escalator, because it's seemingly the easiest option, the path of least resistance. Then again, it's hard for us to impute what they want: do they really want to work out, or do they want the ease of the escalator?

So you may be asking, what does this have to do with cybersecurity? Let me take a quick poll. Who believes that updating is a really important and critical way to protect our systems from bugs and persistent threats? Great. Now keep your hands up if you said yes: who among this crowd updates as soon as the update arrives? This is weird, right? Because one thing we would assume about Spock is that if we told him updating is important, he would do it. And here I'm surrounded by experts in this space. I look around at your computers and you have stickers on your cameras; some of you use VPNs on a regular basis. And yet updating is still a persistent problem. Why?

We wanted to look at these sorts of problems, and I think with updating it comes down to a few elements. It's a problem with the user interface; it's a problem with the choice architecture. If you're asked in the middle of something whether you want to update now or an hour later, you're probably going to say later. And you may keep saying that not just for a few hours or a day, but for weeks on end. When a threat is imminent, we don't have that time to wait. So what we set out to do is try to understand the vast array of challenges in this space, and to understand what it is about the context that is causing people to do what we see them doing, so that we can build better systems around them that get them to do what we want them to do [see the update-prompt sketch after the video].

As I said, we've been working on this big narrative; it's actually a novella. As part of the release, we've developed a video, which I'd love to show you right now.

[Video narration] Are you next? Because in the time it takes you to watch this video, 12 cyber attacks will have occurred. And while many of the recent high-profile attacks targeted information like emails, bank account credentials, and government employee data, some have shown how hackers can execute attacks of even greater consequence. Despite billions spent worldwide to build new technologies to prevent these attacks, cybercrime not only continues but seems to have increased. Why? Most investments in cybersecurity go toward building more complex technology and bigger walls. But in doing so, we neglect the human beings at the center of these systems. While we might believe that rogue hackers pose the greatest threat to the cybersecurity apparatus, experts estimate that 70 to 80 percent of the costs attributed to cyber attacks are actually the result of human error: developers who unintentionally build errors into software; end users who procrastinate on installing security updates and set bad passwords;
IT administrators who neglect to manage the access-control permissions of employees and vendors, providing pathways to sensitive information; and C-level executives who don't always invest enough, at the right time, in the right places.

At ideas42, we use insights from behavioral science to understand human behavior. We believe that context matters and that awareness does not guarantee action; that we all have predictable biases; and that we should focus less on how people should act, how we expect them to act, or even how they intend to act, and more on how they actually act. When it comes to cybersecurity, that means we realize that human behavioral factors present a rich vein of opportunity for making our systems safer, more robust, and more resilient.

Take a C-level executive, for instance. Classical economists believe that the investment decisions a C-level executive makes come from a careful weighing of the costs, benefits, and risks using all information available to them. Behavioral economists recognize that's not likely the case. Uncertain about the costs and benefits, executives may simplify the question or take mental shortcuts. Instead of asking whether they are secure, they change the question to: are we compliant? In determining whether additional investments need to be made, they may ask: did we have a breach this year? They choose to invest only if the answer is yes, neglecting to consider that they may have just been lucky. To improve investment decisions, we want to build tools that help executives see their security not as an investment but as a key aspect of operations. We may need to reframe finding failures in cybersecurity systems as important successes, and elevate cyber risk as a key risk area for organizations. But this is just the start. By turning the lens of behavioral science on various human challenges in cybersecurity, we can identify new opportunities and interventions to improve human behavior and security generally. To learn more about ideas42 and behavioral science applications in cybersecurity, check out our new cyber novella and reach us at our website, www.ideas42.org. [Video ends]

So I don't have a whole lot of time left, but I do want to talk about one other thing. Investment in cybersecurity is a really big issue: the executives who make decisions about provisioning software and infrastructure need to be able to do it in a way that's measured and likely to succeed. If we just use the classical assumptions about how people make these decisions, we expect them to integrate a lot of information, calculate risk appropriately, and weigh the costs and benefits in order to decide how much to invest and in what. But what we find is that it's actually really hard to do this in practice. Oftentimes organizations (and this is something we found through our research) don't even know what assets they have or where those assets exist. And if they don't know what they have and where it exists, how can they respond in a measured way? In reality they still respond, but they probably do it in weird ways. For instance, they might use incorrect mental models of their security system. Is it a walled fortress that I'm trying to build? Is it a process that I'm trying to enact operationally? Or is it something else, more like a network that I have to manage?
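To make the "did we have a breach this year?" heuristic a bit more concrete, here is a minimal Monte Carlo sketch, in Python, comparing a breach-reactive investment policy against a steady one. Every number in it (breach probabilities, costs, the twenty-year horizon) is an assumption invented for illustration, not data from the talk or from ideas42's research.

```python
import random

# Toy model (all numbers are assumptions): each year a breach occurs with
# some base probability; investing in security lowers that probability.
BASE_BREACH_P = 0.30
INVESTED_BREACH_P = 0.10
INVEST_COST = 1.0      # annual security spend, in arbitrary units
BREACH_COST = 10.0     # loss incurred when a breach happens

def simulate(policy, years=20, trials=10_000, seed=42):
    """Average total cost over `years`, where policy(breached_last_year)
    decides whether to invest this year."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        breached_last_year = False
        for _ in range(years):
            invest = policy(breached_last_year)
            p = INVESTED_BREACH_P if invest else BASE_BREACH_P
            breached = rng.random() < p
            total += INVEST_COST if invest else 0.0
            total += BREACH_COST if breached else 0.0
            breached_last_year = breached
    return total / trials

# Reactive heuristic: invest only if we were breached last year.
reactive = lambda breached_last_year: breached_last_year
# Steady policy: invest every year because expected loss justifies it.
steady = lambda breached_last_year: True

print("reactive avg 20-yr cost:", round(simulate(reactive), 1))
print("steady   avg 20-yr cost:", round(simulate(steady), 1))
```

Under these toy numbers the reactive policy ends up costing more over twenty years, even though it feels cheaper after a quiet stretch; that quiet stretch is exactly the "we may have just been lucky" case the video describes.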
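And returning to the earlier updating example: the "remind me later" pattern is a choice-architecture problem, and one hedged sketch of a better-designed prompt is a deferral policy that bounds procrastination. The class name, the three-deferral cap, and the seven-day deadline below are all hypothetical choices made for this example, not a description of any real updater.

```python
from datetime import datetime, timedelta

# Illustrative sketch only: a deferral policy that bounds procrastination.
# The cap and deadline are assumptions chosen for the example.
MAX_DEFERRALS = 3
INSTALL_BY = timedelta(days=7)

class UpdatePrompt:
    def __init__(self, released_at: datetime):
        self.released_at = released_at
        self.deferrals = 0

    def ask(self, now: datetime) -> str:
        """Return the action the UI should take when the prompt fires."""
        overdue = now - self.released_at >= INSTALL_BY
        if overdue or self.deferrals >= MAX_DEFERRALS:
            # Flip the default: installing becomes the path of least
            # resistance (e.g., install on next restart, no "later" button).
            return "install_now"
        self.deferrals += 1
        # Shrink the snooze window each time instead of a flat "1 hour".
        hours = 24 // (2 ** self.deferrals)   # 12h, then 6h, then 3h
        return f"remind_in_{hours}h"

# Example: a user who always clicks "later" still converges on installing.
prompt = UpdatePrompt(released_at=datetime(2016, 3, 1))
now = datetime(2016, 3, 1, 9, 0)
for _ in range(5):
    print(now, prompt.ask(now))
    now += timedelta(hours=12)
```

The specific numbers don't matter; the shape does. The context, not the user's stated intentions, ends up determining when the update actually gets installed.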
Executives may also substitute compliance for safety, thinking that if they just check all the boxes and apply a given framework, it means they'll be safe. And they may think about that in a static way: not as a process, not as something to evaluate on a regular basis, asking how they can improve the framework and continue to check new boxes. Finally, we may find that some executives think about failure in an unproductive way. I think we recognize that in order to succeed in building new cybersecurity infrastructure, we have to be finding failures on a regular basis. But if you've spent your entire career trying to show your successes, that can be a very strange thing to try to do.

So what can we do to help? I think a lot of this is about building new tools that help these executives think about cybersecurity in the right kinds of ways. Provide them with mental models that are truer to reality, maybe models without boundaries, so they're not thinking about building a castle but about managing a network of individuals across an open domain. Maybe we have to help them move from a checklist approach to a process-based approach. Maybe it's about actually embedding process in that checklist, so that they can't just check it off once and be done with it (a rough sketch of this appears at the end). And finally, maybe it's about making failures your metrics of success: if you're finding new failures on a regular basis, you're probably doing better than if you aren't.

But these ideas don't bear only on investment and C-level executive decisions. There are problems across the domain, with all different types of users: people who build code, users who don't necessarily update, IT administrators who have to manage access-control permissions and provision systems on a regular basis, and the C-level executives as well.

So I encourage you, if you're interested, to please check us out at ideas42.org. Today we've released the first two chapters of our novella; it will be a serial release over the next three weeks. That should give you an opportunity to enjoy, hopefully, a good story and to pick up some interesting analysis along the way. If you're interested in learning more, please feel free to reach out to us at cyber@ideas42.org. I'll be around today, so if you have any questions, let me know. Thank you.
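One last sketch, to make the "embed process in the checklist" and "failures as metrics of success" ideas above concrete: a checklist whose items expire and must be re-verified on a schedule, and which counts failures found as progress rather than hiding them. The item names and review intervals here are hypothetical, invented for illustration.

```python
from datetime import date, timedelta

# Sketch of a compliance checklist with embedded process: checking a box
# once is not enough, because each item expires and must be re-verified.
# Item names and intervals are hypothetical examples.
CHECKLIST = {
    "access_review":  timedelta(days=90),
    "patch_audit":    timedelta(days=30),
    "phishing_drill": timedelta(days=180),
}

class LivingChecklist:
    def __init__(self):
        self.last_verified = {}   # item name -> date of last verification
        self.failures_found = 0   # tracked as a success metric, not shame

    def verify(self, item: str, today: date, passed: bool):
        """Record a verification; finding a failure counts as progress."""
        self.last_verified[item] = today
        if not passed:
            self.failures_found += 1

    def due(self, today: date):
        """Items never verified, or whose verification has expired."""
        out = []
        for item, interval in CHECKLIST.items():
            last = self.last_verified.get(item)
            if last is None or today - last > interval:
                out.append(item)
        return out

cl = LivingChecklist()
cl.verify("patch_audit", date(2016, 1, 10), passed=False)  # failure found
print("due on 2016-03-01:", cl.due(date(2016, 3, 1)))      # all three items
print("failures found so far:", cl.failures_found)
```

The design choice mirrors the talk's point: the checklist is no longer something you complete once, because re-verification is built into the data structure itself.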