Welcome to bridging the risk communication gap when using the Open FAIR method. My name is Joel Basie and I'll be offering some insight into ways to communicate the results of a quantitative FAIR analysis to organizations used to traditional qualitative risk matrices. Here's a little bit about me and my background. You can see I've been active in the FAIR-based risk world for a while now and have successfully built FAIR-based risk programs at three large enterprises, representing defense, retail, and pharmaceuticals. I've learned a lot along the way and it's my sincere hope that I can help you introduce and implement a FAIR-based program at your organization. This presentation looks at the problem of communicating risk as a risk manager and not as a security specialist. What do I mean by that? It means being equally comfortable accepting risk as mitigating it. I have no interest in mitigating risk if the analysis shows that to be a suboptimal use of resources. It means that as a risk analyst, I am not there to tell a story to support a decision already made unless the data and analysis support that story. As a risk analyst, I do not do analysis for the purpose of justifying a decision. When leveraged properly, the analyst is engaged to inform a decision before it is made. It is imperative that the analyst be agnostic to the result of the analysis and the decision to be made. Their concern is the validity of the process and its result. I have seen many security people disappointed in the results of quantitative analysis when it failed to support their qualitative assessment of the situation. Yet when walked through the Open FAIR process, they were unable to disagree with any of the inputs. This presentation assumes the attendee is already familiar with the Open FAIR analysis process and the results it produces using distributions generated by Monte Carlo simulations. The irony of using this quote is that many of you probably knew this to be a Mark Twain quote. 
Maybe he said it, but there's no evidence he did. How many information and cybersecurity people are out there who know the risk associated with their operations, but it just ain't so? While I've worked with many security people who say they want more mature risk analysis in their risk management program, when it turns out the results don't agree with what they know, they generally retreat back to their assessments and rainbow scatter plots. Security and risk management are different disciplines with different knowledge, skills, and abilities. This is a point worth repeating. The discipline of professional risk management is not the same discipline as security. Additional security may be applied when risk exceeds tolerance, but understanding risk from an actuarial perspective and adding security controls leverage different practices. Far too often, security staff placed in charge of risk management processes see their job as mitigating all risk, whereas a professional risk manager understands mitigation is but one option for responding to risk. While the person with the security background may academically know about mitigation as one option among many, they will almost always view their primary responsibility as risk mitigation. I've seen several cases over the years where the objective of an organization's risk management team and exercise was to mitigate risk. It should in fact be to manage risk: mitigate where appropriate, but accept and transfer as appropriate as well. So what can we do to address the problem? For starters, I recommend hiring risk analysts to do risk analysis. Stop hiring security assessors and charging them with conducting actuarial work unless they're properly trained. Start with the resources published by The Open Group, including the Open FAIR standards and tool. The forthcoming Open FAIR Risk Analysis Example Guide includes examples for reporting financial metrics associated with an analysis. 
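Since the rest of this talk leans on distributions produced by Monte Carlo simulations, here's a minimal sketch of how such an ALE distribution might be generated. The distribution shapes and ranges are illustrative assumptions for this sketch, not values from any real analysis or from the Open FAIR tool.

```python
import random

def simulate_ale(n_sims=10_000, seed=1):
    """Monte Carlo sketch of an Open FAIR-style ALE distribution.

    Loss event frequency (LEF, events/year) and loss magnitude (LM, $/event)
    are drawn from illustrative triangular distributions; a real Open FAIR
    analysis calibrates these ranges with subject matter experts.
    """
    rng = random.Random(seed)
    ales = []
    for _ in range(n_sims):
        lef = rng.triangular(0.1, 4.0, 0.5)              # (low, high, mode)
        lm = rng.triangular(50_000, 2_000_000, 150_000)  # (low, high, mode)
        ales.append(lef * lm)                            # one simulated annual loss
    return ales

ales = simulate_ale()
average_ale = sum(ales) / len(ales)
```

Each iteration draws one frequency and one magnitude and multiplies them, so the 10,000 resulting values form the distribution from which a reporting statistic is later extracted.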
Once you understand the concepts and practices of Open FAIR, the example guide will actually walk you through how to do a complete analysis and report the results in actuarial and financial terms. The example guide also demonstrates the differences between a basic quantitative analysis and a qualitative assessment. It's important for the analyst to understand the difference. One of the biggest challenges they're going to face is presenting the results to a person uneducated in financial metrics, such as return on investment, internal rate of return, and net present value. Next point: does reporting financial metrics to security people resonate? One of the things I've continually run into over my career is people who are experienced and trained in cybersecurity being expected to also be experts in actuarial risk management. The challenge becomes discussing mathematical probabilities with someone conditioned to seeing the inevitability of the big one right around the corner. Expect to do some level setting with the consumer of your analysis. Do they understand basic probabilities? Do they understand exposure and the difference between that and single loss expectancy? Have they already decided what the risk is and are only looking for the analysis to confirm their belief? This brings me to another recommendation: I recommend not doing analysis of decisions already made. Too many times I've seen that result in a security person becoming resistant to Open FAIR based risk analysis. Why? Because they've already decided to expend resources based on their belief of the risk, only to have the quantitative analysis indicate the risk from an actuarial perspective is relatively low. While unable to disprove the approach and data associated with the analysis, they become resistant to embracing a practice that may make them look bad in the eyes of their superiors and those they convinced for the resources expended. 
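For level setting on the financial metrics just mentioned, here is a minimal sketch of net present value and internal rate of return. The cash flow figures are hypothetical, and the bisection-based IRR assumes the NPV crosses zero exactly once over the search interval.

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-7):
    """Internal rate of return via bisection (assumes NPV crosses zero once)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical control: $100k up front, reduces average ALE by $60k/yr for 3 years.
flows = [-100_000, 60_000, 60_000, 60_000]
value_at_10pct = npv(0.10, flows)   # positive means worth doing at a 10% hurdle rate
rate_of_return = irr(flows)
```

In this framing, the "benefit" cash flows are the annual risk reduction the analysis predicts the control will deliver, which is exactly why the consumer of the report needs some grounding in these metrics.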
I suggest limiting analysis to decisions yet to be made or those that are open to changing. My suggestion for conducting risk analysis within a security department is to focus on the prioritization of allocated budget rather than focusing on analysis to justify budget already allocated. That way the results are focused much more on ordering and direction. In other words, relative risk reduction, or return on security investment (ROSI), becomes more important than absolute risk reduction or absolute ROSI. But if you find yourself in the position where you need to report to a broader risk management function, my third recommendation is to align with existing risk reporting practices whenever possible and to the maximum extent practical. This will likely involve taking an existing risk matrix and converting it to a single scale you can plot ALE (annualized loss expectancy) against. Because a risk matrix generally comprises a frequency (or probability or likelihood) axis and a magnitude (or impact) axis, and ALE combines the probable frequency and probable magnitude of future loss, the elements are there. But because risk matrices, even those with numeric values, are rarely built with defensible mathematical principles, you will likely find yourself needing to make some judgment calls on the ALE risk scale you derive from the risk matrix. What this allows you to do is report your quantitatively derived results using a consistent scale to apply the low, moderate, and high or green, yellow, and red labels familiar to your analysis consumer. An example of how to accomplish this is coming up. This is an example of a single-axis scale derived from a two-axis matrix. This scale provides the ability to report ALE results using qualitative labels already familiar to your organization and aligned to the maximum extent possible to the accepted thresholds. There are debates out there regarding which single statistic best represents the results of an analysis. 
For the purpose of this illustration, we'll use the average ALE. So using this scale, if the average ALE resulting from your Monte Carlo simulations is $120,000, you get to report that risk as low, using the appropriate shade of green. If the average ALE is computed to be $1.9 million, you get to report that risk as moderate and yellow, and so on. As I mentioned, there are options for which single statistic to extract from the distribution to represent the risk. You may opt to go with the most likely ALE, in which case you'll need to decide on the approach you'll take to calculate it, or the median ALE, or a percentile like the 90th or 95th. The important thing is that you remain consistent. Don't use the average sometimes and the 90th percentile other times; that'll prevent you from comparing apples to apples. I've seen security consumers request cherry-picking higher percentiles in an effort to show the risk to be higher than the primary statistic. Remember, the goal here is to distill a 10,000-value distribution into a single statistic. Again, whichever you choose, just be consistent. This approach doesn't stop you from providing additional statistics and graphs, such as loss exceedance curves, to the appropriate audience. This suggestion, though, gets you to the table reporting your analysis results in the vernacular of the assessment results without compromising the integrity of the analysis process. You may find analysis results consistently come in far lower than the assessment results. That's not unexpected and ideally will raise questions you'll be prepared to answer by walking the decision maker through the Open FAIR process and highlighting the differences between it and the assessment process likely followed to evaluate the other risks. Always using the same statistic, whichever you decide to go with, also facilitates the recommendation mentioned earlier of focusing on relative risk over absolute risk. 
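The single-statistic-plus-scale reporting just described can be sketched as follows. The band thresholds are placeholder assumptions chosen so the talk's examples ($120,000 reporting as low, $1.9 million as moderate) fall out correctly; they are not values from any published matrix.

```python
import statistics

# Assumed single-axis scale: (upper bound $, label, color). Illustrative only.
BANDS = [(1_000_000, "low", "green"),
         (5_000_000, "moderate", "yellow"),
         (float("inf"), "high", "red")]

def ale_statistic(ales, method="mean"):
    """Extract ONE statistic from the ALE distribution; never mix methods."""
    if method == "mean":
        return statistics.mean(ales)
    if method == "median":
        return statistics.median(ales)
    if method == "p90":
        return statistics.quantiles(ales, n=10)[-1]  # 90th percentile
    raise ValueError(f"unknown method: {method}")

def qualitative_label(ale):
    """Map an ALE statistic onto the qualitative reporting scale."""
    for upper, label, color in BANDS:
        if ale < upper:
            return label, color
```

Pinning the choice of statistic behind one function is one way to enforce the consistency rule: every report calls the same method, so risks stay comparable across analyses and reporting periods.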
Instead of focusing on the absolute risk (one scenario with an average ALE of $20,000, another scenario with an average ALE of $200,000), focus on the rank-ordered list indicating one scenario presents 10 times the exposure of the other. Once you've completed the survey analyses required to get the broad view of your risk landscape, you can leverage the additional statistics from the analyses to evaluate response options, calculate ROI, NPV, and IRR, and prioritize your response. I will start with a warning: you will need to apply judgment when replacing qualitative estimation guidance with defensible numeric ranges. Many of these matrices weren't created mathematically, so distilling them to a single scale will require decisions to be made where ranges overlap. The first step will be to derive usable values from the descriptions provided in the matrix's rows and columns. Using the example matrix provided, we took the lower bound of each impact's range and multiplied it by the likelihood of occurrence. This gives us an idea of the risk exposure that exists in each qualitative band and shows where there's overlap. For example, $100,000 exists in both the moderate band and the low band. Using the matrix could result in a $250,000 risk being treated as low while a $100,000 risk is treated as moderate. This is a peek into why these types of risk matrices, even when numbers are present, aren't mathematically sound for quantitative work. But that's a topic for another talk. Our objective here is to align our results to the maximum extent possible with the existing reporting style to help bridge the communications gap. We see again here where we had to make a few judgment calls regarding where to establish the minimums for our bands. The purple circles indicate our choices and result in the single-axis scale we'll use to report the results of our risk analysis as low, moderate, or high, or green, yellow, or red, et cetera. 
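The lower-bound-times-likelihood derivation described above can be sketched like this. The matrix cells and their labels are invented for illustration (substitute your organization's actual matrix); they are chosen to reproduce the kind of overlap just mentioned, where a $250,000 cell carries a low label while a $100,000 cell carries a moderate one.

```python
# Hypothetical matrix cells: (impact-range lower bound $, annual likelihood, label).
matrix_cells = [
    (50_000,    0.5, "low"),       # exposure 25,000
    (50_000,    5.0, "low"),       # exposure 250,000
    (100_000,   1.0, "moderate"),  # exposure 100,000
    (1_000_000, 0.5, "moderate"),  # exposure 500,000
    (100_000,   5.0, "high"),      # exposure 500,000
    (1_000_000, 5.0, "high"),      # exposure 5,000,000
]

# Exposure per cell = impact lower bound x likelihood; collect per-label ranges.
band_range = {}
for impact, likelihood, label in matrix_cells:
    exposure = impact * likelihood
    lo, hi = band_range.get(label, (exposure, exposure))
    band_range[label] = (min(lo, exposure), max(hi, exposure))

# band_range now exposes the overlap: "low" reaches $250,000 while "moderate"
# starts at $100,000. The judgment calls (the purple circles in the example)
# are about picking clean breakpoints to collapse this into one scale.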
I recommend always including your scale in your reports. That may lead to conversations regarding why you chose those breakpoints. Welcome those conversations and walk the analysis consumer through your process. The important thing is to solidify the scale and then use it consistently. You don't want a risk going from moderate to low simply because the thresholds changed between reports, unless that was a conscious decision made as a result of an adjustment to the company's risk tolerance. So, for example, the organization grows and decides a $250,000 exposure is now low risk. The risk didn't necessarily change; the average ALE of $250,000 remained the same, but the qualitative value of what that exposure represents to the organization has changed. Decisions like that are indicative of a more mature risk management function. So to recap, my proposed solutions for bridging the communications gap between security and risk are: one, hire risk analysts steeped in actuarial risk and decision disciplines, or train existing staff. Experience, skills, or training in cyber and information security do not automatically transfer into effective or mature risk analysis or management. One of the key competencies needed by a risk analyst is communication. The ability to bridge the communications gap between risk and security will be somewhat dependent on the analyst's ability to be conversant in everything from technical architecture to finance. They need not be a cybersecurity expert; that's what the subject matter expert interviews and elicitation are for. Number two, the analyst will need the ability to assess the humans they will encounter and adjust their message accordingly. To be clear, I'm not saying they need to dumb things down. Everyone brings different skills to the table. The skills needed to secure a network are not the same as the skills required to conduct and report the results of a quantitative risk analysis. 
And number three, align with existing reporting scales whenever possible. If risk is typically reported as low, moderate, or high, or using traffic light colors, don't show up with a report that only contains loss exceedance curves and percentiles. Report risk using the common terms, even though you got to the results differently.