John Linford is the Forum Director for The Open Group Security Forum and Open Trusted Technology Forum. He is Open FAIR certified, and he was the lead author of the Open FAIR Risk Analysis Process Guide. In this presentation, Eva and John will provide a high-level overview of the Open FAIR taxonomy and framework and highlight the updates to the Open FAIR standard documents. So, a warm welcome, Eva, and John, welcome to you too. And who's going first? John's doing the driving. Right, I will leave you to it.

Great, thank you everyone very much for joining us today. I am John Linford, so this is the voice you'll be hearing from me, and I think we've already heard Eva pipe up as well. Our presentation today is going to focus on providing you with a high-level overview of the Open FAIR Body of Knowledge, which is a quantitative risk analysis standard, and we'll also walk you through some of the updates that we're in the process of making to that body of knowledge.

It probably helps, though, to introduce what Open FAIR actually is. We've been talking about it, and you've heard about it now from several different people. FAIR stands for Factor Analysis of Information Risk. It does not necessarily have to apply only to information risk, but the standard itself is focused on information risk. The Open FAIR Body of Knowledge comprises The Open Group Risk Taxonomy standard, O-RT, which is currently on version 2, and The Open Group Risk Analysis standard, O-RA, which is currently on version 1. We are in the process of updating these to O-RT version 3 and O-RA version 2. Among these changes, we have separated out the example that was present across both of those documents: we will be developing the example, in the level of detail needed to actually understand how to walk through a risk analysis scenario, as a separate publication to come out in the near future. That will be a risk analysis example guide, which will actually provide guidance rather than trying to walk through an example as part of the standard itself. Once we have O-RT version 3 and O-RA version 2 out and published, we'll next look to make sure that the conformance requirements are updated to reflect changes to the standards. We aren't focusing on large changes to the standards; rather, we've been focused very much on refining the concepts within them. Once we've got our conformance requirements up to date, then of course we'll make sure that our exam materials and study guide follow those changes as well.

So why use Open FAIR? Well, it is a framework and taxonomy for understanding, analyzing, and measuring information risk. As I've already said, it isn't necessarily specific to information risk; it is widely applicable and can be used to measure and analyze many different types of risk. Crucially, within Open FAIR, risk is defined as the probable frequency and probable magnitude of future loss. This is an important distinction: Open FAIR focuses on the loss side of risk, so when we're talking about risk, we're focused on loss to the organization. What this means is that organizations can discuss risk consistently, whether that be analysts within the same organization trying to figure out the amount of risk they're attempting to mitigate, or discussions between organizations.
By putting things in a frame of units of currency per time period, probable frequency and probable magnitude, you also get to avoid some of the confusion about what a "high" risk actually means. Depending on your own personal preferences and views of risk, you could have a very different understanding of what a high risk is. Instead, this allows you to present things as, say, potentially five different losses of three million dollars each in a single year, which allows you to work much more actively to mitigate those risks. Over to you, Eva.

OK, so following up on what John was talking about: we have added some guidance from ISO Guide 73 to look at the overall risk management process, and then broken that down in the Open FAIR standard to look at the specific activities that need to take place in each of those areas. For risk identification, of course, we need to identify and characterize the assets, threats, and controls, and look at the impact and loss elements that we're going to be analyzing. The risk analysis phase is where we actually do the deep dive: once we have done that identification, we apply the quantitative techniques to build an accurate risk model, which gives us the meaningful measurements we're going to use. If we have different scenarios, those measurements, together with the quantitative guidance we use to estimate the losses, provide effective comparisons between the scenarios, and these lead to well-informed decisions. If we look back at the risk management stack, that would be the risk evaluation phase, where we're actually saying: OK, now that I have done this risk analysis, I need to look at my results and see how they're going to guide my decision-making process. Those well-informed decisions basically help management make the most effective decisions they can based on this detailed analysis, which of course then helps with the risk treatment that is going to take place and with the ongoing risk monitoring. So if we can go on to the next slide, John.

What we do here is use a top-down approach, and you already saw this little diagram (I think Mike had it on one of his slides), where we actually look at risk by breaking it down into the various components that can be measured. In the document, we have added some tables and detailed definitions of what the events are and what units we're using for each of the different nodes in this tree. Loss event frequency can be either events per unit time or the probability of a single loss event in a given time frame. Threat event frequency would be events per unit time as well. For vulnerability, you have the probability that threat capability is greater than resistance strength. And by the way, in the standard we understand that some people prefer to use the word susceptibility instead of vulnerability, so we've actually added it as a synonym rather than replacing the word vulnerability. A lot of this clarification has been made in the standard to help people use the taxonomy, and to clarify the taxonomy as well.
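To make those units concrete, here is a minimal sketch in Python of the frequency side of that tree, computed with point values. All of the numbers here are hypothetical illustrations, not values from the standard, and a real Open FAIR analysis would typically use calibrated ranges or distributions rather than single points.

```python
# Frequency side of the Open FAIR tree, with units in comments.
# Hypothetical point values for illustration only.
contact_frequency = 52.0      # contacts per year (events per unit time)
probability_of_action = 0.10  # probability a contact becomes a threat event
vulnerability = 0.25          # P(threat capability > resistance strength)

threat_event_frequency = contact_frequency * probability_of_action  # events/year
loss_event_frequency = threat_event_frequency * vulnerability       # loss events/year

print(threat_event_frequency)  # 5.2 threat events per year
print(loss_event_frequency)    # 1.3 expected loss events per year
```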
Actually, before I go on to the next slide: the other thing is that once we have done that analysis, we want to make sure that those measurements and all of our assumptions are documented. Before we present that information to the person requesting the analysis, we need to make sure that everything we have done is actually well documented. If you go and look at the Process Guide, it deals with the thought process that goes into that and how to go about doing it. So, John, go ahead to the next one.

If we look at the left-hand side of the previous tree, we have probability of action, which can be looked at in terms of its various components. Basically, the threat actor is looking to see whether or not they're actually going to have the opportunity to do something: what would be the risk of detection or capture? What would be the level of effort for the threat agent? And what is the value of the asset that is being acted upon? For contact frequency, we can of course have intentional contact, where somebody actually has the malicious intent to act upon an asset. It could be random, or it could be regular, something that happens routinely, where you have to look at what controls to put in place because you know that every evening at six o'clock the janitors come in and get access to the offices, and you need to have the proper controls in place, like cameras.

If you look at the other side, threat capability: attackers are going to have varying capabilities. You have the nation states that know what they want, and you have somebody who might just be opportunistic; the opportunity is there and they take it. That means your controls have to be able to resist whatever the attackers have in mind, and that's where you get into the concept of resistance strength. Vulnerability does not arise if you have resistance strength that is going to impede that actor. So go on to the next one.

OK, so if you look at the right-hand side, we have the loss magnitude. If you have a primary loss, you can always have a secondary loss as a potential, but it doesn't necessarily mean that you're always going to have a secondary loss. Once that primary loss has taken place and you look at the various forms of loss, then potentially some of those could lead to reactions from secondary stakeholders, and that's where you would get into secondary losses. Fines and judgments could very well come in as secondary losses, if people decide to sue an organization because it wasn't taking good care of their data. Basically, those forms of loss are applicable to either the primary or the secondary side, but some of them may be more applicable in the case of primary losses and some more applicable in the case of secondary losses. So, John, the next one would be yours.
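As a rough illustration of that primary versus secondary distinction, the sketch below samples a secondary loss only conditionally on a primary loss having occurred. The distributions and probabilities are invented purely for illustration and do not come from the standard.

```python
import random

random.seed(7)

def simulate_loss_event(p_secondary: float = 0.3) -> float:
    """One loss event: a primary loss always occurs; a secondary loss
    (e.g., fines and judgments from secondary-stakeholder reactions)
    occurs only with some conditional probability."""
    primary_loss = random.uniform(20_000, 60_000)   # hypothetical range, currency
    secondary_loss = 0.0
    if random.random() < p_secondary:               # conditional on the primary loss
        secondary_loss = random.uniform(50_000, 250_000)
    return primary_loss + secondary_loss            # magnitudes are additive

losses = [simulate_loss_event() for _ in range(10_000)]
print(f"average loss magnitude: {sum(losses) / len(losses):,.0f}")
```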
Back to me. Within the Open FAIR risk taxonomy, that risk tree, then, pulling everything back together, we've got various different categories of controls to help prevent loss. We can see the controls on the left-hand side that affect our loss event frequency; these are our loss prevention controls, so they will work either to reduce loss event frequency or to prevent losses from occurring altogether. For instance, we've got our avoidance controls, which work to reduce our contact frequency. A simple example of this would be something like a locked door: if your threat agent can no longer get through that door to the asset, then they can't contact it and attempt to attack it. We also have things like our deterrent controls, which work to reduce the probability of a threat agent acting against that asset. For instance, we might put a security camera above that door; if your threat agent is planning to enter that door but sees the camera, they may change their mind, because now they've got an increased risk of detection or capture. We also have our vulnerability controls, which typically work to increase the resistance strength of the asset. Another simple example there might be something like encrypting your data. On the other side of things, we have our loss mitigation controls, which work to reduce the magnitude of a loss once it has occurred. These are our responsive controls, so they will work either to reduce the magnitude of losses from that threat event, your threat agent attacking the asset, directly reducing that primary loss magnitude, or to prevent secondary losses from occurring once that primary loss has happened.

One of the other areas we've added to the Open FAIR Body of Knowledge is incorporating the five NIST CSF functions. Now, you've seen this image before in Mike Jerbic's presentation, and throughout O-RA we actually work to build this image bit by bit, so you don't just get all of it thrown at you in one go. Instead, we walk through how you get into the loss scenario from the beginning, then look at our loss event frequency side of things, from our contact event all the way to the loss event, before turning and adding our loss magnitude side of things for our different forms of loss, whether that be productivity, replacement, or response losses, those sorts of things. This allows us to show how these five functions, which are the five primary pillars of a successful and holistic cybersecurity program, work particularly well with Open FAIR. We can see that we've got the different aspects of the five functions scattered throughout our loss scenario, all the way from the contact event to the end of our secondary loss event, where we are attempting to respond to and recover from those losses. This allows you to model what's happening throughout the entire loss event, documenting your rationale and assumptions throughout the process, so that when you do eventually present your findings to the decision maker, they're going to be able to look and see: OK, if we institute a different responsive control here, we can effectively compare how much our loss will be reduced or mitigated. Or it could be to make a decision, for instance, to take out insurance and transfer that risk to the insurance provider. Back to you, Eva.
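To summarize how those control categories line up with the factors they act on, here is a small sketch. The category-to-factor mapping follows the talk; the baseline values and reduction multipliers are invented purely for illustration.

```python
# Baseline factor values (hypothetical) and the factor each control
# category acts on. Loss prevention controls sit on the frequency side;
# loss mitigation controls sit on the magnitude side.
factors = {
    "contact_frequency": 52.0,     # contacts per year
    "probability_of_action": 0.10,
    "vulnerability": 0.25,         # P(threat capability > resistance strength)
    "loss_magnitude": 70_000.0,    # currency per loss event
}

# control category -> (factor acted on, illustrative reduction multiplier)
controls = {
    "avoidance (locked door)": ("contact_frequency", 0.5),
    "deterrent (security camera)": ("probability_of_action", 0.7),
    "vulnerability control (encryption)": ("vulnerability", 0.6),
    "loss mitigation (incident response)": ("loss_magnitude", 0.8),
}

def annualized_risk(f: dict) -> float:
    loss_event_frequency = (f["contact_frequency"]
                            * f["probability_of_action"]
                            * f["vulnerability"])
    return loss_event_frequency * f["loss_magnitude"]  # currency per year

print(f"before controls: {annualized_risk(factors):,.0f}/year")
for name, (factor, multiplier) in controls.items():
    factors[factor] *= multiplier
    print(f"after {name}: {annualized_risk(factors):,.0f}/year")
```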
OK, so one of the things that we did in the standard is actually try to describe some of the various steps in terms of formulas. The first one is fairly obvious: the loss event frequency is going to be less than or equal to the threat event frequency, which in turn is less than or equal to the contact frequency. The second one is a little bit trickier, in that you're basically looking after the fact: OK, there was a loss event, given the threat event, so what vulnerability was actually responsible for that? It is looked at as a conditional probability: the probability of a loss event given a threat event. The next one, vulnerability: you're not going to have a vulnerability unless your threat capability is greater than your resistance strength. You're going to have it or you're not going to have it, so that one is basically pretty obvious. Then loss magnitude: if you have a primary loss magnitude and it leads to secondary loss magnitudes, that's of course going to be additive; you add the primary loss magnitude and the secondary loss magnitude. And the last one: the secondary loss event frequency is the probability of having a secondary loss given a primary loss. Basically, you need to have that primary loss before you can start looking at secondary losses. Over to you, John.

Great, thank you, Eva. A couple of other important things to add about the Open FAIR Body of Knowledge. The Open FAIR Body of Knowledge is specific to risk analysis. One of the core ideas behind this is that, as a risk analyst, you're presented with a loss scenario and then asked to determine "what is my level of risk" for some decision maker. Defining the loss scenario is therefore the critical aspect. In order to actually analyze that loss event, you need to know what your asset is: you need to be able to identify the thing that is going to suffer a loss. You also then need to be able to identify what that loss is. If you're unable to actually observe that loss event, then it's going to be pretty difficult to find any sort of information about how often it happens, or about the magnitude of costs to your primary stakeholder, the organization or individual that owns the asset and feels the loss when the event occurs.

Another interesting thing is that when we do have that primary loss event, it may or may not result in a secondary loss, and that could then be compounded or built upon depending on the reactions from those secondary stakeholders. So we do provide some additional clarification within the documents that we can think of our secondary stakeholders almost as secondary threat agents; in other words, their actions after the primary loss can cause additional losses to that primary stakeholder. Now, although we don't necessarily have canned questions or checklists available to help guide the analyst, our Risk Analysis Process Guide does walk through the process: being given some loss scenario, identifying the asset and primary stakeholder, figuring out what that loss event is, and then going about finding data on frequencies, whether that be contact, threat event, or loss event frequency, as well as on your loss magnitude side of things, including some guidance on where you might go to look for information about those loss magnitudes.

One of the other critical things that we push in the standard is that a risk analysis should utilize objective measurements as much as possible. We want to rely on data rather than the opinions of somebody potentially random within the organization. So if you're looking at losses from laptops being stolen, you'd look for data on how many laptops have actually been lost over the course of a given amount of time, rather than asking somebody within the organization, "Do you think we lose a lot of laptops in a year?"
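Pulling Eva's relationships and John's data-driven point together, here is a sketch that derives frequency from observed counts rather than opinion, and estimates vulnerability as the probability that threat capability exceeds resistance strength. Every distribution and number in it is invented for illustration; this is not the standard's prescribed method, just one way to encode those relationships.

```python
import random

random.seed(42)
N = 100_000

# Frequency estimated from observed data, per the stolen-laptop example:
# say 12 laptop-theft threat events were recorded over 4 years.
threat_event_frequency = 12 / 4          # events per year

# Vulnerability = P(threat capability > resistance strength), estimated
# by sampling both as scores on an arbitrary 0-100 scale.
hits = sum(random.gauss(50, 15) > random.gauss(60, 10) for _ in range(N))
vulnerability = hits / N

loss_event_frequency = threat_event_frequency * vulnerability
assert loss_event_frequency <= threat_event_frequency    # LEF <= TEF (<= CF)

primary_loss = 3_000.0     # currency per lost laptop
p_secondary = 0.05         # secondary loss event frequency, given a primary loss
secondary_loss = 50_000.0  # e.g., fines if data on the laptop was exposed
loss_magnitude = primary_loss + p_secondary * secondary_loss   # additive

print(f"vulnerability ~ {vulnerability:.2f}")
print(f"annualized risk ~ {loss_event_frequency * loss_magnitude:,.0f} per year")
```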
All of this comes together, then, to provide that final risk value in units of currency per time period, in whichever currency applies where your organization is based. You can then bring those numbers back to your decision makers and allow them to make decisions using them, rather than just saying, yes, this looks like a high-loss scenario, or no, this looks like a low-risk scenario. This means your decision makers can use those numbers to create a scale of what they consider to be high, medium, or low risk, based on units of currency. So you might say that a loss between zero and five million dollars is considered low risk, but a loss of more than 50 million dollars would be considered high risk. And of course, that allows your decision makers to consider the actual costs and benefits of implementing additional controls or potentially, as I mentioned earlier, transferring risk through insurance. Any last thoughts from you, Eva? I don't think so; I think you wrapped up everything pretty well. In that case, I'm going to thank you both very much. That was great, and you even did a nice job of touching on some of the questions that had come in as well.