Hello everyone and welcome to this morning's webcast, Applying Open FAIR to Analyze Risk in a Retail Environment. I'm Jim Hietala with The Open Group. I'll be hosting the event today, and I'll be talking very briefly at the start, giving an introduction to Open FAIR, the Risk Taxonomy standard and the Risk Analysis standard. I'll be followed by Jim May and Bill Estrem from Metaplexity, who will be talking about using Open FAIR to analyze risk in a retail environment. The material that we're going to be covering today in the analysis piece is based upon two standards that were introduced by The Open Group. The first is the Risk Taxonomy standard, which you see a graphic of here. This is a standard that we published four years ago now, with a revision last year, and it looks at how you decompose risk into its constituent parts, looking at loss event frequency and probable loss magnitude. So I'd encourage you, if you've not seen the Risk Taxonomy standard, to get a copy of it from our website; it's free to download from The Open Group publications page. The companion standard to the taxonomy is called the Risk Analysis standard, and it adds a set of best practices around doing a risk analysis to complement what's in the taxonomy. It covers such things as how to do a FAIR-based risk analysis, measurement and calibration, the risk analysis process, and some basic control considerations. So those two pieces really comprise the body of knowledge for The Open Group certification program around this, which is called the Open FAIR Certification Program. Very briefly: it's a certification based on your knowledge of FAIR-based risk analysis, and it requires that you take and pass an exam. We have a number of trainers who are in the midst of accrediting their training courses. It's suggested that you take a course, but it's not required that you do; you can learn the material just by studying the two standards, and that's sufficient to pass the exam.
So that's the Open FAIR certification. Currently there's just the foundation level, and we are working on a more advanced level for the certification that we'll probably introduce late this year. And just a little bit about the exam: it is available at Prometric test centers. It's an 80-question test that you have 120 minutes to take; it is a supervised exam with a pass mark of 70%. And then finally, some links to the things I just talked about: the Risk Taxonomy standard is available at that URL, the Risk Analysis standard is there as well, and then we have a couple of web pages on the risk certification program if you're looking for more information on those. All right, well, welcome to the webinar. So the objective of this presentation today is really to show how the Open FAIR risk analysis standard can be used to analyze information risk. And I think we've picked a fairly timely topic here, considering some of the things that are going on these days in the retail environment with credit card data capture. So we're using a scenario that will talk about some of the issues related to that type of threat. We'll take a look at a realistic threat scenario that's based on some information that's been published on the web. We've come up with our own version of that scenario, and we'll analyze the risk factors that are involved. We'll also briefly mention some things about the Open FAIR certification program, but I think Jim already covered it pretty well. So, my name is Bill Estrem. I'm the president of Metaplexity. I've been involved with The Open Group since 1994, when it was first started. I've been working heavily with the Architecture Forum on the development of TOGAF and, more recently, ArchiMate and FAIR. And I'm a past chair of the Architecture Forum, which is sort of an Open Group governing board. Metaplexity is a company that focuses on training and consulting services related to enterprise architecture.
Well, I divide my time between Windsor Software and Metaplexity Associates. I'm one of Bill's guys who does some of the training. We recently got into a project with the folks at The Open Group to put together the curriculum for the FAIR certification training activities, and part of what we're doing today is based upon the material that we submitted and The Open Group is publishing. On the Windsor Software side of things, we also recently put together a book called Building Enterprise Architecture, which is a series of approaches to maximizing the use of architecture in the organization, because we like TOGAF over on the Windsor Software side of the world. So we're going to be talking here about the Open FAIR standard and how it compares to some of the other risk management approaches out there, with a lot of similarities and some differences. Basically, when we take a look at most risk management frameworks that are out there, and there are several, we find that they try to mitigate risk by using techniques like risk classification and identification. We typically talk about the initial or apparent risk assessment, where you're looking at the baseline, pre-intervention risks, and then the risk mitigations and resulting residual risks that need to be examined. And then, once we get the system under control, you can move into a monitoring and governance approach. So there are two different levels of risk, as I just mentioned: the apparent risk, or initial risk as some frameworks call it, which is basically the categorization you have when you're just beginning the analysis and looking at the factors that are currently there, prior to any mitigating actions. Then, after you've implemented the mitigating actions, any risks that remain to be managed are basically called the residual risk.
So a typical process that's followed is often this set of steps here, which is very similar to the steps listed in the TOGAF enterprise architecture framework for risk management, but you can look at project management frameworks and other frameworks that pretty much follow the same techniques. The first thing we try to do is classify the risks according to the nature of the risks. Then we identify a specific risk, and we perform an initial risk assessment, where we try to figure out what the apparent risks are that we're facing. Then we try to come up with some strategies, some mitigations, and then we do the residual risk assessment and go into a risk monitoring mode, similar to what we just described a moment ago. Go ahead. And we'll come up with some kind of a model that allows us to classify the severity of the risks that we're facing. So if we have risks that are frequent and high-magnitude, the catastrophic events, then I would say that's an extremely high risk, all the way down to risks that have negligible impact or effect and are unlikely to occur, which would be considered low risk. Each company can really determine their own risk framework in terms of how they classify different events and what level they assign different things, in terms of catastrophic or high levels, things like that. And then you can have some kind of a table that you can use to keep track of risks. So here we have a credit card data loss, and the effect, obviously, for a company and for its customers would be critical. Right now we seem to be in a vulnerable period where this threat is arising with fairly frequent events, and the impact is extremely high. So the mitigation would be to implement a set of security practices, and by doing so we would hope to drive down the frequency and hopefully also reduce the effect to a marginal level, which would then make it a lower risk or a low risk. That would be the goal.
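A classification table like the one just described can be sketched as a simple lookup. This is a minimal illustration with assumed level names and score thresholds; it is not part of any standard, and each company would tune its own version:

```python
# Illustrative risk-classification matrix: combines an ordinal frequency
# rating with an ordinal impact rating to get a risk level. The level
# names and the score-to-level mapping are assumptions for illustration.
FREQUENCY = ["unlikely", "occasional", "frequent"]
IMPACT = ["negligible", "marginal", "critical", "catastrophic"]

def classify_risk(frequency: str, impact: str) -> str:
    """Map (frequency, impact) ratings to a risk level."""
    score = FREQUENCY.index(frequency) + IMPACT.index(impact)
    levels = ["low", "low", "moderate", "high",
              "extremely high", "extremely high"]
    return levels[score]

# Pre-mitigation: frequent, critical credit card data loss
print(classify_risk("frequent", "critical"))   # extremely high
# Post-mitigation goal: unlikely, marginal
print(classify_risk("unlikely", "marginal"))   # low
```

The mitigation goal in the transcript maps directly onto this table: drive the frequency rating down and the impact rating toward marginal, and the derived level drops to low.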
So basically, once these residual risks have been identified, they still need to be managed. Some frameworks, for example TOGAF, recommend that the governing group for your enterprise architecture or IT governance organization should take on those residual risks, approve them, and make sure that they are being properly managed. And the mitigations should be carefully monitored to make sure that the enterprise is dealing with the residual risks, rather than only fixing the initial, apparent risks. So the Open FAIR standard, as Jim already pointed out, is composed of two parts. We've got the Risk Analysis standard, which gives you a method for doing risk analysis, and then the Risk Taxonomy, which defines the terms and provides a structural model for the risk analysis framework. Together, those two standards are the body of knowledge we call Open FAIR. And as you look, you can see the different areas we're trying to manage here. We've got various types of threats that are out there, several of which we talk about through the taxonomy standard. We have assets that we're trying to protect in order to minimize losses, and we have a set of controls that we can implement that will help us do that. So again, the Open FAIR standard defines risk as the probable frequency and magnitude of future loss. We're dealing here with probability, not just possibility. It's possible for just about anything to happen, but here we're talking about a numeric measurement of the probability of an event occurring. Nearly anything in life is possible, but we try to make it so that we can actually do a quantitative analysis of the probability, rather than just the possibility. So in the Open FAIR standard, we follow basically a four-stage approach. The first stage is to identify the scenario components, and you'll see the model for this; actually, Jim already showed you a basic model of the Open FAIR taxonomy.
So you'll see how we populate that. The two major components of the taxonomy are the loss event frequency, which is the stage two analysis, and the loss magnitude, which we evaluate in stage three. Then, based on those two factors, we can derive and articulate the risk in the final stage. This can be repeated for several different risk scenarios: you don't analyze just one threat. In a typical risk analysis, you might actually analyze multiple threat scenarios to try to get some kind of an aggregate idea of the amount of risk that you're facing. What we're about to walk through is a condensed version of the case study that we use in the training that we do. Our case study is called the Unfaithful Contractor. This would normally be woven in, in segments, through the training, which is a two-day course. We base this on the overall taxonomy, and because it's a hypothetical, we have the opportunity to create estimates for all the main factors at the lowest level. You see them along the bottom here: contact frequency, probability of action, threat capability, resistance strength, asset loss factors, threat loss factors, and organizational and external loss factors. We populate a model and we show how to populate it and do the estimates in it. Today we'll do a slightly abbreviated version of that. And we'll be open to questions as we go along; post your questions, and we'll try to get to as many as we can when we're done. So in this case study example, we're starting out with a real threat, with some ability to estimate loss event frequency, and a semi-real, simulated loss, with some possibility of estimating loss magnitude. It all starts out with an inquiry from management: Dan Johnson III, the CEO and owner, calls and asks whether it is possible that the store systems can be hacked like ATM machines.
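The four-stage structure and the lowest-level factors just listed can be captured in a small data model. This is our own illustrative layout, not a structure defined by the standard; the Estimate fields mirror the minimum / most likely / maximum / confidence form used throughout the analysis:

```python
from dataclasses import dataclass

@dataclass
class Estimate:
    """A calibrated estimate in min / most-likely / max form,
    with a stated degree of confidence."""
    minimum: float
    most_likely: float
    maximum: float
    confidence: str  # e.g. "low", "medium", "high"

@dataclass
class Scenario:
    # Stage 1: identify scenario components
    asset: str
    threat_community: str
    loss_event: str
    # Stage 2: loss event frequency factors
    contact_frequency: Estimate
    probability_of_action: Estimate
    threat_capability: Estimate
    resistance_strength: Estimate
    # Stage 3: loss magnitude factors (condensed to two fields here)
    primary_loss: Estimate
    secondary_loss: Estimate
```

Stage four then derives and articulates the risk from the stage two and stage three fields; nothing in this sketch computes, it just keeps each estimate and its confidence together, the way the case study keeps numbers and rationale together.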
Of course, you think: hacked? You don't think so. But before you reply, you ask, well, why the concern? You're the risk manager in this organization in this scenario, so you get to come up with answers for stuff like this for Dan, who is, of course, the CEO. He says he saw a speaker at the civic club's monthly luncheon talking about security. He brought it up, it sounded serious: can you look into it? As the risk manager, obviously, we know the difference between the thing on the left and the thing on the right. He said something about an ATM, but the nearest thing to an ATM in the store would be a device like this. So you do a little bit of research. And again, because he's the CEO, you're probably gonna have to do a little bit more research just to be a good guy; you'd like to help him out a little bit, maybe even help him understand the difference. Well, along the way, you get to Krebs on Security and you see this. This is straight off Krebs on Security; it got posted on December 2nd, and we've got Brian Krebs' permission to use it as part of our training. What looks like a little bit of cell phone video shows a shell being removed from the top of a very commonly used card reader. So we might have a threat. Now what? Mr. Johnson always asks two questions: what are the chances this might affect us, and what's it gonna cost us if it does? In stage one, when we set up this scenario, we try to figure out all the factors that might possibly be interrogated in an analysis like this, and it can include quite a wide variety of different areas of inquiry. The store is a group of stores; there are about 40 stores in the chain. They're regional: the All for $5 stores. They have stores, store staff, management, point of sale systems, things like that. The store exists in an environment that is a community of users and competitors and things like that. The store is hoping to open four more new stores in the next couple of years.
It's all privately owned. And so we have some factors in there that might be worth considering in the event a loss occurred. In stage one, the outputs that we're gonna create will include the asset at risk; we need to identify it and say what it is. We can look at risk, but the risk ends up becoming this completely blurry problem until you get a focus on the asset; then you can start taking a look at the things connected to that asset, related to that asset, associated with that asset. We also look at the threat community: who might actually be this threat, who are the people involved? And then the loss event: what does it look like? So in the case of the scenario we put together here, we said, well, we've got some facts. The theft of credit card information by the point of sale skimmer, we're gonna say in this case, is done by contracted service personnel doing the work: they're placing and removing it. They come into the store, they do their regular maintenance on the equipment, and maybe they drop the shell on top of a reader once and then come back 30 days later in their normal maintenance cycle and remove it. In and out. The data has to be harvested: since you're removing the device, you're gonna have to harvest the data, and you're gonna have to do something to complete an exploit. And some feeds and speeds: maybe 140 customers a day per checkout using cards as their transaction choice. The cards are a mix of credit and debit cards, which are commonly understood, and then what we call electronic benefit transfer (EBT) cards. That's the United States form of card where people get a direct payment for social programs from the federal government through the card, and that has extra regulatory requirements in order to be able to accept it. So there are some interesting possibilities here. In terms of the outputs at stage one, there's a lot more detail that would normally go into this, but we're condensing it for time's sake.
We say the asset is the point of sale system that that shell sits upon. Of course, all of the elements in this scenario radiate from a certain point, and this is probably the furthest point out that the company actually owns. There are other assets involved: there's the credit card number, which is an asset held by the bank that does the clearing of the transactions, and you have a relationship with the credit card clearing organizations and obligations related to that. But the one thing you own is the POS system, so that's just an arbitrary choice. The loss event is theft with exploitation of credit card data by POS skimmer. Theft with exploitation means that something actually has to happen with someone's credit card number; simple theft may or may not be the fullest extent of the loss. So what we want to be able to do is come up with and evaluate a scenario that has a fairly small exploitation in it. The threat community would be an extended chain of persons, beginning with the contracted point of sale maintenance staff, all the way through the people who exploit the fraud. If they're not showing up for work and doing their part of it, then the whole exploit doesn't occur. So we have to develop some understanding, or have some estimate, of what their capabilities are as well; otherwise we don't have a fully realized exploit. Moving forward to stage two, we start with essentially a series of comparisons. We're gonna evaluate a series of factors, and these are probability factors. We ask: what's the probable threat event frequency? That's based upon the contact frequency and the probability of action, which we'll talk about very briefly. We talk about the vulnerability. So the threat event frequency and the vulnerability resolve into a loss event frequency, which you see at the top there. And vulnerability is based upon the capability of the threat: are they strong? Are they weak?
Are they random in their capabilities? And is there enough resistance strength to resist some of these threats? Now, what we're doing here is looking down the left side of the taxonomy; the loss magnitude is grayed out on the right side over there. In stage two, we're trying to come up with a frequency. To reiterate what we said about risk earlier, risk is a two-factor expression. The first factor is the loss event frequency: how often something might happen in terms of a loss event. That's expressed as a frequency, usually with a range from a low to a high and a most likely. Loss magnitude is generally financially denominated, in dollars or euros or whatever you've got; it's expressed as a financial loss with a low, a high, and a most likely figure, and some kind of an expression of confidence. Risk never really gets turned into some amalgam of the numbers that have to do with loss event frequency and loss magnitude; risk is expressed as those two numbers in FAIR. To get to loss event frequency, we make a series of what might be considered paired comparisons. We compare contact frequency and probability of action; we also do a comparison between threat capability and resistance strength to come up with vulnerability. In our shortened version of this, as an example, we look at loss event frequency. Well, how often might a threat make contact with the asset in question? We call that the contact frequency. And then how often might that threat actually take action once contact is made? Threats can very often, in a retail environment, enter the environment and do nothing, or not take action, for a variety of reasons: there may be controls in place, and there may be other reasons why it doesn't seem obvious to the threat that something might be an opportunity to exploit.
So there are a lot of reasons why a threat might not take action. It's this combination of the frequency of the threat arising and the probability that they'll take action that gets resolved into the threat event frequency. Here, in terms of the estimates, we won't do the full threat event frequency and we won't do the probability of action; we'll just do the contact frequency for this one. The dishonest technician is working with a shell; they only have the one in their bag, because that's all they could afford. They got it on eBay, or wherever they get their skimmer shells. There's a 30-day dwell time to capture the card numbers. They work exclusively for the All for $5 stores, so every time their exploit happens, it's gonna happen to that chain. They have normal leave, holidays, and vacations. Now, Open FAIR works with time-bound estimates. So what we're trying to figure out in this one, and you get to choose what your time bound is, but we're gonna say a year, which is typical: how many times in a year would they repeat this threat? So you'd come up with a rationalization, and in this case, to document what we did, here's what we said about our assumptions. The scenario assumes a single dishonest employee with a single card skimmer device. The employee is generally assigned a small group among the 48 All for $5 stores. They work exclusively on this contract. They get time off: leave, vacation, holidays. They may be in the store more often than once a month if other work is being done; they may be in the store more or less often if they're reassigned or scheduling issues come up. And here's where this goes. As we're doing the documentation, we come up with an estimate of contact frequency, and we said: minimum, six times a year. Most likely, about 11 times a year if they go on vacation; they don't like leaving things like that behind over a vacation. Maximum, possibly 14 times a year, and that would probably mean a shorter dwell time in a particular store.
And the confidence: we know they're assigned to the contract because we did a little research. We looked into this to see what the likelihood was; we looked at our contract with the contractor. So we have our rationale, we've written it down, and we've included it. We not only have the numbers, we have a rationale behind the numbers associated with them, and we keep this stuff together. We're beginning to build the case and we're beginning to build the rationale. Probability of action we've skipped over, but: minimum, maybe a 20% shot that the person would take action when they walk into the store with this shell. Most likely, about 70%: they know the store systems, they know where the cameras are, they know the manager's in the back room, and they're in the store, possibly outside of store hours, with no employees watching them. So unless somebody happens to walk up on them, and it's a store employee, they've got a shot at getting it done. Maximum probability, a 90% shot that they'll get that reader shell placed; and of course, we're also looking at the retrieval side of that as well. Confidence: pretty high. We're pretty sure that the guy's gonna have enough eyes on the situation to be able to look around and tell when it's safe to snap something like that in place. As you saw, it was very quick to remove that shell, and that's one of the things we know based upon actually watching the thing being placed or removed. So at stage two, the other side of loss event frequency is vulnerability. We want to compare the threat capability versus the resistance strength to understand vulnerability. We've figured out how often the threats are coming up, and that's how often the vulnerability might arise. So to do that, you ask: what does it take to convert a threat into a loss?
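Given the contact frequency estimate (6 / 11 / 14 per year) and the probability of action estimate (20% / 70% / 90%) above, the threat event frequency can be derived by simple simulation. Triangular distributions are an illustrative stand-in here; Open FAIR doesn't mandate a particular distribution shape:

```python
import random
import statistics

random.seed(7)  # reproducible sketch

def sample_tef(n: int = 100_000) -> list:
    """Threat event frequency = contact frequency x probability of action,
    sampled from the scenario's min / most-likely / max estimates."""
    samples = []
    for _ in range(n):
        cf = random.triangular(6, 14, 11)           # contacts per year
        poa = random.triangular(0.20, 0.90, 0.70)   # chance they act on contact
        samples.append(cf * poa)
    return samples

tef = sample_tef()
print(f"median threat events per year: {statistics.median(tef):.1f}")
```

This is the "paired comparison" mechanically: each trial draws one contact count and one action probability, and the distribution of products is the threat event frequency.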
Threat capability, the actual capability, is more than simply coming in and, as you saw under probability of action, dropping the shell and even retrieving it, because there's a chain of events that needs to happen. We have to think about what that chain of events is, and we need to document it. So the threat capability may not necessarily be as great as the probability of action; that's usually something we'll ask about. So what might prevent a loss in this scenario? That's the resistance strength: the cameras, the people, the possible modifications to the reader. You might be able to put some little bumpy things on the reader, little bedazzle jewels or something, that deform its surface enough that the skimmer shell wouldn't seat on top of it. How likely is it that all the steps will align and this loss will actually occur? That's what we call vulnerability, and it's the comparison of the threat capability against the resistance strength: the strength to resist the capability of the threat. When we document it, we would have a rationalization. In the case of threat capability, the rationalization might be: for the threat to be fully realized, the data must be sold and the consequence must ensue; a chain of actions by a group of actors must run to completion, and that means, again, the capability is lower than the probability of simply taking action. Vulnerability factors are then documented using minimum, most likely, maximum, and a degree of confidence. And we say: you have to sell the data, the consequence must ensue, and so forth. Then the resistance strength: minimum, maybe 20%; most likely, about 60%, again having to do with the fact that the person understands the store operations and has a sense for when to take action, so the resistance strength might be pretty low. In our fuller version of this, we pointed out that the store does a regular rotation of the video that they record in the store.
So the video at the till gets rotated out once a month; the tapes go into backup, and then the tapes are erased and reused, because it's a privately owned organization, so they reuse those tapes. That means you might get the placement on video but not necessarily the retrieval, or you might get the opposite: the retrieval without the placement. So some of the video might be incomplete as well. The maximum resistance strength could hit 80% if the manager's out there jawing with the guy and talking to him during the whole time he's working, or they're sharing a couple cups of coffee and just shooting the breeze; that might actually serve as a resistance capability. So with that, we have a sense of how vulnerable we might be, but now it's time to figure out: vulnerable to what? We understand the frequency with which losses might occur; let's now figure out what the loss magnitude means. Loss magnitude in the Open FAIR standard means two things. It's going to mean something to do with a primary loss, and that has to do with a loss that occurs any time any kind of loss actually occurs; there's always going to be some loss that's likely to occur. Secondary loss doesn't always occur, and we'll talk about that as we get to it in a few more slides. Primary loss, though, is the day-to-day: these are the things that need to be fixed in the event that a loss occurs. And it falls into a series of categories: productivity, response, replacement, fines and judgments that might occur, competitive advantage, and reputation loss. These are the categories that are part of the Open FAIR standard. Your interpretation of them may be a little bit open in terms of how your organization would use them.
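The vulnerability comparison just described, threat capability against resistance strength, can also be estimated by sampling. The resistance strength range (20% / 60% / 80%) comes from the scenario; the threat capability range below is our own assumption for illustration:

```python
import random

random.seed(11)  # reproducible sketch

def estimate_vulnerability(n: int = 100_000) -> float:
    """Vulnerability = probability that a sampled threat capability
    exceeds the sampled resistance strength."""
    hits = 0
    for _ in range(n):
        tcap = random.triangular(0.20, 0.90, 0.60)  # assumed capability range
        rs = random.triangular(0.20, 0.80, 0.60)    # resistance strength, per scenario
        if tcap > rs:
            hits += 1
    return hits / n

print(f"estimated vulnerability: {estimate_vulnerability():.0%}")
```

Multiplying this vulnerability by the threat event frequency derived earlier yields the loss event frequency at the top of the taxonomy.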
But having some kind of a common set of categories like this is useful, because as an analyst, you develop an organization-wide common meaning for some of these things, and it becomes part of your thorough due diligence as you do your analysis. So this is pretty useful stuff. We'll skip to the replacement loss, just as an example. Primary loss magnitude can be substantial if it involves replacement of all the store card scanner devices at $668 a pop. And yes, I did look up the price of that device. This is a small retailer; they don't get volume discounts like the real big-time retailers get. So it's $668 each, and that's a burdened cost: it includes the installation to get it on site. Hopefully you wouldn't use the contractor you're already using, because they might be the problem. But at this amount, it's not wildly expensive. The stores will stock about one and a half to two devices per lane to assure enough redundancy at store load times. And then a lower-cost alternative to replacing the equipment might be possible. So let's say we discovered that a loss happened, and we need to do something in terms of replacement. Again, we might be able to glue fancy colored lumps and jewels and things like that on top of the readers, or an emblem with the store logo, something that deforms the surface enough that the layover shell can't be attached. So there may be some possibilities here instead of having to replace everything. Now, that possibility may evaporate, and I'll talk about that a little bit with the secondary loss. So we've got a series of different kinds of things, and one of the loss factors we had was competitive advantage. Unless a secondary loss occurs, or something larger happens, your competitive advantage with respect to the other organizations you compete with may be relatively stable.
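The replacement figures above can be sanity-checked with quick arithmetic. The $668 burdened unit cost and the 48-store chain come from the scenario; the lane count per store is our own assumption for illustration:

```python
UNIT_COST = 668   # burdened cost per card reader, from the scenario
STORES = 48       # stores in the chain

def replacement_cost(lanes_per_store: int, devices_per_lane: float) -> int:
    """Cost to replace every reader across the chain, including spares."""
    devices = round(STORES * lanes_per_store * devices_per_lane)
    return devices * UNIT_COST

low = replacement_cost(4, 1.5)   # assumed 4 lanes/store, 1.5 devices per lane
high = replacement_cost(4, 2.0)  # stocking 2 devices per lane
print(f"${low:,} to ${high:,}")  # roughly $192k to $257k
```

Under these assumed lane counts, full replacement lands in the low-to-mid six figures, consistent with the roughly quarter-million-dollar maximum cited in the next slide.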
You may not have any issues unless for some reason they discover it, unless for some reason they exploit the fact that something happened, and unless, in fact, somebody actually cares. If you've cleaned this up and nobody's credit card ends up becoming compromised, this may be as close to a non-event as possible. Perhaps you found the shell laying on top of the device and recovered it before any data could be sold, but you don't always know that. So that's where some of these issues come in. So here we go with some of the primary loss magnitude factors. We have a minimum, most likely, and maximum with a degree of confidence. You can see the replacement cost at maximum: you could get to about a quarter of a million dollars in the event that something like this is found. Imagine you're walking through the store and you find one of these funny-looking shells just laying on top of your card reader. How many times has this happened? What do you need to do in terms of a response? This is one of your bigger management questions at this moment, and the rationale about what needs to be done is laid out here. It includes the labor necessary to address the primary loss as it has occurred, and because we don't know exactly how far this loss extends, we might have to do some of these maximum things in order to properly address and remediate the issue. Now we have something else. That was just the primary cleanup; sometimes things get out of hand, and it really depends on the successful sale of data in a subsequent exploit. On average, depending upon the type of transaction, a credit card exploit has one average value; debit cards may have a higher or a different average exploit value; an EBT card might be much lower, depending on who's attempting that step. Things with PINs obviously end up being less likely to be exploited, unless, as you saw with the shell on the scanner, it was capturing PINs, which it was.
But at the very least, the secondary loss event frequency here is estimated to be the overall loss event frequency minus about a 10% chance that you might get lucky and not have a problem. So there's a 90% chance that this is gonna become a secondary loss. Now, with secondary loss, what we're talking about is the people who aren't employees. They are stakeholders in this overall thing, but they're not stakeholders who are employees of the All for $5 stores. They would be the customers; they would be part of that extended stakeholder population that could be affected by this. And that gets pretty big, and because it gets pretty big, the secondary loss can get pretty big. So similar factors exist, but when we start to really expand it, we start to find out that some very substantial losses can occur. In this case, one of the ones that happens in our carefully crafted scenario, because we had the latitude to do so, is a substantial secondary loss that may occur because of reputation damage. The news gets out that the credit cards have been exploited, it gets into the news media, and the store is struggling to address it. They're doing an honest job of trying to remediate this problem, but they've got a problem, and the problem has now become both a political and an image problem. We mentioned earlier that they're gonna open four more stores. The company is privately owned, so, from a reputation standpoint: in America, we would normally look at an organization's market capitalization in terms of their share price and their value to shareholders. The company in this case is privately owned and self-capitalized. They don't have any issues with obtaining capital; they don't have any issues with market value. The primary owners are all family, and they've got no plans for bringing in any outside stakeholders. So there are no creditors to satisfy, no stakeholders and shareholders to worry about.
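The secondary loss event frequency estimate above is a simple discount of the primary loss event frequency; the arithmetic can be sketched as:

```python
def secondary_lef(primary_lef: float, lucky_chance: float = 0.10) -> float:
    """Secondary loss event frequency: the primary loss event frequency
    minus the ~10% chance (per the scenario) of getting lucky, i.e. a
    90% chance each loss event cascades into a secondary loss."""
    return primary_lef * (1.0 - lucky_chance)

# At roughly 8 primary loss events a year, about 7.2 would be
# expected to produce secondary losses.
print(secondary_lef(8.0))
```

The 8-per-year figure used in the example matches the loss event frequency that comes out of the derived results later in the talk.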
But the store is hoping for four new stores in the next year or so, and they're awaiting city council approvals in the locations where they're going to do the store builds; they're going to need approval before construction begins. Now, in America we have this famous movie called Blazing Saddles. One of the funny scenes in it is where Mel Brooks, as the governor of the state, brings in all of his closest advisers and says, "Harrumph, harrumph, we need to take care of this. We need to take care of this. Harrumph." And he looks around, points to one guy, and says, "I didn't get a harrumph out of that guy." Politics gets this way sometimes. And one of the things that can happen is that those four new stores get set back a year. They could get set back two years. The company is very ethically driven; they're not going to play bribery and kickback games. We don't know that the city councils are into that or anything, but the reality is they want to play the game straight, they're not going to fool around, and they're very clean operators. And those approvals just might go away; those stores aren't going to open. So what you've got is an actual impact on the overall bottom line of the organization, simply through reputation loss, of up to $1.6 million, all for a $150 skimmer placed on top of a card reader and the exploitation that follows. So things like this can really rapidly spiral out of control. But at this point we have a lot of numbers. We have a lot of what we would consider reasonably good estimates of minimum, most likely, and maximum dollar amounts and probabilities. We've got some confidence in them: medium, high, or whatever level of confidence we can properly summon here. Right now all we've got is a lot of numbers, some confidence, and some rationale behind the numbers we've explained. In stage four, we derive and articulate the risk. We've got all these distributions of information; we've got all the min, max, and confidence levels.
Excuse me, let me take a moment; I'm going to go on mute for just a moment. There we go. We have a series of distributions and ugly curves here that have to do with the estimates we came up with earlier. We're doing a series of comparisons to produce a derived estimate of risk. And obviously, again, on the left side we've got the probability that something might happen as a distribution, and on the right we've got the probability of loss as a distribution. In this case we used a piece of software and created an analysis. This is the answer to the question that Mr. Johnson will probably be asking: How likely is this to happen? How big might our risk, our exposure, be? For Mr. Johnson's sake, the first answer is in the lower left corner: we've got the color red, the risk is high, and the loss exposure is a million dollars or above. That might be the only answer Mr. Johnson is looking for. Then again, he might say, whoa, what is this all about? Tell me more; what's our situation here? The primary loss is relatively low, and we see it in the table in the top right. We also see it in the chart in the top left corner: that blue distribution of probable primary loss. We have a fairly likely loss event frequency, about eight times a year or thereabouts, and you see it in both charts. And we've got a distribution of losses; it doesn't peak much, going up to just over $100,000 at the very max. But then, and this is a log scale, you see the red splotch, which is the secondary loss. The secondary loss doesn't occur quite as often as the primary loss, and that's why, in the chart in the top left, the secondary loss is slightly to the left of the blue primary loss. But it's pretty likely to happen, and its frequency is unfortunately awfully close to the primary loss. The news gets out that you had credit card exploits, and then you have customers affected by it.
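The "red / high / $1 million or above" summary in the lower-left corner is just a qualitative banding of the derived loss exposure. A hypothetical sketch of such a mapping — the $1M "high" threshold is the one quoted above, but the lower bands are assumptions for illustration:

```python
def risk_rating(annual_loss_exposure):
    """Band a derived annual loss exposure into the qualitative label
    management reads first.  Thresholds below $1M are illustrative."""
    if annual_loss_exposure >= 1_000_000:
        return "High"    # shown in red in the report summary
    if annual_loss_exposure >= 100_000:
        return "Medium"
    return "Low"

rating = risk_rating(1_600_000)  # the worst-case reputation loss bands as "High"
```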
That's considered the secondary loss. So that's the extended family of stakeholders, and this is the loss that's likely to be incurred. And these are the organizational losses to the All for $5 Stores; the estimates do not specifically include the losses to those secondary stakeholders themselves. So this is a pretty ugly loss for the company, and probably an even uglier loss for the extended family of stakeholders involved here. So this is how this would look. It's actually a series of things that have been put together for the purpose of this presentation; you probably wouldn't present it this way to most audiences. And that rounds out the basic analysis in our abbreviated version. I think we can give this back to Bill to sum things up. I heard the word slide. Jim, would you take it back one slide? Yeah, I just wanted to say a little bit more about this. The analysis software we're using was provided by CXOWARE; it's called FairIQ, and this version of it is really only intended for training purposes. But I wanted to point out that the way the software works is essentially by taking the parameters we've entered into the framework and then doing what's called Monte Carlo analysis: it repeatedly tries different permutations of the data, in this particular case doing 3,000 iterations. So what you're seeing over here on the left-hand side, the chart that shows these two little clouds, represents a scatter plot of the primary and secondary losses that were calculated in each of those iterations. So that's a key idea used in the Open FAIR standard: using Monte Carlo analysis to estimate the probable values through different combinations of the variables that were entered.
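The Monte Carlo roll-up Bill describes can be sketched in a few lines. Everything here is illustrative rather than the actual FairIQ model: only the 3,000-iteration count, the ~90% secondary-loss chance, and the general structure (sample a frequency, sample per-event magnitudes, accumulate primary and secondary totals) come from the talk, while the dollar parameters are invented:

```python
import random

def pert(lo, mode, hi, w=4.0):
    """Modified-PERT sample from a min / most likely / max estimate."""
    a = 1 + w * (mode - lo) / (hi - lo)
    b = 1 + w * (hi - mode) / (hi - lo)
    return lo + random.betavariate(a, b) * (hi - lo)

def simulate(iterations=3_000):
    """One (primary, secondary) annual loss pair per iteration."""
    results = []
    for _ in range(iterations):
        events = round(pert(4, 8, 14))           # loss events this year
        primary = sum(pert(1_000, 10_000, 100_000) for _ in range(events))
        # roughly 90% of primary events cascade into a secondary loss
        cascades = sum(random.random() < 0.9 for _ in range(events))
        secondary = sum(pert(50_000, 300_000, 1_600_000)
                        for _ in range(cascades))
        results.append((primary, secondary))
    return results

runs = simulate()  # scatter-plot these pairs to get the "two clouds"
```

Plotting the primary totals against the secondary totals for all 3,000 iterations is what produces the two clusters visible in the report.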
So then you see over here also a plot that shows the distribution in terms of the cumulative risk exposure, and a histogram that shows the likely cumulative distribution of events. Okay, I'll go back to the summary itself. As we mentioned at the very beginning, according to the Open FAIR standard, risk is the probable frequency and magnitude of future loss. And that's really what we're trying to do here. Open FAIR is a standard for analyzing information risk. There are certainly other frameworks out there that can be used for doing risk analysis, and Open FAIR is not intended to compete with them; it's intended to be complementary to those types of standards. So somebody was asking me earlier if this was a replacement for COSO or something like that. Obviously, COSO is looking more at financial risk. You also have PMI with its own risk management framework, and TOGAF has its own. So several different organizations have risk analysis frameworks, and I don't think Open FAIR replaces any of them; it's intended to be used where it's appropriate. Next slide, Jim. You skipped one slide; thank you. So basically we've looked at some of the common risk methods. Some analytical frameworks are possibility-driven: they take an approach where you're not only looking at empirical data, but also allowing the analysts to make subjective interpretations of limited quantities of data, measuring nominal values and producing ordinal estimates. Open FAIR takes more of a probability-driven approach, using the actual estimated outcomes without a lot of subjective weighting. It gives you an analytical framework that allows you to apply these variables and make derived comparisons using the Monte Carlo method we were just talking about. Okay, Jim. So as Jim Hietala mentioned earlier, Open FAIR Foundation certification is currently available.
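The cumulative risk exposure plot just mentioned is essentially a loss-exceedance view: for each dollar threshold, the fraction of simulated years whose total loss exceeds it. A minimal sketch, using made-up sample totals:

```python
def exceedance(losses, thresholds):
    """Fraction of simulated annual losses exceeding each threshold —
    the curve behind a cumulative risk-exposure chart."""
    n = len(losses)
    return {t: sum(x > t for x in losses) / n for t in thresholds}

# Made-up annual totals from a tiny simulation:
annual = [120_000, 450_000, 900_000, 1_300_000, 2_000_000]
curve = exceedance(annual, [100_000, 1_000_000])
# curve[100_000]   -> 1.0 (every simulated year exceeds $100k)
# curve[1_000_000] -> 0.4 (two of the five years exceed $1M)
```

Reading the $1M threshold off this curve is exactly the "loss exposure is a million or above" figure an executive summary would quote.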
We're working on developing the Certified level; I believe it will be called the Certified level, as is typically the case. And if you wish to get certified, you can certainly contact the Open Group and Jim for more information. The Open Group website has information about Open FAIR certification. You just have to take the exam, and you can either self-study or take a training course. Once you're certified, the current certification does not expire. So if you get certified under Open FAIR Foundation right now, that will remain in effect. There are no CEUs or other types of credits you need to earn, and no annual maintenance fees to pay, to maintain your certification. Okay, so at this point, Jim, I'd like to give the floor back to you, and if you want to lead the Q&A session, we'll be glad to answer some questions. Great, thanks, Bill, and thanks, Jim, for the presentations. Yeah, I'm happy to pose some questions and get your reactions and answers. To expand a little on the question about whether FAIR is a replacement for other risk frameworks: we see it as a solution for doing risk analysis in a way that's complementary to COSO ERM, which was mentioned, but also to other risk frameworks, things like ISO 27005. In fact, it's worth mentioning that the Open Group published, a couple of years ago now, a white paper that's really a cookbook showing how you take Open FAIR and the risk taxonomy standard and plug them into an ISO 27005 risk management framework, and that's been a very useful thing. Actually, one of our members, a large aerospace company, had the need to use FAIR in the context of ISO 27005 and led the development of that white paper. So you can find that on the Open Group website, and we're open to doing other such mapping white papers.
In fact, one of the attendees at a recent FAIR course wanted to submit something he had done that maps FAIR to COBIT, and so we're looking at bringing that in and publishing it as a white paper as well. Finally, the FAIR methodology is a way to do risk analysis that can be easily communicated to senior management, that's probabilistic, as Bill and Jim mentioned, and it goes a lot deeper than many of the higher-level risk frameworks that are out there. Let's see what other questions we have here. So, we can make a copy of the slides available; we'll do that and send them around to everyone who attended the webcast. Somebody's asking for more explanation of how to read or interpret the chart or graph results. So Jim, if you're controlling the slides, you might want to flip back to the summary graph slide, and if one of you wants to talk a little about how to interpret or read those results, that would be great. Yeah, the general output here follows what might typically be a Monte Carlo analysis. And like I said, it's actually got a bunch of things all clustered together; this is clipped out of a PDF that's generated by the software. So I don't know if you can see my pointer. Jim, can you see my pointer? Yeah, we can see it, Jim. All right, so right here, this shows, in graphical format, primary risk and secondary risk. Again, secondary risk is a risk that may or may not occur depending upon the event itself and the likelihood of it occurring, so secondary risk lags a little behind primary risk in most of these analyses. This is a logarithmic scale, as you see. That's simply the default of the software; it wouldn't necessarily have to be this way, but it helps you see things a little better. The distribution here is being shown in terms of a loss event frequency in a fairly narrow bandwidth. So what we have is a number of identifiers within FAIR that help characterize this sort of a distribution.
But this is showing that it's a fairly likely probability as we go right to left here. It shows that even though it's fairly likely to happen, it's got a fairly tall distribution of loss. So even the primary loss could start from something relatively small; as you notice here, it's below $1,000 to remediate. That's the case where, as the All for $5 Stores, they tell people to go in the back and modify the daylights out of their point-of-sale equipment so that something like this can't be placed on it. You identified this as a threat, you went into the stores, you took a look at the equipment that was in there, and you determined that no shells were present at the time, because you saw this on Krebs. Then maybe all you did was make the modifications, and the communications cost of the little bezel designs, or whatever you call them, is right there. If you really get into a more expensive response, where you do something more elaborate, or you extend your video capabilities in the store or something like that, there may be some additional remediation costs up here. So that's why the distribution looks the way it does. When you get into a secondary loss, it's probably going to look a little more like this: you've got a range of probabilities of a secondary loss occurring, and then you've got a range of costs, and they do tend to get pretty high. That just has to do with the nature of secondary loss, where the extended family of people out there who are stakeholders in this are affected. This, as Bill was saying, is a scatter plot based on the 3,000 iterations that were done. This is student software, if you will; that isn't as many iterations as you normally would do. It is based upon those numbers that we saw in our earlier tables of information. So what it's doing is taking samples from these numbers here, in the secondary loss I'm showing.
And then it estimates the shape of the curve associated with that, pulls samples out of it, and uses them as components when it does the roll-up of risk: to get to the primary loss event frequency, and then to the primary loss and the secondary loss. Are we over-answering this question at this point? Anybody want to hold up their hand? Yeah, I think we're getting deep into this. I mean, if people want to call and talk to us about it, we'll go into quite a bit more about how this works. Yeah, there are a couple more questions, maybe. So one question, and I'll go ahead and take it, relates to how this relates to SABSA's view of risk. And, having done a fair amount of work with the SABSA Institute as well as in the Open Group, I would say that on the side where SABSA treats risk as security risk, the possibility of negative things happening, I think there's some alignment, and this is really a way to get a more precise, more accurate measurement of risk, given a certain risk scenario. There is a notion in SABSA of a positive view of risk that really isn't encompassed in FAIR, one that looks at the opportunity side. Sometimes in business you take a risk in order to achieve a benefit, and that SABSA concept certainly applies in the broader risk context, but it isn't really addressed in FAIR. FAIR is really looking at just doing a good job of measuring those possible negative consequences and what the impact of those will be. So I guess that's how I would answer that. I should also mention that there is a fair amount of work going on in the Open Group, between the Architecture Forum and the Security Forum, to bring risk and security into TOGAF in a more meaningful way.
And there's a high degree of activity around getting the next version of TOGAF, whatever it'll be called whenever it comes out, to more fundamentally address risk and security as you develop enterprise architecture. There's a question here about how inherent risk and residual risk apply when using FAIR. I think in this particular case example, it's really looking at pretty much the inherent risk, before mitigations have been applied. But if we did apply some mitigation, like Jim was proposing in terms of changing the covers so that a sleeve could not be applied, then you'd be looking at your residual risk after that, though there may still be some other way the attack could be carried out. I think FAIR could be used in both situations, really. Then there's another question down here: is there any evidence that FAIR is successful in real life? I can't give you specific data about that. I just know that I've been working with two large clients recently that have been using the FAIR framework in their own risk analysis practices, and that's just a very small sample size. But I do see large organizations using it. And you're asking here if it's better than just old risk matrices. I can't fully respond to that; I don't know exactly what you mean. But FAIR definitely does use matrices: we combine these factors together and you can put them into matrices. I'm just not sure if that's what you're referring to or not. And then there's another question here about when the COBIT paper will be published. Jim, do you have a comment on this? Yeah, I would expect, given our time frames for doing these sorts of things, that that's probably something that comes out maybe in Q4 this year. Okay, and stay tuned to the Open Group risk web pages; there should be some notice of it when it comes out. All right. And that's, I think, all we have time for; we're right at the end of our time.
And so thank you, Jim and Bill, for participating, and thank you all for tuning in to the webcast. And stay tuned: we'll have more webcasts coming up on the topic of risk over the rest of this year. Take care, everyone.