finished his 39-year career at the Boeing Company as an Associate Technical Fellow. Highlights of his work at Boeing included leading the company-wide classified computer security program, serving as the 777 program security manager, and serving as chief security officer for the Sonic Cruiser program, the forerunner of the 787. More recently, he established CT Carlson LLC to provide information security writing and advisory services. Chris's book, How to Manage Cybersecurity Risk: A Security Leader's Roadmap with Open FAIR, was published in 2019. In this session, Chris will give a case study demonstrating the results of implementing a zero trust architecture, nicely tying together the Open FAIR body of knowledge we heard about earlier and zero trust architecture. So Chris, a warm welcome from The Open Group, and I will hand over right now. Okay, thank you very much for the introduction. What I'm going to be sharing with you is how to use Open FAIR to build the business case for your proposed technology. But first, sorry, I forgot my little introduction section. When you go to acquire a new product, you of course have a normal process of identifying your requirements and doing an options analysis. You select something, and then in most cases you need a business case to get through the financial folks and get prioritized to spend the corporation's capital. So "what is the ROI for risk reduction?" is the question we're going to answer with the use of FAIR today. The product we'll be talking about, and I'm not going to name it, though it is in fact a real product, has a requirement to reduce the processing time for access requests: the time between triggering a request and then sitting and waiting. That's a productivity problem, and building the business case for that is an ordinary exercise, so we'll just touch on its results at the end.
But two risk-reduction capabilities are also part of the product: removing unneeded accesses, which reduces your vulnerability, and the ability to detect anomalous behavior, so that you actually have an incident detection and response capability. I'll dive a little deeper into the product as we walk through. But first, a brief overview of FAIR for those of you who may not have dealt with it before. FAIR defines risk as the probable frequency and probable magnitude of future loss. As you see in the diagram, that is shown as the decomposition of risk into loss frequency, loss magnitude, and further factors down the tree. The sections you see highlighted are the level at which we're going to do the analysis. When you do a FAIR analysis, one of the first things the analyst needs to decide is how deep down the tree you need to go to really think through your problem. We're going to be using the Open FAIR tool, which is built on an Excel spreadsheet. I'm not expecting you to be able to read all of this, but the left-hand side is where the data is input, both for the current circumstance and the estimates for the proposed circumstance. The right-hand side shows the final risk analysis. Of particular note, we'll be doing 100 trials, essentially simulating 100 years of experience, for this particular analysis. So, getting down to brass tacks: we have a series of stages that we walk through, and the first stage is to identify the loss scenario. We have two specific scenarios drawn from a draft NIST document: employees who have access to corporate resources, and contractor access to corporate resources. As I looked at these, it seemed that these were two groups with some differences but effectively the same, so they'll be treated as one scenario.
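The 100-trial simulation described above can be sketched in a few lines of Python. This is a minimal illustration of the Monte Carlo idea, not the Open FAIR tool itself: it uses a simple triangular draw as a stand-in for the tool's calibrated (min, most likely, max) sampling, and all three input triples here are hypothetical placeholders.

```python
import random

random.seed(1)
TRIALS = 100  # the Open FAIR tool runs 100 trials, roughly 100 simulated years


def calibrated(minimum, most_likely, maximum):
    # Triangular draw as a rough stand-in for the tool's calibrated
    # (PERT-style) sampling between a min, most-likely, and max estimate.
    return random.triangular(minimum, maximum, most_likely)


def one_year(tef_est, vuln_est, loss_est):
    # Loss events per year = threat events that succeed against the
    # sampled vulnerability; each loss event costs a sampled magnitude.
    threat_events = round(calibrated(*tef_est))
    vulnerability = calibrated(*vuln_est)
    events = sum(random.random() < vulnerability for _ in range(threat_events))
    return sum(calibrated(*loss_est) for _ in range(events))


# Hypothetical placeholder estimates: (min, most likely, max).
annual_losses = [
    one_year((5, 10, 15), (0.1, 0.3, 0.6), (100_000, 400_000, 900_000))
    for _ in range(TRIALS)
]
print(f"average annualized loss exposure: ${sum(annual_losses) / TRIALS:,.0f}")
```

Each trial is one simulated year, and the distribution of the 100 annual totals is what the tool's charts and exceedance curve summarize.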
What this is, is an insider threat situation, where a person uses their authorized access to remove information from the organization. In this particular case, we're talking about unstructured data, documents, image files, PowerPoints, that sort of thing, stuck on a thumb drive, perhaps, or sent from the computer to an external location. That whole scenario is treated as one. Continuing to identify the loss scenario in specifics, we first want to identify the asset at risk. Its importance is that the data that could be exported would constitute trade secrets, which could possibly give a competitive advantage to some other company. What's our threat community? We already spoke of it: these employees, who are trusted insiders. Typically, when you create a risk analysis, you come up with a title for the risk scenario. We have a long version, "Theft of Intellectual Property by Insiders," but often in an organization you use a shorthand for referring to things, so this is just the insider threat case. Next, we move to the second stage, going down the left side of those factors and looking at loss event frequency. What we need to do is come up with an estimate for the loss event frequency. A typical analyst will make notes on the factors considered, and these bullet points represent a shorthand of the kinds of things one would look at. We have 10,000 people in this organization. We do have some experience with other kinds of insider threats that we've detected. The challenge with the unstructured data is that there is absolutely no mechanism in the organization to detect its removal, but there's suspicion that it is happening. So a number is arrived at that seems sensible, and that becomes the current threat event frequency.
So the calibrated estimate was a maximum of 15 events per year, a minimum of five, and a most likely value of 10. In this particular case, the solution doesn't change the threat event frequency, so you'll notice in the data input pane that the values are entered for the current case but not for the proposed solution, since they're the same. We also need to look at vulnerability, so we come up with calibrated estimates for both the current vulnerability and the proposed vulnerability. You'll notice the thought process around the maximum: senior employees collect accesses over their careers as they move from job to job; accesses are added but tend never to be deleted. The most junior employees have relatively little access, and average employees are somewhere in the middle. The whole point of the proposed solution is to remove those unnecessary accesses, so the maximum dropped substantially. Some employees legitimately need access to significantly more files than others; for the junior employee there's no change, but the average employee's vulnerability is also significantly reduced. So you see in the right-hand image that the vulnerability has both the current and proposed values input. Now we can see the results that side produces. This is the loss event frequency view of the analysis. Again, you see the hundred trials, and in the lower left I've circled something worth emphasizing. Two events per year looked like an interesting line: in the current scenario, there is a 72% probability of two or more events per year, while with the proposed solution there's only a 17% probability. That's a substantial reduction in events per year. Now we move on to the loss magnitude side of the scenario. We're going to first estimate the primary loss magnitude, for which the only relevant element here is the response cost.
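The loss event frequency comparison above can be sketched as follows. The TEF triple (5, 10, 15) comes from the talk, but the two vulnerability triples are hypothetical: the talk describes only their shape (senior employees high, junior low, and the proposed maximum much reduced), not the numbers, so this sketch illustrates the mechanics rather than reproducing the 72%/17% figures.

```python
import random

random.seed(7)
TRIALS = 100


def calibrated(minimum, most_likely, maximum):
    # triangular draw as a stand-in for calibrated PERT-style sampling
    return random.triangular(minimum, maximum, most_likely)


def loss_events_per_year(vuln_est):
    tef = round(calibrated(5, 10, 15))  # calibrated TEF from the talk
    vuln = calibrated(*vuln_est)        # sampled vulnerability for this year
    return sum(random.random() < vuln for _ in range(tef))


# Hypothetical (min, most likely, max) vulnerability ranges.
CURRENT = (0.05, 0.20, 0.60)   # junior .. average .. senior employees
PROPOSED = (0.03, 0.06, 0.15)  # unneeded accesses removed

p2_current = sum(loss_events_per_year(CURRENT) >= 2 for _ in range(TRIALS)) / TRIALS
p2_proposed = sum(loss_events_per_year(PROPOSED) >= 2 for _ in range(TRIALS)) / TRIALS
print(f"P(>=2 loss events/yr): current {p2_current:.0%}, proposed {p2_proposed:.0%}")
```

Comparing the two probabilities at the same threshold (two events per year) is exactly the circled comparison on the tool's loss event frequency chart.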
But as I pointed out earlier, today we have no way to even detect that these events are occurring, so there is no response cost. With the new solution, there actually will be some cost. But because we're talking about unstructured data on file shares, the identity of whoever takes the action is known, and it's pretty trivial to get to the individual involved. There isn't a lot of investigation cost; it's really just the response cost of holding a corrective-action meeting or whatever has to be done. So the costs are practically trivial. You'll note, though, that they're entered on the appropriate row in that input panel. The second part of loss magnitude is the secondary loss magnitude. The secondary loss frequency is a function of the primary loss event frequency, so what we have to come up with is the probability that a primary loss event actually turns into a secondary loss event, and there's a calibrated estimate, which you see calculated there. What is more interesting, and really the focus of this analysis, is the loss potential itself. It doesn't change from the current situation to the proposed situation, but it is the foundation of the business case. These calibrated estimates were developed by the analyst and show a fairly substantial amount. They're all input in thousands of dollars, so that's $100,000 to $900,000 of potential loss per incident. Keep in mind that if we have a couple of those a year, that's upwards of $900,000, closing in on $2 million. So let's get to the interesting part: the result. You see in the upper bar charts the representation of both the current and the proposed solution, and in this particular case it's really difficult to see the tail of the loss events because they're actually relatively low frequency.
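The loss magnitude logic above can be sketched the same way. The $100k to $900k range per secondary loss comes from the talk; the most-likely value within that range, the small per-event response cost, the two-events-per-year figure, and the 50% secondary loss probability are all hypothetical placeholders standing in for the analyst's calibrated estimates.

```python
import random

random.seed(42)
TRIALS = 100


def calibrated(minimum, most_likely, maximum):
    # triangular stand-in for the tool's calibrated estimates
    return random.triangular(minimum, maximum, most_likely)


def annual_loss(loss_events, p_secondary):
    # Each primary loss event carries a trivial response cost; with some
    # probability it also becomes a secondary loss of $100k-$900k per
    # incident (the range from the talk; the mode is an assumption).
    total = 0.0
    for _ in range(loss_events):
        total += 2_000  # hypothetical small response cost per event
        if random.random() < p_secondary:
            total += calibrated(100_000, 400_000, 900_000)
    return total


# Hypothetical: two loss events per year, 50% secondary loss probability.
losses = [annual_loss(2, 0.5) for _ in range(TRIALS)]
print(f"simulated average annual loss: ${sum(losses) / TRIALS:,.0f}")
```

Sorting the 100 annual totals and asking how often a threshold is exceeded is what produces the exceedance curve discussed next.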
But what is particularly clear is that with the proposed solution, the probability of loss incidents is significantly reduced. You see that also in the loss exceedance curve, where I've circled one situation that was kind of interesting: what's the chance of exceeding $3 million per year? In the current situation it's 50%; in the proposed situation it's 23%, so that's been cut roughly in half. Also circled, and this is going to be important in just a moment, is the average loss per year. Now, an average is a challenging thing, because it's very unrepresentative of the whole picture, of course. But in building our business case, we're really asked to come up with a single scalar number to put into the business case, so for this analysis I chose to use the average value, which works out to about a $4 million average risk reduction per year. What I have remaining is something of an eye chart. Every organization of course has its own way of doing this, but this is an example of a business-case analysis that looks at multiple years of costs and benefits. At the top of the chart, in the benefits driver, you see the inputs; circled is that $4 million benefit that we'll see year after year. Also circled is the $1.6 million that was the business case for the reduced productivity loss. That's a $5.6 million annual benefit. Part of this analysis recognizes that you don't necessarily implement a solution like this completely in one year, so it gives the opportunity to spread the work over multiple years, calculates benefit flows, applies the cost of capital, and gets to the bottom line at the very bottom. It basically suggests a payback for this project of just slightly over a year. If I were the person in charge of finance, I'd look at that and say, oh, that looks like a really good project to consider. So that's the conclusion of the analysis itself. I just wanted to give you some reference material.
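The payback arithmetic in that eye chart reduces to a short calculation. The $5.6M annual benefit ($4.0M risk reduction plus $1.6M productivity) comes from the talk; the implementation and run costs and the cost of capital below are hypothetical placeholders, chosen only to illustrate a payback landing just after year one.

```python
# All dollar figures except ANNUAL_BENEFIT are hypothetical placeholders.
ANNUAL_BENEFIT = 5_600_000  # $4.0M risk reduction + $1.6M productivity (from the talk)
COSTS = [6_000_000, 500_000, 500_000, 500_000, 500_000]  # year-1 buildout, then run costs
COST_OF_CAPITAL = 0.08  # hypothetical discount rate

cumulative = 0.0
payback_year = None
for year, cost in enumerate(COSTS, start=1):
    # discount each year's net benefit back to present value
    discounted_net = (ANNUAL_BENEFIT - cost) / (1 + COST_OF_CAPITAL) ** year
    cumulative += discounted_net
    if payback_year is None and cumulative >= 0:
        payback_year = year

print(f"discounted payback in year {payback_year}, "
      f"5-year net benefit ${cumulative:,.0f}")
```

With these placeholder costs, the cumulative discounted net benefit turns positive during year two, which is consistent with the "slightly over a year" payback described in the talk.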
These are the things I would recommend someone read who wants to get started in understanding risk analysis. The first two, of course, are the fundamental documents that define FAIR. Second from the bottom, the Douglas Hubbard book is key if you want to understand how to calibrate estimates so that you have input data for something like this. And finally, a little about me: a number of things I've contributed within The Open Group, because going through the process of contribution is a tremendous aid to learning, and also the book that Steve referenced at the very beginning. So that concludes this talk. If there are any questions, provide them through the process, and we'll talk to you later. Thanks very much.