Hello, I'm Steve Nunn, President and CEO of The Open Group. Welcome to Toolkit Tuesday, where we highlight the various components and leading experts of the Architects Toolkit, a collated portfolio of the most pertinent technology standards for enterprise architects. During the series, I'll be calling on a number of recognised experts who will bring their particular insights on how to most effectively use the various tools in the Architects Toolkit. We'll have a mix of interviews, panel sessions and pre-recorded presentations along the way. While all standards of The Open Group are designed so they can be adopted independently of one another, the greatest value for an organisation is derived when they're used in unison: the whole should be greater than the sum of its parts. In the Architects Toolkit, we have collated a portfolio of the most pertinent ones for architects, together, all in one place. For most of these tools, certification from The Open Group is also available, so practitioners can demonstrate that they have the skills required and recruiters can take the guesswork out of the recruitment process, all backed up by our Open Badges programme.

Hello everyone, and welcome to Toolkit Tuesday. I'm Steve Nunn, President and CEO of The Open Group, and it's a pleasure to have you with us today. I hope that wherever you are in the world you're keeping safe and well, and we appreciate you taking time out of your day, evening or night, whatever it may be, to join us. This is episode one of season three of Toolkit Tuesday. We started Toolkit Tuesday as an experiment to gauge the level of interest, and we've been delighted by the response and the requests for more episodes. We made it through the dreaded first season to season two, and here we are in season three, so we must be doing something right.
Today we are going to focus on a body of knowledge called Open FAIR, which is made up of two standards of The Open Group: O-RA, the Risk Analysis standard, and O-RT, the Risk Taxonomy standard. We'll dive into the detail of that with two experts in a short while, but first a little housekeeping. The way we do questions on the WebEx tool is through the Q&A channel. If you can't see a Q&A channel, please click on the three dots in the bottom right-hand corner of your screen and you'll see an option to click on Q&A. Please submit your questions to the speakers through that Q&A channel rather than the chat channel. However, we do encourage you to use the chat channel to say hello to other attendees and, in particular, to tell us where you're joining us from. We're very proud of the global nature of The Open Group, and we usually get a lot of different countries represented on these broadcasts. So please let us know where you are and give us any comments; we've had some great feedback on the shows so far.

So without further ado, we are going to move to our main topic today, which, as I said, is Open FAIR, and particularly using Open FAIR for cyber risk quantification. We have a double act today: our two speakers are colleagues at Ostrich Cyber Risk. First is Jack Whitsitt, who is Director of Risk Quantification. Jack is a leader in the cyber risk quantification community with more than two decades of information security experience. He has spent the past six years advancing the state of the art by expanding and refining existing CRQ practice, including FAIR, into targeted best practices. In his role as Director of Risk Quantification at Ostrich Cyber Risk, Jack helps inform product direction and leads the new Ostrich Cyber Risk Professional Services Division, tasked with getting customers off the ground with risk quantification while avoiding or mitigating common pitfalls.
Joining Jack is his colleague Yannis Vasiliades, who is Chief Product Officer at Ostrich Cyber Risk. In that role, Yannis brings his product and company expansion expertise to lead the product and go-to-market strategies, manage analyst relations, and direct engineering and product marketing. Together with the executive and engineering leadership team, he works to advance Ostrich Cyber Risk's position as a leader in the cyber risk management and cyber risk quantification spaces. As a reminder, please submit your questions to the Q&A channel; my colleague John Linford, Forum Director for the Security Forum and the Open Trusted Technology Forum here at The Open Group, will be handling the Q&A today. So without further ado, over to our guest speakers, Jack Whitsitt and Yannis Vasiliades. Welcome, gentlemen.

Thanks, Steve. The reason we at Ostrich Cyber Risk based our CRQ solution on the Open FAIR model is that it gives you control and flexibility to build risk scenarios as complex as you need them to be. You choose the level of depth of the analysis you want to do and the data you have available. We all look to CRQ to report on the financial impact of risk: give me that ALE. And of course we want to use the annualized loss expectancy to help us make informed decisions, aid prioritization, justify budget, and communicate with the business. Option A will reduce risk by one million dollars over the next 12 months and will cost us 250,000 dollars to implement; however, Option B will reduce risk by 800,000 dollars over the next 12 months but will only cost us 100,000 dollars to implement. Deriving such results with a certain level of confidence can be great, but we often run scenarios without enough diligence in the preparation: defining risk, defining risk appetite, and overall good scenario scoping. So we do run into a few roadblocks, such as scrutiny due to lack of trust in the numbers. "Well, how did you come up with these numbers?" is the first thing the CFO will ask. After all, if a particular risk has a probability of incurring losses once in ten years, that's a long time to wait to prove out the model. Or we don't know where or how to get data to power the model, or we don't have consensus, and more. So Jack is going to walk us through why CRQ context management is imperative to successful CRQ programs, and we hope you will leave this session understanding that the CRQ journey is as important as the destination. With that, Jack, take it away.

I appreciate it, and thanks, everybody, for being here. Before we talk about CRQ, let's talk about where we might want to apply CRQ and why, because it's maybe a little broader and a little more nuanced than folks new to CRQ might consider. The first question is: why are we running this? There are a few objectives you might have for CRQ. One is that you want to improve the quality and likelihood of good decision outcomes, but there are also some underlying things: are we confident in these decisions? How sure are we? Can we increase the amount of certainty we have in understanding our risk? If we have better certainty, maybe we can put our resources somewhere else, whereas if we don't have confidence, maybe there's additional work we need to do, and so on. We want to increase objectivity. We want to increase tangibility for people by using dollars and cents. Sometimes we want to better communicate the risk to others, or achieve consensus on what the risk drivers are. By going through a good process, we're able to get more buy-in on the decisions we make, and comparability over time. The other thing I want to touch on briefly is that the decisions where we apply, quote, "risk" may show up as actual decisions that have the word risk in them: how much risk do we have?
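Yannis's Option A versus Option B comparison comes down to simple arithmetic on risk reduction versus cost. Here is a minimal sketch of that calculation; the figures are the ones from the talk, but the function name and the ratio-versus-net-benefit framing are illustrative, not part of the Open FAIR standard:

```python
def risk_reduction_per_dollar(ale_reduction: float, cost: float) -> float:
    """Dollars of annualized loss expectancy (ALE) reduced per dollar spent."""
    return ale_reduction / cost

# Option A: reduces ALE by $1,000,000 over 12 months, costs $250,000 to implement.
# Option B: reduces ALE by $800,000 over 12 months, costs $100,000 to implement.
roi_a = risk_reduction_per_dollar(1_000_000, 250_000)  # 4.0
roi_b = risk_reduction_per_dollar(800_000, 100_000)    # 8.0

# Net-benefit view: risk reduced minus implementation cost.
net_a = 1_000_000 - 250_000  # 750,000
net_b = 800_000 - 100_000    # 700,000
```

Note that the two views disagree: Option A removes more risk in absolute terms, while Option B removes more risk per dollar spent. Which view should win is exactly the kind of decision criterion that, as Jack goes on to argue, needs to be scoped up front.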
Why do I have it? What can I do about it? But what's interesting is that our organizations make risk decisions all of the time, and not all of them are ones you can directly apply CRQ to as a process. What becomes interesting is that we can use the same process for a number of decisions throughout the organization, and it can help create some standardization and systemization around how we make those decisions. If you see urgency, or order, or importance, or efficacy, or security, or worry, or need, that is somebody trying to make a risk decision, and you want to go through a reasonably good process for each of those decisions; otherwise you end up with misalignment, a lack of objectivity, a lack of certainty, and so on. So when we talk about CRQ, it's not just some standalone process, and it's not just about the reports: it's about how we make better decisions in the organization as a whole, large and small, formal and informal.

So what is CRQ? We're measuring risk, but it's not just about coming up with numbers. It's about process; it's about the models and the assumptions we're making; it's about the application of data, which can be knowledge and experience, not just logs or telemetry; and it's about using math to combine that information in a way that makes it easier to understand, or that otherwise improves the fidelity we get from the information. The point here is that we're not trying to forecast the future, and we're not trying to get down to five nines of certainty. What we're trying to do is use a model, with frequency and magnitude, along with process, data and math, to make better decisions.
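The frequency-and-magnitude model Jack refers to is the core of Open FAIR, and the "math" part can be made concrete with a small Monte Carlo sketch. This is a simplified illustration, not the Open FAIR algorithm as specified: the min / most-likely / max estimates are invented, and a plain triangular distribution stands in for the calibrated PERT-style distributions that real CRQ tooling typically uses:

```python
import bisect
import random

def simulate_ale(lef_est, lm_est, trials=50_000, seed=7):
    """Monte Carlo estimate of Annualized Loss Expectancy (ALE).

    lef_est: (min, most likely, max) loss event frequency, events per year.
    lm_est:  (min, most likely, max) loss magnitude, dollars per event.
    Returns the mean simulated annual loss and a loss-exceedance function.
    """
    lo_f, ml_f, hi_f = lef_est
    lo_m, ml_m, hi_m = lm_est
    rng = random.Random(seed)
    # Each trial is one simulated year: frequency times magnitude.
    # Note random.triangular takes the mode as its THIRD argument.
    losses = sorted(
        rng.triangular(lo_f, hi_f, ml_f) * rng.triangular(lo_m, hi_m, ml_m)
        for _ in range(trials)
    )
    ale = sum(losses) / trials

    def prob_exceed(threshold):
        # Fraction of simulated years whose loss exceeds the threshold.
        return 1 - bisect.bisect_right(losses, threshold) / trials

    return ale, prob_exceed

# Hypothetical scenario: 0.05-0.5 events/year, $50k-$2M per event.
ale, prob_exceed = simulate_ale(
    lef_est=(0.05, 0.1, 0.5),
    lm_est=(50_000, 250_000, 2_000_000),
)
```

The `prob_exceed` function gives the "probability of exceeding in a given year" view that Jack returns to later when discussing one-in-a-hundred-year versus one-in-twenty-year events.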
What becomes interesting is that the same process, some of those same models, and some of the same data can be used super-informally or very formally, and there are things you have to do, questions you have to answer, whether or not you ever go through a formal CRQ process at the end. On screen here you see Open FAIR's loss event frequency and loss magnitude. This is the end state: after all of your assessment processes are done, they should be able to answer questions about how often we expect loss, how much that loss will be, and why. But at the end of the day, these are questions you're going to have to answer to make a good decision, whether or not you put them into a tool and whether or not you assign numbers to them. So let's talk a little about what that process should look like for CRQ in particular, and then we'll expand on it.

First, it really is more than numbers. The first step in a good assessment process is identifying what the concern is. Let's scope the questions we're asking: what topics are we uncertain about? How specific do the results need to be? What criteria are we making the decision on? Is there some sort of risk appetite, some sort of cost, some sort of time constraint? If you don't answer these questions up front, and answer them for all the stakeholders involved, then your assessment process, whether it's CRQ or not, is potentially going to be misaligned. So this is a formal part of risk quantification.
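The step-one scoping Jack describes can be captured as a simple structured record that everyone signs off on before any estimation starts. This is a hypothetical sketch, not anything from the Open FAIR standard or the Ostrich product; all field names and example values are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ScenarioScope:
    """Step-one scoping record for a risk analysis (illustrative fields)."""
    concern: str                  # the question the decision-makers are asking
    threat_event: str             # the trigger we're worried about
    loss_event: str               # the loss that would materialize
    stakeholders: list[str] = field(default_factory=list)
    decision_criteria: str = ""   # e.g. risk appetite, cost, time constraints
    required_precision: str = ""  # how specific the results need to be

scope = ScenarioScope(
    concern="Should we fund additional ransomware controls this year?",
    threat_event="Financially motivated extortion via ransomware",
    loss_event="Multi-day outage of order-processing systems",
    stakeholders=["CISO", "CFO"],
    decision_criteria="Risk appetite: under $500k expected annual loss",
    required_precision="Order of magnitude is sufficient",
)
```

Writing the scope down this explicitly is what lets the later steps reuse it: the same record can drive a quick qualitative discussion or feed a full quantitative analysis.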
The same goes for scenario scoping. At a high level: what are the triggers you're worried about, what are the loss events, and how specific do you need to be in order to make the decision? What we're really talking about with "what is the concern" is just decomposing your threats and your objectives. This isn't even really turning them into scenarios yet; that happens in step two, when we ask: why are these concerns? We might be worried about a threat event, say threat communities that are financially motivated and interested in extortion, maybe through ransomware. That might be the threat event we identified in step one, but step two asks: is this even plausible? Could this occur here? What kind of tactics might they use? What are the vulnerable surfaces they might exploit? And if we have one surface we think is pertinent, and we have others, would removing it actually change our risk posture at all? These are discussions we should be having when we're assessing and analyzing risk, whether or not we're doing CRQ, but in order to do CRQ we need to have them as well. So, what are the control objectives? If we understand what the bad guys might try, and what kind of surfaces they might rely on, that starts to indicate what our controls should be doing. If we have a set of, say, ten threat scenarios that might cause a number of different loss scenarios, then that collection defines what our controls should be doing to manage risk for us. And are those controls even available? Interestingly, we talk about min, max and most likely in CRQ, and that is important from a control basis too: is our concern that our best case in terms of control performance is a problem? Are we concerned about our worst case? Are we concerned that the reliability of our controls over time is a problem, or the duration of our vulnerabilities versus the frequency of those vulnerabilities? As we go through and look: is this going to be a problem because we think it's going to happen a lot, or because there's maybe an existential impact? Very often these things get conflated in our discussions and assessment processes. Somebody comes to the table having heard about data breaches left and right, and having heard about these terrible data breach costs, but it turns out that if they really break it down, a lot of those data breaches are very small, and the big ones are infrequent. So really breaking down cause and effect, and how that aligns with our objectives, is an important part of this process.

Third, and this is where in CRQ we traditionally start getting into numbers, but again it's more than just numbers. This is where we start asking, from a benchmark basis: OK, we understand what we're worried about and why; now how do we assess how bad it is? The first thing we should do is look around us. Is this happening to our peers? Has anything like this happened to us before? What near misses might have occurred? These conversations are useful for building quantitative analysis, but they're also good for plain qualitative assessment and triage. Once we've got benchmarks, we ask ourselves how we are different. Do we have more exposed surface than our peers, or less? What's different, what's the same? What do you, as subject matter experts, think is going to change in the future relative to what the benchmarks have said about the past? And finally, do we have any evidence to support this? We have metrics and data: what metrics and what data support
our assumptions in the estimation process?

Fourth: what's the best response? This is interesting because CRQ is great at helping us delineate this part of the analysis process, but it's not strictly required. The first question is risk driver analysis. We've assessed everything; now that we've gone through the process, let's go back: do we think it's a frequency problem or a magnitude problem? Because a frequency problem is controlled differently than a magnitude, or impact, problem. The same with min, max and most likely: we're revisiting that, because we've gone through benchmarks, estimates and data, and now we're going back. The second part of this is also interesting: are we concerned not so much that any one event is an existential problem, but that over time our budget isn't enough to handle the variety of events we're talking about, or the probability of exceeding it in a given year? This is potentially your storm model: we can't handle one of those one-in-a-hundred-year floods, or incidents, but we're fine with the one-in-twenty-year-size incidents. And then finally, at the end of this: do we have confidence in the decision? We've gone through the process; do we trust the process we went through? How good was our data? We can get through the process with pretty bad data, with not a lot of information, and still make a better decision than before, whether we quantified or not. But capturing that last bit is one of the more interesting parts of the CRQ process, because then you can target your controls at increasing visibility.

With all of that said, I'm going to revisit that exact same conversation in terms of how we build a systematic analysis of how this process is going to go. Let's look at "what is the concern" from an action basis. We should be making sure that we understand and normalize internal risk, our objectives, and our decisions. This includes asking questions like: what are the constrained resources? Are we worried about cash going out the door? Are we worried about revenue changes now, or forecast revenue changes? Are we worried about market valuation? Who has equity in security, who actually cares, why do they care, and when does that care, if it's not being met, start turning into loss for our organization? What are our objectives, and how might they be impacted by threats at a business level? Are we making investments? Are we doing mergers and acquisitions? What are we doing that we won't be able to accomplish if our security objectives aren't met? Because that starts to constrain what we need to do. Then the same analysis from a landscape view: let's look at the BIA work we've done, the business impact analysis; let's look at our enterprise risk register. These are the business context that cybersecurity risk exists in. And on the other side, what does the outside world look like, not so much as it applies to us, but what are bad people doing these days? This isn't going through the MITRE ATT&CK framework and asking what kind of TTPs they're using, but: are we worried about espionage from nation states, and does that even affect us? Are we worried about global conflict? Are we worried about insiders, and if so, what motivations, and what kind of things would they even attack? This isn't "our PII stored on database number 12"; this is: are they looking for marketable data because they're interested in financial gain, and what kind of marketable data? That sort of conversation. As we go through this, the next action is: why is that a concern? This is when, once we understand the external considerations,
we ask: what are our visibility objectives? What kind of information do we need to gather in order to assess what our risk is? This is understanding what the moving parts are, how we're going to measure different parts of the problem, and how we get the information in different ways. Again, are we using the BIA? Are we using our GRC system? Where is this information going to come from? If we don't do this, we get duplicated or contradictory assessment work, and we might miss risk factors. As we look through not just what the concern is but why it is a concern: this information is already collected throughout our organizations in different forms, for the most part. Some of it isn't, and if it isn't, sometimes we don't know, because we haven't systematically gone through this process of asking: what are the TTPs we care about? What are the key surface areas, the key infrastructure? How do our controls relate, and which controls depend on each other? You can target those conversations and questions against your concerns up in step number one. If you don't do this, you might be doing twelve different assessments that aren't answering the right questions, or are duplicating answers, or are making different assumptions. Really, what you're trying to accomplish here is saying: OK, there are some variables, and there's some uncertainty about what the concerns are; how are we going to measure them?

Then we ask: how do we apply that information? That's really step three. Let's organize some kind of model: how do our threat events relate to our controls, relate to our surfaces, relate to loss events? Given the data and the assessments we've done, what are the best indicators, what telemetry do we have, and what does it indicate? For example, vulnerability scanning tells us a couple of different things. One, we have vulnerabilities today, but it doesn't really tell us who's going to exploit them or how long they're going to last. But the telemetry might also tell us how well our controls are operating: if our vulnerabilities are lasting three months, that is interesting versus if they're lasting three days. This is really systematizing: taking that data and organizing it in a way that tells you a story, in particular one that answers a specific question we've identified.

After that, we look at what's the best response. This is where we formally go through and do a time-and-stress analysis: is this an over-time problem or a capacity problem? This is when we start asking the business: how much can you absorb in a given year? What are your budgets like? Where do you not want to get stressed? We have to make a decision here. This is when we look at control opportunities, when you document: it looks like we can reduce the frequency, or it looks like we can reduce the magnitude. Then you can start evaluating those control opportunities against a cost-benefit analysis: is this worth it? Again, this is all something you can do with three people in a room making your best effort, or you can turn it into a large formal process. There are still steps you're going to have to take; the only change is how explicit you are about it, how well you document it, and how well you need to document it. It's still the same process even if it happens in one person's head. And if we don't do the best-response analysis, then obviously we might end up with solutions that don't manage risk, or with rework. Lastly, after action: document and communicate the confidence, so that you can target controls that add visibility. That's one of the more
valuable things you can do. For example, if you don't know how susceptible you are to a given threat event, that's great feedback to your red team, and it can help target your red team at where you need to go. If you can't answer how often we patch things and how long vulnerabilities last, and the analysis process surfaced that gap, then that is an assessment task that needs to show up. So, at a high level, that is taking the CRQ process we talked about and systematizing it into high-level steps we can take; again, whether we take these in an hour or build them into a robust process is up to your use cases. With that, that is a quick introduction to a subjective process for risk analysis that also supports CRQ. I'm happy to field any questions for the remainder of the time, and thank you for listening.

Thank you very much, Jack, excellent stuff. And Yannis, not sure if you're still there; if you'd like to come off mute, you can turn video on as well for the Q&A portion. A couple of questions for you both. First of all, you mentioned and focused on this aspect of embracing subjectivity, but you still kept it in the realm of quantitative risk analysis, CRQ. What advice would you have for an organization whose leadership is dead set on doing qualitative risk analysis, but where you, as the analyst, see the value in doing it quantitatively?

I think you can write great narratives out of each of the steps. In fact, I'll go back for a second. If you look at it this way, you can develop products to answer each of these sets of questions, and those are potentially subjective answers, but you as an analyst can then take those answers and convert them into a quantitative analysis, and maybe include that as an augment. So your leadership team, or whoever is a little bit skeptical, gets what they expect; they get what makes them happy, but they also have the opportunity to look at that augment and have that support. Many of us have robust pre-existing processes, infrastructure, expectations and culture, and we can't supplant them directly, but we can augment them: we can add the quantitative bit at the end to provide additional evidence and clarity.

One other thing I would add to that, John, is that in the tooling there is the possibility of a framework to bring the two sides together, so that you're working on one platform and there is more of a methodology around leveraging the qualitative results within the CRQ scenario structure.

Perfect, great, thank you both. One more question for you. The process you described involves consulting with subject matter experts and getting those data and inputs, but things change. How often do you have to go pester those people and get new estimates?

That's a really good question. One of the great things about systematizing this, particularly in this order, is that it helps you reuse the material. If you have a fairly static understanding of high-level threats, because the motives behind them aren't going to change very often, you're not suddenly going to be worried about a completely different set of threat actors at a high level with entirely different motives, right? We've got financial gain, or espionage, or whatever it is. By creating that structure ahead of time, and that understanding of what our loss events are ahead of time, you get a foundation for asking things once and just updating them periodically. So part of it is putting that big picture in place, so that you're not regathering data every time somebody asks you a different question. But the second part is that some of this just doesn't change very
often: again, the high-level threats and the high-level decision criteria. The "why is that a concern" might change a little more frequently, right? But think in terms of how long it takes to respond: if you're capturing information every hour but it takes three months to respond to that information, then there's no sense in recapturing it every twelve minutes. So I would look at reevaluating what your concerns are maybe once a year, and why they're a concern maybe once a year as well, because your projects are going to happen, and so on. Then "how much of a concern" you might update quarterly. And for best response, think of it as asking the system, or your process, different questions; you can ask those questions as often as you need to, and just update the data and the inputs only as frequently as you can take advantage of them, if that makes sense.

Definitely, excellent answer. Unfortunately, that is all the time we have for Q&A, but thank you very much again, Jack and Yannis, for joining us for today's Toolkit Tuesday. Thank you to all of our attendees for joining us as well, and back to, I believe, Steve Nunn for our outro.

So again, thank you all; great stuff. Thank you, gentlemen, and thank you, John, for handling the Q&A there. Thank you to Jack and Yannis for coming here today and sharing your perspectives; very interesting. And thank you to every one of you who attended today, either live or watching in the comfort and convenience of your own time zone. It was great having you with us on Toolkit Tuesday. Please join us next week: for the next few weeks leading up to The Open Group Summit in April, we are running Toolkit Tuesday every week rather than every two weeks. So next week, March 14th, our episode will focus on the portfolio of digital open standards and how that's useful for architects, and how it can increase the convenience and quality of architecture work. That digital portfolio is something you may have heard us talk about: it brings together various Open Group standards so that they're cross-referenceable and cross-searchable, making them easier to use together, basically. To talk us through that, we have one of our panel of experts here at Toolkit Tuesday, Chris Frost from Fujitsu, so it's a great one to join us for. Please join us on March 14th. Meanwhile, keep safe and well, and have a great week. I'm Steve Nunn; thank you for watching Toolkit Tuesday.