So, a warm welcome to everyone. I'm Keith Shepherd from World Agroforestry and Innovative Solutions for Decision Agriculture, based in Nairobi, Kenya, and I'll be chairing this webinar on behalf of Constructing a Digital Environment. Before I introduce the speaker, I'd like to say a few words of introduction about the program. Constructing a Digital Environment is a Strategic Priorities Fund program of the UK Natural Environment Research Council (NERC), championed by Cranfield University. The initiative aims to develop a digitally enabled environment to better support decisions by policymakers, businesses, communities, and individuals, especially responses to acute events, and to inform understanding of long-term environmental change. This is happening by creating an integrated network of environmental sensors, both in situ and remote sensing, together with methodologies and tools for assessing, analyzing, monitoring, and forecasting the state of the natural environment. This is being done at higher spatial resolution and higher frequency than previously possible. Constructing a Digital Environment is harnessing multidisciplinary and interdisciplinary research and innovation. With that introduction to the program, I'd like to acknowledge NERC's organizational support for this webinar.

This webinar series focuses on data for decision making, really bringing a perspective from decision science. We have the capacity to generate and process more data at lower cost than ever before. However, in application, data has no value unless it improves decisions. The series aims to cover various topics, including the concept of the value of information, the flaw of averages, new tools for incorporating uncertainty into decision making, common flaws in the use of big data and AI, and communicating uncertainty in decision making. We plan to have webinar series on other topics going forward into next year, including seminars from network members. I'd also alert you that the next webinar, in three weeks' time, will be given by Professor Sam Savage on the Flaw of Averages, which you can see on the center right. And I'd draw your attention to the CDE digital challenge at the bottom right: this has a cash prize and the chance to win a magnificent bottle, so do sign up. The web link is at the bottom right.

Without further ado, it's my great pleasure to introduce Doug Hubbard, with whom I've had the pleasure of working over a number of years; we published a comment article in Nature on data and the Sustainable Development Goals some years back. Doug is the inventor of the Applied Information Economics method and founder of Hubbard Decision Research. He's the author of a number of books: How to Measure Anything: Finding the Value of Intangibles in Business; The Failure of Risk Management: Why It's Broken and How to Fix It; Pulse: The New Science of Harnessing Internet Buzz to Track Threats and Opportunities; and his latest, How to Measure Anything in Cybersecurity Risk. He's sold over 100,000 copies of these books in eight different languages, and two of them are required reading for Society of Actuaries exam prep. Today he'll be talking about the value of information and how to decide what, and how much, you should be measuring. Before I hand over to Doug, a few housekeeping matters: the talk will last about 35 minutes, followed by 20 minutes of questions and answers. Do please type your questions into the Q&A box, and I will try to collate them and put them to Doug.
The talk is being recorded and will be made available for download later. So, Doug, welcome, and I'll hand over to you.

Great, thanks for having me. I'll go ahead and share my screen here. Keith has asked me to speak about the value of information, and I'm going to try to put this in the context of making important decisions, which is what all measurements should be about. It looks like you'll need to stop sharing, I think, Keith. Yeah, I'm actually having trouble finding the stop sharing button, it disappeared... oh, okay, I've got it now. Okay. All right. Great. Here we go. All right, thank you.

So, just to give us a little context here. Keith already mentioned my books, so I won't spend any time on those. Let me just ask you a couple of questions to start with. Think about what your single most important decision should be. You probably work in lots of different areas; there are over 80 people here online. But I think you might actually have the same answer to this question: I think it's how you make decisions. I refer to that as the meta-decision. The meta-decision is deciding how to decide; it's a bit like metacognition. Within that context, then, what is your single most important measurement? In the same meta context, it should be the performance of your measurement instruments for decision making. How well are you measuring things? Are you actually using calibrated instruments? By calibrated measurement instruments I don't just mean electronic devices that you might use for very specific measurements in whatever area of science you're focusing on. I really mean models and observations in general: how well do they estimate things, what is their error, and are you actually improving decisions as a result? And when I talk about improving the accuracy of your measurement instruments and calibrating them, I'm talking about your subject matter experts as well, the empirical methods you're using, and of course the decision models themselves that you're using to leverage the results from these various sources of information.

There's a lot of data, a lot of research, showing that quantitative methods measurably outperform a lot of other very popular methods, and that certain qualitative methods don't do very well. But how do we start to improve on them? And can we even rely on subject matter expertise? Well, you can to a certain degree. But there are some obstacles to the adoption of quantitative methods for improving further. Maybe you've run into some of these objections, or maybe you've said them yourself. "We'd like to measure that, but we don't have sufficient data." Does that sound familiar? If it does, I would challenge you to think of that as a very specific mathematical claim. When someone says we don't have sufficient data to measure that, does that mean they actually computed the uncertainty reduction they would get from a given amount of data, and computed the economic value of that uncertainty reduction, to determine whether or not the effort was justified? Even among my clients who are scientists in multiple different areas: no, they've never actually produced this math to justify the claim.
They're usually making a very subjective judgment about what is ultimately a mathematical claim. When someone says we'd like to measure that but we don't have sufficient data, they usually didn't actually do the math on that. And I'm talking about the math necessary to inform decisions. I'm not equating this with the math you have to do to get a statistically significant result, to get a p-value that's low enough for some particular finding. Statistically speaking, that's a different question from whether or not your uncertainty about a decision was reduced.

Or maybe you've heard that some particular area or decision or problem is too complex to model. Maybe you've run into that. Well, qualitative methods and intuition do not alleviate complexity any more than they alleviate a lack of data. Whatever method you're using, it's going to be wrong. The famous statistician George Box said that all models are wrong, but some are useful. I would just point out that some are measurably more useful than others, and you can't help but model. If a model had no error, we wouldn't even call it a model; we'd just call it reality. Models have error because they're necessary abstractions. So when someone says something is too complex to model, are they just arguing that they shouldn't build a quantitative model and instead should use their intuition? Are they claiming that their intuition is better at handling complex systems and complex situations? The evidence says no, that would not be the case.

I wrote my first book based on this idea. I've been in quantitative management consulting now for 32 years, and I would periodically run into these objections to measurement at various clients. I started categorizing them, and I decided before I wrote my first book, in about 2005 or 2006, that there really were only three reasons why anybody ever thought something was immeasurable, and all three are illusions. I summarize these in the book, which is now in its third edition: I call them concept, object, and method. The concept of measurement has to do with what measurement actually means: it doesn't mean an exact number. In fact, it hasn't meant that for the better part of a century, even in the empirical sciences. We also talk about the object of measurement; that's another reason why people might think something's immeasurable. The object of measurement refers to defining the thing you're trying to measure. This is obviously a key first step in any scientific measurements or observations at all: you have to figure out what it is you're talking about. And then there are the methods of measurement. Even among people who've been trained in this area, when we talk about methods of measurement in a different context, for example reducing uncertainty about big policy decisions, that's different from the statistics you might use for, say, computing a p-value and doing a significance test, for those of you familiar with those concepts. Determining whether or not something is statistically significant answers a very different question; it isn't telling you whether or not you learned something, whether or not you can make a better bet. That's answering a different question, mathematically speaking. So let's talk about each of these: concept, object, and method. And if you want a mnemonic, you can think of "calm."
So I'll just talk about each of these in turn. First, as I mentioned, for the better part of a century measurement hasn't meant a point value in science. It hasn't meant an exact number in quite a while. The de facto use of the term in the empirical sciences, and the most relevant use of the term in practical decision making, is a quantitatively expressed reduction in uncertainty based on observation. So you have to have an observation component and it has to be quantitatively expressed, but the elimination of uncertainty is not a requirement; a reduction of uncertainty is sufficient. You start with a prior state of uncertainty, you make some observations, do some usually trivial math, and then you have less uncertainty than you did before. That's what we mean by the term measurement. That's the most practical use of the term, and we don't want to reject a measurement just because it didn't reach some arbitrary level of accuracy and precision. What we're really asking is: did it add value to a decision? What was the value of that uncertainty reduction? That will bring us to the value of information in just a moment.

But first, let me address this: how can I state a prior uncertainty? Isn't a prior uncertainty subjective? How can I state a subjective probability for an event being true, or a range of possible values for the yield of some new crop, or a change in commodity prices? There actually is a lot of objective research on the performance of different subjective estimation methods. You can objectively assess the performance of subjective estimates, and some subjective estimation methods are measurably better than others. There are decades of research on this. I'll cite a couple of examples. One comes from Daniel Kahneman, who won the Nobel Prize in Economics in 2002; I'll mention him a little later as well. Kahneman did some of the earliest research on how people subjectively assess probabilities. Now, we've actually calibrated over 1,600 people in the last 22 years, Keith Shepherd included. Of those 1,600 people, we took a large set, 434 who took the most recent versions of these tests, and we looked at 52,000 of their individual response items. These trainees went through a series of exercises in which they assigned probabilities, at first to trivia questions, advancing to other kinds of questions later on. We can then ask how well they actually assess probabilities subjectively, and the way you objectively measure that subjective skill is to look at all the times they said they were 90% confident and ask whether they were actually right 90% of the time. Of all the times they said something was 70% likely, did it occur about 70% of the time? This red curve shows how well they did before the training: if you look at all the times somebody said they were 90% confident, they were getting about 75% of those correct, and of all the times they said they were 100% confident, they weren't even getting 90% correct. But after training they can get very close to ideally calibrated. What we're showing here is the effect of calibration training and other adjustments, where we train them with one set of data and then look at how well they do on another set of data.
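As an illustration of the calibration check being described, the minimal Python sketch below groups a set of responses by stated confidence and compares each group's stated confidence with the fraction actually correct. The responses and confidence buckets are invented for illustration; they are not data from the training described above.

```python
# Minimal sketch of a calibration check: for each stated confidence level,
# compare the fraction of answers that were actually correct.
# The responses below are made up for illustration only.
from collections import defaultdict

# Each record: (stated confidence that the answer is correct, was it correct?)
responses = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True),
    (0.7, True), (0.7, False), (0.7, True),
    (0.5, True), (0.5, False),
]

buckets = defaultdict(list)
for stated, correct in responses:
    buckets[stated].append(correct)

for stated in sorted(buckets):
    hits = buckets[stated]
    observed = sum(hits) / len(hits)
    print(f"stated {stated:.0%} confident -> correct {observed:.0%} of {len(hits)} items")
# A well-calibrated estimator's observed hit rate tracks the stated confidence
# (e.g. roughly 90% correct among the items they marked as 90% confident).
```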
And they can get very close to being ideally calibrated, or within the statistically allowable error. The dashed lines here show the sampling error, and the yellow curve shows where they are after calibration. If you're perfectly calibrated over a large number of trials, you should be right along the diagonal, between those dashed lines; we allow for some sampling error for the individuals who go through the training. We've also determined that when people go through this training they actually do get better at estimating probabilities for the real-world problems they're working on; we've tracked that over time.

There's a reason for doing this. There's a whole set of mathematics, which some of you might be familiar with, that starts with the idea of a prior probability: you state a prior about your beliefs, about what you think is more or less likely to be true. That is summarized in the very simple arithmetic referred to as Bayes' theorem. In fact, there's a result derived from Bayes' theorem that we can use to figure out how well people would do if you combine multiple experts, if you ask multiple subject matter experts who've been through calibration training for their calibrated estimates. How do you combine multiple experts? Do you just average them? That turns out not to be the best solution. The best solution is actually the equation shown down here, and this is a simplified equation because it doesn't even consider how correlated the experts might be, but it's a pretty close fit to observed reality. In other words, that equation is a good predictor of how often two individuals will be right if they had different levels of confidence about the same prediction.

What do we do with this information? Well, we can use it to populate models of our uncertainty, and one way to model our uncertainty, and I understand you're going to have Sam Savage next month, so I won't spend too much time on it, is a Monte Carlo simulation. This is one way to do the math when you don't have exact numbers. In the research I cite in my books, there's evidence that people who build Monte Carlo simulations are better at forecasting than people who don't. If you're not familiar with it, Monte Carlo simulation simply involves sampling uncertainties thousands of times using their stated probability distributions, so that you can compute some outcome you're trying to forecast. You don't get just an exact number for a net present value or return on investment, or some other effect you're trying to model; you get a range of possible values and their related probabilities.
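For anyone who hasn't seen one, here is a minimal Monte Carlo sketch in the spirit just described: each uncertain input is sampled from a stated probability distribution many times, and the result is a distribution of outcomes rather than a single number. The toy benefit model, the distributions, and all the numbers are illustrative assumptions, not figures from the talk.

```python
# Minimal Monte Carlo sketch: propagate stated uncertainties (as probability
# distributions) through a simple benefit model to get a distribution of
# outcomes rather than a single point estimate. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(42)
trials = 10_000

# Annual benefit with a 90% interval of roughly 50k to 160k, modeled lognormally.
benefit = rng.lognormal(mean=np.log(90_000), sigma=0.35, size=trials)
# Adoption rate somewhere between 30% and 80%, modeled uniformly.
adoption = rng.uniform(0.3, 0.8, size=trials)
# Up-front cost with a normal distribution centered on 60k.
cost = rng.normal(60_000, 12_000, size=trials)

net = benefit * adoption - cost

print(f"mean net value:        {net.mean():>12,.0f}")
print(f"5th-95th percentile:   {np.percentile(net, 5):,.0f} to {np.percentile(net, 95):,.0f}")
print(f"probability of a loss: {np.mean(net < 0):.1%}")
```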
All right, so that's the concept of measurement. I just wanted to introduce the idea that measurement doesn't mean an exact number: we can use inexact numbers, we can use probability distributions to model our uncertainties, and measurement really comes down to making observations, doing some simple math to reduce our uncertainty, and expressing it quantitatively.

So let's talk about the object of measurement a little. We should ask the question: what do we see when we see more of it? For the hardest things you have to measure, even things you think are impossible to measure, ask what you see when you see more of it. If you can define it in terms of its observable consequences, you invariably start thinking of ways you can measure it; the rest is trivial math, as I've said. We also want to look at why you want to measure it. The reasons behind the measurement actually help us frame the measurement problem. Why do you care? What decision are you making differently because of this measurement? Now, you can make measurements purely for personal curiosity or entertainment, or because you plan on reselling the data to somebody else, but most of the time, I think, policymakers, business people, and people in government are making measurements because they're trying to reduce uncertainty about decisions, so you should be explicit about what those decisions are.

Let me give you a few examples, all of which I've heard before; there are many more than this. What do you really mean when you say community engagement, or information availability, or resilience? Tell me what you see when you see more of it. Maybe community engagement means a reduced cost of gathering information, because people are more willing to provide it. Maybe you're trying to model some other policy where uncertainty about adoption rates among a population has an effect on the benefits, and you're trying to measure this because you think it has some bearing on some major initiative you're trying to inform. All of these can have an observable consequence; in fact, if something had literally no observable consequence, not even one you could imagine, I suspect you wouldn't know it was even a thing to measure. The only reason people think of things to measure is that they've seen more of it in some cases than in others, or can at least imagine more of it. And as soon as you start imagining what you see when there's more of it, again, you start thinking of observations; the rest is trivial math.

Now let's look at the why question: what decision are we trying to make with each of these? Community engagement: as I said, are you considering a new policy where benefits may be limited by adoption, or are you considering developing a more intensive liaison program of some sort? Information availability: are you assessing a major investment in some new information technology? Resilience: are you considering a costly risk mitigation of some sort? Be explicit about the decision you're trying to support, and we can model that decision. Again, I glossed over it pretty quickly, but one good way to model uncertainties about those decisions is with a Monte Carlo simulation. We can then throw computing power at it and run a lot more scenarios to get a higher and higher resolution understanding of our risk about these things.

So how do we go about deciding where we should reduce our uncertainty? Remember, I talked about measurement as uncertainty reduction: a quantitatively expressed reduction in uncertainty based on observation. You have a prior state of uncertainty, then you make some observations and reduce that uncertainty. Well, if you're putting together a typical business case, a typical cost-benefit analysis for some policy decision or something like that, you may have lots of variables; there may be a few dozen, there may be a couple of hundred. How should I decide what to measure, or even how much effort to put into measuring it? Well, there is a way to monetize the value of uncertainty reduction.
This has been around really since World War II: it's the expected value of information, really a net expected value of information. The equation here deals with some common situations, like being wrong by a little versus being wrong by a lot, but ultimately, in its simplest form, it's the cost of being wrong times the chance of being wrong. In a simple, binary yes-no kind of decision, where you're looking at the value of perfect information versus your current state of uncertainty, the value of information really is just the cost of being wrong times the chance of being wrong. Then we just have to deal with situations where there are lots of other uncertain variables changing at the same time, where we can be wrong by a little versus wrong by a lot, and where we have only partial uncertainty reduction, not complete. That's really the only difference between the simple expression and the bigger equation here.

So let's take a look at the consequences this has. Imagine plotting information values on a chart like this. On the horizontal axis I've got increasing certainty to the right: lots of uncertainty at the beginning, and as I move to the right I get more and more certainty, until maybe I reach a point of perfect information. The vertical axis is some monetary metric. So I can have an expected cost of information: I start gathering information and I can reduce uncertainty, but getting rid of that last little bit of remaining uncertainty costs a lot more. The cost of reducing uncertainty further increases as uncertainty approaches zero; it has to skyrocket at some point, and in fact it may be impossible to eliminate uncertainty regardless of what you spend. The expected value of information is a curve that goes the other way: it tends to increase rapidly at first and then has to level off. I'm showing these just to illustrate, but of course you can compute these functions exactly in specific situations, given a particular decision model and particular variables with stated uncertainties and so forth. You can compute them exactly, but the curves will always look something like this: one will be convex and the other concave, typically speaking. The EVI increases quickly at first but has to level off, because you can't surpass the value of perfect information. That, too, is a specific, finite value: the value if you were able to eliminate uncertainty about a particular variable entirely. Obviously the value of information can't exceed the value of perfect information, so it has to level off.

This tells us that the biggest net benefit for a marginal increase in spending on measurement tends to come early in a measurement; you get the biggest bang for the buck, as we say in America, early in the process. What this means is: if you know almost nothing, almost anything will tell you something. There's often an assumption that if you have a lot of uncertainty you need a lot of data, but mathematically speaking just the opposite is true. The more uncertainty you have, the bigger the uncertainty reduction you get from the first few observations. That's the way the math actually works; that's the way Bayes' theorem would actually work.
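To make the simplest form concrete, here is a minimal sketch of the expected value of perfect information for a binary proceed-or-don't decision, computed as the chance of being wrong times the cost of being wrong. The probabilities and payoffs are made-up numbers for illustration, not figures from the talk.

```python
# Minimal sketch of the expected value of perfect information (EVPI) in the
# simple binary case described: chance of being wrong times cost of being wrong.
# Numbers are illustrative only.

p_success = 0.7          # calibrated probability the investment pays off
gain_if_right = 400_000  # net gain if we proceed and it works
loss_if_wrong = 200_000  # net loss if we proceed and it fails

# With current uncertainty the better bet is to proceed (positive expected value):
ev_proceed = p_success * gain_if_right - (1 - p_success) * loss_if_wrong

# If we proceed, we are "wrong" only when it fails; perfect information would
# let us avoid exactly that loss.
chance_of_being_wrong = 1 - p_success
cost_of_being_wrong = loss_if_wrong
evpi = chance_of_being_wrong * cost_of_being_wrong

print(f"expected value of proceeding now:      {ev_proceed:,.0f}")
print(f"expected value of perfect information: {evpi:,.0f}")
# Any proposed measurement costing more than the EVPI (or one that reduces
# uncertainty only partially, which is worth less) can be screened out.
```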
What else do we learn when we apply this kind of method? Well, if you've got even a moderately complex decision, with a couple of dozen or maybe a couple of hundred variables in it, you may find out, as we often have, that the highest value measurements in a list of variables are not what you would have measured otherwise. Over well over 150 different projects, I think we're closing in on 200 by now, we've computed information values on decision models with lots of variables, and what we keep finding is that the high information value variables are not what people probably would have measured otherwise. In fact, take a given environment with a given set of decision models and come up with some taxonomy of measurements, say ten categories of measurements you might engage in. Sort those categories from the ones that historically get the most attention and effort to the ones that get the least, and put them in that order. Then look at what their typical information values on decision models would be, based on the formulas I've shown you. We've done this quite a lot in different areas, and what you find is that those two lists are in different orders, and they're not just different from each other, they're almost exactly inverted. It's not just that you're measuring the wrong things; you're measuring almost exactly the wrong things. I don't know how this doesn't affect the GDP of nations: in every industry and every government agency I look at, people have systematically been spending more time measuring things that are statistically less likely to improve decisions than the things they really should be measuring.

You may be familiar with this: what do you really measure in your environment? You tend to measure the things you know how to measure. It's not as if you first compute what you should measure and what it's worth to measure it; you really start with "here's what I know how to measure," so you measure those things. If you started by computing the value of the measurement first, you would probably find, consistent with the rest of our observations here, that you need to measure different things than you were thinking.

Take government initiatives or major corporate initiatives and projects: one of the highest information value variables tends to be whether or not they'll ever finish. If you've seen really big projects get cancelled, ask yourself this: did you ever see a project get cancelled when the initial business case included a chance of cancellation? You know there's a chance of cancellation because projects have been cancelled before, and of course every project that was cancelled wasn't one they thought would be cancelled. So no matter what project you're starting, there is some chance of cancellation that has to be included, and that tends to be a high information value variable. I've done this in various industries; in IT, for example, people tend to spend more time measuring the initial development cost and less time measuring certain categories of benefits.
They'll measure some administrative cost reduction without measuring, say, adoption rates; in almost any pairwise comparison you make, the higher information value variable is the one that historically gets less attention than the one with lower information value.

So let me talk about the methods of measurement just a bit. Now that you've figured out what you need to measure and how much effort you should be willing to expend on measuring it, how do you go about measuring? There are a lot of people with various backgrounds here; I believe some of you have been involved in scientific empirical research before, and perhaps published it. But even in that group I find myself reminding my clients of some of the same things. No matter what you're measuring, bear in mind it's probably been measured before. That's a good assumption. Any good scientist starts with secondary research: they look at previously published research when they start investigating something. I'm surprised how rare this actually is among decision makers, though. And remember, you probably have more data than you think. When someone says we don't have enough data to measure that, try to be resourceful for a moment. Think further: what are the sources of information that would allow me to reduce uncertainty about this thing? It doesn't even have to inform it directly; it could be other, indirect measurements. Look at measurements in, say, astrophysics or nuclear physics: by their nature, a lot of them are highly indirect. They're based on calculations that make inferences from other, more direct observations, and you can probably do the same in a surprising variety of areas if you start thinking about it. So it's about being resourceful. And when you do the math, you often find you needed less data than you thought. So you have more data than you think, you need less data than you think, and it's probably been measured before. Now, these are just good assumptions to start with; I don't know that they're always true in every situation, but start by assuming them and then try to prove yourself wrong. Maybe one of them is wrong, but I would start with them as good working assumptions. You may be the first person ever to measure something, in which case maybe you should be nominated for a Nobel Prize or something, but I haven't been yet, and I doubt I ever will be. It's very likely that everything I'll ever work on is something that's been measured in some sense before.

So how do we bring all this together? We call this Applied Information Economics. It's really just a practical application of a variety of quantitative methods, where every method we utilize, starting even with how we frame problems and how we elicit subjective estimates from experts, as well as the various quantitative and empirical methods we use, is something where we can point to research, including large trials, showing that some methods measurably outperform others. Based on the components we found to be the best performing methods, we always start with defining a decision. No matter what you're trying to measure, you define a decision, you model your current state of uncertainty about it based on calibrated estimates, and you compute the value of additional information.
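As a rough illustration of what computing the value of additional information per variable can look like, the sketch below approximates an individual (per-variable) value of perfect information for a toy two-action decision (proceed or don't) and ranks the variables by it. The model, the distributions, and the binning approximation are illustrative assumptions; a real analysis would use the actual decision model and more careful estimators.

```python
# Rough sketch: rank variables in a toy decision model by an approximate
# individual expected value of perfect information (EVPI) for each variable.
# All distributions and numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

samples = {
    "benefit":  rng.lognormal(np.log(90_000), 0.35, n),   # annual benefit
    "adoption": rng.uniform(0.3, 0.8, n),                  # adoption rate
    "cost":     rng.normal(60_000, 12_000, n),             # up-front cost
}

def payoff(s):
    """Net value if we proceed; the alternative action (don't proceed) pays 0."""
    return s["benefit"] * s["adoption"] - s["cost"]

base = payoff(samples)
ev_now = max(base.mean(), 0.0)   # best expected value acting on current uncertainty

def approx_evpi(name, bins=20):
    """If this one variable were known, we could choose proceed/don't depending
    on its value. Approximate E[max(E[payoff | variable], 0)] by binning."""
    order = np.argsort(samples[name])
    ev_with_info = np.mean([max(base[idx].mean(), 0.0)
                            for idx in np.array_split(order, bins)])
    return ev_with_info - ev_now

for name in sorted(samples, key=approx_evpi, reverse=True):
    print(f"{name:>8}: approximate individual EVPI = {approx_evpi(name):>10,.0f}")
```

Sorting the variables by a value like this is the step that, in the talk, repeatedly surfaces measurements people would not otherwise have prioritized.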
You measure where the information value is high, and then you can optimize the decision. I'll gloss over some of the decision optimization issues, but that's a good process to go through. These are just some of the areas we've applied this to over the years: prioritizing investments in aerospace, biotech, pharmaceuticals, and medical devices; risk-return analysis on large IT project portfolios; military, government, and not-for-profit work; large engineering risk analysis; and so on. Night-and-day different areas, in fact.

This is one of the projects I did for the United Nations. It's a very high-level cartoon representing the major components of a large Monte Carlo simulation in which we worked out the economic impact of restoring the Kubuqi Desert in Inner Mongolia, a UN Environment Programme project I was involved in. We built this large decision model so that we could compute the economic impact over a long period of time, actually a 50-year period. We had lots of uncertainties, and because of all those uncertainties we're of course going to have uncertain impacts, but fortunately we can reduce our uncertainty further. This model had lots of variables from different sources. A large number of variables required entirely subject matter expert inputs; the experts had to be trained first so that we knew how good they were at estimating things. We don't just use uncalibrated subject matter experts; we calibrate them, just as you would any other instrument used for measurement. Many other variables were based on secondary research at first, or on historical data that local authorities had. That's the initial state of uncertainty; we still have measurements to make, but the information values tell us where we should focus additional measurements. It turned out that the highest value measurement, by a large margin, was again not something they would have measured otherwise: the biggest uncertainty was daily wages for Kubuqi project-area laborers. That was one variable in a large model working out the economic impact, but it was so uncertain that it turned out to have a major information value. Of all the crops, there was one particular crop where reducing uncertainty about the yield had a higher information value, and they even had some uncertainty about the initial, and in fact ongoing, cost of desert restoration. That was addressed with a more detailed cost model: we created a much more detailed cost model with more components, where we could rely on other empirical data about the cost of various activities.

All right, I think I've just about run out of time. Certainly feel free to contact me if you have any questions; this is my email right here. As I always tell people: measure what matters and you can make better decisions. I don't know if we have questions now, Keith, but I'm happy to stick around.

Yes, we have one question from Mark Colverly, who asks: have you applied this to climate change issues? How do we optimize observations to support climate change mitigation?

I would say my best answer is that we've built models that included climate change as variables in the models. I haven't had the opportunity yet to apply it to climate change itself, but we've certainly used the outputs from models like climate models
in other decisions that we're modeling. So we include the outputs of climate models and their uncertainties in other things that we're modeling, and of course those are pretty informative for big, important decisions that people are making. The question you'd start with, though, is making sure we're framing the decisions: think of the portfolio of public policy decisions we might be making as a result of better measurements here. Sometimes I think there's a tendency to just throw more money at a particular type of measurement, to say we need more sensors or more data in some area, without really working out in advance how that would change our states of uncertainty in a way that would better inform the set of decisions. Think about it that way.

Thanks. I have one from Ben Swall here: how do you respond to variables or models that change through time?

Oh, well, you certainly don't want to handle that in your head. Those are the very sorts of things you want to model quantitatively: you want to model your uncertainty about the change as much as your uncertainty about the current state. The models we build for environmental programs typically span 50 years; in the pharmaceutical industry or civil engineering, they're typically 20- or 30-year forecasts. And of course we have lots of uncertainty, especially the further out you go, but that's exactly why we want to be explicit about quantifying uncertainty. In fact, we have a lot of data on how bad we are at forecasting future things, so we allow for plenty of uncertainty the further out we get. There is typically quite a lot of good data on how much things change, and there's even good data on catastrophic changes that are relatively rare. Sometimes people say, this is a new technology, we don't have any data on it. Yes, but you have lots of data on introducing new technologies. In California, we don't have a lot of data on huge wildfires like the Camp Fire, which happened a little over a year ago; the Camp Fire in California was a huge fire. But in fact it's within the distribution of historical fires, especially when you take into account increased drought periods, which we expect to see more of. We can only deal with those if we do the math; we can't deal with them if we try to do it in our heads. Remember, no matter what model we create, it's going to have error in it, but we have no choice but to model; even our intuition is a model. So the right way to think about these things is: which modeling methods do we have reason to believe measurably outperform other modeling methods? When you build a quantitative model, somebody might say something like, how do you know you have all the variables or all the correlations? And I say, well, of course we don't. If we had all that, we probably wouldn't call it a model; we'd call it reality. It's an abstraction of reality, so the only question is: are we using modeling methods that measurably outperform alternative modeling methods, like unaided subject matter expertise? Does that make sense?

Thanks, Doug. I've seen a couple of questions that came up in the chat as well. There's one: how can you apply this to remote sensing applications, say the classification of certain crops? I think the answer is probably that this can apply to any kind of problem, but I'll let you answer that.

Oh, sure.
Well, again, what's the decision? Is remote sensing going to give you higher resolution data that would inform specific policies? Imagine how you would make the policy decisions differently, or the intervention decisions differently, if you had higher resolution data. And maybe remote sensing is going to be better at getting you that data faster; well, is the change in timing going to have a relevant impact on the decision making? So you still want to start with modeling the decision.

I've got a question here, Doug, which I know you've dealt with a lot, and it's a common one: how would you measure variables that are linked to human interactions, like respect, for example?

Oh, well, fortunately humans are measured frequently; there are whole fields of study about measuring humans. But let me take respect for a moment and dive into it further, because I think this is a great classic problem of something that seems initially intangible. I would ask the person who posed the question: what do you observe when you observe more respect? You must have seen cases where respect in one situation exceeded respect in another. So I would start there, because I'm not sure I know exactly the context you mean for that particular item.

Yes, and I suspect... oh, go ahead.

Well, I suspect the reason you're interested in measuring something like respect is that perhaps you're looking at some sort of intervention. Maybe understanding respect is going to have something to do with forecasting other behaviors. Remember, you're measuring something to inform your own actions or the decisions of others, so you're forecasting a couple of things: what would happen if I didn't do this, and what would happen if I did. So what behaviors are you forecasting that would be different depending on the decision you might make one way or the other?

Thank you. From the chat: how do you deal with local-scale data uncertainties and the role of local stakeholders?

Well, what's the particular problem there? The math behind uncertainty isn't different at different scales. We can deal with uncertainties at very tiny levels; I did a Monte Carlo simulation when we looked at which house we were going to purchase. We do micro scale, and we do Monte Carlo simulations for giant global issues. So scale is not really a problem; the math for uncertainty is the same. Was there a specific issue you were thinking of that I could dive into further?

I can put that up if they can clarify. But in the meantime, I'd just like to reflect a bit on how difficult it often is for researchers to define the decision. We mentioned this in your book some time back, but I went into a research organization, interviewed most of the scientists, and asked each one in turn how their research supports decision making. That was a very tough question for most people to answer, and it requires reflection on the research. I think that's been your experience as well.

Right, and I don't discount the value of purely exploratory observations in science.
That's not a problem. But I would suggest to most researchers that if you're trying to get funding, and you're talking to agencies that might provide that funding, putting it in the context of decisions you would support makes a business case for your research. That's one reason to think about it: you're affecting the decisions of others, and the more explicit you can get about that, the better you'll be able to make a case to the people who might provide funding.

Yes. So I've got quite a long question here. In many environmental scenarios, the economic value is obscure, if not just intangible. In particular, what matters differs between different decision makers, like departments in government. Is there a need to have decision makers clarify their needs, and do we risk falling into the trap of low value of information by delivering information we think they need?

Oh, sure. Well, first off, the last sentence is a testable hypothesis; I would test that hypothesis, and I'm not sure that's the case. But the fact that economic impact is uncertain, even, as you said, obscure, is the very reason we do probabilistic analysis. If we didn't have uncertainty, we wouldn't need probabilistic methods, or really statistics at all. Most of statistics deals with making inferences from incomplete observations, samples of larger populations, and so on. So there's lots of room for uncertainty in the way we do things; it's the whole reason we do these sorts of things. But you also used the term obscure. If obscure means unknown or uncertain, we have a way of modeling that. If it also means ambiguous, that is something you can avoid: uncertainty is unavoidable to some degree, but ambiguity is not. We can figure out what we mean when we say things. So what do we mean by economic impact? When we claim that standards of living will increase, let's get specific about which standards of living and what the value of those standards of living is. If I say that everyone having access to clean drinking water is at least as valuable as improved safety in a work environment, that's actually a quantified choice: you've stated a preference quantitatively, and you can state preferences like that quantitatively. What's the trade-off: would you rather increase everyone's income by 2%, or half of the population's income by 40%?

Alex was expanding on that: he was thinking about biodiversity as an obscure, not just intangible, value. Maybe that's something you've come across in one of your problems, but I guess you would start by asking what impact you would expect to see if there's more or less biodiversity; that's probably the way you'd start to get at it.

Yes, right. You can imagine, and this is something we should be modeling in great detail, that a biosphere requires diversity. If you had only some fraction of the species you currently have, and you have uncertainty about the interactions and necessary dependencies among those species and those ecosystems, you might actually risk a more catastrophic collapse, which would have other impacts on humans as well. Not that you necessarily have to limit your valuation of economic impact to humans; you can value whatever you want to value.
There are lots of ways you can express values explicitly and quantitatively, and I do recommend stating your values explicitly and quantitatively. You tell me what to optimize and we'll try to optimize for that, but you do have to be explicit. It doesn't really help any kind of policy or program to be ambiguous about what it is we're trying to do. It's a great exercise, by the way, if you haven't thought about it: what does biodiversity mean? Are we just talking about density of species in a given area, or are we talking more specifically about particular combinations of ecosystems and their dependencies? How detailed do we want to get, and what is it that we're forecasting, to support what kinds of decisions?

I think there's an interesting point related to the earlier one about asking decision makers what they need. We're always conscious that we should ask decision makers what they need, but it's quite possible that decision makers may not know what they need if they've not done an uncertainty analysis around a certain decision. If we simply follow the stated needs of decision makers, we might go down completely the wrong track.

That's true. In fact, often when we start our analysis, we're working for someone who doesn't quite know what they want just yet. That's normal, but it's part of our job, and should be part of your jobs as well, to facilitate that conversation and help them get specific. You can't do any kind of controlled experiment in some area without being very specific about what you're trying to do, what you're trying to measure, what the conditions of the experiment are, and so on; the same is true for decision making. Get very specific about what the objectives are in a measurable and quantifiable sense.

So if a decision maker tells you, we need more and better weather data, you really have to go back and ask, well, what decision is that data supporting, and really work through and analyze the decision to come up with a sensible answer.

Right, that's a good point, Keith, because it seems we could interpret better weather data to mean a lot of things. Is it just more sensors, is that it? Or should I have better computational methods for the sensors I do have, to make inferences from them? Or maybe I should get completely different sensors, not just more sensors; maybe I should start measuring different things altogether. Maybe better weather data has something to do with other human activities, and how well are we measuring those? If someone says we're trying to measure something about weather because I want to know something about the impact on the electrical grid, well, maybe we should know something about the increasing use of electric vehicles; that would have quite a bearing on it. So yes, the more specific the better.

So I've got a question here which gets more into communicating uncertainty: how do you communicate complex models that inherently include uncertainty to governments? Is it a matter of just distilling the information to support the decision? You must have done this numerous times, hundreds of times, when it comes time to communicate the model results to the government or the user. What's your advice on that?
Well, this is actually a pretty complicated model; I'm summarizing it quite a bit right here. But when we show them this and say, we should go out and measure these things further, and that will help you reduce uncertainty about future policy decisions for restoring the Kubuqi Desert, they understand that. The model can be very complex; they're not, by the way, unfamiliar with very complex models. The financial and economic world uses very complex models, and there are engineering problems with very complex models. So they're not unfamiliar with people producing complex models; they just don't have to be exposed to all the complexity. That's partly your job, and they have to trust you. They're saying: I realize there's a lot more complexity you're not revealing to me, but give me the summary points. So here, you need more resources to go out and measure wages, because that's going to reduce your uncertainty about how we should continue to restore the Kubuqi Desert. That makes sense; they get that. When you put things in the context of improved decision making, I just think their understanding is going to improve. Often when people say they have difficulty communicating complex problems to decision makers, I think they might be misinterpreting their experience. I don't know that the decision makers are having difficulty with complexity per se; I think it may be that, even at the given level of complexity, you're not telling them things that are relevant to decision making. In my experience, if you stick to what's relevant to the decision, they can handle relative complexity. There are reasons why they're in the jobs they're in, typically.

Certainly one thing I've found that hits home with decision makers is when you start talking about the economic cost of measurements and how much value they're actually giving you; that seems to resonate pretty well.

Absolutely. Obviously, measuring daily wages for Kubuqi project-area laborers was not going to cost $1.6 billion; it was going to cost a few tens of thousands of dollars. And although, say, $30,000 or $40,000 may have sounded expensive at one point, it's trivial compared to the impact it had on the decision. That's the selling point.

So time is about up, Doug. I think there are some really valuable lessons here for all scientists, researchers, and policymakers, and something to really think through, so we're really grateful to you for coming on and giving this talk; I think it's fundamentally important. I'd just like to draw attention to the next webinar, by Professor Sam Savage, which will show you how to do these Monte Carlo simulations in a very simple way using very simple tools, and which will also cover the flaw of not incorporating uncertainty, the Flaw of Averages. Doug has also been heavily involved in the development of these tools through his random number generator, which is key behind them, so he's part of that group as well. So, unless you have any final comments, I'd like to thank you very much, and thank you all for attending; we look forward to the next one as well. Thanks.

Thanks for your time. I appreciate it. Thank you very much.