Expert elicitation is a rigorous way of assessing probability distributions from experts. But as Granger Morgan has said, when done poorly, it can lead to useless or even misleading results. When it's done really well, it gives us probabilistic information that we couldn't get any other way. There are three major uses of expert elicitation in ecological forecasting. The first is developing probability distributions for priors. The second is developing probability distributions for likelihoods. The third isn't related to probabilities but to formally understanding the process and structure of the model through causal relationships, using conceptual modeling or mental modeling. I'll focus on how we think about eliciting probabilities for priors and likelihoods. Priors in particular are one of the places where expert elicitation is especially useful for ecological forecasting. Most of the time, what you'll see in the literature is the use of uninformative priors or uniform distributions. If you're a statistician with no information about ecological systems, that can be a very reasonable assumption. But many of us with ecological knowledge know something about these systems. Developing prior distributions that put reasonable bounds on the phenomenon of interest is a powerful way of constraining the model and providing information so that we can say something meaningful right from the beginning. It also means that, as we develop these Bayesian forecasting approaches, we can avoid posterior distributions that reflect spurious relationships based solely on the data, because the expert-elicited priors carry real information. Combining those priors with the likelihoods improves the way that we forecast.
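To make the effect of an elicited prior concrete, here is a minimal sketch of a conjugate Beta-Binomial update. The scenario, the Beta(2, 8) elicited prior, and the detection counts are all hypothetical illustrations, not from the source:

```python
def beta_posterior(alpha, beta, successes, failures):
    # Conjugate Beta-Binomial update: a Beta(alpha, beta) prior combined
    # with binomial data yields Beta(alpha + successes, beta + failures).
    return alpha + successes, beta + failures

def beta_mean(alpha, beta):
    # Mean of a Beta(alpha, beta) distribution.
    return alpha / (alpha + beta)

# Hypothetical data-limited example: 2 detections of a species in 5 site visits.
detections, non_detections = 2, 3

# Uninformative uniform prior: Beta(1, 1).
a_u, b_u = beta_posterior(1, 1, detections, non_detections)

# Expert-elicited prior: experts judge occupancy to be near 0.2 with
# moderate confidence, encoded here (as an assumption) as Beta(2, 8).
a_e, b_e = beta_posterior(2, 8, detections, non_detections)

print(beta_mean(a_u, b_u))  # 3/7  ≈ 0.429 under the uniform prior
print(beta_mean(a_e, b_e))  # 4/15 ≈ 0.267, constrained by the elicited prior
```

With only five observations, the uniform prior lets the noisy data dominate, while the elicited prior keeps the posterior within ecologically reasonable bounds.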
This means that expert elicitation is particularly useful when we're working in data-limited situations. There are a number of ecological problems where we just don't have the data we would like: invasive species, endangered and threatened animals and plants, and infectious diseases. These are some of the cases where expert elicitation can help us develop better ecological forecasts. One example where model-based forecasting and expert elicitation have been combined is weather forecasting. There was actually a natural experiment built in, because in the early days, in the 1960s, when we were just building weather models, the models didn't perform as well as we would like. However, there were a number of local weather forecasters who had been predicting the weather for years and providing probability distributions over the conditions they expected. Over time, you could see that in the early days the experts outperformed the models, but the models got better, and we were able to assess how well the models did versus the experts using scoring rules and skill assessments. One of the things that was really important in this process is that the experts were an integral part of the forecasting system: they understood some of the limitations of the model and could contextualize the results for their particular location. In addition to being a great natural experiment, one of the things that really accelerated the learning, both for the experts and for the forecast models, is that you would make a forecast, whether by an expert or by a model, and you would observe the outcome the next day.
You would gain more information, make another forecast, and observe it again the next day, so you accumulated a huge amount of information that allowed you, as an expert, to learn how well you were doing and to improve over time. And you could see that: experienced weather forecasters performed quite well. Weather models had the same benefit of forecasting, observing, and improving over time. That meant that as our ability to forecast improved, we could really judge how well the models did and zero in on where improvements could be made. So I think combining ecological forecasts with formalized expert judgment in this way is one of the key ways we can improve ecological forecasting. One of the things I want to highlight are some of the problems that occur, or things an ecologist should be aware of, when designing or conducting an expert elicitation. The first is that this is not a trivial process. We can certainly think about our own expertise and judgments when developing these models and make those assumptions explicit, but formally incorporating probability distributions into a model requires thoughtful consideration: designing the elicitation takes as much care as designing an ecological experiment out in a particular ecosystem. That means you really want to engage social scientists and decision scientists to help you design an expert elicitation that yields a prior or a likelihood that is meaningful for the model you're constructing and the system you're studying, but that is also robust and rigorous.
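The scoring rules mentioned above can be as simple as the Brier score, which measures how close probabilistic forecasts were to what actually happened. Here's a small sketch comparing a hypothetical expert against a hypothetical model on daily rain forecasts; all the numbers are invented for illustration:

```python
def brier_score(forecasts, outcomes):
    # Mean squared difference between forecast probabilities and the
    # observed 0/1 outcomes; lower scores indicate better forecasts.
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical probability-of-rain forecasts and observed outcomes
# (1 = rain, 0 = no rain) over five days.
observed = [1, 0, 1, 1, 0]
expert_probs = [0.8, 0.3, 0.7, 0.9, 0.2]
model_probs = [0.6, 0.5, 0.5, 0.7, 0.4]

print(brier_score(expert_probs, observed))  # ≈ 0.054
print(brier_score(model_probs, observed))   # ≈ 0.182 (worse, in this example)
```

Scoring every forecast against the observed outcome, day after day, is exactly the rapid feedback loop that let both the weather forecasters and the weather models improve.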
As we think about the elements needed to develop a robust and rigorous expert elicitation process, some of the problems with elicitations identified by Granger Morgan in his 2014 Proceedings of the National Academy of Sciences paper are as follows. First, you have to be careful to focus on a topic where an expert could actually make an informative predictive judgment. That's not the case for every scenario where one might want such data, so you want to set up the problem such that someone could actually answer a probabilistic question. Additionally, we often like to use qualitative uncertainty language, terms like "likely," "about normal," or "more likely than not." These terms are imprecise, and experts may associate different probabilities with those qualitative descriptors. So it's really important to avoid qualitative language and to focus on precise quantitative probabilities. When we do this with experts, we often rely on mental shortcuts, and most of the time we're overconfident, so we construct probability distributions that are too narrow. These phenomena are fairly well known, and some of these heuristics can be minimized through the design of your elicitation protocol. In other situations, it's impossible to remove some of the mental shortcuts from the process; in those cases, we need to be aware of how they might affect the results and of ways we might correct for them, either through design or by modifying the probability distributions after the elicitation has been conducted. It's also really important to get the right experts. If you're capturing a phenomenon, you want multiple experts who truly represent the range of perspectives on that phenomenon of interest.
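One simple way to see what a post-elicitation correction for overconfidence might look like is to fit a distribution to an expert's stated interval and then widen it. This is a hypothetical correction scheme, not a method from the source; the interval, the quantity, and the choice to reinterpret a stated 90% interval as a 70% one are all illustrative assumptions:

```python
from statistics import NormalDist

def fit_normal_from_interval(lo, hi, coverage):
    # Fit a normal distribution whose central `coverage` interval is [lo, hi].
    z = NormalDist().inv_cdf(0.5 + coverage / 2)
    mu = (lo + hi) / 2
    sigma = (hi - lo) / (2 * z)
    return mu, sigma

# An expert states a 90% interval of [10, 20] for some quantity of interest.
mu_raw, sd_raw = fit_normal_from_interval(10, 20, 0.90)

# Hypothetical overconfidence correction: treat the stated 90% interval
# as if it actually covered only 70%, which widens the fitted distribution.
mu_adj, sd_adj = fit_normal_from_interval(10, 20, 0.70)

print(sd_raw, sd_adj)  # the adjusted standard deviation is larger
```

The appropriate amount of widening, if any, is a design question for the social and decision scientists involved; the point is only that too-narrow elicited distributions can be adjusted after the fact.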
Additionally, you don't want to shortchange the development, modification, and testing of your elicitation protocol. This is one of the most important steps for making sure the elicitation is robust, and you want your protocol to capture all of the relevant information needed as part of this process. That means it's perfectly fine for an expert to sit in front of a computer or at her desk so that she can access information that would help her make a more informed probabilistic assessment. When designing these elicitations, you want to make sure you're capturing uncertainty and representing the correct functional form. You want a diversity of expert opinion, which is captured in part through the choice of experts and making sure you're getting the right people into the elicitation. And finally, if you've captured input from multiple experts, there's a question of whether it's useful to aggregate that expert judgment into a single probability distribution across all of the experts, or whether it's more useful to keep each expert's probability distribution separate.
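If you do choose to aggregate, one common approach is a linear opinion pool: a weighted average of the experts' distributions. Here's a minimal sketch over a discrete outcome; the three expert distributions and the outcome categories are hypothetical:

```python
def linear_opinion_pool(distributions, weights=None):
    # Aggregate expert probability distributions over the same discrete
    # outcomes by weighted averaging (a linear opinion pool).
    n = len(distributions)
    if weights is None:
        weights = [1.0 / n] * n  # equal weights by default
    pooled = [0.0] * len(distributions[0])
    for w, dist in zip(weights, distributions):
        for i, p in enumerate(dist):
            pooled[i] += w * p
    return pooled

# Three hypothetical experts assessing P(low, medium, high) for some outcome.
experts = [
    [0.6, 0.3, 0.1],
    [0.4, 0.4, 0.2],
    [0.2, 0.5, 0.3],
]
print(linear_opinion_pool(experts))  # ≈ [0.4, 0.4, 0.2]
```

Whether to pool at all, and how to weight the experts if you do, is itself a design decision: pooling gives a single distribution to feed a model, while keeping the distributions separate preserves genuine disagreement among the experts.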