In this lecture I want to make the idea of the forecast-analysis cycle a bit more concrete by going over what I believe to be the simplest possible example of implementing it that can be used for real-world problems. In going through the simplest possible forecast-analysis problem I'm going to make a lot of simplifying assumptions.

First, let's start thinking about the analysis step. For the analysis step we need a prior, which is our forecast. I'm going to assume that prior follows a normal distribution with some prior mean and prior standard deviation that captures the uncertainty in our forecast. I'm going to assume that we've already made our forecast, so that prior mean and variance are known. Next, to combine that prior with the data I need to write down a likelihood. I'm going to assume that likelihood is also normal and that it follows a very simple form: the observations are just normally distributed around the true state of the system with some observation error. I'm likewise going to assume that the observation error is known, which is not an unreasonable assumption because a lot of data products do report their uncertainties.

If I take this normal prior from the forecast and a normal likelihood from the data, we benefit from the conjugacy between the normal prior and the normal likelihood, which gives us a normal posterior. Furthermore, that normal posterior has a simple analytical solution. If I express the uncertainties, the observation error in the data and the prior variance in the forecast, as precisions instead, where precision is just one over the variance, then the posterior precision is just the sum of the precisions. So the posterior precision for the analysis step is just the precision of the observations plus the precision of the forecast, and if I want to get back to a variance I can just take one over that term. The mean of the analysis is then just a weighted average between the mean of the forecast and the mean of the data, each weighted by its precision. The data get a weight equal to the precision of the data divided by the total precision (the data precision plus the forecast precision), and likewise the prior forecast gets a weight equal to its precision divided by the total precision.

This leads to two important intuitive pieces of understanding that are mathematically exact for this simple problem but carry through conceptually to more complex problems. The first is that when we perform the analysis and update the forecast with new information, that updated estimate will always be more precise, because we take the prior forecast and add more information to it; literally, we are summing the precision of the data with the precision of the forecast, so we always end up with a more precise analysis. The second intuition is that the contribution of information from the forecast and the contribution of information from the new data are each weighted by their uncertainties. If I have data that is very noisy, it will only have a little bit of influence on the forecast. By contrast, if I have data that is very precise but a model that is very uncertain, then the analysis will be pulled very strongly in the direction of that high-quality data.
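To make this concrete, here is a minimal sketch of the analysis step for a single scalar state, assuming the normal prior and normal likelihood described above; the function and variable names (analysis_step, mu_f, var_f, etc.) are illustrative, not from any particular library.

```python
def analysis_step(mu_f, var_f, y, var_obs):
    """Normal-normal analysis: combine a normal forecast prior
    (mean mu_f, variance var_f) with a normal observation y whose
    error variance var_obs is assumed known."""
    prec_f = 1.0 / var_f           # forecast precision
    prec_obs = 1.0 / var_obs       # observation precision
    prec_a = prec_f + prec_obs     # posterior precision = sum of precisions
    var_a = 1.0 / prec_a           # back to a variance
    # posterior mean = precision-weighted average of forecast and data
    mu_a = (prec_f * mu_f + prec_obs * y) / prec_a
    return mu_a, var_a

# Noisy data barely move the forecast; precise data pull it strongly
print(analysis_step(mu_f=10.0, var_f=1.0, y=12.0, var_obs=25.0))   # ~ (10.08, 0.96)
print(analysis_step(mu_f=10.0, var_f=1.0, y=12.0, var_obs=0.04))   # ~ (11.92, 0.04)
```

Note in the two example calls how the same observation either nudges or dominates the analysis depending only on its reported error variance, which is exactly why the quality of those uncertainty estimates matters.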
This, I think, has an important take-home message for all of us, regardless of whether we're making ecological forecasts or just producing ecological data: to be able to do any sort of data assimilation, to be able to integrate observations into models and forecasts, it is critical that we have estimates of uncertainty on those data, and it is also critical that we have estimates of uncertainty on our models. Furthermore, those estimates need to be robust. If we have an estimate of uncertainty in the data that is very inaccurate, we might give very inappropriate weight to that data. For example, if I have some observations and I report just some sort of calibration uncertainty on those data, but there is really a huge amount of sampling uncertainty, I might give too much weight to the data and falsely represent their information contribution.

We've now talked through how the simplest possible analysis step works, combining a normal forecast with a normal likelihood to give us a normal analysis. We next want to think about how the forecast step works: we want to make an updated forecast from that updated analysis. In the simplest possible case I'm going to assume that the model we want to forecast with is simply linear. Because I'm making the assumption that the forecast model is linear, I can use the analytical moments approach to uncertainty propagation to analytically transform the uncertainty in the current analysis state into the uncertainty in the forecast. The forecast mean is just what we would get from plugging the mean of the analysis into the forecast model, and the forecast uncertainty has two components. First, we take the current analysis uncertainty and inflate or decrease it according to the slope of the model; because it's a simple linear model, as we learned when we covered uncertainty propagation, the variance in the forecast is just the slope squared times the variance in the input, in this case the analysis posterior, which serves as the initial condition. So we have a contribution from the initial condition uncertainty that goes into the forecast, and then we also have a contribution from the process error, which in the simplest possible case we assume is also normally distributed with a variance that is known, a very strong assumption for a lot of ecological problems. But while it's a strong assumption, it does result in an analytically tractable approach to propagating uncertainty in the forecast and an analytically tractable approach to updating that forecast when new information becomes available.

This very simple, analytically tractable forecast-analysis cycle, which makes the simplifying assumptions that the observation errors are normal, the process errors are normal, and the model is linear, goes by the name of the Kalman filter. It is one of the first examples of data assimilation and will form the basis for the more complex data assimilation problems that we'll cover next.
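As a minimal sketch of the full cycle, here is the scalar (one-state) version of this forecast-analysis loop, assuming a linear model of the hypothetical form x[t+1] = slope * x[t] + intercept with known slope, intercept, observation error variance, and process error variance; all names are illustrative.

```python
def forecast_step(mu_a, var_a, slope, intercept, var_proc):
    """Linear forecast: push the analysis mean through the model and
    propagate its variance analytically (slope**2 * variance), then add
    the known process error variance."""
    mu_f = slope * mu_a + intercept          # forecast mean from the model
    var_f = slope**2 * var_a + var_proc      # initial-condition + process uncertainty
    return mu_f, var_f

def kalman_filter_1d(mu0, var0, observations, var_obs, slope, intercept, var_proc):
    """Scalar Kalman filter: alternate the forecast and analysis steps over a
    sequence of observations; None marks a time step with no observation."""
    mu, var = mu0, var0
    states = []
    for y in observations:
        mu, var = forecast_step(mu, var, slope, intercept, var_proc)  # forecast
        if y is not None:                                             # analysis
            prec = 1.0 / var + 1.0 / var_obs
            mu = (mu / var + y / var_obs) / prec
            var = 1.0 / prec
        states.append((mu, var))
    return states

# Example: a slowly declining state observed with noise; one missing observation
obs = [9.5, None, 8.8, 8.1]
print(kalman_filter_1d(mu0=10.0, var0=1.0, observations=obs,
                       var_obs=0.5, slope=0.95, intercept=0.0, var_proc=0.1))
```

In this sketch, a missing observation simply skips the analysis, so the variance grows through the forecast step until the next data point arrives, which mirrors the intuition that the analysis is always at least as precise as the forecast it updates.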