So in the last video we were talking about the trade-offs between different methods we can use to propagate uncertainty into forecasts. Specifically, we were looking at analytical approaches to propagating uncertainty, and found ourselves focusing on analytical methods that propagate uncertainty in terms of statistical moments, such as means, variances, and covariances. We found that we could get an exact analytical solution provided the models were strictly linear. That's a very limiting result, because in a lot of ecological problems the models we're interested in are nonlinear: there are interactions involving multiplications or divisions, or other functional forms such as exponential growth or saturating functions. When we're dealing with nonlinear functions there is a way to propagate uncertainty analytically, but it's hard to do in exact closed form. So instead of propagating the uncertainty through the exact nonlinear model, we approximate that model by taking its linear tangent. Essentially, we take the derivative of the model to estimate an approximate slope around the mean. Once we've approximated our nonlinear model with a linear approximation, we can then use the same rules we had previously for propagating uncertainty in linear models. From a mathematical perspective, what we're essentially doing is replacing the function we're interested in with what's called a Taylor series expansion of that function, and in particular keeping just the first few terms, which describe the mean and a linear approximation. When we do this, we find that the resulting solution is very similar to what we had for our linear model.
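To make the linear-tangent idea concrete, here's a minimal sketch (not from the video; the model, parameter values, and variable names are made up for illustration) that propagates uncertainty in a growth rate through an exponential-growth model using the first-order Taylor approximation, and compares it against a brute-force Monte Carlo estimate:

```python
import numpy as np

# Hypothetical example: propagate uncertainty in the growth rate r through
# a nonlinear model N_t = N0 * exp(r * t) using the linear tangent
# (first-order Taylor) approximation around the mean of r.
N0, t = 100.0, 5.0          # initial population and forecast horizon (made-up values)
r_mean, r_sd = 0.10, 0.02   # mean and standard deviation of the growth rate

f = lambda r: N0 * np.exp(r * t)          # the nonlinear model
dfdr = lambda r: N0 * t * np.exp(r * t)   # its derivative: the "tangent" slope

# First-order approximation: Var[f(r)] ~ f'(r_mean)^2 * Var[r]
approx_var = dfdr(r_mean) ** 2 * r_sd ** 2

# Brute-force check: simulate many draws of r and compute the variance directly
rng = np.random.default_rng(42)
mc_var = f(rng.normal(r_mean, r_sd, 100_000)).var()
```

Because the linearization is only exact at the mean, the approximation drifts from the Monte Carlo answer as the input uncertainty grows; with the small standard deviation used here the two agree closely.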
The key difference is that in our linear model we had specific slopes, while in the nonlinear model we instead have derivatives. Those derivatives may change depending on where you are in parameter space, but they're still something we can solve for analytically. So we again end up with a situation where each source of uncertainty we want to propagate into the forecast is represented by a direct component: the variance of that term multiplied by the square of the derivative describing how our output responds to that input. Essentially, when we propagate uncertainty there's a contribution coming from the sensitivity of the model to a component and from the uncertainty in that component. As before, we also need to include terms that account for the interactions, which arise as the product of the partial derivatives of each of two factors and the covariance between those two factors. One of the general concepts we can take home from this is that we can partition the uncertainty in our forecast into these two key components: the uncertainties of the inputs and how our outputs respond to those inputs. So essentially we're decomposing the predictive uncertainty into the sensitivities of our system and the uncertainties in the inputs, and we'll come back to that concept later when we talk about how we analyze uncertainties to try to improve models.
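The partitioning described above can be sketched in code. In this hypothetical example (the model f(x, y) = x·exp(y) and all numbers are invented for illustration), each input contributes a direct term, its variance weighted by the squared sensitivity, plus an interaction term built from the product of the partial derivatives and the covariance:

```python
import numpy as np

# First-order variance propagation for a nonlinear model f(x, y) = x * exp(y):
# Var[f] ~ (df/dx)^2 Var[x] + (df/dy)^2 Var[y] + 2 (df/dx)(df/dy) Cov[x, y]
mu = np.array([2.0, 0.5])        # means of the inputs x and y (made-up)
Sigma = np.array([[0.04, 0.01],  # covariance matrix of (x, y) (made-up)
                  [0.01, 0.09]])

x, y = mu
grad = np.array([np.exp(y), x * np.exp(y)])  # partial derivatives at the mean

# Direct terms: each input's variance times the squared model sensitivity
direct = grad ** 2 * np.diag(Sigma)

# Interaction term: product of the two partials times their covariance
interaction = 2 * grad[0] * grad[1] * Sigma[0, 1]

total_var = direct.sum() + interaction
```

Writing it this way keeps the decomposition visible: `direct` holds each input's own contribution and `interaction` the covariance term, which is exactly the partitioning used later when analyzing which uncertainties to reduce. The same total can be computed compactly as the quadratic form `grad @ Sigma @ grad`.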