Statistical inference is most often engaged in trying to separate signal, that is, a model of the world and how it works, from noise, the differences between that model and how we observe the world. In statistical inference, that leftover difference is often treated as random noise. The classic assumption of linear regression, one of the most commonly used techniques in statistics, is that uncertainty arises from observation error: all of the error is in the observation component. You have a model that describes the signal, often a mean or a line, and any time your observations differ from that mean, you attribute the difference to observation error, something that wasn't measured correctly. The general assumptions are that the error is normal, and so symmetric about zero; that the errors are independent, so knowing the error at one observation tells you nothing about the error at the next; and that the variance is constant over time. These techniques also don't handle missing data intuitively, or at all. One of the things we're doing here is thinking about uncertainty as more than just noise. Most often you don't know the process perfectly, and you certainly don't measure all of the response variables perfectly, and yet not all of the variation in your model is due to measurement error; it isn't necessarily error at all. There are many reasons that data, meaning the observations or response variables, vary. They vary because of observation error, because you don't measure something perfectly. They can also vary because they were taken at different times or different sites, because of a weather difference or a different slope. There are lots of reasons that data, or responses, vary.
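The classic regression setup described above can be sketched in a few lines. This is a minimal illustration with simulated data (all numbers here are made up for the example): the fitted line is the signal, and everything left over is lumped into a single observation-error term that is assumed normal, independent, and of constant variance.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate data under the classic assumptions: a linear signal plus
# independent, mean-zero, constant-variance (homoscedastic) normal noise.
n = 200
x = np.linspace(0.0, 10.0, n)
true_intercept, true_slope, sigma = 2.0, 0.5, 1.0
y = true_intercept + true_slope * x + rng.normal(0.0, sigma, size=n)

# Ordinary least squares via a design matrix with an intercept column.
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Everything the line does not explain is treated as observation error.
residuals = y - X @ beta_hat
sigma_hat = residuals.std(ddof=2)

print(f"intercept ~ {beta_hat[0]:.2f}, slope ~ {beta_hat[1]:.2f}, sigma ~ {sigma_hat:.2f}")
```

The point of the sketch is the last two lines: whatever the line misses is estimated as a single noise term, with no room for process differences between times or sites.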
We try to put as much of that variation into a model in an explanatory way as possible. But there is often leftover variance that we know has ecological relevance and isn't just random noise. When that happens, which is probably most of the time, then using only the signal we've carved out of the data with the model to make predictions or forecasts outside of where we have data, whether in time or in space, is not going to capture the true future. You are not going to make a precise or accurate forecast, one with a high probability of capturing a future state, if you don't describe all of the real reasons there is variation in the data. Another way to say that: real data vary for a lot of reasons, and some of those reasons are consistent with the classic assumptions of linear regression, but many are not. When they're not, the inability to capture the many real processes that cause data to vary makes forecasts, and even predictions, less powerful and less likely to be accurate. Admittedly, acknowledging that uncertainty as something real often makes confidence and predictive intervals on forecasts much broader. Uncertainty may grow when you account for it rather than assuming it is all noise, and it will certainly grow as you get further and further from the data. But you are more likely to be accurate, more likely to capture the true future state. Bayesian methods, and probabilistic models generally, are a framework with the flexibility to treat all unknowns as random variables, including the known unknowns, the things you know you don't know. For instance, looking at fecundity in trees, you know that fecundity generally increases as trees get bigger, but in some regions that relationship may look a little different.
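One way to see why ignoring process variation produces overconfident forecasts is a small simulation. This is a hedged sketch rather than any model from the discussion: a random walk with drift stands in for the ecological process (all parameter values are invented), and we compare a forecast interval that admits only observation error against one that propagates process error at every step ahead.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random walk with drift: the state itself wanders (process error),
# and we measure it imperfectly (observation error).
drift, proc_sd, obs_sd = 0.2, 0.5, 0.3
horizon = 20          # steps ahead of the last observation
n_sims = 5000
last_state = 10.0     # current state estimate

# Forecast 1: observation error only -- pretend the process is known
# exactly, so all forecast uncertainty is measurement noise.
obs_only = last_state + drift * horizon + rng.normal(0, obs_sd, n_sims)

# Forecast 2: propagate process error at every step, then add
# observation error when the future state is "measured".
states = np.full(n_sims, last_state)
for _ in range(horizon):
    states = states + drift + rng.normal(0, proc_sd, n_sims)
full = states + rng.normal(0, obs_sd, n_sims)

def width(samples):
    lo, hi = np.percentile(samples, [2.5, 97.5])
    return hi - lo

print(f"95% interval width, observation error only: {width(obs_only):.2f}")
print(f"95% interval width, process + observation:  {width(full):.2f}")
```

The second interval is much wider, and its width grows with the horizon, but it is the one with a real chance of containing the true future state; the first is precise and wrong.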
That function looks different than in other regions, and you don't necessarily need to know exactly why. But that difference itself is not random noise; it is not observation error; it is due to region. So there are ways to partition uncertainty into things we don't know well enough to fit as fixed effects in a model, but whose uncertainty we can still capture: through hierarchical modeling, through random effects, and through separating process error from observation error more coherently.
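As an illustration of capturing regional differences with random effects, here is a simple partial-pooling sketch across hypothetical regions (all names and values are invented, and the shrinkage formula is a basic empirical-Bayes approximation rather than a full hierarchical Bayesian fit): each region's estimate is pulled toward the overall mean, with region-to-region differences treated as a random effect rather than as observation noise.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: regions share a common mean response, but each
# region is shifted by a random regional effect we can't explain with
# fixed covariates.
n_regions, n_per = 6, 15
grand_mean, region_sd, obs_sd = 5.0, 1.5, 1.0
region_effects = rng.normal(0.0, region_sd, n_regions)

# Observations: region-level signal plus observation error.
y = (grand_mean + region_effects)[:, None] + rng.normal(0, obs_sd, (n_regions, n_per))

# No pooling: each region estimated entirely on its own.
no_pool = y.mean(axis=1)

# Partial pooling: each region's mean is shrunk toward the overall mean
# in proportion to how noisy its own estimate is relative to the
# between-region variance.
overall = y.mean()
shrink = region_sd**2 / (region_sd**2 + obs_sd**2 / n_per)
partial_pool = overall + shrink * (no_pool - overall)

for r in range(n_regions):
    print(f"region {r}: no-pool {no_pool[r]:.2f} -> partial-pool {partial_pool[r]:.2f}")
```

The between-region spread is real structure, not measurement error, and the hierarchical treatment keeps it in the model as quantified uncertainty instead of discarding it as noise.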