All right, so another class of assumptions concerns observation error: the regression model assumes all of the error is in the y's. You figure out what the best-fit line is, and you calculate residuals by looking at the difference between each observation and the line. There's always some uncertainty associated with the parameters you estimate, and you can make that uncertainty smaller by collecting more observations. But whatever noise is left is always assumed to be in y, in the observation itself.

Sometimes, though, you don't measure x very well either, so there's actually error in our predictor variables. In fact, a lot of the time there's probably error in our predictor variables, and there are a few frequentist approaches for capturing that. Sometimes it's not a big deal, and sometimes it can be. You can imagine that if there's error in x and you're using x to make a prediction, that error is going to propagate through and end up in your prediction as well. If you haven't somehow accounted for it or described it, your prediction will be overconfident.

So errors in variables is a way to deal with the fact that we can often have errors in x, where we have uncertainty in our predictor variables as well as in our response variables. The classic assumption is that it's all in the response variables, but in ecology the reality is that it's often also in our predictors. So how do you deal with that? The Bayesian framework, again because of its probabilistic structure, gives us the flexibility to build that into the model. In this case, what we've done is we've got the same kind of linear model we've been working with, where the parameters describe the slope, the intercept, and the observation error (the variance around the line). But we've also got a model for the predictor variable x itself, described by its own set of parameters, and that latent x in turn informs y.
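The propagation point above is easy to see in simulation. Here's a quick sketch (the slope, sample size, and noise levels are illustrative assumptions, not numbers from the lecture) showing that when you fit an ordinary regression to a predictor measured with error, the estimated slope is pulled toward zero, so anything predicted from that slope inherits the unacknowledged error:

```python
import numpy as np

rng = np.random.default_rng(42)
n, true_slope = 5000, 2.0

# "True" predictor and a response with error only in y,
# as the standard regression model assumes.
x_true = rng.normal(0.0, 1.0, n)
y = true_slope * x_true + rng.normal(0.0, 0.3, n)

# Now pretend we could only measure x imperfectly (sd = 1.0).
x_noisy = x_true + rng.normal(0.0, 1.0, n)

b_clean = np.polyfit(x_true, y, 1)[0]   # recovers roughly the true slope
b_noisy = np.polyfit(x_noisy, y, 1)[0]  # attenuated toward zero

print(b_clean, b_noisy)
```

With these settings the classical attenuation factor is var(x) / (var(x) + measurement variance) = 1 / (1 + 1), so the noisy-x fit lands near half the true slope, even though the relationship itself hasn't changed at all.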
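One way to sketch the errors-in-variables model described above in code, without any probabilistic-programming library, is to integrate the latent true x's out analytically (everything here is Gaussian) and run a small random-walk Metropolis sampler over the remaining parameters. This is an illustrative toy, not the lecture's actual model code: the priors, proposal scale, simulated data, and the assumption that the measurement error on x is known are all mine.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- simulate data with noise in BOTH x and y ---
n, a_true, b_true = 200, 1.0, 2.0
mu_x, w_x = 0.0, 1.0   # model for the true (latent) predictor
tau = 0.5              # known measurement sd on x
s_y = 0.3              # observation sd on y
x_latent = rng.normal(mu_x, w_x, n)
x_obs = x_latent + rng.normal(0.0, tau, n)
y = a_true + b_true * x_latent + rng.normal(0.0, s_y, n)

def loglik(theta):
    """Marginal log-likelihood of (x_obs, y) with the latent x integrated out.

    With x ~ N(mu, w^2), x_obs = x + N(0, tau^2), y = a + b*x + N(0, s^2),
    the pair (x_obs, y) is bivariate normal.
    """
    a, b, mu, logw, logs = theta
    w2, s2 = np.exp(2 * logw), np.exp(2 * logs)
    v_xx = w2 + tau**2
    v_yy = b**2 * w2 + s2
    v_xy = b * w2
    det = v_xx * v_yy - v_xy**2
    if det <= 0:
        return -np.inf
    dx, dy = x_obs - mu, y - (a + b * mu)
    quad = (v_yy * dx**2 - 2 * v_xy * dx * dy + v_xx * dy**2) / det
    return -0.5 * np.sum(quad) - 0.5 * n * np.log(det)

def logpost(theta):
    # Vague independent N(0, 10^2) priors on all parameters (an assumption).
    return loglik(theta) - 0.5 * np.sum(theta**2 / 100.0)

# --- random-walk Metropolis over (a, b, mu, log w, log s) ---
theta = np.array([0.0, 1.0, 0.0, 0.0, 0.0])
lp = logpost(theta)
draws = []
for i in range(20000):
    prop = theta + rng.normal(0.0, 0.05, 5)
    lp_prop = logpost(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if i >= 10000:  # keep post-burn-in draws
        draws.append(theta.copy())
draws = np.array(draws)

b_naive = np.polyfit(x_obs, y, 1)[0]  # ordinary fit, ignores error in x
b_bayes = draws[:, 1].mean()          # posterior mean slope
print(b_naive, b_bayes)
```

The key structural point matches the lecture: the predictor gets its own model (here `mu` and `w` for the latent x, with `tau` describing how noisily we observed it), and that model feeds into the likelihood for y. The naive slope is attenuated, while the posterior mean slope sits much closer to the truth because the uncertainty in x is built into the model rather than ignored.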