Errors-in-variables regression analysis is an older technique for dealing with measurement error in the predictor variables in regression analysis. While this technique is a bit old, it still has some relevant applications today. The key assumption in errors-in-variables regression analysis is so-called classical errors in variables, and what this means is that the measurement error in the variables is uncorrelated with the variables themselves. So if we have a variable b that we are interested in and an indicator b1, then the error in b1 is uncorrelated with the b that is of interest. The measurement error in indicators causes attenuation. If we have x and y that are perfectly measured, with no noise in the measurement, the correlation might be 0.5. If we have measurement error, so there is random noise in x and y, then the correlation is attenuated. Errors-in-variables regression deals with this kind of scenario.

So how do we do errors in variables? If we have a latent variable model here, where we are interested in the correlation or regression between a and b but we measure a1 and b1, which contain measurement error, then in the model-implied covariance matrix, importantly, the diagonal elements are affected by measurement error. The variance of a1 is a bit larger than the variance of a because of theta 1, the variance of the measurement error term, and the variance of b1 is a bit larger than the variance of b because of theta 2, the variance of the second indicator's error term. What we try to do in errors-in-variables regression is to adjust the covariances by eliminating the measurement errors. So if we know that the measurement error variance here is theta 1 and the measurement error variance here is theta 2, then we can subtract those from the diagonal, and that solves the measurement error problem. So how does that work in practice? Let's assume that we observe this covariance matrix here.
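Before going through that worked example, the attenuation effect described above can be sketched in a quick simulation. This is a minimal sketch with made-up numbers, not from the lecture: a true correlation of 0.5 and indicators that are 80% reliable.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True scores x and y with correlation 0.5 (no measurement error).
cov_true = np.array([[1.0, 0.5],
                     [0.5, 1.0]])
x, y = rng.multivariate_normal([0.0, 0.0], cov_true, size=n).T

# Classical errors in variables: noise independent of the true scores.
# Error variance chosen so each indicator is 80% reliable.
rel = 0.8
err_sd = np.sqrt((1 - rel) / rel)
x1 = x + rng.normal(0.0, err_sd, n)
y1 = y + rng.normal(0.0, err_sd, n)

r_true = np.corrcoef(x, y)[0, 1]   # close to 0.5
r_obs = np.corrcoef(x1, y1)[0, 1]  # attenuated, close to 0.5 * 0.8 = 0.4
print(r_true, r_obs)
```

The observed correlation shrinks toward zero by roughly a factor of the reliability; undoing that shrinkage is what the errors-in-variables adjustment does.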
The variance of a1 is 1.2, the variance of b1 is 1.2, and the covariance is 0.3. Now the unreliability of a1 and b1, under the classical errors-in-variables assumptions, affects only this diagonal here, as shown before. Let's say that we know, or we assume, that the reliability of these indicators is 80%. What we then do is adjust the covariance matrix by multiplying the diagonal elements by the known or assumed reliabilities. So this variance becomes 0.96, this variance becomes 0.96, and then we estimate a regression model using this error-adjusted covariance matrix. This is how classical errors-in-variables regression analysis works, and it corrects for measurement error. The same can also be implemented in modern structural equation modeling software, and this is perhaps the more common way of implementing the technique today. We again assume that reliability is 80%, and then we fix the error variances of these indicators to be 0.2 times the indicator variance. If the indicators are 80% reliable, then 20% of the variation is error variance, so the error variance is going to be 20% of the indicator variance. The indicator variance is 1.2, so the error variances are both going to be 0.24. We fix them to those values and estimate the model, and we get all the regression coefficients using these fixed error variances; the estimates are consistent. This technique has one problem: the reliabilities are typically not known but are estimated from the data, and when we estimate them from the data and fix them in the SEM estimation, the SEM software does not know that they are estimated quantities rather than known quantities, and that causes our standard errors to be inconsistent.
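The diagonal adjustment and the resulting slope can be sketched with plain matrix arithmetic. The numbers follow the example above; the 80% reliability is assumed, not estimated.

```python
import numpy as np

# Observed covariance matrix of [a1, b1] from the example.
S = np.array([[1.2, 0.3],
              [0.3, 1.2]])
rel = 0.8  # assumed reliability of both indicators

# Classical EIV correction: only the diagonal is contaminated by error,
# so multiply the variances by the reliability (equivalently, subtract the
# error variance 0.2 * 1.2 = 0.24 from each diagonal element).
S_adj = S.copy()
np.fill_diagonal(S_adj, np.diag(S) * rel)

# Simple regression of b on a from a covariance matrix: slope = cov / var.
beta_naive = S[0, 1] / S[0, 0]        # 0.3 / 1.2  = 0.25 (attenuated)
beta_eiv = S_adj[0, 1] / S_adj[0, 0]  # 0.3 / 0.96 = 0.3125 (corrected)
print(beta_naive, beta_eiv)
```

The corrected slope is larger than the naive one, as expected when attenuation is removed from the predictor's variance.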
There are some solutions to this problem, like the one Oberski and Satorra discuss in this article, but the solutions tend to be a bit technical, and it might be difficult to find any implementation of them. You would basically have to program your own standard errors or estimation technique, which is not feasible for most researchers. The errors-in-variables technique is also discussed in Kline's book, and he gives examples of how you can fix the measurement error of a predictor variable, or the measurement error of a mediator variable, to get consistent estimates even if you have measurement error. Importantly, any single-indicator latent variable that you estimate in a model is basically an errors-in-variables estimate. This technique has two common use cases. The first use case is meta-analytic structural equation modeling. In meta-analysis you often have estimated reliabilities of variables, and you observe correlations of those error-contaminated variables. So when you fit a structural equation model to a matrix that you assume contains measurement error, you can use errors in variables to correct for the measurement error. This article by Burke and co-authors, for example, applied the technique to a subset of their variables, indicated by circles; the boxes are assumed to be perfectly reliable, and there is no correction for them. Another interesting application is small-sample research, and this is an area where there have been some really encouraging results recently. Rosseel's book chapter is a good summary; it is in an open access book, so it is accessible to anyone, and he summarizes this work. The key finding is that we know that maximum likelihood estimates of a latent variable model are the most efficient in large samples, but maximum likelihood might not work as well in small samples.
And Rosseel's research, and his co-authors' research, shows that there are small-sample scenarios where taking a sum of the indicators, calculating the reliability of that sum, and applying errors-in-variables regression analysis actually produces better results than maximum likelihood estimation. So this is a useful finding. Errors-in-variables regression analysis can also be used for diagnostics. If you have a complicated latent variable model with measurement models for the latent variables and you can't get it to converge, then converting it to an errors-in-variables regression model might get you some results that can serve diagnostic purposes, because there is simply less to estimate when you estimate the reliabilities first and then just apply the error correction to the main estimates, instead of trying to estimate everything in one go. So this is useful for small samples, for working with meta-analytic correlations, or for diagnostic purposes.
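The score-plus-reliability recipe described above can be sketched as follows. Everything here is an assumption for illustration: four indicators with unit error variances, Cronbach's alpha as the reliability estimate, and a large simulated sample so that the consistency of the corrected slope is visible.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5_000, 4  # sample size and number of indicators of the predictor
beta_true = 0.5

a = rng.normal(size=n)                             # latent predictor
items = a[:, None] + rng.normal(0.0, 1.0, (n, k))  # noisy indicators of a
y = beta_true * a + rng.normal(0.0, 1.0, n)        # outcome

m = items.mean(axis=1)  # mean score standing in for a

# Estimate the score's reliability with Cronbach's alpha.
item_var = items.var(axis=0, ddof=1).sum()
sum_var = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1.0 - item_var / sum_var)  # close to 0.8 here

# The naive slope of y on the mean score is attenuated; dividing by the
# estimated reliability is the errors-in-variables correction.
beta_naive = np.cov(m, y, ddof=1)[0, 1] / m.var(ddof=1)
beta_eiv = beta_naive / alpha
print(alpha, beta_naive, beta_eiv)  # beta_eiv close to beta_true = 0.5
```

The corrected point estimate is consistent, but note the caveat from earlier: because alpha is estimated from the same data and then treated as known, the standard errors of the corrected slope are not trustworthy without further work.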