Studies that apply structural equation models typically follow the two-step modeling approach. This approach is explained very well in the article by Mesquita and Lazzarini. The main idea in two-step modeling is that you first estimate a confirmatory factor analysis and then a structural regression model using the same latent variables that you estimated in the confirmatory factor analysis. So what is the logic behind these two models? The logic is two-fold. First, in some cases we might be OK with having the measurement model slightly misfit the data. Measurement, particularly survey measurement, is hardly ever perfect, so we might have some small residual correlations that the model does not explain. When we move on to the theoretical model, where we constrain the paths between the latent variables according to our theory, there is no room for misspecification anymore. So we must ensure that any misspecification in our final model is due to the measurement part and not the part that relates the latent variables to one another. This modeling approach guarantees that that will be the case.

The second thing that this modeling approach does is assess interpretational confounding. The idea of interpretational confounding is that the identity of a latent variable depends on its indicators: how we interpret a latent variable depends on which indicators are used and on how the indicators load on the latent variable. Because the model is estimated as one big model, it is possible that the loadings of the indicators change between the measurement model and the structural regression model. This indicates a problem, because it means that the latent variables get their identity not only from the indicators but also from the model being tested. We want to eliminate that, and this modeling approach eliminates, or at least reveals, that problem as well.
Then, if you want to fully follow the Anderson–Gerbing approach, you need to estimate at least two alternative models. You estimate a more parsimonious model, where you take something away from the theoretical model by constraining some of the paths to be zero, and you want to demonstrate that taking something away from the theory causes the model to misfit. You also estimate a less constrained model, where you add something to the model, and you want to demonstrate that adding paths you did not originally include does not improve the model fit in any meaningful way. So the full two-step approach is a confirmatory factor analysis followed by at least three separate structural regression models, and these are compared using chi-square tests. Graphically, we first do the confirmatory factor analysis. Let's assume we have y, the dependent variable, and a and b. They are first allowed to correlate freely, and then we estimate a structural regression model where we specify that some of these latent variables are endogenous, so they depend on the other latent variables. This structural regression model is nested in the confirmatory factor analysis model, because the confirmatory factor analysis model does not constrain the correlations between the factors in any way, whereas the structural model can impose some constraints. In this particular case there are no constraints, so the models will fit equally well, but in any realistic model the structural regression model typically has more constraints than the confirmatory factor analysis model. So we do these two steps, confirmatory factor analysis followed by a structural regression model, and then we assess model fit. In these results, the confirmatory factor analysis fits the data well.
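The nested-model comparisons described above boil down to a chi-square difference (likelihood ratio) test. A minimal sketch in Python; the chi-square and degrees-of-freedom values here are hypothetical placeholders, not values from the paper:

```python
from scipy.stats import chi2

def chi2_diff_test(chi2_restricted, df_restricted, chi2_full, df_full):
    """Likelihood ratio test for two nested models.

    The restricted model has more constraints, hence more degrees of
    freedom and a larger (or equal) chi-square statistic.
    """
    diff = chi2_restricted - chi2_full
    df_diff = df_restricted - df_full
    p = chi2.sf(diff, df_diff)  # survival function = 1 - CDF
    return diff, df_diff, p

# Hypothetical values: structural regression model vs. CFA
diff, df_diff, p = chi2_diff_test(chi2_restricted=95.3, df_restricted=50,
                                  chi2_full=88.1, df_full=48)
print(f"chi2 diff = {diff:.2f}, df diff = {df_diff}, p = {p:.3f}")
```

A significant p-value here means the extra constraints of the restricted model are rejected by the data.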
The Mesquita and Lazzarini paper provides the full sample covariance matrix, or correlation matrix, for the indicators, so I can reproduce the results using their correlations and get the same results that are reported in the paper. All good on the measurement front. Then we run the structural regression model and notice that the model does not fit the data: the chi-square test rejects the model, and this indicates a serious problem for the study. In this particular case it is actually a specification error, so the authors are imposing model constraints that they did not intend to impose, which we can discover if we do diagnostics, but that is not the point of this video. This significant chi-square indicates a problem, and it must be diagnosed, which is what the authors do in the paper. Then we also compare the factor loadings. The idea in two-step modeling is that we want to detect whether the meaning of the factors, the latent variables, changes between the measurement model and the structural regression model. We do so by comparing the final model, which they call the best model, against the measurement model. We see that all the loadings are about the same, so there are no big changes in the measurement part, and we conclude that the latent variables remain the same between models. If there were major changes in the factor loadings, we would conclude that the latent variables in the measurement model do not really correspond to the latent variables in the structural regression model, and that would be a problem. Another important part, beyond this factor loading comparison, is the model testing sequence: you use likelihood ratio tests to do a sequence of model fit comparisons.
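The loading comparison just described can be done as a simple elementwise check. A sketch with made-up standardized loadings (not the paper's estimates), flagging any indicator whose loading shifts by more than a chosen tolerance:

```python
import numpy as np

# Hypothetical standardized loadings for the same five indicators
cfa_loadings = np.array([0.81, 0.74, 0.69, 0.88, 0.72])  # measurement model
sr_loadings  = np.array([0.80, 0.75, 0.70, 0.87, 0.73])  # structural model

diff = np.abs(cfa_loadings - sr_loadings)
print("max absolute change:", diff.max())

# A crude screen for interpretational confounding: flag big shifts
tolerance = 0.05
flagged = np.where(diff > tolerance)[0]
print("indicators with large shifts:", flagged)
```

If `flagged` is non-empty, the identity of the latent variables may have shifted between the two models and the result needs closer inspection.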
The most important comparison is the nested model test between the theoretical model, the structural regression model that you derive from your theory, and the confirmatory factor analysis model, which tests your measurement theory; that comparison is unfortunately missing from this table, but the table contains all the other tests. They test model number two, the theoretical model, against number three, the constrained model, which is the more parsimonious model, and number two against number four, the unconstrained model, where something is added to the model. We want to demonstrate that if we take something away from the model, making it more parsimonious, the more parsimonious model does not fit the data well, and that if we add something to the model, the result of the test will be non-significant, so making the model more complex does not add any value to the analysis. The other comparisons they report are based on the best model, which they arrived at through a specification search: they did diagnostics, modified the model accordingly, and then tested alternative models against that model. Finally, one interesting thing in this table is that the chi-square statistic reported for the null model is the same as for the measurement model. That is an impossible result, probably just a typo in the paper, and it highlights the importance of reproducible research.
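The testing sequence can be run as a loop over nested model pairs. The fit statistics below are invented placeholders, not the values from the paper's table; the pattern of results (parsimonious model rejected, added paths non-significant) is what the Anderson–Gerbing sequence looks for:

```python
from scipy.stats import chi2

# Hypothetical (chi-square, df) for each model in the sequence
models = {
    "CFA (measurement)": (88.1, 48),
    "theoretical":       (95.3, 50),
    "constrained":       (120.6, 52),
    "unconstrained":     (93.9, 49),
}

def compare(restricted, full):
    """LRT between a restricted model and a less constrained one."""
    (c_r, df_r), (c_f, df_f) = models[restricted], models[full]
    return chi2.sf(c_r - c_f, df_r - df_f)

# Anderson-Gerbing style sequence of comparisons
for restricted, full in [("theoretical", "CFA (measurement)"),
                         ("constrained", "theoretical"),
                         ("theoretical", "unconstrained")]:
    p = compare(restricted, full)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"{restricted} vs {full}: p = {p:.4f} ({verdict})")
```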
So whenever you construct these tables, you should not copy-paste values cell by cell, because it is very easy to paste a value into an incorrect cell, and then it gets printed in the Strategic Management Journal or AMJ, I don't remember in which one this paper was published. This can be avoided by constructing the tables programmatically, so that your statistical software prints the full content of the table, which you then copy-paste into the paper as a whole, or you can use software such as StatTag that automatically links the results from your statistical software to the Word document that contains the paper.
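As a sketch of building the fit table programmatically, here is one way to do it with pandas; all fit values are placeholders, not the paper's numbers:

```python
import pandas as pd

# Placeholder fit statistics for each model in the sequence
rows = [
    ("Null",          612.4, 66),
    ("Measurement",    88.1, 48),
    ("Theoretical",    95.3, 50),
    ("Constrained",   120.6, 52),
    ("Unconstrained",  93.9, 49),
]
table = pd.DataFrame(rows, columns=["Model", "chi2", "df"])
table["chi2/df"] = (table["chi2"] / table["df"]).round(2)

# Print the whole table at once and paste it into the paper as a block,
# instead of transferring values cell by cell
print(table.to_string(index=False))
```

If the journal template uses LaTeX, `DataFrame.to_latex()` produces the table source directly, removing the manual transcription step entirely.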