A common way of applying the marker variable technique goes like this: you estimate a model with a method factor, compare its fit against a model without one, conclude from that comparison that method variance is not a problem, and then continue as if it did not exist. In this video I will go through examples of how this kind of analysis should actually be done and reported. A simple fit comparison is not enough, because what matters is not whether a method factor improves fit but what the estimates tell us. So instead of treating this as a yes-or-no question, you should look at the method factor loadings on the actual indicators. And what is the magnitude of method variance in the indicators? So you should look at that, and then also look at how the correlations between the items are affected, or regressions... sorry, the latent variables. So you look at whether there is method variance and, if there is, does the method variance cause bias. If so, how much?
Just looking at whether relationships stay significant or not is insufficient. For example, if you have a regression coefficient of 0.5 and that coefficient goes down to 0.2 when you add a marker factor, that is something the reader should know, even if both of those effects remain significant. There are conceptual issues as well, so this is unfortunately a common practice. An additional issue in this model is that the authors use hierarchical factor models, and it is not clear from the paper how the method factor was specified within that hierarchical structure, which makes the comparison hard to interpret. But rather than dwell on this example, let's look at what better reporting looks like. There are two questions that you should look at.
The first is: to what degree is there method variance in the items? And the second question is: to what degree does the method variance in the items cause method bias in the regression coefficients of interest? It is possible that you have method variance that does not cause bias. For example, if you have, let's say, innovativeness measures, there might be a lot of social desirability bias in those measures, but if your performance measure is not affected by the same source of bias, then the method variance would not cause bias in the regression coefficients, at least not any serious bias. Of course, if you have measurement error in the explanatory variable, there is some bias, but that is a lot less severe than if the same source of method variance affects both the dependent variable and the independent variable. So instead of looking at whether one model fits better than another, you should be looking at degrees: what are the magnitudes of the estimates, what are the magnitudes of the factor loadings of the method factor compared to the main factors, and how much do the regression coefficients between the factors change when you take away the method factor? Model fit and nested model comparisons are useful for understanding how the method factor affects the items, and Williams's 2010 article talks about different modeling strategies. Make sure that all models have the same variables: in a model where you assume that the method factor does not have an effect, you should still have the marker variables in the model and have a factor for those marker variables; simply allow that factor to correlate with all the other factors but not load on their items. That should be your comparison model; Williams 2010 talks about this. To compare parameter estimates, you look at the method factor loadings, and you compare factor correlations or regressions with and without the method factor. This is not very common, and I couldn't find any articles that
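The two scenarios above can be made concrete with a small simulation sketch of my own (the numbers and variable names are hypothetical, not from any study discussed here): when a shared method influence contaminates only the predictor, the regression slope is merely attenuated, but when it contaminates both the predictor and the outcome, the slope is inflated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Shared method influence (e.g. social desirability) and true scores.
method = rng.normal(size=n)
x_true = rng.normal(size=n)
y_true = 0.5 * x_true + rng.normal(size=n)  # true slope is 0.5

# Case 1: method variance contaminates only the predictor.
x1 = x_true + 0.6 * method
b1 = np.cov(x1, y_true)[0, 1] / np.var(x1)  # attenuated toward zero

# Case 2: the same method source contaminates both measures.
x2 = x_true + 0.6 * method
y2 = y_true + 0.6 * method
b2 = np.cov(x2, y2)[0, 1] / np.var(x2)      # inflated above 0.5

print(b1, b2)
```

The point is the one made in the lecture: method variance in one variable produces relatively benign attenuation, while the same method source in both variables produces the serious, inflationary kind of bias.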
would actually show such a table side by side, but many articles do report these comparisons. It is also important to report the marker variables, or the measures of method variance, in detail. This matters because, as the Simmering article points out, we don't know much about markers: people use markers, but we don't have much evidence that they actually work, so we don't know how well they capture different sources of method variance. One reason for this is that these marker variables tend not to be reported in as much detail as the variables of interest, much like control variables. Markers should always be reported with the same level of rigor, explanation, and justification that you give for the main study variables, so there is no excuse for not reporting them. Let's take a look at two examples of proper reporting of marker variables. The first example comes from Rafferty and Griffin (2004), which applies the modeling approach by Larry Williams that involves comparing different nested models. What makes this good practice is that they first do a series of model comparisons and actually present the degrees of freedom and chi-squares, so we can see how well the models fit. They first try a model without the method factor, then they try a model with a method factor that loads freely on all indicators, and then they try a model where the method factor loads on all the indicators but the loadings are constrained to be equal. They conclude that the model where the method factor affects each item differently fits the data best, and then they go with that model. This is a good example of the level of detail at which you should report the model comparisons if you use the Williams 2010 technique, which was available at that time as well; he just hadn't published the paper yet. Another nice thing about this article is that it reports the factor loadings, so we can actually take a look at how much
the method variance affects each item and how much the actual variables of interest affect each item. What's interesting here, beyond the fact that these method factor loadings are fairly small (substantial, but fairly small compared to the substantive loadings), is that the bureaucracy items load very highly on the bureaucracy factor, that is, the marker variable factor. The reason for these high loadings is that this final factor actually combines the marker and the method: it assumes that the method and bureaucracy are the same. So this model is perhaps slightly mis-specified. A more realistic model would have had another minor factor for the bureaucracy items and then one general factor for the method, which would avoid the mis-specification of assuming that the method and bureaucracy are the same, and would allow bureaucracy, not only the method, to affect these items. I'll talk about this modeling problem more in another video. Then there is my paper, one of my early papers; I did the analysis for it after taking my first course in structural equation modeling. We also report the loadings for the model and for the method factor, and we actually do this right: we allow the marker variables to be freely correlated, so we are saying that the markers can measure something else beyond method, but that the marker variables and the interesting study variables are only correlated because of the method. This is not reported in the article, but that is actually how we did it, and if you check the degrees of freedom, you can see that they reveal that some other parameters were estimated that are not reported in the model. To make this even better, we should also have had a CFA model without the method factor, so we could see how much the method variance affects the correlations. Importantly, in both
of these two examples that I just demonstrated, the decision about method variance, or the diagnosis of method variance, is not a yes-or-no question. Neither of these examples does something with method variance, declares it's not a problem, and then continues without modeling method variance. Method variance is always a problem that occurs to a degree, and instead of trying to answer a yes-or-no question, a more useful approach is to include the method factor in the final model to see what the effect of modeling method variance is on the results. This is what we did, and this is what the previous example did: both of these articles actually present the final results including the method factor. Then we can be confident that if the method factor was modeled appropriately, which we wouldn't know based on these results alone, the results will be correct. Of course, whether you model all the sources of method variance correctly is easier said than done, as I discussed in another video about marker variables.
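A final sketch, again my own simulation with made-up numbers rather than anything from these articles, shows why modeling the marker matters and why an imperfect marker only partially removes bias: partialling a noisy marker out of two method-contaminated scales shrinks the spurious correlation but does not eliminate it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

method = rng.normal(size=n)
a = rng.normal(size=n) + 0.6 * method        # two substantively unrelated scales
b = rng.normal(size=n) + 0.6 * method        # sharing only method variance
marker = rng.normal(size=n) + 1.0 * method   # marker measures method with error

def partial_corr(x, y, z):
    """Correlation of x and y after regressing z out of both."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

r_raw = np.corrcoef(a, b)[0, 1]     # spurious, driven by shared method variance
r_adj = partial_corr(a, b, marker)  # smaller, but not zero: the marker is noisy
print(r_raw, r_adj)
```

This is the simple partialling version of the idea; the latent-variable marker models discussed above do the same job while also accounting for the marker's own measurement error.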