Reliability and validity can be assessed using many different techniques. This video goes through a few of those techniques, explains what they assume, and then tells you which ones you should use and which ones you should not.

The most important validation technique is factor analysis. Factor analysis can be used for three different purposes. One is to check unidimensionality, whether a set of indicators measures just one thing. The second is to check whether each indicator loads on the factor that it is supposed to measure, so that the loading pattern matches the measurement design. The third is discriminant validity: if two factors correlate at 0.99, it is not plausible that the two scales actually measure two different things. Like any statistical technique, factor analysis makes assumptions that should be checked.

Exploratory factor analysis assumes that all relationships are linear and that the error terms in the factor analysis are independent. The confirmatory factor analysis approach is more flexible: it only assumes that the model is correctly specified. You can model non-linear relationships, in which case you would be doing item response theory analysis, or you can model measurement effects, correlated errors, secondary factors, and so on. But it is important that the model is correctly specified, because otherwise it is not a proper test of your theory.

Minimal reporting of exploratory factor analysis includes which factor rotation technique you used; you should always use the direct oblimin rotation. Then the factor loading pattern: if you have ten indicators and four factors, you report a table of four columns and ten rows with a factor loading for each indicator-factor pair, and you highlight the highest ones so that it is easier to see the pattern in the loadings. For confirmatory factor analysis, report the estimated factor loadings and the chi-square statistic with its degrees of freedom and p-value. If the p-value rejects the model, then you also need to report what kind of diagnostics you did for the confirmatory factor analysis.

The second technique is construct validity assessment with regression analysis or correlations. The idea of this technique is that you have different measures that could correlate but are supposed to measure different things. You also have a theoretical expectation of how the constructs that those indicators measure behave. These theoretical expectations are called the nomological network: the causal relationships, their directions and strengths. You then compare whether the empirical relationships between the measures match the theoretical expectations. If they do, you conclude that you may have construct validity.
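To make the exploratory factor analysis reporting concrete, here is a minimal sketch of producing the loading-pattern table described above. It assumes the third-party factor_analyzer package and a hypothetical pandas DataFrame called items holding the ten indicator columns; the factor labels are made up for illustration.

```python
# Minimal sketch: EFA with direct oblimin rotation, reported as a
# loading-pattern table (e.g. ten indicators, four factors).
# Assumes the third-party `factor_analyzer` package; `items` is a
# hypothetical DataFrame with one column per survey indicator.
import pandas as pd
from factor_analyzer import FactorAnalyzer

def loading_pattern(items: pd.DataFrame, n_factors: int = 4) -> pd.DataFrame:
    fa = FactorAnalyzer(n_factors=n_factors, rotation="oblimin")
    fa.fit(items)
    return pd.DataFrame(
        fa.loadings_,
        index=items.columns,
        columns=[f"F{i + 1}" for i in range(n_factors)],
    )

def highlight_pattern(loadings: pd.DataFrame) -> pd.DataFrame:
    # Mark the highest absolute loading on each row, mirroring the
    # highlighting you would do in the reported table.
    out = loadings.round(2).astype(str)
    for item in loadings.index:
        top = loadings.loc[item].abs().idxmax()
        out.loc[item, top] = f"*{out.loc[item, top]}*"
    return out

# Hypothetical usage:
# print(highlight_pattern(loading_pattern(items, n_factors=4)))
```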
The assumption here is that the nomological network, that is, all the theoretical relationships, is known a priori. This is very difficult to satisfy in practice, because we are typically testing new theory in our articles. If our theory is new and has never been tested before, how could we possibly know that it is correct? We can't. The minimal reporting is the regression coefficient, its standard error, and the p-value. These are normally not reported as a table; instead they are reported in the text. You state which relationships you expect, and then in parentheses you give the regression coefficient, standard error, and p-value, telling whether the expected relationship was observed or not. Then you discuss whether the regression coefficients match your theoretical expectations.

Then we have theoretical arguments. This is rarely seen, but it is a very important thing. The idea of validity is that variation in the construct causes variation in the items; that is one definition of measurement validity. A theoretical argument must answer the question of why we should expect variation in the construct to cause variation in the data. Explain the process through which the construct causes people to respond to a survey in a particular way, for example. The assumption is that the argument is logical and supported by prior theory.

Then we have principal component analysis, which is sometimes used but is not useful for measurement validation. People incorrectly apply principal component analysis as a factor analysis technique. It is not a factor analysis technique; it is a data summarization technique, and it is not useful for any of the purposes that we use factor analysis for.

Then reliability. With reliability we have to consider two important things: the scale scores that we calculate as the mean or sum of the items, and how to quantify the reliability of those scale scores. We do the quantification using reliability indices: a reliability index tells us what the reliability of the scale score is. There are many different types of reliability indices, and they differ in the assumptions they make. The most commonly used is tau-equivalent reliability, or coefficient alpha, which assumes that the indicators are unidimensional measures of one thing, that all items are equally reliable, and that measurement error is purely random. The second most popular is congeneric reliability, also called composite reliability or coefficient omega, which assumes unidimensionality and random measurement error; the difference is that congeneric reliability allows the indicators to differ in their individual reliabilities. The minimal reporting is that you explain why you chose a particular index, justify its assumptions and explain how they were checked, if they were, and then report the actual value of the index.

Then we have the test-retest correlation, which can be used to assess the reliability of individual measures or scale scores. The idea is that we measure one thing now and measure the same thing again a week later; if the two measurements correlate, that is an indication of reliability. This technique makes two assumptions. First, the delay between the measurements must be sufficiently long that the respondents forget their previous answers.
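The difference between the two indices is easiest to see from their formulas. Here is a minimal sketch of both computations in plain numpy, under the assumptions just stated; the data and loadings in the usage comments are placeholders, not real results.

```python
import numpy as np

def coefficient_alpha(items: np.ndarray) -> float:
    """Tau-equivalent reliability (coefficient alpha).

    items: (n_respondents, k_items) array of item scores. Assumes the
    items are unidimensional and equally reliable, and that measurement
    error is purely random.
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def coefficient_omega(std_loadings) -> float:
    """Congeneric (composite) reliability, coefficient omega, from
    standardized factor loadings. Still assumes unidimensionality and
    random error, but lets items differ in their reliabilities."""
    lam = np.asarray(std_loadings, dtype=float)
    error_var = 1.0 - lam ** 2  # standardized error variances
    return lam.sum() ** 2 / (lam.sum() ** 2 + error_var.sum())

# Hypothetical usage:
# rng = np.random.default_rng(0)
# items = rng.normal(size=(200, 5))  # placeholder item scores
# print(coefficient_alpha(items))
# print(coefficient_omega([0.80, 0.70, 0.60, 0.70, 0.75]))
```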
So if we remember what we answered the last time to a question, then test-retest doesn't work; it assumes that we don't remember our previous answers. That is the reason for the delay. It also assumes that the delay is not too long, so that the trait we are measuring stays relatively stable. If we measure a child's height now and again two years from now, the fact that the two measurements differ is not an indication of unreliability; it is an indication that the kid has grown in the meantime. So the trait must be stable. For minimal reporting, you should justify the delay, that it is not too long and not too short, and then report the actual test-retest correlations.

Then we have standardized factor loadings, which are used to assess individual item reliability. The square of a standardized factor loading is an estimate of the item's reliability. The assumptions are the ones that you make in your factor analysis, and the factor analysis reporting is typically interpreted as is, so you don't need to report anything special for this reliability estimate.

Then there is the average variance extracted (AVE), which is sometimes used, but it is redundant with the others, so there is really no reason to use it. It is one index for a scale, but it is not a reliability index in the same sense as the others, because it does not quantify the reliability of the sum. And you need to report the factor loadings that go into the AVE anyway, so the AVE really gives no additional value beyond the standardized factor loadings that you would normally report.

So finally, how do you assess reliability and validity evidence reported by others? There are two roles here: when you are reading published work, and when you are reviewing work for publication as a conference or journal reviewer. We have four scenarios and what to do in each of them.

The first, common one is that the factor analysis is missing. Factor analysis is the most important tool for validation, so what do you do if the authors don't report it at all? They could say that it was conducted and the results were OK without reporting it, or they might not mention factor analysis at all. Then check whether the scale has been previously validated in multiple different studies and whether that previous validation evidence is valid. The fact that a scale has been applied before, and some statistics have been reported about it, does not mean that it has actually been validated: for example, the people who presented the scale could have used principal component analysis, which is not useful for scale validation, and nevertheless got to publish the paper. So: is there actual prior evidence that you can check? If yes, then it is probably OK. If you are reviewing somebody's work and they present a multi-item scale without a factor analysis, you should require a revision that includes the factor analysis results.

Then we have a reliability statistic reported without checking the assumptions. This is the default case of using coefficient alpha without even knowing what its assumptions are. If you are reading published work, it is useful to know that even if the assumptions behind the reliability statistic are not completely fulfilled, the consequences may not be that severe, so you can probably trust the results.
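These three item-level quantities are all simple to compute. Here is a minimal sketch in numpy and scipy, which also shows why AVE is redundant: it is just the mean of the squared standardized loadings that you would report anyway. The inputs are placeholders.

```python
import numpy as np
from scipy.stats import pearsonr

def test_retest(scores_t1: np.ndarray, scores_t2: np.ndarray) -> float:
    """Test-retest reliability: correlation between the same measure
    taken twice, with a justified delay between the measurements."""
    r, _ = pearsonr(scores_t1, scores_t2)
    return r

def item_reliabilities(std_loadings) -> np.ndarray:
    """Squared standardized factor loadings as estimates of
    individual item reliability."""
    lam = np.asarray(std_loadings, dtype=float)
    return lam ** 2

def average_variance_extracted(std_loadings) -> float:
    """AVE is just the mean of the squared standardized loadings,
    which is why it adds nothing beyond the loadings themselves."""
    return item_reliabilities(std_loadings).mean()

# Hypothetical usage:
# print(test_retest(scores_week1, scores_week2))
# print(item_reliabilities([0.80, 0.70, 0.60]))   # -> [0.64 0.49 0.36]
# print(average_variance_extracted([0.80, 0.70, 0.60]))
```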
If you are reviewing somebody else's work for publication, require that the authors justify the chosen reliability coefficient and report how its assumptions were checked.

The third case is that there is some evidence that you don't really understand. The authors discuss something about reliability and validity using some unusual index that you have never heard of, like the greatest lower bound (GLB) or whatever, and you don't understand what it means. What do you do about it? If you are a reader of the paper and there is other evidence that you know how to interpret, and you can decide whether to trust the results using that other evidence, then it is probably OK to ignore the evidence you don't understand. The other alternative is to treat this as a learning opportunity: study the technique, and importantly, study it using sources that are trustworthy. Particularly in measurement there are guideline-type articles that basically say that because something has been applied before, it is good practice and should be applied in the future. The fact that something has been applied before does not make it a valid technique. So there are articles that advocate techniques not because of the merits of the techniques, but because the technique has been used previously and the authors think that counts as evidence of its validity. Trustworthy sources include, for example, Organizational Research Methods: you can basically trust that what is said in that journal makes sense. Other methodology journals such as Psychological Methods are OK, and a good book about measurement is OK as well. If you are reviewing work by somebody else and you don't understand the statistics they apply, ask the authors to explain them in the paper. If you don't understand what a statistic tells you, it is possible that other readers don't understand it either. Most people probably have an idea of what coefficient alpha means; most people in management probably have no clue what the greatest lower bound statistic does. So it is useful for the article to educate its readers a bit: ask the authors to tell what the index is, how it is interpreted, why it was used, and what kind of assumptions it makes, and to cite appropriate papers supporting that it is actually a useful index for the purpose they are using it for.

Then we have the final case: a cross-sectional survey ignores common method variance, and there is either no assessment or only Harman's single-factor test, which is a really weak test. What should you do when you read such published work? In published work you can check the correlations. If all the indicators, or all the measures, correlate with one another, that is an indication of a method variance problem. If there are sets of indicators that are only weakly correlated, that is evidence that there probably is not a method variance problem. If there are objective measures, or items that are specific instead of asking about a person's feelings, then you can probably trust the results. If you are reviewing work by others, require that the authors apply a confirmatory factor analysis with a method factor, and if they have marker indicators, those should be used as well. The authors should also mention the limitations of the technique they apply for assessing method variance problems, because not all of these techniques work well in all scenarios.
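The reader's correlation check described above can be sketched as a rough heuristic. This is only an illustration of the idea, not an established test: the 0.3 cutoff is an arbitrary placeholder, and the corr input stands for a correlation matrix transcribed from the paper.

```python
import numpy as np
import pandas as pd

def method_variance_flag(corr: pd.DataFrame, threshold: float = 0.3) -> bool:
    """Rough reader's heuristic: if every pair of measures correlates
    above the threshold, that is one indication of a possible common
    method variance problem; sets of only weakly correlated indicators
    point the other way. The threshold is an illustrative placeholder."""
    off_diag = corr.values[~np.eye(len(corr), dtype=bool)]
    return bool(np.all(np.abs(off_diag) > threshold))

# Hypothetical usage with a correlation table transcribed from a paper:
# corr = pd.DataFrame(...)  # measure-by-measure correlation matrix
# print(method_variance_flag(corr))
```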
So every time you review work by others, the main thing you do in the methods part is to make the authors justify their decisions. When you understand why they made certain decisions, you can make a call on whether each decision is justified or not.