Let's take a look at what common method variance is about. Here's our example. We have three questions that are supposed to measure the innovativeness of a company and three questions that are supposed to measure the performance or success of a company: innovation questions i1, i2, and i3, and success questions s1, s2, and s3. We follow the common practice of taking a sum of the innovation indicators and a sum of the success indicators. We find that the correlation between the sum of the innovation indicators and the sum of the success indicators is 0.3. We assume that the innovation indicators measure innovativeness, that the success indicators measure success, and that the other variance components in the data, random noise and some item-specific variation, are things we don't really care about.

So we find a correlation of 0.3. We know that random measurement error attenuates correlations, so we claim that the real correlation could actually be as high as 0.4, using the standard correction for attenuation (sketched below). Then we make grand claims about innovation being one of the key drivers of success: we claim that innovation and success must be associated.

What kinds of problems do we have? Do we have any alternative explanations for the correlation? It is possible that these indicators don't measure only innovativeness and only success; they may also measure whether the person thinks positively about the company in general. In that case there is a systematic measurement error source that influences all six indicators. A skeptic of our study would say that we have not found that innovation and success are associated. Instead, we have just found that when we present positive statements about a company to a person, some people respond systematically higher than others. The responses are then driven not by the constructs but by the general sentiment of the respondent, and the correlation is not a reflection of any theoretical relationship.

Let's take another example, from a paper published in Information Systems Research. Here the scales are about information quality: one scale measures the accuracy of information and another the completeness of information, both concerning government information systems. The two scales correlate: the accuracy and completeness measures go together. What can we say? We can say that they measure two different constructs and that the constructs are correlated. A skeptic would say no, these indicators don't measure two different constructs; instead they measure hostility toward the government. Particularly in the United States, where I think this research was done, there are people who think the government shouldn't be providing any services at all and who are openly hostile toward it. If you are hostile toward the government, you will rate all of these indicators low, and if you like government services, you will rate them all high, regardless of the actual accuracy or completeness of the data.
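A minimal sketch of that correction-for-attenuation step (Spearman's formula). The reliability values here are illustrative assumptions, not numbers from the example; they are simply what it takes to disattenuate 0.3 to 0.4.

```python
# Spearman's correction for attenuation:
#   r_true = r_observed / sqrt(rel_x * rel_y)
r_observed = 0.3
rel_innovation = 0.75  # assumed reliability of the innovation sum scale
rel_success = 0.75     # assumed reliability of the success sum scale

r_disattenuated = r_observed / (rel_innovation * rel_success) ** 0.5
print(round(r_disattenuated, 2))  # -> 0.4
```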
Or the indicators could just measure how much the respondent wants to answer ones versus fives when asked to agree with an item. That is also possible. This issue is called common method variance. The idea of common method variance is that the correlation between two indicators is not driven by the correlation between the constructs; instead, it is an outcome of the measurement process. Some people like to answer toward the center of the scale, some people like to answer toward the ends, and that alone causes some correlation (there is a small simulation of this below). Whether the correlation between indicators is entirely or only partly driven by method variance is one question, but the key point is this: if a reviewer challenges you by saying that you have this problem, you have to be able to demonstrate that you don't. It is also possible that you do have the problem, in which case you have to understand how to avoid it. And to avoid the problem, and to argue why we wouldn't have a common method variance problem, we have to understand the different sources of method variance: why, and how, a measurement method can induce variation into our data.

There is a good paper, or actually a series of papers, by Philip Podsakoff and his co-authors; their 2003 paper is the most cited one. It contains a big table listing all kinds of reasons why survey indicators could be correlated. Indicators can be correlated simply because they are asked from the same person: if the same person responds to your innovation questions and your success questions, that alone can cause a correlation between the indicators because of people's tendencies in responding to survey questions. There is also social desirability bias: some items are very difficult to agree with. For example, if you ask somebody whether they have committed a crime, it is very difficult to admit to that; but if you ask the same person whether they have driven over the speed limit, they will agree more easily, because there is less social desirability attached to that indicator. Then there are item context effects. If the first question makes people angry, that will influence all subsequent responses. Or if you first ask a general question about whether the company is innovative, and then ask indicators that measure specific aspects or consequences of innovativeness, that general question will prime the person to answer positively or negatively to the remaining questions, depending on how they answered the first one. And then there are measurement context effects: some people answer in a specific way on paper-and-pen questionnaires and in a different way on online forms, and that can cause variance as well.

So there are many, many reasons why your survey indicators can become correlated that have nothing to do with the theoretical constructs. Is this a big problem, then? There is quite a lot of disagreement about that. But whether you personally think it is a big problem is less relevant than the fact that if you do survey-based studies, some of your reviewers and readers will think it is a big problem, and they have studies they can cite to demonstrate that it is a real phenomenon. For example, in their 2012 paper, Podsakoff and his co-authors reviewed studies that assess the prevalence of method variance.
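Here is a minimal simulation sketch of that mechanism (my illustration, not from the Podsakoff papers): two constructs with a true correlation of zero, plus a person-level response-style factor that loads on every indicator. The variable names and the 0.7 loading are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Two constructs that are truly uncorrelated.
innovativeness = rng.normal(size=n)
success = rng.normal(size=n)

# Person-level method factor: a general tendency to answer high
# (or low) on any agree/disagree item, regardless of content.
response_style = rng.normal(size=n)

# Each observed indicator = its construct + the shared method factor
# + item-specific noise.
i1 = innovativeness + 0.7 * response_style + rng.normal(size=n)
s1 = success + 0.7 * response_style + rng.normal(size=n)

# Despite a true construct correlation of zero, the indicators correlate.
print(round(np.corrcoef(i1, s1)[0, 1], 2))  # roughly 0.2
```

Even though the constructs are uncorrelated, the shared response style alone produces an indicator correlation of about 0.2.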
So how much of the variation in our indicators is due to the measurement method, and how much is due to the actual construct, the trait, being measured? The method they applied suggested that about one fifth to one fourth of the variance in the data is due to the method, and about half, or a bit less, is due to the trait. So the method variance is about half as large as the variance due to the actual trait. This implies that of the reliable variation, roughly two thirds is the trait you actually want to measure and roughly one third is method variance; the rest of the total variance is random noise, or unreliability. You can of course challenge these kinds of studies by questioning their methods and so on, but the point is that this is a potential problem, and if a reviewer says that you have the problem, it is difficult to address; and oftentimes reviewers do say so.

Beyond the problem itself, you have to understand that if it exists, it has serious consequences. With random measurement error, we discussed before that if the real correlation is 0.5, then perfect measurement gives us an estimate very close to the true correlation, provided the sample size is large enough; here it is 300, which is definitely large enough. If we have random noise in the data, the correlation will be underestimated, or attenuated. Here the reliability is 50%, and the correlation is attenuated by about 40%: instead of 0.5 we estimate 0.29.

Systematic measurement error is more problematic because it inflates the correlation estimate. In this case, observed x and y both measure the latent X and Y that we are interested in, and there is a systematic measurement error source that is as strong as the constructs: observed x is half systematic error and half latent X. With 50% systematic error variance and 50% construct variance, the relationship is seriously overestimated, in this case by more than 50%: the real correlation is 0.5 and we estimate 0.77 (the arithmetic behind these numbers is sketched below). The common method variance problem is a big deal not only because it inflates existing correlations, but also because it can indicate that correlations exist when in reality they do not. In this case, even if X and Y were completely uncorrelated, the estimated correlation would be somewhere around 0.5. You would conclude a strong effect where none exists. This is also why we can live with unreliability: with unreliability, we just know that the effects are on average a bit smaller than they should be. With systematic measurement error, the problem is that we can find substantively large effects where none exist, and that makes it a big problem.

It is such an important issue that some journals actively discourage cross-sectional surveys. When you have a survey study where all questions are asked with the same scale format, on the same occasion, from the same person, the answers can be correlated for those reasons alone, and this is one of the reasons why journals recommend against cross-sectional studies. The recommendation is that instead of doing a cross-sectional study, you do a study where you measure the independent variables with a survey and then measure the dependent variable by some other means; if nothing else, then at least use a second survey. Using two surveys also helps you to establish the causal order by measuring the effect after the cause.
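As an aside, here is one way to reconcile those attenuation and inflation numbers, assuming standardized variables and equal variance shares for construct and method (my assumptions for illustration; the 0.29 and 0.77 in the example are sample estimates at n = 300):

```latex
% Attenuation: with reliability 0.5 on both sides,
\[
r_{\mathrm{obs}} = \rho_{XY}\,\sqrt{\mathrm{rel}_x \,\mathrm{rel}_y}
                 = 0.5 \times \sqrt{0.5 \times 0.5} = 0.25
\]
% (sample estimate in the example: 0.29).

% Inflation: if each observed score is half construct and half shared
% method factor, x = \sqrt{0.5}\,X + \sqrt{0.5}\,M, then
\[
\operatorname{corr}(x, y) = 0.5\,\rho_{XY} + 0.5,
\]
% so rho_XY = 0.5 gives 0.75 (sample estimate: 0.77),
% and even rho_XY = 0 still gives 0.50.
```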
Measuring the effect after the cause allows you to take into account, or at least argue for, the second condition for causality. Let's take a look at our example: how would we improve this possibly common-method-variance-contaminated survey design? We could of course do a second survey where we ask about the performance or success implications half a year or a year later. But even better, if we study companies, we don't actually have to measure success with a survey at all. We can rely on accounting data. We wait two years and then get the actual accounting data, showing what kind of growth or profitability the company actually reported, and then we compare whether the companies that were innovative grew more, or were more profitable, than the less innovative companies.

One good strategy for implementing this kind of study builds on the fact that oftentimes, when you write a paper, you first write it for a conference and then write a better version that you try to publish in a journal. You can collect the success or performance data with a survey first and write the conference paper using the survey data as the dependent variable. Then you get some feedback, and maybe a year later, after you have visited the conference and gotten the feedback, you start working toward the journal paper. By the time the journal paper is done, the actual accounting measures will be available, because it typically takes a year or two to go from the idea to a publishable journal paper if you do a conference presentation in between. So you first write the conference paper with survey data and then switch to a better dependent variable, the actual accounting data, for the journal version. This also helps you avoid the concern that you are just trying to republish the same study that you presented at the conference: when you have a different dependent variable, no one can argue that it is the same study anymore.