Okay, so in this session we're focused on empirical methods and what we can do as editors to smooth getting papers through the methods check, increasing the probability that all of the work that goes into a paper turns into a publication. It's good to start from the basics: what is the Journal of Operations Management? We are focused on the management of operations. We publish scholarly research, so we want academic rigor and academic relevance, but we also want every article to provide insight that helps people in the field make decisions, giving them a clearer sense of what is available to them. The journal is empirically focused. When we think about a conceptual paper, it usually starts from something that is observed out in the world of practice, and we find that over the years some of our most powerful papers have been conceptual. We are also open to literature reviews, although the bar is quite high. The journal was founded exactly 40 years ago this month, so we would rather be all together celebrating this, but instead we're here on Zoom, which is better than nothing. Forty years is an exciting milestone for us, and we are really focused on a wide variety of methods and a wide variety of theories.

So let's dig down a bit deeper. If we think about science, then loosely speaking, we observe some regularities, we learn something, we make propositions about the way the world might be, and we start understanding things about context, and so on. And when we observe something, the question is: what can we learn? Well, we have a habit, and one of the things that Tyson and I really want to do is to break this habit, of going immediately to generalization: this is truth. In something as messy as management, and especially as messy as the management of operations, generalization is often too big a step. So we spend a lot of time encouraging authors to figure out: hmm, this is an interesting regularity that we're observing; what do we have the right to say from it? Sometimes the thing we have the right to say is that this gives us pretty credible evidence that a research question needs to be addressed. Another thing that can happen is that we had something we thought was true, it just made total sense to us, to everybody, and then when we actually go and observe what happens out in the field, we find that the accepted theory doesn't stand up to the test of implementation. That is very interesting. And then another thing is conventional wisdom. Let me give a quite silly example. Conventional wisdom says that if you treat people nicely, it will lead to better outcomes, and who could disagree with that? But what if we're in a situation where somebody has to decide between treating someone nicely and not, and treating someone nicely puts us in a situation that will clearly lead to worse outcomes? There's no obvious answer there. Well, of course we want to treat people nicely, so maybe it's not such a great example. But if we as researchers can give decision makers insight into the trade-off that will eventually emerge from applying conventional wisdom, we are achieving the goals I've just been talking about. One thing is to differentiate between proof and warrant.
And Tyson and I are increasingly emphasizing that when we do these studies, it's not about proving something, about absolute truth; it's more about warrant, providing evidence that a belief may well be true. We have a forum piece on warrant, and what it means to go after warrant, that should be showing up in a few months. So what we are doing is presenting arguments, and we present those arguments to the decision maker, saying: you might well end up with better outcomes assuming this theory is true, as opposed to assuming the opposite. In order to do that, it usually comes down to the quality of our methods.

So let's take a look at how this lines up at JOM in terms of departments. The top department here is Empirical Research Methods in Operations Management, and I'm delighted to announce that Guangzhi Shang is going to be joining Mikko Rönkkö as a co-department editor as of the first of September, so welcome to Guangzhi. And then we have the usual departments: healthcare; innovation and project management; inter-organizational relationships; intervention-based research; operations interfaces; operational systems; public policy; strategy and organization; sustainable operations; and technology management. So we have a matrix organization between the empirical research methods department and the other departments. Now, let me ask whether Tyson has gotten in yet. Yes, I've gotten here. Okay, so let me stop sharing my screen and pass it over; that's perfect timing. Oh, well, why don't you just keep sharing since you're up. All right, I'm on it.

Looks like we're on slide five, so welcome, everyone. Sorry I'm late, I had technical problems, but we should be good. Looks like we are at our editorial process here, which at first glance seems a little convoluted and overwhelming, but once you get familiar with it, it's pretty usual, with a couple of exceptions. You see that manuscripts come in to Suzanne or myself as editors-in-chief, and we can either make a decision or assign the manuscript to a department editor. But on occasion, we send manuscripts for what we call a pre-review methods check (that's the check-methods box you see there), which sends the paper to our empirical methods department, which looks primarily at the methods only. So it's not really evaluating contribution or other aspects of the paper; the questions are squarely about the methods. What we had been doing was going through a couple of rounds of review, at times, trying to get the methods just right before the paper went into regular review. However, sometimes it would then be rejected for lack of contribution. So we've decided more recently not to let that happen so much: we try to limit the methods check to one round in most cases and carry those comments forward into the regular review process where necessary. After that, the paper proceeds into a fairly typical review process, although JOM uses an AE and a department editor as well as reviewers. I won't go through all of this process right now; that's not the point. It is explained fully in an editorial that Suzanne and I wrote in 2018, which you can find open access on the journal's website. For now, we want to highlight this pre-review methods check, as it will be the topic of the rest of our session today. We can go to the next slide. Just some data from 2019.
So this is last year, our first full year with Wiley as the publisher and using the current online submission system, ScholarOne. Last year we published 38 articles, although of course most of these were submitted in prior years. And we had 537 new submissions. You see that we desk reject just over half of those, sometimes with input from a department editor. We sent 11% of them to the pre-review methods check that we're talking about, so about one in nine ends up going that route. We've dialed that back a little this year, but that's probably roughly the right number. As you can see, we've still got some manuscripts under consideration from last year. We've been using a decision called reject-and-resubmit quite a bit. This is for cases where there's just a lot of uncertainty and risk in the revision, and we're not sure the authors can do it in the short timeframe we normally look for with revisions, but we still want to give them the benefit of the doubt: a shot at coming back if they're able to answer the criticisms, which are substantial. So it's a possible way to come back, even though the current manuscript wasn't viable. As you can see, we reject about 22% after review. And so far from last year we've only accepted about 2%; however, our acceptance rate is typically about 6%, so out of those 30 manuscripts still under consideration, we would probably end up accepting a majority.

As far as how busy the various departments at JOM are: you see our methods department getting those 58 pre-review methods check papers, which is quite a lot for one department editor, and is why we've recently added a second one. Among our other, topical departments, inter-organizational relationships (typically supply chain management papers) was the busiest, although we have three department editors there dividing up those manuscripts. Strategy and organization was next, then operational systems (the classical OM themes), and then sustainable operations, which was pretty busy as well for a single department editor managing full review processes for that many manuscripts. However, we've added a second department editor in that department this year. Next slide.

We want to highlight a few of our special issues that are coming up soon, with their deadlines. Our mobility, climate change, and economic inequality special issue, announced last year, has a deadline at the end of the month. Managing the marketing-operations interface and omni-channel retail is just a day later, the 1st of September, as is technology management in the global context. So three special issues with deadlines in the next several weeks, and then a fourth, global operations and supply chain management in the context of dynamic international relationships. The first one is coming up in less than two months. As with most special issues, most submissions come in near the deadlines. We're excited about these. And on our next slide, we've announced four more special issues for next year. The first of these has a deadline early next year, at the end of January: COVID-19 effects on global supply chains, with an emphasis on the 3Rs of responsiveness, resilience, and restoration. So we're excited about this special issue and all the opportunities and data that this year is providing.
For better or for worse, we're going to try to learn from this situation and do better in any future events that are similar. Then we've got three more special issues with deadlines later next year. All of these, with their full calls for papers, are available on JOM's website for further information. Next slide.

Okay, so we always like to take the chance to review guidelines that improve the probability that an author's paper will have a positive outcome. The idea is that the journal is about community: it's about peers helping peers develop their work. It isn't that the reviewer is an oracle who comes down with lightning bolts and says, this is what needs to happen for your paper to be worthy; it is that we want to agree together on what warrants publication. We also think authors should be reviewing: if you are submitting a lot of papers, then you should be planning to review a lot of papers too. And as people review papers, they tend to become better authors and better writers. Then, make sure that the research is empirical. It's so sad to have an interesting paper desk rejected because it is simply not empirical.

When you submit a manuscript, Tyson and I start by reading the cover letter, and it really sets the stage for how we're going to look at your paper. It's a chance to tell us things that will help us understand the right place to send the paper. Also, if you have a dataset that's been used for something else, tell us about it, because then we can help you through that. If the paper has been rejected from another journal, tell us about that too; we would like your paper to do well, so this is a great chance to help us help you.

We publish a lot of editorials, and previous editors-in-chief have published very helpful editorials. We publish methods papers. And we are making these resources available so that we can share the solutions we've come up with for the standard problems people run into; that's very much the spirit of what we're doing today. So, a couple of the editorials. Tyson and I wrote an editorial in 2018 laying out the process of the journal, just helping people understand what will happen to their paper, because the process that we have put in place, which we think ends up with the best papers, is not the world's simplest; sometimes it's helpful to have some process documents. Then Tyson wrote an article: Tyson's own research didn't originally start out as classic operations management, depending on how we define operations management, and he has transformed that evolution into a very nice editorial on operations writ large. If your paper doesn't address the management of operations, then even if we love the paper, it is not going to go forward at the journal, so I highly recommend Tyson's editorial for seeing how what you're doing might count as managing an operation, because most of the stuff we do can be framed as an operation if we really want to go that direction. That's a strong statement; somebody prove me wrong. There are also some departmental editorials: one on intervention-based research, which was a joint effort involving Tyson and me.
Anant Mishra and Tyson have an editorial coming up for innovation and project management, and Gopesh Anand and John Gray, back in 2017, had an editorial on strategy and organization. Even if these departments don't all have their own full-article editorials, JOM's website contains mission and scope statements and information about each department, so you can certainly always go there, and other departments are at various stages of developing further editorials. These tend to come about as we see a need for them; as we get more questions and comments about what fits in certain departments, this prompts responses that may then develop into editorials.

This slide highlights some key methods papers from JOM and about operations management. Now, this morning we're going to be introduced to a lot of further sources and information on methods, which is very valuable, but at least we've captured here a few of the more recent JOM papers on some of these areas. We can highlight, over on the right, for example, that case study methods have long been used in JOM, and yet they're challenging to do well, so we've had a number of papers come out to provide guidance on that. We'll be talking today about surveys and endogeneity and some of these issues. As far as experiments go, we have a 2018 paper, and we are working with some authors to develop a kind of counterpoint response to that paper, with some further ideas about how we do experiments in operations management. So we are constantly evolving, developing, and improving the methods we use, which is why it's very important to stay up to date with what is currently considered valid, appropriate methodology. It's not always appropriate or satisfactory to cite papers, even JOM papers, from 10 or 20 years ago that use particular methods, and use that as your only rationale or justification for a method or a validity check. We'll be getting more into that as we go today. So with that, Mikko, I will pass it over to you and stop sharing my screen.

Thanks for the introduction; now we're going to move on to the next part of the workshop, and I'm going to talk about the methods review process. I'm Mikko Rönkkö, one of the department editors now of the Empirical Research Methods department, and I'm at the University of Jyväskylä, where I'm streaming from. Suzanne stole two of my slides, or I stole two of hers, so I'm just going to skip through the first two slides, and we're going to talk about the methods check: what is it about, and why do we do it? The department basically does two kinds of things. First, we review all the papers that are about methods; that's the smaller part. The more important reason the department exists is that the journal gets a certain quantity of studies that are difficult to evaluate, or that have problems where it's not clear whether those problems can be solved: problems in analysis or research design where it makes sense to send the paper to a methods specialist for a reading. And the Journal of Operations Management is not the only journal that does this kind of thing. For example, the Journal of Management recently published an editorial stating that they are going to implement a similar procedure: they will have a department with a couple of editors and a collection of methods specialists. But the Journal of Management does it a bit differently.
So they will first do a couple of rounds of normal review, and when it looks like the paper could be accepted, then it goes to the methods check. These kinds of systems have been implemented by other journals recently as well. For example, The Leadership Quarterly has done this for maybe two years now. Science has been doing it for, I don't know how long, maybe 10 or 15 years. I think Entrepreneurship Theory and Practice has a similar procedure. Also, the Strategic Management Journal has been doing something similar with methods reviews. So introducing a methods specialist into the review process is not a unique idea.

What does the methods check actually do? What do we do in practice? We only evaluate whether the study is done well. When a paper is sent to be methods checked, what is being looked at is: do the data support the claims? We are not looking at whether the claim is important or interesting, but simply whether the work is done right. And the manuscripts that come to the journal tend to have a very similar set of problems: problems in dealing with endogeneity, problems in dealing with method variance, certain design problems that we see all the time. Just to make the process more efficient, we have built a template, which you can download from the meeting website under this session, that gives descriptions of those problems. When we get a paper that has one of these problems, we take the description from the template, add some other observations, and it goes back to the author. So this is fairly efficient. And typically there is just one methods reviewer, because a methods review is not so much about evaluating the manuscript as about checking: we're basically checking whether things have been done correctly, and these are typically not matters of opinion; there are clearly right and wrong ways of doing things. So typically there is just one reviewer. I tend to use student reviewers, because I know that a student coming right out of a methods course will have the latest understanding of methodological research. I might give a paper to one of my students and tell them to evaluate it against a specific lecture that I gave three months ago, or against specific articles in, for example, Organizational Research Methods. This is good practice for the student as well. We are planning to expand the reviewer pool, but the volume of papers coming to the department has been so high that I haven't been able to do that yet. And it's very good to have a second department editor, so we can start developing the resources of the department and it doesn't rely so much on just me being able to review 50 to 60 papers per year.

We have also done a couple of post-acceptance checks. The post-acceptance check was implemented for the first time, I think, a year ago, when a paper from JOM was highlighted in my citation alerts. I checked the paper online first, and it made claims that didn't make much sense. We quickly pulled the paper for a day or two, fixed it, and put it back up. So sometimes we check papers after they have been conditionally accepted, just to make sure that no nonsensical statistics, nonsensical claims, or clearly incorrect results slip through, because that does happen. What kind of feedback can authors expect to get from the department?
So here is an example of a real letter from the department, the first eight points, and this is after the first round of methods review. Some papers, as Tyson said, have gone through three rounds of methods review before they go to the actual department, and typically the methods improve during those rounds. I tend to give lots of citations, pointers on what to read about these things, and the template also contains lists of recommended readings on endogeneity, on method variance, on regression diagnostics, and so on.

In the department, as I said, we get basically three kinds of submissions. First, methods submissions, a bit more than 10 per year. JOM is not a methodological journal, but we do publish methodological papers, as Tyson said. These are typically sent out to external reviewers. So if you send a paper about survey data analysis to JOM, it might go to one of our editorial board members, and then we'll try to get someone from, for example, the Organizational Research Methods editorial board to be the second reviewer. Or we might send it to methods experts who are not on the editorial board at all, if it's about a method on which we don't really have much competence. The second class of papers is challenging or unconventional methods: methods that have not been used in the journal in the past. For example, various machine learning papers come, or could come, to the department; complex panel data econometrics, like dynamic panel analysis and Arellano-Bond estimation, gets sent to the department; Bayesian analysis gets sent to the department; and so on. These are papers for which it's difficult to find a reviewer, because there are not that many people with a solid understanding of these techniques. The third class of papers is problematic designs and analyses. Quite often I get an email from Tyson or from Suzanne saying: this is an interesting idea that the authors are proposing, but it's a cross-sectional survey, and it doesn't seem that they have taken method variance or causality very seriously; can you check if there is something that can be done to make this publishable? These kinds of papers are the majority. Most of the papers that come to the methods check are actually rather simple; they might have design problems or be missing some analyses, and the idea is to guide the authors to make the paper publishable if possible, and if that's not possible, then to simply explain why the current design will not produce a JOM paper.

This has been running for two years now, and we have processed, I don't know, a bit more than a hundred papers, maybe 130, something like that. And Suzanne has been asking me to write editorials: whenever I find a problem in a published manuscript and talk about it with Suzanne, she tells me to use it for an editorial. I thought about it, but I would be writing an editorial for almost every issue of JOM, and I don't think that's a good use of my time or of the journal's pages. So I decided that we needed to put together a paper that explains the common methodological problems in OM research as seen through the papers that the department receives. And to do that, I asked two people whom I've trained during their doctoral studies, and who are really good at methods, at least I think they are, to help me write a paper about the problems that we find.
So basically what I did with Gabriella Latikainen and Henni Tenhunen was to go through a bunch of editor letters that I had written during the review process, and some reviewer statements, and check why papers are rejected and what kinds of problems almost always lead to a revision request. Then we further checked to what extent these problems are present in published papers, and we compared the published papers against the methods review template to see what kinds of issues slip through during the review process. I will now give the stage to Henni Tenhunen, who will tell you some of the descriptive results from our study; after her, I will continue and explain some of the more specific issues, with examples, and what we can do to avoid them. So I'll switch to another presentation now, and Henni can continue.

Thank you, Mikko, and hello, everyone. I'm Henni Tenhunen, a doctoral candidate at the Aalto University Department of Industrial Engineering and Management, and I would now like to tell you a little bit about the results of this two-part review we did of the common methodological problems in operations management, based on the editor letters and published studies in the Journal of Operations Management. My co-author, as Mikko said, is Gabriella Latikainen; she's from the University of Jyväskylä as well. Okay, so the inspiration, or background, for doing this type of review was a paper published in Organizational Research Methods by Jennifer Green and colleagues. In that paper they looked at the review process and what kinds of issues could be found that lead to either rejection or revision of manuscripts. There are previous studies, but they haven't been as comprehensive, and so it gave us a kind of roadmap, or idea of how to conduct this review, although our study design was a bit different.
We had study one and study two, as Mikko told you. In study one we focused on the review process: we looked at decision letters by editors, and a couple of review reports as well, from 2018 to the first half of 2019. We had around 88 documents, letters that concerned 80 manuscripts, which were our unit of analysis. Of these 80 manuscripts, 42 were invited for revision and 38 were rejected. We used ATLAS.ti for the coding, which Gabriella Latikainen and I did; because the letters were written by Mikko, he wasn't involved in this coding stage, and we tried to be more objective that way. Our goal was to identify the common problems, whether they lead to rejection or revision, and what helpful things authors and scientists could learn to take into consideration when they plan their studies. After this, we wanted to look at previously published studies in the Journal of Operations Management and see whether we could find similar issues there, the same problems discovered in the first study. So we reviewed 46 empirical theory-testing articles published in the journal from 2016 to 2018, both against the methods check template and against the findings and code structure developed in study one, and we then compared the issues discovered in study one and study two. We excluded predictive modeling papers and those that used machine learning, and focused on empirical theory-testing articles.

Our analysis was based on the constant comparative method; it was an iterative analysis. We started with open coding, where we really looked at the expressions in the letters and tried to stay as open as possible to the issues and problems that could be discovered. After that, we discussed between the co-authors, did more coding, and tried to discover the commonalities between the issues and to build conceptual categories of what was being raised in this operations management review process regarding research methods. That was the next coding phase, and we started to notice that some main groups emerged that had to do with either research design issues, data analysis issues, or, mostly, interpretation and reporting. So we decided to use those as the three main groups in the next stage of our coding: research design issues, data analysis issues, and reporting issues. After that, we went back to the letters and did another round of coding, where we built a code structure using hierarchical codes: for example, research design as the main category, causality as the subcategory, and then a specific topic, for example control variables. In study two we used the same codebook, and in addition we had descriptive codes where we recorded, for each paper, what kind of analysis it used and so on. After this, we realized that there was some overlap between the data analysis and reporting codes, so we reorganized these into technique issues, justification issues, and interpretation-and-reporting issues. So finally we had four categories of problems: in addition to research design problems, we looked at whether techniques were used correctly; justification was one of the themes that arose; and interpretation and reporting was the fourth. Let's go to the results then, starting with study one and the results
regarding the research design issues. The three most common issues in this category were, first, that the research design does not support causal claims: control variables are missing, an observational study claims causality, or we have a cross-sectional study with a mediation model. Common method variance was the second: there were no proper diagnostics for it, and this was clearly visible and mentioned in the letters. And third, problematic measures, where there was no clear scale development or adaptation approach. These were the most common research design issues mentioned in the editor letters on these manuscripts. So that was that category.

Then, the techniques and their use. This is a bit odd, because the category is labeled inappropriate technique, but actually the most common issues in this category were appropriate techniques that were missing: model fit analyses were missing (a significant chi-square statistic should have been noted and diagnosed), SEM analyses should have been more extensive or were missing completely, and common method variance analyses were missing. There was also incorrect use of techniques, but those missing analyses were the most common in this category, so it was about missing techniques more than inappropriate use.

Then justification issues. Often mentioned were that the assumptions of Cronbach's alpha were not checked when assessing reliability, or that the justification of reliability methods was not convincing or was missing entirely, and that regression assumptions were not checked enough; this was mentioned a lot. These were maybe not the severe issues that led to rejection, but they were very common and came up a lot in the letters.

Then interpretation and reporting issues. The justification of control variables, the selection of control variables, was very common: it was noted that there should be more justification of why a control variable presents an alternative explanation for the correlation between the dependent and independent variable, not just why it affects the dependent variable. There were also a lot of reporting problems, such as needed text revisions and problems with tables and figures: they weren't clear, scales were missing, some numbers, values, or colors were off, and so on. Also, more details were often needed on model fit; in particular, model misspecification should be reported in more detail.

So here is a first descriptive overview of the most common issues that typically led to rejection in our sample. The most common was that the research design does not support causal claims; it was very frequent in the rejected manuscripts, and manuscripts with this issue had a high rejection rate. Data collection and sampling issues were also quite severe in terms of rejection rate, and missing common method variance analysis was also among the most common. Interestingly, the comment that authors should justify methodological choices based on their merits instead of empirical precedent was also among the top four in the rejection decisions we examined. Then, which problems seemed to lead to a revision decision instead? Manuscripts with the following four issues had low rejection rates, and the issues were very common; many papers that were invited for revision had them. Additional details being needed on model fit, and endogeneity
issues, appeared in many papers. The editor also complained about choosing an inappropriate technique for addressing common method variance, about missing regression diagnostics, and about instrumental variables not being justified enough; but these did not seem to lead to rejection so much, and those papers went on to the next stage. That was our discovery in study one.

Okay, after this we continued to the published papers, and I'm going to tell you a little about the published papers that we reviewed. Most of the papers used secondary data and were longitudinal; there were some cross-sectional surveys as well. Most had some kind of linear model, many used instrumental variables, and multi-level models were also quite common. There weren't that many experiments, and not that many structural equation models either. There were 46 papers all together that we looked at. What was clear was that not much diagnostic work was reported in these papers: 70% had nothing on diagnostics, 22% had some, and there were some really good examples of doing the diagnostics well, but only 9%. Most of the papers only looked at significance and did not really interpret the effect size, the practical importance of the effect.

This review revealed several things across the same four problem categories we had in study one. First, the most common research design issue was causality: claiming causality with a problematic research design, opening up the possibility of endogeneity. We didn't find that many papers with research design issues here overall, and endogeneity was the most common problem among them; the second was claiming causality where cross-sectional or other problems in the research design (not necessarily a cross-sectional design) were causing trouble, for example problems with control variables, or inappropriate instruments or time lags. Problematic measures were also discovered here in study two, and common method variance issues were somewhat common, but only a handful of papers had them if you look at the numbers.

On inappropriate techniques, here again missing techniques were the most common problem discovered. We did this review so that Gabriella and I first split the papers in half and went through them using the methods check template, then discussed them with Mikko, and then Mikko went through all the papers; this is how we found these issues. Missing regression-related analysis was the most common technique-related problem, especially missing diagnostics. There was also incorrect use of regression itself, in the main models: for example, two-stage least squares was used incorrectly, with the residuals used instead of the fitted values, and there were many other problems with different types of regression models. Instrumental variables were also quite challenging in some papers, 10 papers here it seems, which had problems with the criteria for, and the selection and use of, instruments in different models. Model fit analyses were missing as well.

Justification didn't have that many issues; 11 seems to be the count for the most common problem. An unsubstantiated claim about regression was the most common justification problem, and this mostly concerned transforming variables so that the regression would fulfill its
assumptions, the normality assumption for example: it was claimed that regression requires normally distributed variables, and there were questions like when a log transformation is actually needed. Related to that is lack of justification: in some papers there was no justification at all for why a log transformation was applied to the focal variables in the regression, and this was a problem in several of these papers. There was also lack of justification regarding instrumental variables: we have the relevance criterion and the exclusion criterion, and these were rather often ignored.

Finally, interpretation and reporting issues. Here it is the same as in study one: there is a lack of justification for control variables, and also a reliance on prior empirical papers as the justification for using certain methods. These were the main issues in problematic justification of methodological choices, with control variables the largest one. Citations also had problems, especially page numbers missing from book citations, and text revision was needed. To conclude study two, we could say that regression-related issues seem to be the most common on the analysis and technique side, and they are mostly about justifying your choices, misunderstandings about how to choose the technique and what to do with the variables, and then what to report, what is relevant, so that readers get a good understanding of your results and your models.

We did a comparison between study one and study two, and you can see here that there is a difference in the research design issues that were most common. In the manuscript methods check stage there are, of course, more severe research design problems that have to do with causality; the most common problem there was that causality claims are not supported by the research design. These issues appear there, of course, because the papers had not yet gone through peer review and the methods check. In the published articles, naturally, there are no longer major research design problems, but endogeneity still persists: there is still omitted variable bias, simultaneity-based endogeneity, or other types of endogeneity left. Regarding technique, both studies had missing analyses, whether model fit or diagnostics; the published papers clearly had diagnostics missing, in many cases regression diagnostics. The justification issues in study one were mostly about assessing reliability, and in study two mostly about regression. What was surprisingly similar is that both the manuscripts and the published papers had the same problems with the justification of control variables: why they are chosen, what they bring to the model, and how they address endogeneity. And the justification of methodological choices more broadly was also found to be quite a common problem in the published papers.

Okay, so those were the main results of study one and study two. What we can conclude at this stage, with these preliminary descriptive results, is that it seems that the JOM
review process is very good at identifying research design problems; that works well, and not many problems remain in published papers except endogeneity. But there is room for improvement on some data analysis and justification issues, such as the control variables and the other things I just mentioned. Okay, thank you. So now I will give the floor back to Mikko.

I'll say a few words about Henni's presentation before we go to my final presentation, where I take a look at a couple of more specific issues with examples and then discuss what we can do about them. Let's go back one slide here; and by the way, I was thinking that I would go on for 40 minutes, and then we'll have 20 minutes for discussion at the end. If we compare the study one manuscripts, those rejected in the methods check review or asked to be revised, with the accepted manuscripts, we have to understand that these are quite different kinds of papers. The majority of articles that come in, at least to the methods check, are cross-sectional surveys, and those are difficult to publish in the journal; we do publish them if they're really well done and address interesting questions, but as Henni pointed out at the beginning of the presentation, most of the published studies actually use secondary data. When you use secondary data, then, for example, addressing scale reliability is not an issue, so that's one reason why these results differ. Another thing that must be pointed out is that, as Henni said, not all of these issues are equally severe. One that I like to point out particularly is Cronbach's alpha: if you misapply it, the consequences are not that severe, but it's very easy to apply correctly and to check the assumptions. You simply do a factor analysis: if the factor loadings are the same, or roughly the same, for all the items, then you apply coefficient alpha; if they differ, then you apply composite reliability, and that's it. That's something anyone can do, so it's easy to fix, and that's one of the reasons why it's often raised; people don't seem to know much about the assumptions of alpha, but it's not an important issue, just something that is easy to spot and easy to fix, and therefore it's highlighted here.
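To make that decision rule concrete, here is a minimal Python sketch; the item data are simulated and the loadings used in the composite reliability step are hypothetical values of the kind a factor analysis would produce, so treat this as an illustration of the two formulas rather than a prescribed JOM procedure.

```python
# Sketch of the decision rule above: check factor loadings first, then pick
# the reliability statistic. All data and loadings here are simulated.
import numpy as np

rng = np.random.default_rng(0)

# Simulate 200 respondents on a 4-item scale driven by one factor.
true_loadings = np.array([0.8, 0.8, 0.8, 0.8])  # roughly equal: alpha is fine
factor = rng.normal(size=(200, 1))
items = factor @ true_loadings[None, :] + rng.normal(scale=0.6, size=(200, 4))

# Coefficient (Cronbach's) alpha:
# alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score).
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))

# Composite reliability from standardized loadings lambda_i:
# CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda_i^2)).
# In a real analysis these loadings would come from a factor analysis.
lam = np.array([0.75, 0.78, 0.80, 0.77])  # hypothetical estimated loadings
cr = lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

print(f"alpha = {alpha:.3f}, composite reliability = {cr:.3f}")
# If the loadings were clearly unequal, alpha would understate reliability,
# and composite reliability would be the defensible choice.
```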
I'll now go through some of the more specific issues, with examples, and then talk about how we can avoid these methodological problems. Right, so back to my presentation. We were initially planning to have the first draft of the full paper that explains these issues ready for the conference participants, but because of the pandemic it didn't turn out that way; we didn't have time to write it, what with homeschooling kids and other things like that. But we have a version, we just need to clean it up, and it'll be available on the conference website in a couple of days, maybe Monday, definitely by Wednesday.

So let's take a look at the issues. The increase in methodological rigor really started during the tenure of the previous editors-in-chief. There was this editorial by Ketokivi and Guide, or Guide and Ketokivi rather, and they pointed out some problems, and we can see that the problems they pointed out have become more salient to the authors who submit papers to the journal. Guide and Ketokivi said it's time to take causality seriously, and they talked about endogeneity; and indeed, quite a few papers submitted to the journal now talk about endogeneity. The problem is that not all of those papers actually know how to apply instrumental variables correctly; some don't seem to really know what an instrumental variable is, and trying to deal with endogeneity without understanding what the issue is about, what instrumental variables really are, or what two-stage least squares or any other instrumental variable estimation technique actually does, leads to trouble. So the recommendation to take causality seriously has actually led to another set of problems, because it pushes researchers to apply tools that they may not be comfortable with.

And the old rule of thumb is still very much prevalent, so I don't think that part of the editorial from five years ago really made any difference: we get lots of papers that still say that when the reliability statistic is over 0.7 then everything is fine, and when it's below, the results are useless. Always understand the tools that you use. This is a problem, and we see it both in published articles and in the articles sent in for review. I'll demonstrate with a two-stage least squares example, but we have had applications claiming to be two-stage or three-stage least squares where the authors' explanation of the method was not even close to what two-stage or three-stage least squares actually does. There are claims that the Hausman test tests efficiency; it doesn't do that, it assumes efficiency and tests for consistency. And GMM is presented as a general method for dealing with endogeneity; well, GMM does not deal with all kinds of endogeneity, and even when you use it with a dynamic panel model, it deals only with specific kinds. So the editorial really seems to have pushed people beyond their comfort zone, into techniques that would be effective if used correctly; whether they are used correctly is another matter entirely.

Common method bias is something that most articles address, but it's really a big can of worms, in that for many of the techniques recommended for common method variance, if you really look at what they are based on and whether they actually work, the answer tends to be either no, or yes but only in very specific circumstances. So if you have a method variance problem, you basically have options that are really bad, bad, and slightly bad; there are no good solutions to method variance problems, and we need to understand that when we deal with them. Quite often we see Harman's single-factor test, the unmeasured latent method factor, and so on.

Then, staying current with methodological developments: this is not being followed that well. Quite often people justify their choices by what has been done in the past. I remember one paper in particular; I sent a revision request last week, or sometime in the recent past, asking the authors to justify their decisions based on the methodological literature, and I pointed to literature that states and demonstrates that what the authors do is problematic even though it's a current convention. I got a response back saying, we are doing this because it's a convention. Not all conventions are worth following.

Okay, let's take a look at a couple of highlights of the problems we see. I'm probably not going to have time to go through everything I've prepared; I tend to have more slides than I get through, but I'll post the slides on the conference website, and I also have most of these slides explained on YouTube; I'll point you to the links later.
So: instrumental variables and endogeneity. This is a screenshot from a manuscript that was revised a couple of times during the methods review process, and it's now sufficiently different that you can't identify which manuscript or published paper it actually is. What is the problem here? They regress X on Z, where Z is the instrument and X is the endogenous explanatory variable. That's stage one, the first stage of two-stage least squares, and it's good that far. And Z is related to X; well, that's an assumption in instrumental variable estimation, and good to check. So it's correct up to this point. But then they use the regression residuals as an instrumental variable for X in the second-stage regression. That is not how you do two-stage least squares. In two-stage least squares you are supposed to have an instrumental variable (I'll explain the concept a few slides from now), then regress the endogenous variable on the instrumental variable, take the fitted value, not the residual, and then use the fitted value in place of the original endogenous variable as a predictor of Y, the dependent variable. So they're applying it incorrectly. And what is the justification for this application? They cite a 2016 paper, and when we take a look at what that 2016 paper, published in the Journal of Operations Management, actually says, it explains the two-stage least squares procedure incorrectly in the same way. The point here is not to blame any author; these are mistakes that everyone makes. The point is that if you want to know how two-stage least squares works, the Journal of Operations Management is maybe not the best place to look. Instead, you should be looking at a really good research methods book, or an econometrics book that deals specifically with these kinds of techniques.

Now, are these incorrect applications, or just incorrect explanations of a correct application? We don't know. If they are incorrect applications, and these are the main analyses of the papers, then the results are going to be completely incorrect; if it's just a reporting issue, the results could be right. We don't know. How would we know? Well, we would know if the authors were to explain, for example, that they used Stata's ivregress command, or even better, gave us the analysis file. If you provide the analysis file that you used, then we can check what you actually did, and we don't have to rely on whether you know what two-stage least squares does. Another thing we can learn from this example is that if you are not 100% sure how, for example, two-stage least squares works, then maybe you should not try to explain what it does: just state that you applied two-stage least squares, state the software, perhaps report the command as well, and be done with it. Then you're not giving incorrect advice to other authors.

So how did I respond to this kind of problem? This is from the decision letter. Quite often when I write decisions, I link to teaching slides or teaching videos; I have a library of about 200 small clips on YouTube that explain different techniques. So instead of looking at the Journal of Operations Management, perhaps you should look at Ketokivi and McIntosh; well, that one is in the Journal of Operations Management, but it's specifically about these techniques. Or take a look at an econometrics book, or take a look at our lecture video that explains the concept.
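Here is a small simulated illustration of why the fitted-values-versus-residuals distinction matters; none of this is from the manuscript discussed above, and all variable names and effect sizes are made up. A minimal sketch, assuming a single instrument and a single confounder:

```python
# Simulation: the second stage of 2SLS must use the FITTED values of the
# endogenous regressor, not the first-stage residuals.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
u = rng.normal(size=n)                   # unobserved confounder -> endogeneity
z = rng.normal(size=n)                   # instrument: drives x, unrelated to u
x = 1.0 * z + u + rng.normal(size=n)
y = 0.5 * x + u + rng.normal(size=n)     # true causal effect of x on y is 0.5

# Naive OLS is biased because u drives both x and y.
ols = sm.OLS(y, sm.add_constant(x)).fit()

# Stage 1: regress the endogenous x on the instrument z.
stage1 = sm.OLS(x, sm.add_constant(z)).fit()

# Stage 2, correct: replace x with its fitted values.
tsls = sm.OLS(y, sm.add_constant(stage1.fittedvalues)).fit()

# Stage 2, the mistake in the example: use the residuals instead.
wrong = sm.OLS(y, sm.add_constant(stage1.resid)).fit()

print(f"OLS:              {ols.params[1]:.3f}  (biased upward)")
print(f"2SLS, fitted:     {tsls.params[1]:.3f}  (close to 0.5)")
print(f"2SLS, residuals:  {wrong.params[1]:.3f}  (not the causal effect)")
# Note: manual two-step 2SLS gives correct point estimates but incorrect
# standard errors; in real work, use a proper IV routine such as Stata's
# ivregress, which is mentioned above.
```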
Don't use the Journal of Operations Management, and the empirical papers published in the journal, as methodological guidance. The fact that people use something probably correlates with that technique being useful, but the mere fact that something has been used in the past is not evidence of its usefulness, because not all the techniques that are applied work well, and we sometimes later discover that techniques we applied in the past don't work as well as we thought they did. This happens too, and it is very much about staying up to date with current methodological developments.

But we should not feel too bad about this, because it happens to everybody. For example, Industrial Marketing Management published a paper that explains two-stage least squares incorrectly in exactly the same way, and what's worse is that it is a paper about how to deal with endogeneity. So a journal publishes a paper about how to deal with endogeneity when you write articles for that journal, and it explains things incorrectly. There was a follow-up by the same authors, who say: well, we explained it incorrectly, and then they provide the correct explanation. But if you only see the original paper, which is cited quite a bit now, you wouldn't know that a correction has been published. So whenever you see something in an applied journal, it might be a good idea to pick up a methods book, or look at a class that covers the method, and compare: does the methods book or the class explain the method the same way as the article in the applied journal? And by applied I mean journals that publish empirical research, as opposed to journals that publish studies about methods.

So this is an instance of a statistical and methodological myth and urban legend. The way these myths work is that someone publishes an idea in an econometrics book; we read that there are two regressions in two-stage least squares, that we take something from the first stage and use it in the second stage, and we misunderstand: we take the residuals instead of the fitted values, and then we publish a paper in an applied journal. Then, instead of going back to the original idea in the methodological literature, people look at the journal where they want to publish. So one person misunderstands an idea from a methodological source, and the misunderstanding starts to circulate within that discipline or within the journal; this is what we actually see in the two-stage least squares example. When this goes on for a while, it becomes institutionalized in the review process. So when reviewers see someone explain that we used two-stage least squares, we regressed X on Z, we took the fitted values, we used those to predict Y, a reviewer says: no, that's not how two-stage least squares is done; see these five papers published in this journal that explain how two-stage least squares works. The problem is that once we have enough misapplications, publishing a correct application becomes more difficult, and doing things wrong becomes part of the methodological body of the discipline.

So how do we stop these statistical and methodological myths and urban legends? There's actually some good advice in the 2015 editorial by Guide and Ketokivi. One piece is that you should always understand the techniques that you apply.
So instead of simply reading what the Journal of Operations Management does, take a look at what the actual methodological literature says. Instead of saying that expert X recommends technique Y, justify your decisions based on what the method has been proven to do. If two-stage least squares has been proven to be consistent, that's a much stronger claim than saying that Guide and Ketokivi recommend two-stage least squares. You can also say that simulation studies have demonstrated that method X does something. You should not use something just because someone said so; the fact that I say you should use two-stage least squares is not a justification. You need to justify things based on their proven or demonstrated properties. If you think that regression analysis is problematic for your study, then you need to say that regression analysis has been proven to be inconsistent under endogeneity, which means that it produces incorrect results even with very large samples. And then you can say: well, two-stage least squares has been proven to be consistent under this scenario, and therefore we apply two-stage least squares.

Another solution is citing methodological sources instead of citing previous applications. And whenever you cite something, remember to add page numbers to the citation. One thing I often see is that authors say they apply two-stage least squares and cite, for example, Greene 2010. Well, Greene's econometrics book is 1,200 pages; how am I supposed to know what specific thing you are referring to in a book that covers such a broad range of topics? So whenever you cite something, particularly a big book, give your reviewers and readers pointers on where to learn more about the technique you apply. Just citing Greene will not help anyone, because no one who has not read Greene from cover to cover will know what you are referring to; and anyone who has read Greene from cover to cover is probably off teaching econometrics somewhere, because that's a really hard book to read.

Publish your analysis files. The problem in the example was that we don't really know whether two-stage least squares was applied incorrectly or simply explained incorrectly. If you publish your analysis files, a Stata do-file or an SPSS syntax file for example, then your reviewers and readers can check whether you did things correctly. Doing things incorrectly and explaining things incorrectly are two very different problems: the latter is easy to solve, just fix the explanation, but if your actual main analysis is done incorrectly, then a lot of rework is needed to make it right. So really pay attention to how you justify things, and try to understand what you do.

And this goes beyond two-stage least squares. One of the published articles that we reviewed estimated a model where X predicted Y and Y predicted X, and applied the seemingly unrelated regression technique. Well, that's not the correct technique; such a model violates that technique's assumptions, and it produces very much incorrect results. This again relates to knowing what you do, and explaining it in a transparent way. So that is one set of problems. Another specific set of problems relates to instrumental variables and endogeneity, and there are a couple of myths, very common in the published studies, that need to be corrected.
Let's take a look at what endogeneity is. The idea of endogeneity is that if you want to regress Y on X and claim causality, so that X is a cause of Y, then you must assume that any other causes of Y are uncorrelated with X. There are basically three causes of endogeneity that one could think of. One is that you have a specific omitted cause in mind. You might know that a variable E is a cause of both X and Y, but you don't have data for E; maybe you have prior theory that says that E causes X and Y, but no data. This is an omitted control variable problem, and it needs to be explained. Another problem is that you're not sure whether one of the omitted causes of Y could be correlated with X. So you don't know the specific source of endogeneity, but you cannot rule out that X is correlated with other causes of Y. The third kind, which is a bit different, is that there is reciprocal causation, so that X causes Y and Y causes X; this is called simultaneity in econometrics. Let's take a look at an example of what this means in practice. Assume that we have this kind of simple problem: we want to study whether investment in factories affects return on assets. What kind of assumptions do we need to make? This relates to the paper by McIntosh and Ketokivi about endogeneity, which is a very good general explanation of this, and they state that addressing the endogeneity question starts with asking: what explains the variance of the explanatory variable? So what does investment in factories depend on? Some companies invest in factories, others don't. What does it depend on? Well, it could depend on company strategy: an investment in a new factory is a strategic decision, so it probably depends on firm strategy. If we assume that firm strategy is one of the predictors of whether a company decides to invest in a new factory or not, then to claim that there is no endogeneity, we must assume that firm strategy is not a cause of return on assets. And if you go and tell strategic management scholars that firm strategy does not influence return on assets, they will tell you to go away, because there is lots of evidence that strategy actually influences ROA. So we have an endogeneity problem when the variance of the explanatory variable depends on something that also causes the dependent variable. And the key thing in dealing with endogeneity is to first explain to the readers what the endogeneity problem really is about in your study. Quite often, in the studies that I review, they just state that there is a potential endogeneity problem. That's not very useful. It's the same kind of statement as: our study could be potentially wrong, our sample could be potentially biased, our measures could be potentially invalid. Yes, all those are logical possibilities, but we should only focus on those possible problems that we think are most relevant. And therefore we need to really try to understand what the specific issue is in our case. So what is the thing that drives investment in factories that also drives ROA? Explain that to the readers, and then you move on to the next stage, which is how to deal with the endogeneity. So this is problem number one: making a general claim that endogeneity is a problem without explaining what the problem really is. A toy version of the factories example follows below.
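Here is the factories/ROA situation as a small simulation: "strategy" drives both the investment decision and ROA, so leaving it out biases the estimate, while controlling for it recovers the true effect. All names and effect sizes are invented assumptions:

```python
# Omitted variable bias: the common cause "strategy" plays the role of E.
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

strategy = rng.normal(size=n)                    # the omitted common cause E
invest = 0.7 * strategy + rng.normal(size=n)     # X: investment in factories
roa = 0.5 * invest + 0.6 * strategy + rng.normal(size=n)  # Y: true effect of X is 0.5

def ols(design, outcome):
    X = np.column_stack([np.ones(n)] + list(design))
    return np.linalg.lstsq(X, outcome, rcond=None)[0]

print("Without strategy:", ols([invest], roa)[1])            # biased upward (~0.78)
print("With strategy:   ", ols([invest, strategy], roa)[1])  # ~0.5, the true effect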
Some studies do explain it well. They say that their problem falls, typically, into the simultaneity class, where there is reciprocal causation, so that ROA causes investment as well. And then they can explain how they deal with the problem. The general strategy for dealing with this problem is using instrumental variables. The idea of an instrumental variable is that you pick something that correlates with the endogenous explanatory variable and does not correlate with the error term. There are problems with instrumental variables in some of the published studies that I've read, two specific problems. One is that studies justify an instrumental variable by checking whether it correlates with Y, seeing that the correlation is not significant, and then declaring that Z is a valid instrument. That is not the right thing to test. You need to test somehow whether Z correlates with the unobserved error term, which you of course can't do directly, because you don't observe the error term; if you did, things would be very simple. So it must be justified, based on theory, that Z does not correlate with any other causes of Y. And this is the second problem. The first thing when you deal with endogeneity is to explain the problem. The second thing that you must do is to explain why you think that your chosen instruments, based on existing theory or existing empirical research, are not causes of the dependent variable. If they are, then they are not valid instruments. So here's a list of common problems with endogeneity and instrumental variables. Not explaining the nature of the endogeneity problem. Not justifying the instruments' exclusion restriction. These were the two things that I just explained. Assuming that the two-stage least squares estimator is required for dealing with endogeneity: two-stage least squares is a simple technique for dealing with endogeneity, but it is not the only technique. The magic ingredient in dealing with endogeneity is not two-stage least squares; it is the instrumental variable, and you can of course use instrumental variables with structural equation models. I've seen quite a few papers that use structural equation modeling with latent variables, then switch to scale scores and apply two-stage least squares. This is unnecessary. It is a bit incorrect if you assume that the latent variable model is actually correct for the data, and it is also unnecessarily complex, because you can simply add an instrumental variable to the structural equation model directly; it doesn't make much difference to the original model. Assuming that the GMM estimator solves endogeneity problems, or that it solves all endogeneity problems when used with a dynamic panel technique: this is something that we encounter in published studies, and it is an example of a complex technique being either justified incorrectly or applied outright incorrectly. Implementing statistical techniques incorrectly: two-stage least squares is the most commonly misused one. I think that the applications of two-stage least squares that I've seen in this journal are more often incorrect than correct, or at least they are incorrectly explained; whether the analysis was actually done incorrectly or whether only the explanation is incorrect, we don't know. Assuming that the correlation between the instrument and the dependent variable is a test of the exclusion restriction. So this is endogeneity. It's a big, big minefield.
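On that last point, a short illustrative simulation shows why a significant correlation between Z and Y does not invalidate an instrument: a valid, strong instrument is correlated with Y through X by construction. The data-generating process is invented:

```python
# A textbook-valid instrument (affects y only through x) still correlates with y.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 10_000

z = rng.normal(size=n)                 # valid instrument
u = rng.normal(size=n)                 # unobserved error component
x = 0.8 * z + u + rng.normal(size=n)
y = 1.0 * x + u + rng.normal(size=n)

r, p = stats.pearsonr(z, y)
print(f"corr(z, y) = {r:.3f}, p = {p:.1e}")  # strongly significant...
# ...yet z satisfies the exclusion restriction here by construction.
# Screening instruments by demanding a non-significant corr(z, y) would
# reject exactly the strong, valid ones. The exclusion restriction is
# about corr(z, error), which must be argued from theory, not tested this way.
```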
And one of the reasons why I think our researchers struggle with this is that they may not have been trained to deal with endogeneity, because this issue has really been highlighted only in about the last 10 years, and it was raised in an editorial five years ago. If you were not given training on how to deal with endogeneity, if you did not take an econometrics class during your doctoral studies, it might be that the first time you hear that you need to do something about endogeneity is when you get a review letter back from the journal telling you to deal with endogeneity and giving you 90 days to submit the revision. Is it reasonable to assume that a researcher first learns about instrumental variables and endogeneity, then learns about two-stage least squares and other techniques for using instrumental variables, then applies them correctly, reports them correctly, and does all this within 90 days? Probably not. So if you are asked to do something that you are not really comfortable with, it might be a good idea to do two things. One, you can ask for more time. You can tell the editor that the reviewers are asking you to do something which you have never done before, and therefore you need more time to study. You'll get the extension, no problem, unless it's a special issue with a fixed timeline. The second thing that you can do is to write in the response letter that this is the first time that we do two-stage least squares, please check whether we have done it correctly. No reasonable editor or reviewer will reject your paper because you are misapplying techniques that you are using for the first time. If you misapplied once, were told to fix it, and misapplied again, then you are likely going to be rejected. Data analysis issues or mistakes are generally something that can be addressed in revisions. Of course, if it seems that a paper would need five revisions before it can reach an acceptable level on the methods part, then we will not do those five rounds. We'll tell you to go elsewhere with your paper, or to make it better and come back once you're sure that it's better. We can't run Research Methods 101, or advanced research methods, during the review process for one set of authors; that's simply a resource question. Okay, we've got some time, so I'll talk about another common problem, method variance. And method variance is a big, big can of worms. Whenever we get a cross-sectional survey, we basically require that the authors say something about method variance. But whether you can actually show that it's not a problem, or whether it generally is a problem at all, we don't really know. There are theories about how method variance can distort the correlations between your variables. Is there evidence to support those theories? That's a big iffy thing. But nevertheless, this has been, and continues to be, a common reason for rejection. So if all your scales, all the items in your survey, are highly correlated, then it's possible that the correlation is driven by something other than the constructs of interest. That's the basic argument for rejecting an article because of method variance problems. So what can you actually do about method variance problems? The editorial by Guide and Ketokivi points you to Podsakoff et al. (2003), but this is an instance of pointing to outdated advice, because this is a topic that is actively studied.
Within the last five years, we've had lots of important and interesting findings in the research methodology literature that you need to look at. So if you justify things based on Podsakoff et al. (2003) alone, you are using outdated information. There are a couple of techniques that you can apply, and these techniques for method variance issues can be divided into techniques that detect the problem and techniques that detect and control for it. If you use a technique that detects the problem and the technique tells you that there's no problem, you're fine. And if you do have a method variance problem, there are ways of controlling for it. So we have correlational techniques. Harman's single-factor test should not be used; Guide and Ketokivi point that out already, and Podsakoff et al. (2003), 20 years ago, said don't use that test. Still, when I see that test being used, it is typically justified with a citation to Podsakoff et al. (2003). So authors are using a technique while citing an article that specifically recommends against the use of that technique. That either indicates that the authors have not really read Podsakoff et al. (2003), or that they are simply being a bit dishonest. I think not reading the paper is the more common scenario. Then there are partial correlation procedures, such as the Lindell and Whitney marker technique, and the unmeasured latent method factor design, which is also explained in Podsakoff's paper. These are techniques that you could apply without having thought about method variance issues in your study design, so after the data has been collected. The other techniques that are available require that you think about the method variance problem in advance. We have marker variable techniques and measured method variable techniques. The idea of a marker variable is that you measure something completely unrelated; for example, you measure a person's mood that day in your survey about supply chains. Then you can check whether the mood variable correlates with the supply chain variables; if it does, that's an indication of method variance. So this is the idea of marker variables. Measured method variable techniques refers to techniques where you suspect that some of the items in your study are influenced by, for example, social desirability bias, and then you include a scale for social desirability and use it in the model. We have a couple of techniques that apply this principle. Then we have multiple method techniques. If you think that the measurement method drives correlations, then use multiple methods; the multitrait-multimethod matrix is the most common methodological approach for analyzing such data. These are not very common. These are basically designs where you measure the same dependent variable and the same independent variable using two independent methods. And then we have instrumental variable techniques. This is something that I've never seen anyone apply, but it is recommended in some articles and it is in principle useful. These techniques can also be characterized as questionable techniques and impractical techniques. Instrumental variables are impractical because the instrumental variable must be uncorrelated with the source of error. And if you think that, for example, using a survey and a single informant is a source of error, then your instrumental variable must be collected from some other informant, or using some other technique with the same informant, and that's typically not practical. A sketch of the marker-variable logic follows below.
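As a minimal sketch of the marker logic, in the spirit of Lindell and Whitney (2001): the correlation between a theoretically unrelated marker and the substantive items is taken as an estimate of shared method variance, and the substantive correlation is adjusted for it. The data, names, and effect sizes below are invented assumptions:

```python
# Marker-variable adjustment on simulated single-informant survey data.
import numpy as np

rng = np.random.default_rng(4)
n = 500

method = rng.normal(size=n)                        # common method influence
integration = 0.5 * rng.normal(size=n) + 0.4 * method + rng.normal(size=n)
performance = 0.5 * rng.normal(size=n) + 0.4 * method + rng.normal(size=n)
mood_marker = 0.4 * method + rng.normal(size=n)    # theoretically unrelated marker

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

r_xy = corr(integration, performance)
r_m = min(corr(mood_marker, integration), corr(mood_marker, performance))

# Adjust the substantive correlation for the marker-based CMV estimate
r_adj = (r_xy - r_m) / (1 - r_m)
print(f"raw r = {r_xy:.3f}, CMV estimate = {r_m:.3f}, adjusted r = {r_adj:.3f}")
```

The adjustment only works to the extent that the marker really is unrelated to the constructs and really shares the method effect, which is exactly the kind of assumption that must be argued rather than tested.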
The reason for using a single-informant survey is typically that no other form of data collection is available, for practical or resource reasons. The multitrait-multimethod techniques are also impractical and questionable; I will not explain them in detail because they are not really used in JOM papers. Then we have the questionable techniques, the correlational techniques. The most common of these is a confirmatory factor analysis model where you have the latent variables of interest and then one general method factor on which all the indicators load. These kinds of models are rarely identified. Identification means that it is mathematically possible to come up with a unique best set of estimates for the model, and if the model is not identified, then estimating it is rather useless. So these are really, really questionable techniques; I'll talk more about them in the YouTube videos that I'll introduce shortly. Marker variables are the state of the art, and still a bit questionable, because you are making assumptions that we really don't know whether they hold, but the approach could work at least in theory. So the unmeasured method factor models are typically not identified and don't work even in ideal conditions, while the marker variable approach is a bit less questionable: it can work in ideal conditions, but whether it works in practice, we don't know. Now, practical advice on method variance. This is something that is evolving, so there is lots of methodological literature addressing these issues. You can take a look at, for example, Spector's paper from 2019. It is a very good article about how the measurement method can affect indicators, how you should theorize about the measurement method, and then how you should analyze the data once you have identified the possible causes of method variance. It actually requires a lot of work if you apply the technique that they recommend, but I think it is the most robust thing that you can actually use, so take a look at that article. Then Podsakoff and MacKenzie's work is of course a classic, and their recommendation number one for dealing with method variance is that the best way is to avoid the problem in the first place. So if you are asking about, let's say, supply chain integration and financial performance, two variables, then take the financial performance measures from actual accounting figures, or ask the person to report numbers instead of rating the company's performance from one to five. People generally tend to report numbers, like what's your revenue, rather honestly, and those are not affected by the same kinds of biases that rating scales are. So procedural remedies and multiple sources are best; Podsakoff's article is still up to date on this front. Then, consider the mechanism and its expected effect. Method variance is not a single source of bias: there is social desirability, there is item priming, there are leniency effects and implicit theories. Podsakoff's article lists at least 20 different things that can cause two indicators in a survey to correlate, and Spector's article focuses on a few of them. Their main argument is to focus on the mechanism and then try to see how that mechanism influences your study. What are the priming effects, the contextual effects, what is the mechanism? This is something that you need to do before you collect your data. Then consider the evidence on the strength of these effects in the methodological literature; a minimal sketch of the measured method variable idea follows below.
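To illustrate the measured method variable idea in the simplest possible form: if you suspect a specific mechanism, say social desirability, you measure it with its own scale and include it in the model. Everything in this sketch, names and coefficients alike, is an invented assumption:

```python
# A measured method mechanism (social desirability) inflating two self-reports.
import numpy as np

rng = np.random.default_rng(5)
n = 1_000

sd = rng.normal(size=n)               # social desirability tendency (measured)
x = 0.6 * sd + rng.normal(size=n)     # self-reported predictor, inflated by sd
y = 0.6 * sd + rng.normal(size=n)     # self-reported outcome; true x -> y effect is ZERO

def coefs(design, outcome):
    X = np.column_stack([np.ones(n)] + list(design))
    return np.linalg.lstsq(X, outcome, rcond=None)[0]

print("y on x alone:     ", coefs([x], y)[1])      # spurious positive effect
print("y on x + sd scale:", coefs([x, sd], y)[1])  # ~0 once the mechanism is modeled
```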
If you're using a scale that has been shown to be very resistant to, let's say, social desirability bias, then you can write in your article that typically in this kind of study social desirability would be the main problem, but research has shown that for this particular scale, social desirability is not a big issue. That is one way to deal with method variance. Instead of trying to fit a single method factor to the data, think a bit more about what is actually driving the correlation between the variables, and whether you have any evidence to point to that shows it might not be a problem in your study. Then you should make informed decisions and evaluate the impact based on the evidence that you have actually read about prior applications. This is basically the procedure, in a nutshell, that Spector and co-authors recommend. The correlational techniques simply don't work: if you have data that you collected without thinking about method variance, and you then fit a single factor model to the data, it doesn't really do any good. So these techniques don't work; why they don't work, I explain in a set of videos that I'll link to next. I have a few other topics similar to this, but because we need to wrap up the professional development workshop, I'm going to skip those other things and then we'll have a discussion about what we have shown here. So one thing that you can do to learn more about these common problems is to go to my YouTube channel. I teach statistical analysis, research design, and research methods; that's my main competence. I'm not actually an operations management researcher myself, I consider myself more of a methods person. I decided about a year ago that I would put all my lectures online on YouTube, and for the past year or year and a half, when I've been running research methods courses, I've designed those courses to address in part the issues that I've seen in the papers that I review for JOM. Some of these, like the set of videos on method variance, well, I don't have a playlist for it yet, but there are something like 10 videos, were inspired by the problems that I saw in published articles. I show examples from Journal of Operations Management and other journals. So this is one source where you can learn more about the standards against which your articles are evaluated. And there are more resources about these common problems coming soon. We will have the paper that I worked on with Henni and Gaby; the review part is done, we still need to write the recommendations. We'll have an early draft available on the conference website before the conference is over, and we'll submit the full paper, explaining these issues and how to deal with them, later this fall. We hope that JOM will accept the paper and publish it as soon as possible. And then, once we have this list of the most common problems explained, maybe at that point I'll write some editorials if there is something more that you should know. But this is just an explanation of some of the issues and what we can do about them. The two most important things are: one, you should understand the techniques that you apply, and your application should be based on the most recent methodological research instead of being based on what has been done in the past in Journal of Operations Management.
The second important point is to be transparent. If you are not 100% sure whether you have answered a reviewer comment correctly, then you should point that out in the response letter. Also, it is very useful to provide your analysis files as supplementary material with the article. We don't insist that you publish those files as online supplementary material on the journal website, but they should be made available to the reviewers. Why is this important? Why is this useful for you? When I get an article for methods review, quite often the first letter that I send to the authors is: I just need more information; you are not reporting transparently what you did, so it is impossible for me to evaluate whether you did it correctly. Like the two-stage least squares example in this presentation: we don't know if it's simply an incorrect explanation or whether the technique has been applied incorrectly too. Having access to the analysis files will solve a lot of problems and answer a lot of questions for those reviewers who actually have experience with that statistic. I'll conclude my presentation here, and we'll have about 50 minutes for questions and discussion. Thank you very much, Mikko, for that very rich presentation. I very much appreciate how people are coming together so that we can explain things well and get the most value out of the very exciting research that we do. Tyson and I feel that we want the journal to be held to the highest possible standard. It doesn't mean that we have created a new religion where you have to jump through certain hoops to be good enough; it is really about describing accurately what we have the right to say given the data that we have. It might be that there is a very likely endogeneity problem, and then, just like you would do in a court of law, explaining: yes, these are the factors that need to be taken into consideration, this is why this paper is worth looking at even though it could be biased in this direction, which also provides a leg up for future researchers. So Guangxi, it would be very interesting to have your comments if you're here. I've seen you here. Yeah, so can you hear me? I think I'm on. We hear you. Okay, cool. So hi everybody. Thanks to the editors for inviting me on board this exciting department, and I'm very happy to work with Mikko on these various issues. It has also been firsthand learning to see Mikko's summary of the one and a half years of work he has done; it's a quite structured summary. I think Mikko's training is more from quantitative psychology, and mine is a little more from econometrics, so we have this overlap on how to deal with endogeneity, which seems to be one of the key topics of today's talk. So I'm just going to quickly echo what Mikko said and add my own reflection on this. I do my own reviews for the journal, and as an author I have to deal with this in pretty much every paper. My reaction is that I can understand why endogeneity is such a thorny problem for authors, and for reviewers as well, because it is easy to make a blanket statement saying that you have an endogeneity problem and then feel that it's a slam-dunk rejection for the paper, like Mikko said. But to me that feels more like an air ball than a slam dunk, because a more reasonable approach to dealing with this is to first talk about what you think is the source of the endogeneity.
Because otherwise it places an unreasonable burden on authors if you don't tell them why you think there is an endogeneity problem in the first place. So I feel that's very important. And that also plays into encouraging the authors to become more transparent and encouraging the reviewers to become more reasonable, because these two things, in my opinion, have to go hand in hand; otherwise it is unlikely that we will see a lot of improvement on this. On one hand, authors can be more transparent: they talk about the potential endogeneity problem, what they think is the source of the endogeneity, and their proposed solution in the first submission of the paper. That's the best scenario. But at the same time, I understand that if the authors do that, they might open themselves to attacks of endless robustness checks, which in the end might lead to a rejection. I feel that aspect could be improved if we ask the reviewers to be a little more reasonable in evaluating the solution to endogeneity. We don't need a 100% perfect textbook solution to your endogeneity problem, because a lot of the time the instrumental variable in the textbooks comes out of a simulation, where the authors can assume that the instrument has no correlation with the error term and they observe the entire data-generating process. I have to point that out, because in the real world we do applied research: we are not textbook writers, and our data are not generated out of simulations. We collect data, and the data collection process itself is extremely time consuming, regardless of whether it is primary data collection or you actually contact a company to get the data. So I feel we need to respect that data collection effort, but at the same time I would like to echo Mikko's comment and ask authors to be a little proactive when collecting the data.
So you have gone through all the trouble to collect the data in the first place, right? So when you're collecting the data, try to think about the endogeneity problem during the data collection process: think about what might be the cause of endogeneity, what might be the omitted variable if you didn't collect it. The best solution to the endogeneity problem is if, during the data collection process, you have already thought about the key possible alternative explanations, and you actually collected the variable that would cause the omitted variable bias if you didn't control for it, and then you control for that variable. Wouldn't that be the best solution? I feel that it is more of a design problem than a post-data-collection problem. Of course, at the same time I also understand that if you get secondary data from a company, if your data is company sponsored, oftentimes you don't have the power to dictate, I want to collect this variable, I want to collect that variable; your data is whatever is the best data you can get from the company. I understand that too. So if that's the case, also try to be a little more transparent and a little more proactive when writing the first submission. At least don't gloss over the problem; say, okay, there is an endogeneity problem, and we think these are the possible causes. And oftentimes, before you magically find a perfect instrument, you can possibly get some side evidence on how large the endogeneity problem is and at which level the endogeneity comes from. In Mikko's very structured review, in the letter he wrote, he mentioned that oftentimes you have panel data or multi-level data, say you have observations within companies, that kind of structure: you have a lot of companies and then you have multi-period observations within companies. A very important question regarding the endogeneity is whether you think it arises across companies or even within companies over time. That's very important, because if it is mostly coming from across companies, then you perhaps have a pretty good solution if you apply the right panel data models; if it comes from a source that is not only across companies but also within companies over time, then it will be a lot harder. A toy sketch of the across-company case follows below. So having that transparency and discussion up front helps both yourself and the reviewers, because then you set a boundary for the reviewers: reviewers cannot come back with a blanket statement saying, hey, you have an endogeneity problem, you're done. Well, we know that we have an endogeneity problem, and we have already discussed the possible sources of the endogeneity, and at this point the ball rolls back to the reviewers. If they want to propose even more alternative causes of endogeneity, because you did a great job of proposing alternative explanations, then they have to meet at least your standard of rigor in talking about those sources of endogeneity. So that's what I would add. Yeah, I want to add two things. One is that we do know that research cannot be perfect and flawless, but you need to try your best, and the department tries to point you to the most recent understanding of methods and to help you improve your studies.
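On the panel-data point above, here is a toy illustration of the across-company case: when the endogeneity comes from a time-invariant firm-level factor, a fixed-effects (within) transformation removes it. The data-generating process and all numbers are invented for illustration:

```python
# Pooled OLS vs. fixed effects when a time-invariant firm factor causes both x and y.
import numpy as np

rng = np.random.default_rng(6)
firms, periods = 2_000, 5

alpha = rng.normal(size=firms)                      # unobserved firm effect
a = np.repeat(alpha, periods)                       # expand to firm-period rows
x = 0.8 * a + rng.normal(size=firms * periods)      # x depends on the firm effect
y = 1.0 * x + a + rng.normal(size=firms * periods)  # so x is endogenous in pooled OLS

def slope(u, v):
    return np.cov(u, v)[0, 1] / np.var(u, ddof=1)

# Within transformation: demean x and y by firm, wiping out the firm effect
xm = x - np.repeat(x.reshape(firms, periods).mean(axis=1), periods)
ym = y - np.repeat(y.reshape(firms, periods).mean(axis=1), periods)

print("Pooled OLS:   ", slope(x, y))    # biased upward by the firm effect (~1.49)
print("Fixed effects:", slope(xm, ym))  # ~1.0, the true within-firm effect
```

If the endogeneity also operates within firms over time, demeaning does not help, which is exactly why it matters to say up front which level the problem comes from.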
Of course, sometimes you may be pushed to do something that you don't really understand, and in that case it might be a good idea to say that you are also applying a simpler technique, because you understand it, and simpler techniques are less likely to be misapplied than complex techniques. For example, we did not really encounter any gross misapplications of regression analysis in the review, but we did encounter misapplications of two-stage least squares, let alone three-stage least squares. So simpler techniques should be preferred unless there is a really good reason to use a more complex technique. And related to the comment about endogeneity: in the methods review template that I send to reviewers, I specifically ask that if they complain about endogeneity, they need to explain what the source is. So if they think that there is an omitted variable E, then they should really name what E is, what the omitted variable is. If you say that there could be an omitted variable and you don't name the variable, then how are the authors going to address that question? If you think that there is two-way causality, then point to a theory or an empirical finding that says that Y is actually a cause of X and not just the other way around. So it's not only, like you said, about authors providing evidence and justification, but also about reviewers providing evidence; otherwise endogeneity will just become something that you can stamp on any paper to reject it. Very good. I think I'm hearing a couple of big themes here. One of them is transparency, and I think on average we could save an entire round of review for papers if they came with that transparency initially. Often the first round is simply Mikko or the reviewers asking questions about what the authors actually did; they can't evaluate, they can't comment, until they know what the authors actually did. And so it takes a whole round just to figure out what was done, and then really the second round is what should have been the first round. So we could save a lot of time, for authors, for reviewers, for editors, for all of us, if transparency got more emphasis right up front. I think another big theme here is that we want to publish papers, and we know papers can't be perfect, and there are often trade-offs between rigor and level of interest; we're not really wanting to publish perfectly rigorous papers that are uninteresting and make little contribution. So we don't want people to come away from this session with the impression that your methods have to be 100% perfect and rigorous all the time, but we do want to avoid the common pitfalls and problems that have tripped up many authors. And this session is for reviewers just as much as it is for authors, because all of us as a community, like Suzanne said at the beginning, are looking to grow and develop and help each other publish good research. It's no good for our community if we use methods problems to keep papers out and keep good research from coming to light. So it's incumbent on all of us as authors, as researchers, to do rigorous, high-quality research, to think about these things before and while we're designing our research and collecting our data. And then it's incumbent on us as reviewers to look for possibilities and ways to explain things, to help authors, to be developmental, not to just make simple statements that try to cause quick rejection decisions. I want to thank Mikko primarily, and the others who organized the session as well, and I hope you found it helpful. Thank you, everyone. Have a nice conference. We
look forward to seeing many of you back in half an hour for JOM's award session. Yeah, thanks everyone for being here. See you soon. Thanks, everyone.