Welcome, everybody, to lecture two. In this lecture, we're going to be talking about the methods that social psychologists use to conduct their research. Methodology is not necessarily students' favorite topic. To be completely honest, it's not my favorite topic to talk about either, but it is an essential one, because only if we really understand what it takes to conduct good research can we also understand the outcomes of that research. And that is where the fun happens; that's where it gets exciting. So today we're going to be talking about the methods that social psychologists use to do their work. For social psychology especially, you could say, it is really important to talk about good methodology. And why is this the case? Well, because it's not easy to be a social psychologist, not in general, and especially not over the last decade. Let me talk to you about the problems that we tend to run into. The first problem has to do with the image of social psychology, the second with bad research practices, and the third with unethical research that has been conducted in the past. So let me start with the first one: the image of social psychologists. The image is typically that social psychology is just common sense, sort of an open door. The research that we do is not really new; it's something whose outcomes all humans can sort of predict, and therefore it's not very important. Let me tell you a little bit about my own research and the problems I ran into when talking about my work. I told you before that I study relationships, and during my PhD I studied why some people are better able to maintain and protect their relationships than others. One topic that I studied was cheating, so unfaithfulness in relationships.
I studied why some people are more likely to cheat on their partner than others. There is, of course, a variety of factors, and it's a very complex question, but one factor that I zoomed in on was impulse control. So in my work, I measured participants' ability to control their impulses, and I tested whether this ability predicted cheating, whether people who are very impulsive are actually more likely to cheat on their partner. I thought this was very cool research, and I think I did some very cool studies. It got picked up by the media, and then I got a lot of remarks from people saying, yes, of course, but this makes so much sense. Why would anyone study this? If you're impulsive, you're likely to cheat. What's the big deal? Everybody knows this. And this is a typical example of what we refer to as hindsight bias. Hindsight bias is basically the idea that once you know the result of a certain study, you think it would have been very easy to predict. So retrospectively, once you know how something works, you think, I could have easily predicted this. Social psychologists have a lot of trouble with this, because when they talk about their work, people often say, yeah, I knew this all along. That's the idea of hindsight bias: this is not new information, this is just research on something that everybody already knows. Sometimes this criticism is fair, but oftentimes this is not how it works, because if I ask you to predict the outcomes of a certain study, that is much more complicated than you might think. So, for example, I will now show you some questions, things you could study. For example: you hear a song on the radio for the very first time and you really like it. Over the next few weeks, you hear it many, many, many times on the radio.
So you hear a song for the first time, and then it gets repeated over and over and over again. After a couple of weeks, do you think you will like the song less, the same, or more? Think about it yourself. Another example: Taro, who happens to be my older son, and this is actually something that happened to me. Taro is playing in his room. His room is a bit messy, and he decides, on his own initiative, which is shocking, as a mom I can say that, to clean his room. And actually, to his own surprise, he also enjoys doing so. When his mother later enters the room, she decides to reward him with five euros. After this payment, do you think Taro will like the act of cleaning his room even more, the same, or less? So once he was rewarded for the behavior, do you think this increased his liking for cleaning the room, left it unaffected, or decreased it? You might have some ideas about this, and I can tell you that your first responses to these questions are probably wrong. I'm not going to tell you the answers right now; you will learn them if you follow this course. So stay tuned for the answers, but they are less straightforward than you think. This is the first problem that we as social psychologists deal with: the image that our research is an open door, not important, something everybody can predict, which is oftentimes not the case because of hindsight bias. Okay, on to the second problem that we social psychologists face. This is a painful one, I would say. It's something that has become very visible and relevant over the last 10 years, and it had a lot to do with something that happened at Tilburg, or to be more specific, at the department of social psychology of Tilburg University. It had to do with the man that you see here.
I don't know if you know him; maybe some of you recognize him. His name is Diederik Stapel, and Diederik Stapel was very well known in the field of social psychology, a really influential scholar, someone who published a lot of papers, including in very big journals like Science; back then, you had really made it if you published in Science or in Nature. So he was well known, very famous, and at a certain moment he was also the dean of the faculty. Just to give you an impression of his fame: I did my PhD research between 2006 and 2010, and what you do as a PhD student, once you're finished and you have written your dissertation, is send it out to influential people in the field. I remember that I sent a copy of my dissertation to Diederik Stapel, and he sent me an email. It was just one sentence; I don't remember it word for word, but it was something like, thanks for your dissertation, very interesting work. And that really made my day, because he was so famous, so well known, and I thought, wow, he actually got my dissertation and sent me an email about it. So that's the level of fame that he had in our own field, and also more broadly in society, because he was in the news quite a lot with his work, really groundbreaking, influential, impactful work. The media really liked it. For example, together with Roos Vonk of Radboud University Nijmegen, he did a study comparing people who eat meat with vegetarians, claiming that eating meat basically makes you more of an asshole: you become a worse person, your personality gets worse, if you eat meat. This was in the news a lot; it was very appealing. So everything was going very well for Professor Stapel, until in 2011 some junior researchers in his group had concerns about the way he worked.
And they were very brave: they stepped up and raised their concerns with people in power. As it turned out, he had been committing fraud on a very, very large scale. He is now known as one of the biggest scientific frauds in history. He used just about every bad research practice you can think of, and most importantly, he made up his own data. He would say that he had conducted a certain study; for example, he said he conducted studies with primary school children, when what he actually did was sit in the back of his car and fill out the questionnaires himself, as if they had been filled out by the children, and then base his research on those questionnaires. Can you imagine how scandalous that is? It's really bizarre. So there was a huge investigation, and it turned out that many of his papers, including the famous meat-eating versus vegetarian paper, were completely false; they were based on fraudulent, faked data. Almost all of his work was retracted, also from these big journals like Science; they retracted the paper. It was a really serious investigation, also because he had received a lot of money to conduct his work, millions and millions of euros, which he spent on hiring people and on himself. And he ruined the careers of many of his PhD students. Yeah, it was really, really, really bad. Of course, this was not good for Tilburg, not good for the reputation of social psychology, and especially not for social psychology in Tilburg; a complete disaster. But this turned out to be only the tip of the iceberg, because in the years that followed, we as a field reconsidered the methods that we use to do our work. And we found out that there are actually a lot of bad research practices; fraud is the most blatant one, but there are more problems with doing science.
For example, using very small sample sizes, so not having enough participants to justify your conclusions, or removing participants from your data set before conducting your analysis. If you want to know more about that, I'll post a link so you can read about the seven deadly sins of methodology in social psychology. It's not necessary for you to know all of this, but if you're interested, you can find out more. For now, it's enough to know that there were bad research practices, and this spurred an entire field of replication studies. What replication means is that researchers redo work that has been done in the past. Especially very famous, very well-known studies were replicated, repeated basically, by other researchers, oftentimes using a bigger group of participants, a different sample, to see whether the same results could be obtained. And what these researchers found was that oftentimes the results could not be replicated: when a study was repeated, different results were obtained. This is a problem, because if you obtain different results from those in the initial work, then the conclusions drawn from the previous work no longer hold; you have to change the theory. This is called the replication crisis, and it has been discussed a lot. There are many examples of failed replications in the field of social psychology, but it turned out to be a much broader problem: it also appeared in other fields of psychology, and also, for example, in medicine and other fields of science. So we needed to step up our game and change our methods, to make sure that the conclusions we derive from our work are real, are based on actual, factual information.
So now, a decade or so after this crisis, we are moving forward and improving our methods. I think social psychologists did very well in taking this crisis seriously and changing things for the better. What we do now is run replication studies, as I mentioned, so we repeat studies that have been done in the past, but with better methods. We also run meta-analyses; in a meta-analysis, you combine several studies that had the same goal, to test whether certain hypotheses can actually be confirmed or not. And finally, we now follow open science practices, and this is really encouraged. By open science I mean, for example, that once we come up with a research question and have a certain expectation, we pre-register it: before we actually do the research, we write down and make public our expectations for that work. This is called pre-registration. The materials that we use are open, and the data sets are open, so everybody can just look into what we did. That makes the chances of getting away with fraudulent work really slim, because now everything is so open that it would be very hard for a future Stapel to do what he did. So I think we did a very good job here. Is this solved, then? It's still a work in progress, of course, but it's going in the right direction. Now let's move on to the final problem that we as social psychologists have to deal with, and this has to do with unethical research. What you see here are pictures of studies that were actually conducted by social psychologists: two very famous studies, the Milgram experiment and the Zimbardo experiment. These were both conducted around the 1960s and 1970s. I'm not going to talk about the content of this research now.
I'm going to do so later in the course, so I will tell you all about what these researchers actually did. For now, I just want to mention that this research was conducted at very renowned institutions like Stanford University, still one of the most renowned, well-known institutions for science worldwide, and by very well-known professors of psychology; Zimbardo and Milgram were well respected. And yet they conducted studies that were completely unethical and should never have been conducted, because the participants were really harmed in this work. What exactly they did, again, I will tell you later, but we are still dealing with the backlash of this research, because these studies are still very famous, maybe you already know them, and they damaged the image that we have as social psychologists; they created the perception that we are sort of messing with people. So this is also something that we needed to change, and we are moving forward in this regard too. When it comes to research ethics, we now have very good safeguards, I think. For example, we use informed consent: once participants enter our research, the very first thing they do is read an information letter in which we explain what the research is about and what they can expect, and then they give consent; they say, okay, I understand, and I agree to participate in this work. Sounds very straightforward, right? But this is relatively new in the field of science: actually telling people what they can expect instead of just bombarding them with some research procedure they didn't see coming. So now they know what they are getting themselves into. Informed consent is a really vital aspect of ethics. We also try to avoid deception. Sounds very logical, right? But for a long time, a lot of the research conducted in psychology used deception; we basically lied to participants.
For example, what researchers did in the past was give participants a task to do, really a task that didn't make any sense, they had to count something or divide something, and then give them fake feedback. The fake feedback, for example, looked like this: on the basis of the results of your task, I can see that at the end of your life you will have no friends; you will die all alone and everybody will leave you. Well, that's quite harsh feedback, right? And it was all nonsense, it was not true, but it was used to shock participants, to see what the effect of receiving this feedback is, and to study emotions. The researchers may have had good intentions, but using deception can actually harm participants, because this is something that can stick with you, especially if participants are not informed afterwards that the feedback was fake; something I'll come back to later. So we try to avoid deception. We also try to protect our participants from experiencing harm: pain, being too cold, being too hot; we make sure that they are not harmed. Sometimes we do harm them a little bit to obtain the results, but then it stays within a very acceptable range; I'll give you some examples of that later in the course. We try to protect the participants as well as we can. Confidentiality is also a much bigger deal now. We protect the privacy of our participants, which is especially relevant today, because once something is out there, you cannot take it back. You cannot imagine the personal questions we sometimes ask our participants, and we really want to make sure that these data are treated confidentially, so we now have stricter rules in that regard. And finally, we also use debriefing, and debriefing happens at the very end of an experiment.
We tell participants what the research was about, and if we did use deception, for example if we gave them fake feedback, we make really sure that they know it was false and not based on real, actual data. And to make sure that we do all this correctly, there is now an IRB, an Institutional Review Board, at basically every university, which checks whether the research being conducted lives up to these standards. So we have several institutions in place to make sure that researchers do their job well, both when it comes to using sound methodology and when it comes to treating people in a humane way. Thanks.