So, this lecture is going to be about social research and issues that we have in social research. We're going to start by talking about the difference between objectivity and subjectivity. You probably think that objective means unbiased and subjective means biased. In sociology it means something a little bit different than that. Objectivity is not unbiased per se. There are biases that do exist in objective research. In fact, all research on human behavior is going to be somewhat biased. The effort is made to reduce the bias as much as possible, but it's kind of impossible when human beings are studying other human beings to be completely unbiased. Objectivity does mean that we are concentrating on outside observations. What this means is that we're looking at things that we can actually see, that we can actually hear, that we can actually measure. So I have in parentheses at the top here: quantitative data. Usually when you are trying to make an objective study you are counting things and applying mathematical and statistical analysis to those things that you count. You are trying to be an observer with as little of your own self involved as possible. You are trying to see what you can tell with your senses, not with the common meanings or commonalities that you share with the people you are trying to study.

Objectivity is usually quantified in some way, and this is an effort to look at populations, because objective studies in sociology are trying to look at groups. You want to be able to count people and the things and behaviors that you observe people do. You want to be able to count them in such a way that you can actually apply statistical analysis, so that you don't have to study everybody in order to know things about human beings. Statistics is a way of looking at a sample of a population and, from that sample, being able to learn some things about the population. But we also count entire populations. In fact, in the United States we do this every 10 years with the U.S. Census. And if you have a smaller group that you just want to talk about, that group then becomes the population. You might survey everybody in a class, for instance, or everybody who is involved in a certain organization. So quantification is usually attached to objective research simply because one of the ways that we can observe and keep ourselves at a distance from that which we are observing is to observe it in groups, and that requires us to count the behavior and the incidences of behavior in order to understand it.

Subjectivity is not biased per se, and it's not necessarily more biased than objective research. Subjectivity simply means that we are concentrating on how people experience their lives. Think about what you learned in grammar: every sentence has a subject, it has an action, a verb, and it has an object. The subject is the actor, the entity that is doing something to the object. So we are trying to see the world through the point of view of the actor, through the point of view of the subject. That is where the word subjectivity comes from. Mostly we are going to be interested in what people say about their lives, trying to get an idea of how they actually think about the way they live their lives. One of the methodologies used with qualitative data comes from a gentleman you should know by now from the history lecture, Harold Garfinkel, and it is an idea he called bracketing.
Bracketing simply means that you take a section of your field notes, or a transcript from an interview, or some other way that you've collected information from people, and you bracket it off, meaning that you look at it in its entirety and then ask yourself: what am I assuming that helps me make sense out of this? Garfinkel said it's like you pretend to be a Martian, or you pretend to be from outer space, and you're listening to this information through the aid of a universal translator like in Star Trek. I'm not sure Garfinkel used Star Trek, but you get the idea. So you're looking at this, and there are certain assumptions that are made. Sometimes we call this reading between the lines, where we really try to see what it is that does not have to be said. For the people who are involved in the conversation, whether it's one you're participating in as an interviewer or a participant observer, or one you're recording by watching other people talk to each other, or looking at the way they write things, or any other form of communication, there is always an assumption on the part of the speakers that the listeners understand some things, that there are some things that are given and don't have to be explained every time. And so what Garfinkel is suggesting is that if we do this, if we read between the lines, if we look for this part that everybody just knows, everybody just assumes, everybody takes for granted, that is going to tell us something about culture. Culture is the shared meaning that we have. We don't have to reinvent the wheel every time we talk to each other, because we can rely on a common knowledge. We can rely on the possibility that there is something our listener will understand in pretty much the same way that we understand it. And that is how you go about analyzing this qualitative data that you've collected.

So why does this work? Because people give accounts of their lives to each other. Think about a regular conversation you have when you're walking down the hall at school, or when you're at work, or wherever, and you say, hello, how are you? Which is the polite thing to say, right? And the other person says to you, oh, let me tell you about the night I had last night, and then proceeds to tell you a story. So they give an account. Even if they just say "fine" in response, without giving an extensive account of what happened the night before, the word "fine" is still giving an account of the person's life. How are you? I'm fine. It's a conversation in which one person has asked the other person, give me an account of your state of being, and that person has responded by saying, my state of being is fine. So we are watching these accounts as people give them. Sometimes they give them in the form of stories and sometimes they give them in the form of shortcuts or symbols, that kind of thing. But they are basically just accounts that we give to each other, and they can be read as accounts. And so by reading these accounts, you've got to ask again: what is taken for granted? That's where sociology lives. Sociology is those things that we take for granted.

Max Weber, who you should also know by now from the history lecture, called this Verstehen. Listen to the German pronunciation: Verstehen. Verstehen simply means to understand, but it also carries the connotation of understanding deeply. You might use the idea of "stand under," right? Getting beneath the surface.
So what Garfinkel has done is he has taken this idea, this theory that Weber had, that sociology is the Verstehen of life, and he has turned it into a qualitative methodology called bracketing. Qualitative data is collected in many ways. It's collected by observing people giving accounts to each other, by watching other people. It's collected by actually being involved in the conversation. So we can listen. We can read what people have written. We can watch people as they interact with each other. We can converse with other people, trying to figure out what they are thinking and what accounts they are giving of their lives.

So whether you are collecting data for an objective study in a quantitative manner, or for a subjective study in a qualitative manner, you are always going to have data quality concerns. These are very important because they also help us as consumers of information, which is part of my concern in teaching you this. I don't want you to just understand how you go about doing this research. I also want you to understand how you assess the quality of other people's research. You are given information from social research every day you watch the news. In news reports, there are little factoids. There are press releases that come out all the time telling you that this research or that research has discovered something new about us as human beings. And there is a lot of iffy information out there. There is a lot of incomplete information out there. So if you understand these kinds of quality concerns, you can begin to look at information that is coming your way in a critical manner that will allow you to assess and think about these things, instead of just accepting what is told to you as truth without any assessment of it.

So in order to have good data, you have to have measurable data in some form or another. It has to be something that allows you to test it, to repeat it, to be able to see the results of it. If you want that data to be good, it needs to be reliable. Reliability simply means it can be repeated: you can do the same study in another place, at another time, by other people, and when that study is repeated, it pretty much gives you the same results as the original study or the other studies. This information builds on itself as more and more research is done. The problem with reliability when it comes to studying human behavior is that it is so hard to account for all the factors. We can't take human beings and put them in petri dishes and stick them under a microscope and leave them in controlled atmospheres for long periods of time to observe over and over again how they react. Any time you are studying people, you are studying people who are being affected by things that you are not accounting for.

Let me give you an example. Reliability would be repeating research over and over and trying to get the same results. Let's say that in January of 2009, we did some research asking people what they are most afraid of. So we went out, we did a survey, we pulled a good sample of the population, we asked our questions, and we came back with a top ten list. The number one concern is the economy, the number two is terrorism. You could probably make the list yourself, but you get the idea. It came up with a pretty predictable list. Well, let's say then that somebody else repeats our methodology in June of 2009, six months later.
And in June of 2009, they do this, and all of a sudden in the top ten, pretty high toward the top, is swine flu. Swine flu didn't appear anywhere in our January study. What this suggests to us is not that the study itself was unreliable, but rather what really happened: there was a swine flu epidemic in the spring of 2009, and during that epidemic it hit children and pregnant women in particular, so it made the news a lot. A lot of people got more afraid of swine flu during that time. Now, that's a pretty easy thing to parse out, to figure out that it wasn't the test but these other factors, this epidemic that happened. But you can't always do that. Sometimes you can't figure out why the result is different. So the thing to remember is that reliability, and the next thing we're going to talk about, which is validity, are two things you never completely demonstrate 100%. When it comes to studying human behavior, you make a strong case for a reliable measurement. You make a strong case for a valid measurement. You cannot prove that it is reliable or that it is valid.

So what is validity? Well, validity is simply that you're measuring what you thought you were measuring. And this can also be difficult, because you think that you're asking a particular question on a particular topic, but the subject of your study may interpret that differently than what you intended. There was a psychological study that was done, I believe, in the 1960s, where the researchers were trying to figure out what kind of stimulus made people comfortable, what level of stimuli in the environment would make people comfortable. And they especially wanted to know if there are certain personality types that need more stimulus or less stimulus, that kind of thing. So this is how it was set up. You came in thinking that you were going to take a personality test. Before you actually took the test, you were asked to wait in a waiting room. You were told that, you know, the study was running behind and that it would be about a 20-minute wait. And here's this panel sitting in front of you, and it has all these knobs on it. You can turn the lights up as bright or as dim as you want. You can turn a television on as loud as you want. You can turn music or a radio on as loud as you want. You can adjust the temperature in the room to your comfort level. And you were told simply, here, make yourself comfortable. Then there was a measurement of what you did in terms of how loud or how soft everything was, how bright it was, and how warm or how cold it was. After they got a fairly good measurement of you making yourself comfortable, you were given a personality test. And then at the end of the study, being responsible researchers, they did a little bit of a debriefing. They wanted to make sure that the people were actually trying to make themselves comfortable, so they asked questions like, did you know that when you were in the waiting room, this was part of the study? And if you did, what did you think was going on? And so forth.

Well, they had this guy come in. They go through the whole thing, he sits down, and after they leave, he turns everything on as loud, as bright, and as warm as he can make it. He's just off the scale in stimuli, okay? He's created this atmosphere that is just highly stimulating. And the researchers are like, wow, we can't wait to see how this guy's personality test comes out, right?
So he gets to the end, and they're asking him, and they ask him the question. And he responds, I know what you were trying to do. I knew this was part of the experiment, and you were trying to find out if I could take it. Well, I showed you I could take it. I took it. I made it as uncomfortable as possible and I stood there for 20 minutes, and blah, blah, blah, and so forth. So it was very obvious that his result was not valid, because they were trying to find out what would make him comfortable, and he had gone out of his way to make himself as uncomfortable as possible. So they had to throw his result out. It wasn't a valid result. I hope that gives you an idea of what validity is. Validity is simply trying to figure out whether or not you're measuring what you believe you are measuring.

There are some problems when you are collecting data that the researcher needs to be aware of. One of them is self-identification. When you ask somebody to put themselves in a category, they very often will tell you what are essentially lies about themselves because they want to look good. Because of this, self-identification data needs to be taken with a grain of salt, especially in areas where people are prone to try to look as good as possible. You should know by now something called presentation of self. So when a stranger comes and asks you questions, you probably are going to try to present the sanest, the nicest, the most patriotic, or whatever self that you can present to that stranger. And that will skew the data. So it's very hard, for instance, to ask direct questions about people's sexual practices, because most people are not going to tell you what really goes on in their bedroom when they're asked directly. Another example: there's a General Social Survey that is done every two years in sociology, and one of the questions on that survey asks you to identify your class. You're given four choices: upper, middle, working, and lower class. And then you're asked later about some information that tells us what your socioeconomic level is according to the government. People who have a $2 million income in a household of two living in Idaho are telling the researcher they're middle class, and a four-person household living in New York City on $20,000 a year will also say they are middle class. It's very obvious that the two households don't have much in common in terms of socioeconomic level. So self-identification can be very, very tricky, and researchers need to make sure that when they use self-identification, they have some other measurements to help make sure that the self-identification is not skewed in a particular direction.

When researchers ask questions of people, they have to be very careful to ensure that they are not biasing the information that's coming back to them. In that same General Social Survey that I'm talking about, when people go out and do these interviews, and they do in-person interviews for this that are about two hours long, they are expected to be as poker-faced and deadpan as possible while they are asking the questions. A simple raise of the eyebrow could skew the data in a particular direction. If you answer a question and the researcher makes a facial gesture of some sort, from then on out you may be trying to impress the researcher instead of answering honestly. Plus, you need to make sure that everybody's reacting the same way to all of the questions, because you have more than one person asking these questions.
So you want the interviewers to all pretty much be doing this in the same manner, with the same clothing on, et cetera. Researcher bias can also show up in the order that you ask questions, or in the wording of the questions, or in the ways in which the answers to the questions are interpreted rather than reported directly. So there are a lot of ways in which researcher bias can show up that will taint the data.

You also need to make sure that you have a good sample if you're gonna do any kind of statistical analysis. A good sample is called a random sample, and what that simply means is that everybody in the population that you're sampling has an equal chance to be a part of the sample. If they don't, if there's a bias toward one group of people or against another, then you have a tainted sample that can affect the outcome.

The other thing is that you need to have a significant result. Significance has to do with probability, and it's measured in something called the p-value. The p-value tells you how likely it is that a result as strong as yours would show up from randomness, just coincidence, if there were nothing real going on, as opposed to reflecting something that will hold up over and over again. So a p-value of 0.01 means that only about one time out of 100 would chance alone produce a result like yours; this is a good, strong study. A p-value of 0.05 is still considered fairly strong: it means that about five times out of 100, a result like yours could be nothing but randomness. If somebody is not telling you the p-value of their correlation, they have not given you enough information to know whether or not the correlation is meaningful. I'm guessing that most of you who are listening to this have never seen a research result on television that reports the p-value. This means that every time you are told, for instance, how many people believe this, that, or the other thing, or which candidate is ahead in the race, or the approval rate of a particular policy or person, if you're not told the p-value, then you essentially have not been given enough information. Because if the p-value is very high, say 0.20, that means one time out of five, instead of one out of 100, the result could be pure coincidence. You live in Vegas or you're connected with Vegas, so you should understand that one out of five is pretty high odds. If that's the p-value and they're not reporting it to you, they've just given you junk information. You don't know what's going on. I mean, you can assume that if it gets reported on the news, maybe they have a significant result, but you can't really know that unless they're reporting the p-value. Which means that pretty much all the information you're given through the television and online is questionable information. I'm not saying that it's wrong. I'm saying that you do not have enough information to assess whether it is right or wrong.
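To make that a little more concrete, here is a minimal sketch in Python of what a p-value is actually measuring for a simple survey result. All of the numbers are made up for illustration: a hypothetical poll of 1,000 randomly chosen people, 530 of whom say they approve of some policy. The question the p-value answers is whether a split like 530 to 470 could plausibly be nothing but sampling noise around a true 50/50 split.

```python
# Minimal sketch with invented poll numbers: is 53% approval in a sample of
# 1,000 really different from a 50/50 split, or could it be chance?
import math

n = 1000        # sample size (hypothetical)
k = 530         # number answering "approve" (hypothetical)
p0 = 0.5        # baseline we test against: "nothing is going on"

p_hat = k / n                                  # observed proportion: 0.53
se = math.sqrt(p0 * (1 - p0) / n)              # standard error under the baseline
z = (p_hat - p0) / se                          # how many standard errors away we are
p_value = math.erfc(abs(z) / math.sqrt(2))     # two-sided p-value (normal approximation)

print(f"observed proportion = {p_hat:.3f}")
print(f"z = {z:.2f}, p-value = {p_value:.4f}")
# A p-value below 0.05 would be called statistically significant at the 5%
# level: chance alone would produce a gap this large less than 5 times in 100.
```

With these invented numbers the p-value comes out at roughly 0.06, so by the 0.05 convention described above the result would not quite count as significant, even though "53% approve" sounds like a clear finding when it is read on the news without the p-value attached.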
I wanna talk about two things that often come up in interpreting data. One of them is called confirmation bias. Confirmation bias is when you hear something, and after you hear it you're more likely to notice an example of it. And if you see an example of it, you think to yourself, huh, that was right. That was true. But the truth of the matter is you just took a sample of one. You don't have enough information to actually confirm the result that you've heard. A good sociology example of this is the welfare queen myth, okay? Statistically, if you look at birth rates among women who live below the poverty line in countries where they have a choice about whether to have children or not, they do not have children at the same rate as the general population. They in fact have a birth rate that is sometimes as much as half of the general population's birth rate. And yet we have this myth out there that there are women having lots of children while they're on welfare in order to get more welfare. Besides the unreasonableness of this, there's simply no data to demonstrate that this is going on in any widespread way. And yet everybody hears about this, and you meet one person, or you know somebody who knows somebody who knows somebody, and you're just sure that this must be right, because you personally know of an example. Okay, that is confirmation bias. You have confirmed that information simply by taking a small sample without doing it scientifically. So you need to be careful, when things are presented, especially in popular discourse like the news, that we're not falling into confirmation bias, because what they like to do when somebody reports on this kind of stuff is show you a picture of something, and that has a tendency to make people think, oh yeah, I know that.

All right, and then the other one is something called the ecological fallacy. The ecological fallacy refers to taking information that is collected at a population level, okay, so you've gone out and studied a whole bunch of people and said, okay, this population has these characteristics, and then trying to apply that information to an individual in the population. The two levels are not the same thing. It's not the same study. It's not the same information. The example to give you here is that African American women as a group have a higher rate of breast cancer than other groups, okay? So African American women as a group are more likely than other groups to get breast cancer. I don't know what the statistics are exactly, but let's say it's something like three times more likely. That does not mean that a particular African American woman has three times a greater chance of getting breast cancer than a specific white woman, okay? All it means is that she belongs to a group that is three times more likely. If that particular woman has no breast cancer in her family, no relative who has had it, has never eaten tainted meat, and so on, there is a plethora of diet, exercise, and environmental factors that come into play, and her personal risk is probably a lot less than the group risk. On the other hand, if the white woman has a history of breast cancer in her family, has taken hormones, eaten meat laced with hormones, has been exposed to certain things in the environment as a smoker, et cetera, et cetera, then that white woman's personal risk is probably a lot higher than the black woman's risk. There are lots and lots of factors that go into an individual's risk. Yes, belonging to a group that is more likely to get it increases your risk, but it doesn't increase it proportionately. It doesn't mean that you are now three times more likely. So you have to be careful, because a lot of media reports will read things in a population and then carry them over to personal risk. So we've talked about data quality.
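Here is a toy illustration of that ecological fallacy, with entirely invented numbers. The group rates, the names "group A" and "group B," and the risk multipliers below are all hypothetical; the only point is to show how a member of the "higher-risk" group can end up with a lower personal risk than someone outside it once individual factors are taken into account.

```python
# Toy example of the ecological fallacy: a group can have a higher overall
# rate while a particular member of that group has a lower personal risk.
# All numbers are invented for illustration only.
group_rate = {"group A": 0.03, "group B": 0.01}   # group A's rate is 3x group B's

# Hypothetical individual risk factors, expressed as multipliers on the
# group rate (values below 1 lower the risk, above 1 raise it).
person_1 = {"group": "group A", "family_history": 0.5, "other_factors": 0.5}
person_2 = {"group": "group B", "family_history": 4.0, "other_factors": 2.0}

def personal_risk(person):
    risk = group_rate[person["group"]]
    risk *= person["family_history"]
    risk *= person["other_factors"]
    return risk

print(f"person 1 (higher-risk group): {personal_risk(person_1):.4f}")   # 0.0075
print(f"person 2 (lower-risk group):  {personal_risk(person_2):.4f}")   # 0.0800
# Person 1 belongs to the "three-times-more-likely" group, yet her personal
# risk works out lower than person 2's. Group rates describe populations,
# not individuals.
```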
Let's talk about collection methods. There are a number of different ways in which sociologists collect data. Probably the one that you're most familiar with is surveys, but we also do interviews. Some of these interviews can be for quantitative purposes, in which we basically just ask questions and collect information, either in writing, asking people to fill out forms, which is what surveys frequently are, or in person, where an interviewer actually goes and sits down and asks a number of questions. We observe people. We can observe people in enclosed situations or observe people out and about. We do participant observation, in which we are a part of what we are observing. This is often called ethnography when it goes on for a long period of time, but we can do short-term participant observation as well. You should note that as you are doing your group work in your 101 course, you are essentially working as a participant observer, because at the end of the semester you're going to have an assignment called your group analysis, in which you're going to analyze what you've been observing as you've interacted with each other. Ethnography, on the other hand, is more of an immersion, in which a researcher goes to the place where people are and hangs out with them for long periods of time, keeping field notes, doing interviews, being a part of that world, and then coming back and analyzing those notes and those transcripts. Ethno comes from a Greek word for a people; graphy means writing, so ethnography simply means writing about people. Ethnography was a methodology that was perfected first in anthropology, but sociologists have done ethnographic studies as well. In your history lecture we mentioned the Chicago School. Well, I don't think we actually talked about it, but you should have read about it. The Chicago School was a highly ethnographic set of researchers. They did a lot of ethnography during the 1920s and the 1930s.

Experiments are a little rare when it comes to human beings. An experiment implies a control group and a group that you do something to, and it's generally a little difficult to create an experiment in a laboratory when we're dealing with human beings, especially when we're dealing with the social behavior of human beings. There have been experiments done in the past. They're a little more difficult to do now because we have more stringent ethical rules about human experimentation, but we do have something that we call natural experiments. And there has been one fairly recently that I think illustrates this quite well. In 2008, Oregon found that it had enough money to add a limited number of low-income adults to its Medicaid program, the public insurance program for people with low incomes. When the state put the information out and asked people to sign up, far more people signed up than there was money to cover. So what the state decided to do was hold a lottery. The lottery meant that households were picked at random: every household that had signed up had a chance to get the Medicaid coverage, and the state just went through and randomly selected the number of households that it could afford to take care of.
Well, this created a control group, a group of people who did not receive Medicaid, and an experimental group, a group of people who did receive Medicaid. Who was in which group was chosen at random. And this created a situation that very much mimicked an experiment. Now, it wasn't done in order to create an experiment; it was done in order to fairly mete out what little money there was when the demand was so high. But some very savvy researchers realized that they had a golden opportunity to actually study the effects of having access to healthcare. So they contacted the Oregon public health people, they worked it out, and they were able to follow the health outcomes of both the households that had received Medicaid and the households that had not. And they found, very significantly, that being able to have access to the healthcare system improved the health of the people who had access to it. So savvy researchers who run into situations like this can study something as if it were set up as an experiment, even though they didn't set it up themselves. Another example of this is if you're collecting data in an area that then experiences a natural disaster; it becomes possible to study what happens after a natural disaster. If you did some sort of study in New Orleans, for instance, before Katrina, you could go back in and do the study again after Katrina and have a sense of how Katrina changed things. So these kinds of things show up. They're not always available to researchers, but they certainly create opportunities for us to understand social behavior better.
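To give a rough sense of the kind of comparison researchers make in a natural experiment like this, here is a minimal sketch in Python. Every number in it is invented: the group sizes, the "reports good health" counts, and the outcome itself are hypothetical placeholders, not results from the actual Oregon study. The sketch just compares an outcome rate between the randomly selected group and the group that was not selected, and attaches a p-value to the difference.

```python
# Sketch of a natural-experiment comparison with invented counts:
# (people in group, number reporting good health)
import math

treated = (5000, 3250)   # hypothetical: households that won the lottery
control = (5000, 3000)   # hypothetical: households that signed up but lost

n1, k1 = treated
n2, k2 = control
p1, p2 = k1 / n1, k2 / n2

# Two-sample z-test for a difference in proportions (normal approximation).
p_pool = (k1 + k2) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))

print(f"good health: {p1:.1%} with coverage vs {p2:.1%} without")
print(f"difference = {p1 - p2:+.1%}, z = {z:.2f}, p-value = {p_value:.2g}")
# Because the lottery assigned coverage at random, a significant difference
# here can plausibly be attributed to access to care rather than to
# pre-existing differences between the two groups.
```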
The last part of data analysis that we need to think about is how it is reported. Data analysis is not just presenting tables of data and having everybody look at them. There is going to be a write-up, and that write-up is going to have an introduction in which the stage is set, a description of how the data was collected and analyzed, and then a section at the end that is a discussion of it. And that discussion can really skew the way people see the data. So if you're not somebody who fully understands what was done to the data, you can be very misled. You might need to know some things that help you understand what might be influencing the interpretation. And by the way, interpretation happens both with quantitative and with qualitative data. So you may want to understand, first of all, why was this study done? What use is this information? There are lots of people who commission studies not just for the pure science of it, but because they have other agendas, and you need to know what those other agendas are in order to know where the biases might lie. Closely related to that is something called spin. If, for instance, the use of this information was to assess the success of a particular program, then you can expect the spin to be more positive than the results actually were, because most of the time, if something is employed to assess a program, it means a funding decision is gonna be made at some later point, and you want to make sure the spin is positive enough that the people who are funding it are actually going to give you more money.

The other direction of spin might be this: if you've never received money and you're not assessing a program's success but rather assessing the need for a program, then the results are probably gonna be spun more negatively, because you want to exaggerate the need for the program, exaggerate how bad things are, and so forth. So knowing why a researcher is doing the research can go a long way toward helping you think critically about the results they are presenting.

Then there are financial biases, of course. Researchers get paid for what they're doing, and they want to get paid again. They want to get funded again. So a lot of times, if their research comes up questioning their funders' purposes, they may skew things in one direction or another. Knowing who was behind the research can also change the whole meaning of things. Most of you probably think that breakfast is the most important meal of the day; that is something that has been told to you off and on probably all your life, and you've heard it in many places. But if I tell you, which is true, that the research behind that idea came from Kellogg's, it changes your mind about whether or not that information is good, because obviously Kellogg's, a company that produces breakfast foods, wants to convince you that you should eat breakfast every day. So you've got to wonder whether or not there isn't a financial bias there. And there are other kinds of conflicts of interest too. Professional people who do research get invested in the research that they've done in the past, and that means they may not be open to new results. And I guarantee you that in a peer-review situation, in which your research is being reviewed by your peers, if you have questioned somebody who has done extensive research, even though you're not supposed to know who is actually reviewing your work, you know that the person you've questioned has been asked to review it, and you know that they're going to review it with the most negative eyes possible. It's the exceptional person who would be willing to say, oh yeah, I was wrong, this proves that I was wrong. So they're going to argue with you and fight it out with you, and those kinds of conflicts of interest oftentimes determine what gets published and what doesn't. That's very important, because really, one of the ways that you get fooled by the information presented to you is not by what is given to you, it's by what has been withheld from you. The stuff that you never see, the stuff that never makes it into a publication, can actually be an important aspect of the big picture that is just not available to you and will skew your understanding of it.

There is an area related to statistics called demographics. Demographics is not statistics, though a lot of times in popular discourse, when we're talking about demographics, like the percentage of people who do something or other, we hear it as statistics. Statistics is particularly about probability. Demographics is just counting stuff. It's a study of what different groups of people have in common. The categories that we pick determine the outcome. You can see this with the US Census. The US Census essentially creates categories, and those categories have changed over time. Prediction does not equal causation, and demographics doesn't even get you to prediction. Demographics is just a rate of things.
You have to do more to it to get to the point of prediction. So the Census is a good example of demographics. It's a 10-year population count. Now, the US Census Bureau does more than just this; it also collects data that can be statistically analyzed. But the form that you mostly fill out every 10 years is just a count of who you are. It's a count of the number of people who are male and the number of people who are female, or the number of people who are in a particular racial category, or the number of people who live in a particular neighborhood. And those categories have changed over time. The US Census before 1980 did not include the category of Hispanic. In 2000 and 2010, it allowed people to check more than one category under race. That is a whole different set of groupings than what had existed before. So again, the categories determine the outcome.

Another example of demographics is something called epidemiology. Epidemiology. Epidemic. You know the word epidemic. Epidemiology is essentially looking at disease cases and counting them. It's figuring out: is the disease increasing? Is it decreasing? Where is it happening? How many people are being affected? What kinds of people are being affected? Is it affecting the old? Is it affecting the young? That kind of stuff. It doesn't get at what causes the disease; the cause of the disease is biological, right? The cause is probably a germ. But you get useful information by mapping out how the epidemic is spreading, how big it is getting, or whether or not interventions are making it smaller. This is simply counting. By the way, epidemiology is a great field to go into with a sociology degree. If you get a bachelor's in sociology and a master's in public health, you can be an excellent epidemiologist, a very well-paid epidemiologist. So check it out if you get a chance.

Birth rates and life expectancy are also demographics. Life expectancy is made up of a number of demographic measures that help us figure out how long, on average, a cohort born this year will live. Now remember, most of the time life expectancy is talked about in terms of who was born this year and how long they are expected to live. The older you get, the longer your life expectancy. The reason for this is that when you're my age, which is over 50, it means you haven't died of any of the things that would have killed you before 50. So my cohort is expected to live to an older age than a cohort of newborns, because that younger cohort is still going to lose people along the way to things that kill people at younger ages. If you think about it, this makes sense: if life expectancy at birth was 72, and you are already 75 years old, you obviously have a life expectancy longer than 72. The longer you live, the longer your life expectancy. If you want, you can click on the life expectancy link here and go to the CIA World Factbook to see where the United States ranks in life expectancy. I think you'll be surprised at how low it is.
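If that seems counterintuitive, here is a toy calculation, with invented numbers, of why expected age at death rises for people who have already survived to 50. The cohort size of 1,000 and the distribution of ages at death below are hypothetical; the arithmetic is the point.

```python
# Toy cohort with invented numbers: (age at death, number of people).
deaths = [(5, 20), (25, 30), (45, 50), (65, 300), (80, 400), (90, 200)]

cohort = sum(count for _, count in deaths)                               # 1,000 people
life_exp_at_birth = sum(age * count for age, count in deaths) / cohort

# Restrict to the people who made it to age 50: they can no longer die young.
survivors_50 = [(age, count) for age, count in deaths if age >= 50]
alive_at_50 = sum(count for _, count in survivors_50)
expected_age_given_50 = sum(age * count for age, count in survivors_50) / alive_at_50

print(f"average age at death, whole cohort:       {life_exp_at_birth:.1f}")   # ~72.6
print(f"average age at death, those who reach 50: {expected_age_given_50:.1f}")  # ~77.2
# The people who die young pull the cohort average down. Anyone who has
# already reached 50 is no longer at risk of dying at 5, 25, or 45, so the
# expected age at death for that group is automatically higher.
```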
All right, and then another demographic use is to study migration patterns, to look at the extent to which people are moving. For instance, we've seen a huge movement from the North to the South in the last 50 years in the United States. You can also look at migration patterns from certain countries to other countries. India and China, for instance, have a huge diaspora across the world. The latest migration patterns show China moving into Africa, and this is an interesting dynamic that can be analyzed both from a sociological point of view and from a political science point of view. So this gives you an idea of what demographics do, the kind of information they give you. These are important pieces of information even if they do not lead to understanding causation or prediction.

All right, finally, we wanna talk about ethics. Because we are studying human subjects, we want to not harm anybody, to keep people from being harmed by our research, to make sure that we do research that is of use to human beings and is not detrimental to human beings in the long run. As researchers, we are responsible for that. Since the 1970s, in order to meet that responsibility, most research institutions have had what are called IRBs, or institutional review boards. Institutional review boards are made up of colleagues, researchers, who review other people's research designs and make sure that they meet certain standards. IRBs are mandated by federal law, and exceptions to the IRB requirement essentially take an act of Congress. Now, there are not a lot of researchers at the College of Southern Nevada. The College of Southern Nevada is a two-year teaching institution, and it doesn't require its professors to do research in order to make tenure and so forth, and we don't have researchers in training in graduate school that we need to provide opportunities for. But there are some faculty here at the College of Southern Nevada who do research, and when they do, they have to run it by an institutional review board. Generally, the IRB they have to run it by is the UNLV IRB. When an IRB reviews your research, you essentially put your plans down on paper, and you have to meet certain standards; most IRBs will publish what those standards are. They're going to take a look at your design and make sure that it's an honest, truthful design: that you really are trying to study what you say you are trying to study, that you plan to report it in a truthful way, and that you're going to take certain steps to ensure that you will be reporting it truthfully. You also have to tell the people that you are researching that you are researching them, and why. Now, this is an interesting requirement when it comes to the social sciences, because if you tell somebody, hey, we're trying to figure out this, you have just skewed their response to your questioning. So this requirement has ruled out some kinds of research, because it's impossible to interview somebody or collect data from people after you've told them what you are studying. And then you also have to ensure that there is confidentiality. By that, it means that you are not going to reveal who your sources were, and you have to take certain steps to ensure that you do not reveal who your sources are. If these things are not met, if the design doesn't pass the IRB, then you have to go back and rewrite and replan and resubmit it. Like I said, if you need an exception, then you have to apply to the National Institutes of Health, who will then recommend to Congress that certain exceptions be made.
So if you did do research on something that would be skewed by the disclosure requirement, the truth-in-disclosure requirement, you might be able, through an act of Congress, to research people and then tell them afterward why you researched them, but that is very rare, very hard to make happen, and takes a lot of bureaucratic time, as you can imagine.

The last thing that I wanna mention is that we as sociologists are also human beings who value certain things, who have certain biases when we look at things, who are interested in certain things, and we need to be honest about that and make sure that we understand the extent to which our own personal values are coloring what we're studying, how we're setting up the study, and how we are interpreting it. This is true no matter what you are researching. It is especially important in qualitative research, because very often the qualitative researcher is actually part of the data. They were the one who wrote the field notes, or they are the one who was involved in the interview; the transcript includes the researcher as much as it includes the subject of the research. And so if you are not willing to sit down and ask yourself, where are my biases? where am I skewing the data? you can essentially move things in a bad direction. So understanding your own values, and seeking to be as value-free as you possibly can, which is impossible to achieve but a good thing to try for, is an important part of ethics in social research.

So I hope this gives you a good overview of how research works. One of the things that I wanna make sure you understand at this point is that you may be feeling, by the end of this lecture, that you can't really know much, okay? That because you're not given enough information, or because you're given interpretations of information instead of the information itself, you will find it hard to judge whether you're being told something that is true or not. I hope that's how you feel, because you should feel that at this point. Information that is passed around in public discourse frequently does not give us enough to be able to think about it critically. So you might say to yourself, well, then what am I gonna do? I don't know anything. Some of you will go, okay, I wanna learn more, and will go out and actually become researchers, and that's cool. I think that's wonderful, and I hope this does inspire that. But the majority of you are probably not gonna wanna do that. You've got other plans for your life than to spend a lot of time doing research. So here's what I ask of you. Now that you know that most of what you are being told is not enough for you to know whether it is true or not, all I ask of you is to recognize that and not repeat it as if it is true, because a lot of misinformation is exactly that: it is a meme that gets repeated over and over and over again until it becomes part of our thinking and our lives, and stuff becomes "true" that started out kind of iffy. So be a good consumer of information. Ask a lot of questions. Look for good information. But if you can't find it, or you don't have the time to do that, then note that you don't know, be happy that you don't know, and don't tell it to anybody else. Don't repeat it. That makes you the best consumer. Only repeat those things that you are sure of.