Hello and welcome back to History and Philosophy of Science and Medicine. I'm Matt Brown, and today we're talking about values in science. This will be our first discussion of the topic, and we're focusing primarily on the role of values in scientific inference and the role of risk in thinking about values in science.

Let's start with a simple question: what do we mean when we talk about values in science? Well, "value" is actually a notoriously tricky term. But when we're talking about values in science, we mean any kind of criteria or considerations that have something to do with what we want, what we desire, what we cherish. We're thinking about those things that concern action, that motivate action, even if that action is just to make an inference or to adopt a hypothesis. We're thinking about those things that are factors in judgments, though not the sorts of factors that are logical or quantitative criteria. Often when we're talking about values, what we're really interested in is the group of values we might call social, ethical, and political. Sometimes we group all of those together and call them non-epistemic values. There are other kinds of values, which we of course call epistemic values, that already have something to do with science: the simplicity of a theory, how fruitful the theory is in making new kinds of predictions, or how well a theory unifies seemingly disparate phenomena. Those are ways in which we value a theory, but they're not usually the types of values that worry us when we're talking about values in science.

Now, one thing I think it's important to keep in mind in a discussion of values in science is whether we're talking about a descriptive or a normative question. A descriptive discussion of values in science is going to focus on whether scientists do make value judgments, whether values do influence what scientists do in their work, whether they influence the science we actually have today. We want to know whether scientists are in fact biased, and whether their biases affect the science that they do. With descriptive questions, in other words, we're interested in what has actually happened. And there, psychological and sociological facts about human beings and their limitations are important, as are historical accounts of how science has actually unfolded. When we're talking about values in science as a normative question, we're asking whether there is or is not any rightful role for values in science: whether scientists ought to make value judgments, whether they should consider values, or whether they should be value-free.

One way of thinking about values in science that has historically been extremely influential is what we call the value-free ideal, or the ideal of value-free science, which is a very strong answer to the normative question. It says that, in the ideal, scientists should not make value judgments; scientists should leave values out of their work. Note that it is not an objection to the value-free ideal that scientists don't actually follow it. That's just a descriptive account of what they do, given their actual limitations. The value-free ideal says: even if that's true, scientists should still try; their failure to be value-free is not a good thing. Similarly, arguments against the value-free ideal are arguments that there are rightful roles for values to play in science.
Not necessarily because, as Feyerabend would have it, anything goes, but because there may be a desirable role for values to play.

Now, the readings for today give two kinds of arguments for including values in science. One of these we might call an epistemological argument: it concerns the nature of knowledge and what is required for getting higher-quality, better results in science. The other kind of argument is concerned with more ethical questions: what are the moral responsibilities of scientists as scientists?

So let's start with the epistemological argument. According to the epistemological argument, the comparative nature of science means that when shared social values influence science, they affect the range of options we have, for example, in picking our best theory. We're deciding between two possible explanations of a phenomenon, and the range of options we are in fact considering has an important influence on the outcomes we get. Science is, as they say, a garbage-in, garbage-out process. Even if we suppose the process is totally value-free in the judgments it makes, it can do a very good job of deciding which of two options is the most well supported, which is the best. But the result depends a lot on how the options are generated, and the way the options are generated often depends on values. If the entire scientific community is quite homogeneous and shares a common set of values (say, in the example from the Okruhlik paper, a scientific community that largely shares patriarchal values because it is largely made up of men), and the options all encode patriarchal, sexist biases, then inevitably your outcomes are going to be sexist, even if the comparative process itself is totally fine. So on the epistemological argument, you need other kinds of values, alternative values, coming in and giving you new options in order to get better outcomes.

It works sort of like this. You can imagine every scientific inference as a choice between two options. You choose one side, make some progress, and then face two new options. You make another choice, and this leads you along to another set of possibilities, and so on down the line. But if all of these options are based on similar values, then there are going to be other options you haven't considered that could be better. So if we use the color blue here to represent the shared values, and all of these lines and nodes are colored blue, then if we consider an alternative set of values, we might have a third choice at our beginning stage, shown here in red, that gives us something better. But we never considered it, because we didn't consider alternative values: we didn't make our value judgments conscious, and we didn't consider possible alternatives. Okay, so that's our epistemological argument for considering value judgments in science and thinking about values in science.
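To see the shape of this argument, and not any particular historical case, here is a minimal sketch in Python. Everything in it is a hypothetical stand-in: the "merit" score, the "bias" attribute, and the generating function are invented for illustration. The point is only that a perfectly reliable comparative procedure still returns a biased winner if every candidate it is given is biased.

```python
import random

# A toy model of theory choice as a comparative ("pick the best") procedure.
# Each candidate theory has some evidential merit and carries a bias
# inherited from the values of the community that generated it.

def generate_options(community_values, n=5):
    """Hypothetical: candidates inherit the values of whoever proposes them."""
    return [{"merit": random.random(), "bias": community_values} for _ in range(n)]

def best_supported(options):
    """A perfectly 'value-free' selection step: pick the most meritorious option."""
    return max(options, key=lambda theory: theory["merit"])

# If every option encodes the same shared values...
homogeneous_pool = generate_options(community_values="patriarchal")
winner = best_supported(homogeneous_pool)
print(winner["bias"])  # always "patriarchal": garbage in, garbage out

# Alternative values enlarge the option pool, so the same comparison can do better.
diverse_pool = homogeneous_pool + generate_options(community_values="alternative")
winner = best_supported(diverse_pool)  # the winner *may* now carry different values
```

The selection step never changes; only the pool of options does. That is exactly where, on this argument, values enter.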
We also have an ethical argument, and this you see in Heather Douglas's piece. When we make choices in science, when we face decisions like those in the other argument, choices between option one and option two, the choice we make has an impact on things we care about: an impact on society, on social goods. And because scientists are also morally responsible, because they're normal human beings with the same set of ethical responsibilities as anyone else, they're responsible for the consequences of their actions, at least when those consequences are foreseeable. So if the choice they make could have an impact, then there's some responsibility there. And we might point specifically to the impacts of getting it wrong. So there's another reason, according to this ethical argument, for considering values: we should let value judgments guide our choices so that we manage the risks of these different kinds of impact as best we can.

To see this, think about an example. Suppose you're testing a chemical to see whether it's toxic for humans. You might be testing on animals, or you might have a kind of computer model; there are different ways you might do this, but typically you're testing lethal dosage on animals like fish. Now, there are a couple of different possibilities here. Suppose we are testing the hypothesis H that some chemical X is safe for human consumption. The hypothesis could be true or false: the chemical could be safe, or it could be unsafe. That's what we're trying to find out; the state of the facts about the chemical makes it one way or the other. And then we can choose to accept the hypothesis or reject it. This is a little simplified, but it captures fairly well how we think about statistical testing in science.

So there are four possibilities. If we accept the hypothesis and the hypothesis is true, that's great; we get the kind of result we want. If we reject the hypothesis and the hypothesis is false, again, that's fine; we get the result we want. If we accept the hypothesis but the hypothesis is actually false, that's what we technically call a false positive result. And if the hypothesis is actually true but we rejected it, we call that a false negative result. If you've received a test for some disease recently, say a COVID test, you will have had to deal with these possibilities: if your test comes back positive, it could be a true positive or a false positive; if it comes back negative, it could be a true negative or a false negative.

Now, in the case of our hypothesis H about the safety of the chemical, there are different possible impacts depending on which error we make. If we accept the hypothesis and it's a false positive error, we risk harm to human life and health. That's the negative impact of error here. If we make a false negative error, we may over-regulate the chemical and so forgo its benefits, and there may be some economic losses along the way. Knowing how the data is going to be used to regulate the chemical, we can see pretty straightforwardly that these different impacts follow from our choice to accept or reject the hypothesis.
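To keep the four possibilities straight, here's a minimal sketch in Python. The outcome labels and the cost descriptions are just illustrative glosses on the lecture's example, not anything from a real regulatory framework.

```python
# The four outcomes of accepting or rejecting H: "chemical X is safe for humans".
# Keyed by (is H actually true?, our decision); values describe the outcome
# and, for the two errors, the kind of impact the lecture describes.
outcomes = {
    (True,  "accept"): "true positive: fine, we approve a safe chemical",
    (False, "reject"): "true negative: fine, we keep an unsafe chemical off the market",
    (False, "accept"): "false positive: risk to human life and health",
    (True,  "reject"): "false negative: over-regulation, forgone benefits, economic loss",
}

for (h_is_true, decision), result in outcomes.items():
    print(f"H is {'true' if h_is_true else 'false'}, we {decision}: {result}")
```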
Now, if we've done a good statistical analysis on a well-controlled trial of the safety of the chemical, then what we have to decide is: what is our standard of evidence for accepting versus rejecting the hypothesis? How much evidence do we need to accept that the chemical is safe? If we raise the bar very high, we decrease our likelihood of false positive errors, and so we decrease the risk of causing harm to human life and health by saying that the chemical is safe. But at the same time, by raising the amount of evidence we require, we are increasing our likelihood of false negative errors. We're making it easier to reject the hypothesis, easier to ban the chemical or decline to approve it, depending on what kind of chemical it is. And so we're increasing the risk of forgoing whatever agricultural, economic, and other benefits using the chemical might have. That's a fairly straightforward trade-off between raising and lowering the standard of evidence. If you don't consider these risks as you make the judgment about what your level of evidence is going to be, you're going to end up making a kind of reckless decision. There's a short simulation sketch at the end of this transcript that makes the trade-off concrete.

So those are some of the basic arguments and ideas about values in science, from the point of view of inference and risk, that we need to talk about today. That gives you a bit of a basis for thinking about this stuff. There are obviously a lot of issues and more detailed aspects for us to talk about, so please feel free to respond on Discord or leave a comment here on the video, and I look forward to discussing this with you in class. Otherwise, I will see you next time.
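Here is the simulation sketch referenced above: a toy model, not a real toxicology protocol. We pretend that trials of truly safe and truly unsafe chemicals each produce a noisy "evidence score" (the distributions and numbers are invented for illustration), and we vary the threshold a score must clear before we accept "safe". The only point is that raising the threshold trades false positives for false negatives.

```python
import random

random.seed(0)

def evidence_score(is_safe):
    """Toy noisy measurement: safe chemicals tend to score higher, but not always."""
    return random.gauss(1.0 if is_safe else 0.0, 1.0)

# Simulate many trials of chemicals that are truly safe and truly unsafe.
safe_scores   = [evidence_score(True)  for _ in range(10_000)]
unsafe_scores = [evidence_score(False) for _ in range(10_000)]

for threshold in (0.0, 0.5, 1.0, 1.5):
    # Accept H ("the chemical is safe") only if the evidence clears the bar.
    false_pos = sum(s >= threshold for s in unsafe_scores) / len(unsafe_scores)
    false_neg = sum(s <  threshold for s in safe_scores)   / len(safe_scores)
    print(f"threshold {threshold:4.1f}: "
          f"false positive rate {false_pos:.2f}, false negative rate {false_neg:.2f}")
```

Running this, the false positive rate falls and the false negative rate rises as the threshold goes up. Which point on that trade-off is acceptable is not a statistical question; it depends on how you weigh harm to health against forgone benefits, which is exactly Douglas's point about the role of values.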