Once we have well-documented, verifiable facts, once we have defined our concepts and decided exactly which facts we're going to try to explain, and once we're specific about the concepts we use to classify and categorize those facts, we've still got a couple more things to check before we're ready to make a policy decision or advocate a particular type of policy. First we have to be sure that we and our readers or our audience value the same things, that we want the same outcomes. And once we know what outcomes we want, we have to decide how to get those outcomes. So that brings us to our next two points of stasis: cause and value.

The stasis of cause, or causation, is also known as cause and effect or consequence. Usually it depends on what we're looking at. If we're looking to see where something came from, what happened in the past, we call it a search for cause. But if we're trying to decide what's going to happen in the future, we call that consequence. Either way, it's the same sort of cause and effect relationship we're looking to understand. When the ancient philosophers in Greece and Rome distinguished between the different points of stasis, they usually listed just four: fact, definition, value, which is also called evaluation or quality, and policy, which is also called proposal. They lumped questions of cause and effect in with questions of value. Presumably you could independently verify cause and effect relationships just like you could verify facts. In some cases that's true, but in some cases it's not.

We can see this division in the example I first used to distinguish between a fact and an inference: the Smilodon skull from the La Brea Tar Pits. The actual bones found in the La Brea Tar Pits, those are facts. They're independently verifiable: their arrangement, their composition, all of that. But how they got there, that's an inference. We have to come to a best guess about how those bones got there. And in particular, that's a causal inference. Causal inferences are very intuitive a lot of the time. When we look at a fact, we typically think we know how that fact came to be. But we can be wrong about that, which is why we want to separate out causation. Remember that the Hungarian statistician Abraham Wald, during World War II, saw bombers coming back to base with bullet holes clustered in particular places. Some people wanted to put armor where the bullet holes were. Wald wanted to do the opposite. He wanted to put the armor where the bullet holes were not, because planes were still getting shot there. It's just that those planes weren't making it back. So we can look at the same facts and come to different conclusions about how those facts came to be.

And a lot of the thinking process, both the deliberative System 2 metacognition and a lot of the System 1 automatic, non-reflective, intuitive cognition, is trying to figure out how things happen in the world so that we can make the things we want happen. And this isn't just true for humans. As you may remember from Michael Shermer's TED Talk at the beginning of the semester, this is something pigeons do. You can put a pigeon in a box, like the one on the top right, where it has two buttons it can press. And if it presses the right button, a reward, a piece of food, will come out.
But sometimes the people operating the box will change it so that the other button has to be pressed to get the food to come out. And the pigeon will learn to press that button and the food will come out. But then they'll change it up so that you have to hit the left button, then the right button, then the left button again. Once the pigeon finally figures this out, it'll start doing the left, right, left. And once you change the rule again, it still tries to keep doing that pattern. It sees a pattern, and not just any kind of pattern, but a causal pattern. But the pigeons frequently can't tell the difference between causing the food to come by pressing the buttons and causing the food to come by turning around one direction or another. So sometimes a pigeon will press the left button, then turn around, press the right button, turn itself around the other way twice, and then press the left button again. It still has the same effect, but the pigeon is assuming more steps in that causal process than are justified.

Shermer compares this to superstition, just like in professional baseball. Batters will have certain rituals they go through before they get up to the plate because they've found that, well, this one time I hit a home run right after tapping my left shoe with the bat three times. And so they'll tap their left shoe with the bat three times every time they get up to the plate. We're really good at inferring causation even when that causation is not there. This is another type of pattern recognition. And sometimes people can be fooled. You can sell somebody a piece of crap that doesn't do anything, but it feels like it does something. There's some ambiguity about causation: did this thing actually help me find something or not? That's how the instrument Shermer talks about, this so-called bomb detector, gets sold: under the promise that it will help you find all sorts of things, whether it's bombs or golf balls or illegal drugs or whatever. But it doesn't actually do anything.

We typically like to think of ourselves as being a little more intelligent than a pigeon when it comes to figuring out cause and effect. But we're frequently and easily lulled into mistaking correlation for causation. When we talked about the difference between peer-reviewed sources and popular sources, one of the distinctions was how simple and how intuitive the non-peer-reviewed popular source would usually be in its explanations, especially when it comes to causal explanations. Frequently we'll see a story that says a new study shows that if you do this, or eat this, or practice this, it will have this result. But if you go back and look at the original study, it will never say anything like that. Instead, it usually says people who have this health outcome frequently have this kind of diet or this kind of exercise. They frequently eat more, in this case, coconut oil. If you find that people who are less likely to get Alzheimer's eat a lot of coconut oil, you might think, oh, well, if I eat more coconut oil, then I'll be less likely to get Alzheimer's. That's what sites like LifeHack and OMGFacts seem to imply. But if you check the Alzheimer's Society's fact-check page on this, you'll see that that conclusion is not at all justified by the study these two reports claim to be quoting. So we frequently see actions or events or other facts correlating in the world and we think, oh, one thing must have caused the other.
So if I just do that thing, it'll cause this. Especially when there are high-value outcomes like maintaining a healthy weight. But those outcomes tend to depend on a very complex range of factors, many of which we just have no idea about, and we tend to really oversimplify them. A lot of times our sources, in this case National Geographic, which is typically a good source for a lot of things, will from time to time overgeneralize, simplify the causal process, and exaggerate the simplicity of a cause and effect relationship. Like in this article, which is titled Why Are We So Fat? It's asking why, how did this come to be? And right there in the subtitle, it makes it seem really simple: a love of carbs, a lack of exercise, the real reason one in three Americans is obese is simpler than you think. Well, it's actually not that simple, but they're going to portray it as simple, and they're going to base their information on a couple of interviews. The article starts out by saying the obesity crisis is a result of simple math: it's about calories in, calories out. That's something people frequently say: if you eat more calories and get less exercise, so that you burn fewer calories, well, obviously you're going to get fat.

But even within the article, we look down and see that genetics also play a part. Your metabolism has a lot to do with your age as well as your genes. A younger person is going to metabolize food a lot faster than an older person, even if the older person gets more exercise. And it's also a matter of the type of calories you get. Are those calories accompanied by fiber that your digestive system needs? Are they simple carbs that immediately get stored as fat? More recent research, like the work of Stanford professor Justin Sonnenburg, has shown that the type of food you eat interacts with the particular kinds of microbes living in your gut, which can digest certain things better than others. If you feed your gut bacteria the right types of plant fiber, they'll be much better at metabolizing all the calories. If you don't feed that gut bacteria fiber, they tend to die off and get replaced by gut bacteria that specialize in simple carbohydrates, and those aren't as good at handling the calories you take in. So when researchers study lab rats, they see that the rats with the particular type of gut microbes that depend on high-fiber food are much better at metabolizing the same amount of calories than the rats that don't have that kind of gut microbiome.

So there's a lot more going on in your gut, and a lot more going on with the obesity epidemic, than just calories in, calories out. But calories in, calories out is part of it. The amount of dietary fat you get is part of it. The amount of exercise you get is part of it. Your genes are part of it. Your dietary fiber is part of it, and your gut microbiome is part of it. It's a really complicated interaction of different causes, each of which impacts the others. Sonnenburg's research, like a lot of research, goes looking for simple answers and sometimes comes back with just better questions. It doesn't simplify the situation or make it easier to understand. That can be frustrating. But we're actually getting more accurate, even if the answers are more complex.
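Just to make that multi-causal point concrete, here's a minimal toy sketch in Python. This is my own illustration, not Sonnenburg's model or anyone's real data; the weight_change function and every coefficient in it are invented. The point is only that when causes interact, changing one factor alone doesn't have the clean, isolated effect the simple story promises.

```python
# A toy model of weight change with interacting causes.
# All coefficients here are invented for illustration; this is not
# real nutrition science and not Sonnenburg's model.

def weight_change(calories_in, calories_out, fiber_grams, microbiome_quality):
    """Hypothetical weight change in kg over one year.

    microbiome_quality (0..1) interacts with fiber intake to change how a
    calorie surplus plays out, so the causes don't just add up independently.
    """
    fiber_factor = min(fiber_grams / 30.0, 1.0)  # saturates around 30 g/day
    # Toy assumption: a well-fed gut microbiome blunts the effect of a surplus.
    effective_surplus = (calories_in - calories_out) * (1.0 - 0.4 * microbiome_quality * fiber_factor)
    return effective_surplus / 7700.0 * 365  # roughly 7700 kcal per kg of body fat

# Same 200-calorie daily surplus, different fiber and microbiome context:
print(weight_change(2500, 2300, fiber_grams=5, microbiome_quality=0.2))   # ~9.3 kg
print(weight_change(2500, 2300, fiber_grams=35, microbiome_quality=0.9))  # ~6.1 kg
```

With the same daily surplus, the two hypothetical scenarios come out very differently, because the fiber and microbiome terms modify how the surplus plays out. That's the basic structure of an interacting causal system, even in a cartoon version like this one.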
But notice that the title of the New York Times video linked there is It's All in Your Gut, implying that, oh, it's not these other things, it's just this one thing. That's not Sonnenburg trying to simplify things; that's the New York Times doing it. But that's what popular media sources do, because we have a need for closure. We want simple answers that we can use all the time. We want to seize on the first available answer and we want to freeze on it. We want to apply it to everything, not just one or two things. We don't want to have to figure out the particular context and all the parts of the particular ecosystem a thing is part of. We're more motivated by getting the feeling of knowing, the feeling of closure, than we are by actually understanding what's happening around us. And that leads us to jump to conclusions about cause and effect.

One of the most frequent problems we run into when trying to figure out a cause for an effect is mistaking correlation for causation. Imagine you're looking at data for two different phenomena. One of those phenomena is the incidence of drowning. You look at a calendar and the number of incidents of drowning, and you see that at certain points of the year drownings go up, and at other parts of the year they go down. And you also look at another collection of data: ice cream sales. And you notice, hey, at the same time of year that ice cream sales go up, incidents of drowning go up. And when ice cream sales go down, incidents of drowning go down. I think I see a causal relationship here: ice cream causes people to drown. Once you think you see that causal pattern, you can come up with explanations. You can rationalize it. You can use motivated reasoning to try to make it seem more likely. Well, maybe people are eating ice cream and then going swimming, and their stomachs haven't had time to process the food yet, so they're getting stomach cramps and that's causing them to drown. There's another explanation, though. Maybe one thing isn't causing the other. Maybe ice cream isn't causing people to drown, or vice versa; maybe people aren't consoling themselves over a drowned friend by going out and buying ice cream. Maybe both of those things are caused by a third factor that's not mentioned in either of these data sets. That is, during the summer it gets hotter, and that's when you're more likely to eat ice cream and more likely to go swimming. So in this case there might be a correlation between ice cream sales and drowning, but that doesn't prove there's causation.

Distinguishing between correlation and causation is one of the things the scientific method is set up to do. To be sure we're not just looking at things that occur together and assuming a relationship, we want to try to prove that a relationship is there. The goals of science are to describe and explain, but also to predict. So for causal claims in science, it's not good enough to say, this is what happened in the past. That kind of after-the-fact explanation is called a just-so story. If I look at something that occurs a particular way now and I say, well, this probably evolved for this purpose, or this is probably there because of that, that's just speculation. I need to be able to turn it into a test which will predict results in the future if the same conditions are repeated. I need to be able to see that that thing is going to happen again.
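Here's a minimal sketch of the ice cream and drowning example as a simulation, with invented numbers. Both series are generated from a hidden common cause, temperature, and neither one depends on the other, yet they still come out strongly correlated.

```python
import math
import random
import statistics

random.seed(42)

# Hidden common cause: a rough seasonal temperature curve over one year.
temps = [15 + 10 * math.sin(2 * math.pi * day / 365) + random.gauss(0, 3)
         for day in range(365)]

# Neither series depends on the other; both depend only on temperature.
ice_cream_sales = [50 + 4 * t + random.gauss(0, 10) for t in temps]
drownings = [max(0.0, 0.1 * t + random.gauss(0, 0.5)) for t in temps]

# Pearson correlation; statistics.correlation requires Python 3.10+.
print(round(statistics.correlation(ice_cream_sales, drownings), 2))  # strongly positive
```

The correlation is real, but by itself it doesn't tell you which causal story is right. Sorting that out is exactly what prediction and controlled testing are for.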
So scientific causal explanations need to be able to predict similar phenomena with reasonable accuracy in the future. Let's go back to our case study, where Robert Sapolsky was looking to see if his modified virus could actually prevent the damage done to the brain by chronic stress, and he tested this vaccine on rats. Jonah Lehrer tells us in his article that Sapolsky began introducing the modified herpes virus into rodent brains, then induced a series of tragedies, such as a massive stroke or an extended seizure, which would trigger the release of glucocorticoids. Within minutes, the modified herpes virus began pumping out neuroprotective proteins, which limited the extent of cell death. As a result, damage was contained.

But testing rats by giving them the stress vaccine and then putting them under stress is only half of what you have to do to prove a causal connection. We have to have a group that does not receive the experimental factor, a group we can compare with the first one, to be sure that something's actually being protected. We call that comparison group the control group. The control group needs to be similar to the experimental group in every way except this one condition, so you can make sure that this new condition, this particular factor, has a causal relationship to the effect. In the case of Sapolsky's rats, you've got the experimental group, the rats marked with the E. They're given the stress vaccine and then introduced to a traumatic event, like the image of a cat, a predator. It was actually much more traumatizing than that; they shocked them and that kind of thing. And you see that the experimental rat, the rat that received the vaccine, has very little brain damage from that shocking event. That in itself is not enough. You need the control group that does not get the vaccine but undergoes the same type of stress, and you need to see that there actually is damage in that group. If there were no damage in the control group, then the end result wouldn't be attributable to the experimental condition, in this case the Sapolsky shot.

These are the same two logically valid modes of argument, the types of syllogisms we referred to previously as modus ponens and modus tollens. Logically, it's the same thing as the thought experiment where you're a bouncer in a bar and you need to make sure no one is drinking underage. You need to check the age of the guy who you know is drinking something alcoholic, and you need to check the drink of the girl who you know is 19. The other two, the woman who's drinking something non-alcoholic but whose age you don't know, and the guy who's 40 but whose drink you don't know, don't matter. You need to check more than just whether the consequent happens; you also have to check that the consequent does not occur when the antecedent is not a factor. And Sapolsky was able to compare the control group of rats and see that they lost nearly 40% of the neurons in a particular region of the brain. Once he knew there was a causal connection there, once he knew his modified virus was actually protecting the rats' brains, he could then move to the next stage.
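As a rough sketch of why both halves of that comparison matter, here's a simulated version of the experiment. All the numbers here are invented for illustration; they are not Sapolsky's data.

```python
import random
import statistics

random.seed(0)

def neuron_loss(vaccinated):
    """Simulated % neuron loss after induced stress (invented effect sizes)."""
    baseline = 5 if vaccinated else 40  # toy assumption: vaccinated ~5%, controls ~40%
    return max(0.0, random.gauss(baseline, 4))

experimental = [neuron_loss(vaccinated=True) for _ in range(20)]
control = [neuron_loss(vaccinated=False) for _ in range(20)]

print(f"experimental mean loss: {statistics.mean(experimental):.1f}%")
print(f"control mean loss:      {statistics.mean(control):.1f}%")

# Without the control group, the low number in the experimental group proves
# nothing: maybe the stressor just wasn't damaging in the first place. You need
# both checks, like checking both the drinker's age and the 19-year-old's drink.
```

If the control group had shown no damage either, the low damage in the vaccinated group would tell you nothing. That's the modus tollens half of the check.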
It was actually an unintentional experimental-and-control-group situation that led Sapolsky to realize the connection between social hierarchies and chronic stress, and the connection of both of those with shorter lifespans and poorer immune systems. This came from when he was in Africa studying baboons. The troop he watched normally had a very strict hierarchy, with a dominant alpha male at the top and his group of thugs beneath him, who were still over the rest of the males and all the females. They were very abusive to the lower-ranking males and the females, and because of that, the lower-ranking males and females were under constant stress and had unhealthy immune systems. But when the baboons got into a garbage dump where some spoiled meat had been thrown out, the dominant males got to eat the meat first, and they caught diseases from it and died. And once all the dominant, abusive males were killed off, the rest of the baboon troop kept acting the way they had always acted, which was much more cooperative and pro-social. Once that happened, the health of the entire troop improved. All the ones who had previously gone hungry and had weak immune systems were now starting to become much healthier. But Sapolsky wouldn't have known that until he saw both conditions at work: the hierarchy with the abusive males at the top, along with the poor health, as well as the hierarchy without the abusive males at the top, the hierarchy becoming more egalitarian, with a corresponding rise in health.

We typically tend to think of cause and effect relationships as pretty simple, usually with a single cause having a single effect, like in the example of a couple of billiard balls. If you hit the cue ball against the five ball, you know the five ball is going to go straight. Single cause, single effect. That would be an example of a simple cause. Well, sort of. Because you actually have to hit the cue ball yourself, so you're the ultimate cause of that chain of events, that chain of cause and effect. Unless you're only there playing pool because a friend wanted you to come along, and there may be some other factors that led you to be there at that point playing pool, in which case you might say those are causes that contributed to this otherwise simple cause. But let's keep our eyes on the ball right now. You hit the cue ball, the cue ball hits the five, and you intend for that five to go into the corner pocket. But despite your best intentions, let's say for some reason the five veers to the right, hits the eight ball, and knocks it into the center pocket. You just lost the game, and you're not really sure why. Maybe you were just a bit off and didn't realize the pool cue wasn't on quite the right part of the cue ball. But you're probably going to want to check out the pool table at that point. Was the surface unlevel? Was there a wrinkle that's hard to see in low light? Was there some other factor that caused the five to veer to the right? It's at this point we start to realize how many potential causes there could be in a seemingly simple event. So while we like to think of causes as simple, because it's easier to conceive of them that way, any system of cause and effect, if we look at it closely enough, will show a lot more factors than we initially noticed.

So it helps to separate out a lot of different types of causes. First of all, we can distinguish sufficient causes from necessary causes. A sufficient cause is all it takes to bring about a particular effect. You don't need anything else; the sufficient cause by itself is enough.
So let's just say the cue ball hitting the five is a sufficient cause for moving the five forward. But that's different from a necessary cause. A necessary cause is not a sufficient cause. It's something that by itself could not bring about the effect, but it is one component that has to be there in order for that effect to happen. Think of the example of starting a fire. A pile of dry wood by itself is not a sufficient cause of a fire. If you add lighter fluid to that wood, you've got two causes contributing to a fire, but neither of them is actually going to start it. You've got to have a spark. You need a matchbook or a lighter that you can use to ignite the flammable material. But of course, if you strike a match and there's no fuel there for it to burn, you're not going to have a fire; even the match itself carries only a little bit of its own fuel. If you had a spark with nothing combustible, the spark by itself would not be a sufficient cause for a fire. You need the spark and you need the combustible material. So those are necessary causes; they're not sufficient causes, but they are still causes.

Sometimes causes are unambiguous. You see one thing happen, and immediately after that you see another thing happen in a predictable way. This is called a proximate cause. Proximate just means close. A proximate cause is the immediately present or visible cause of a given effect. The opposite of that would be a remote cause. Sometimes this is also referred to as an ultimate cause, but that can be a little confusing, because some people take an ultimate cause to mean a sufficient cause. A remote cause is a cause that is not directly, obviously linked to an effect, but it is there. It is actually causing the effect, even if we don't see the whole chain of cause and effect between the remote cause and the final effect.

Smoking causing cancer works this way. For years, for decades, tobacco companies were able to deny that there was any connection between lung cancer and smoking, because the cause and the effect are separated enough in time. So if you've ever tried to convince a smoker to stop smoking, especially a young smoker, they're probably going to tell you that any consequences this one little cigarette is going to have are so minuscule and so far off that it's not even worth worrying about. And that one cigarette might not be a problem by itself, but it's contributing; it's a necessary cause being added to other necessary causes over time. Remote causes are easy to ignore and easy to deny. They're hard to actually prove, and they don't fit our need for simple answers, for cognitive closure. This is why it's so hard to get people to understand long, complex phenomena with remote causes like global warming. No single car's exhaust and no particular factory's emissions are going to cause the temperature to change. But all of this pollution together adds up over time; the carbon traps heat in the atmosphere and makes the earth warmer. That means more ice and permafrost in the Arctic melt, and the thawing releases still more carbon, a process that sort of speeds itself up. And once the global temperature goes up four degrees Celsius, we're going to be looking at uninhabitable regions of the United States, Southern Europe, and most of Asia. We're going to be looking at arid land where there's now lots of food being produced.
We're going to be looking at the entire world's population having to move into Canada and Russia and a few strips of habitable land. All of those things would seem drastic if they happened in a short period of time. But we're talking about decades or centuries between the original cause and the ultimate effect. So it's easy to ignore at any given point during the process, because it's not a sudden explosion. It's not immediate, like a terrorist attack, where you see the proximate cause and the immediate effect. But of course, if we do have a temperature rise of four degrees, the effects of global warming will be an outcome much worse than any possible terrorist attack.

And then there are precipitating causes. These are necessary causes that might be remote or might be proximate, but they don't work alone. You have to have a lot of other things already in place for a precipitating cause to initiate a causal chain. Think of the way dominoes work. You push one domino, and it knocks down the next domino, which knocks down the next. But those dominoes have to already be lined up for the pushing of one domino to be a precipitating cause. There's an old proverb that goes: for want of a nail, the shoe was lost. For want of a shoe, the horse was lost. For want of a horse, the rider was lost. For want of a rider, the message was lost. For want of the message, the battle was lost. For want of the battle, the kingdom was lost. In other words, because one horse threw a shoe and the rider didn't have a nail to put the shoe back on, the rider was unable to deliver a message, a necessary piece of intelligence, which caused an army to lose a decisive battle, and losing that battle caused the kingdom to be overrun. In this case, the loss of the nail is the precipitating cause. But all of those other things had to be in place; the stakes had to already be very high for the loss of a nail to precipitate anything. Most of the time, something so small would not precipitate any noticeable effect. But because all of these other necessary causes were in place, something very small could precipitate a major change.

This is sort of the problem historians have whenever they're trying to explain why a particular war happened. If you think about the beginnings of World War I, a lot of history textbooks will tell you that World War I started with the assassination of the Austrian Archduke Franz Ferdinand by the Serbian nationalist Gavrilo Princip. But one person killing one other person, even a very important person, doesn't usually precipitate a world war. There were a lot of things in place at the time. The European powers had allied themselves into two blocs, the Triple Entente and the Triple Alliance. England, France, and Russia agreed that if anybody attacked any one of them, they would all together go against that attacker: an attack against one of us is an attack against all of us. Italy, Austria-Hungary, and Germany said the same thing. Gavrilo Princip was loosely connected to the Serbian resistance to Austria-Hungary, and Austria-Hungary assumed that Princip did not act alone, that he and his group were connected to Serbia and its ally Russia. And that set in motion a bunch of necessary causes.
But the precipitating event was that murder. And that murder by itself would not have precipitated anything without all those other remote causes already in place. Now, we have to be careful when we look at complex causes like this, especially when we're trying to predict that a precipitating cause will have long-range effects. We have to be careful that we don't commit the slippery slope fallacy. That's when we presume that one event will inevitably lead to a chain of other events that ends in catastrophe. People frequently make this argument against the legalization of marijuana: if you legalize marijuana, that will lead to legalizing heroin and cocaine and all these other much more dangerous drugs. It implies a domino effect. And domino effects can happen under the right conditions. But to distinguish a precipitating cause from a slippery slope fallacy, you have to show that the necessary conditions are already in place, so that the precipitating cause can be provably linked to a likely outcome.

And finally there are reciprocal causes. This is when one factor leads to something else happening, and that second thing reinforces the first thing so that it happens again. A leads to B, and B leads to another A, and that A leads to another B, and so on. If we go back to Robert Sapolsky's studies with rats, we see this sort of reciprocal causation in the way stress can weaken immune systems and make you more hyper-aware of and alert to danger. Being more alert to danger causes you to see more things that are scary, or think of more things that are scary, and that causes you to be even more alert to danger, which causes you to be even more stressed. These kinds of things we call a feedback loop. And a positive feedback loop is not always a good thing, so don't confuse the word positive with something being good. A positive feedback loop is one where a reciprocal cause magnifies itself, where the effect gets larger and larger. A negative feedback loop would be one where it becomes less and less of an issue.

Jonah Lehrer points out that chronic stress actually makes us more sensitive to the effects of stress, things like a weaker immune system and neural damage. As he points out, when lab rats are stressed repeatedly, the amygdala, the part of the brain that deals with the fight-or-flight response, gets much larger. And a swollen amygdala means we're more likely to notice potential threats in the first place, which means we spend more time in a state of anxiety. Stressing out more makes us better at stressing. It's sort of like exercise for a part of your brain that you don't want to be stronger. And the end result, as he says, is that we become more vulnerable to the very thing that's killing us. It's very similar to what we saw earlier in the semester with false-positive pattern recognition. When people feel a lack of control, when they're reminded of a time when they were unable to do something, when other forces outside themselves got in the way and took control away, they're more likely to see threats that aren't there. And when you see threats that aren't there, that makes you feel a lack of control, and that makes you more likely to see threats that aren't there, which makes you feel even less in control.
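That kind of self-amplifying cycle is easy to sketch numerically. Here's a toy model of the stress-and-vigilance loop, with coefficients I've invented purely for illustration; it's not anyone's published model.

```python
# Toy positive feedback loop: stress -> vigilance -> perceived threats -> stress.
# All coefficients are invented for illustration.

stress = 1.0
vigilance = 1.0
for week in range(6):
    vigilance += 0.3 * stress             # chronic stress enlarges the alarm system
    perceived_threats = 0.5 * vigilance   # more vigilance, more things look dangerous
    stress += 0.4 * perceived_threats     # more perceived threats, more stress
    print(f"week {week}: stress={stress:.2f}, vigilance={vigilance:.2f}")

# With these positive coefficients, both values grow on every pass: a positive
# feedback loop. A negative feedback loop would include a damping term (say,
# stress reduced each week by rest) that pulls the system back toward baseline.
```

The direction of the coefficients is what makes the loop positive; flip one term so each pass subtracts from stress and the same structure damps itself out instead of amplifying.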
There are a whole lot of these feedback loops that seem designed to make us unhealthy and unhappy. If you don't get enough sleep, that causes you to have a short attention span the next day. Having a short attention span when you go to class and take a test causes you to get poorer grades. Having poorer grades makes you stress. And when you stress, you don't get as much sleep, and that causes an even shorter attention span, which causes poorer grades. Any bad habit you have feels good at the time, whether it's eating the wrong kinds of food or spending too much time playing video games or wasting time rather than doing something you need to do. Those things feel good; they help you relieve stress. But when you try to change them, it takes a lot of effort, and the effort exhausts your cognitive resources and makes you feel drained and makes you want to go do that bad habit again so that you'll feel good and relieve stress, which makes change even more difficult.

Fortunately, not all reciprocal causes are that terrible. If you can get eight hours of sleep whenever possible, that will lead to sharper focus, and that will lead to gradually better grades, and that will let you gradually relax more, and that will make you gradually able to get a little more sleep every night. When you start working out, it's difficult at first, but it gets easier, because you start to develop your cardiovascular system, and that gives you more energy the next time you exercise, and that leads to a good feeling, so the exercise becomes more self-perpetuating. It becomes a good positive feedback loop. But stopping one cycle and starting a new one is usually the hard part. Going from not getting enough sleep and getting bad grades to getting more sleep and getting better grades means that at some point you've got to break one cycle and turn it around the other way.

And once you can support the premise that a particular action will cause a particular result, you're almost ready to advocate a policy claim. You're almost ready to propose the thing that has the best result. But first, you have to be sure that you and your audience agree on what the best result is, and that's probably going to be the most difficult part of any argument. That is evaluation, or value. Once you and your reader agree on which outcome is more valuable, then you can start to advocate a particular action which will be a cause of that particular effect, and that will be your policy claim. But first, you have to understand how cause is working in your argument. You have to think: where might somebody disagree with my representation of cause and effect? If I'm saying something is a cause, might somebody say that's just a correlation? If so, I need to go back and prove that there is a causal relationship. Maybe I need a control group, or I need to cite studies that have control groups. And then, especially if I'm talking about a remote cause, I need to be able to distinguish it from a proximate cause, because my reader might assume that if there's no immediate, proximate cause and effect relationship, then there can't be a cause and effect relationship at all. Then I have to define what I mean by a remote cause and show all the processes involved in it. I need to show what the precipitating cause will be. I have to prove that certain causes are necessary causes.
Even if they don't cause anything by themselves, they are still part of a larger causal process. Now, you won't have to explain all of these. Some cause and effect relationships you and your reader will agree on, but some you won't. Once you identify which causal premises your reader might disagree with, it's time to decide: is this a sufficient or a necessary cause? Is it proximate or remote? Is it precipitating? Most of the time, you'll have to spend a lot of effort proving that a particular thing is a necessary cause, that without it the effect won't happen, or that a remote cause is still a cause, even though it doesn't seem closely connected to a particular effect. That's how you focus your causal arguments. Once you've got agreement about the cause and effect relationships, then it's time to decide which outcome you and your reader value.