All right, thank you all for coming back so promptly. What I'd like to do in the next hour and a half is actually share with you a puzzle that I've been working on. It's truly a work in progress, and it truly is something I would be interested in hearing your thoughts about. To tee it up, I'll just point to a couple of things the Commandant said that, if we think about them, are, at least on the surface, inconsistent with each other. On the one hand, he quoted Secretary Gates talking about senior leader failure, and he talked about how much the fact that they are at these elevated levels tends to go to people's heads and cloud their judgment. Now if you think about that for a minute, it would suggest that although the person is the same person, the changing of their environment and context has effects on their thought processes and even on their behavior that they themselves might not necessarily anticipate. On the other hand, when he ended, we ended up with a ringing endorsement of the concept of integrity, something which suggests that this makes you almost immune to moral failure and capable of operating in any environment without any challenges. And so I've been spending the last year or so thinking about that conundrum. How does that actually work? That's what this talk is about. I'm not kidding when I say I want to be intellectually humble about this, because I am new to this field of moral psychology, which is what I've been reading lately. I'm by training a philosopher, and philosophers have certain ways of framing questions. And for the last 100 years, at least in the English-speaking world, those have not been much in touch with empirical questions. So I'll explain how we've been trying to reconnect them somewhat in a newer form of philosophy. That's the standard disclaimer. I don't really need it in this environment, but it makes Dave Lee happy, so okay.
So what I want to do is start with how we normally think and talk about ethics, specifically in the military. That's the first question I want to tee up, because the way we talk about it sets up a set of assumptions about how it works. And then we design programs and train people based on the belief that, because we make assumptions about how ethical reasoning and behavior work, we're helping to solve our problem. So if those assumptions turn out to be flawed, or even partially flawed, then we might be producing results that we don't expect, maybe even disastrous results that we didn't expect. And then we scratch our heads and say, how could that possibly be? Because we designed all these programs so well, based on our assumptions about how this works. And so the extra thought doesn't usually occur to us: I wonder if we should go back and re-examine the assumptions on which the whole thing was built. And maybe there are some reasons to have some doubts about that. And that's kind of what I'm going to suggest. So to telegraph the obvious upfront, I'm going to suggest that there are some really fundamental flaws in the way that we have historically thought about this. And it's partly the fault of my discipline. It's partly the fault of philosophy, for reasons I'll explain briefly. I won't bore you too much with the Aristotle, but a little Aristotle is helpful. And then there are some personal reasons why I got interested in this. Reason number one is John Meyer and I and Admiral Kelly get to talk to the major command course, which is everybody taking O-6 command in the Navy, as does Admiral Christensen. And as you know, the Navy's experienced quite a rash of detachments for cause of late. And so there's a lot of concern in the Navy about what that's about, what's going on. And there are several possible explanations of this. One is, and people often say this, well, the standards have just changed. And to some degree, that's true.
They have changed. Things people used to get away with, or that used to be tolerated, are no longer tolerated. True, but also irrelevant. The second thing is, maybe these are people who've been messing up all along up to the rank of O-5 or O-6, which is where all the firings are actually happening by and large, or the command master chief level, and they just finally get caught. So if that's what's going on, then that's a question about the promotion system and about the nature of officer evaluation: how could we screen better for these kinds of failures? And if you notice, the Chairman has been fiddling with that question regarding senior officers just lately, thinking about 360-degree feedback for senior leaders and various other ways of trying to detect these problems. But the third possibility is the one that really interests me. And I have no empirical evidence to know how much of these three are in play, what the proportions are. But suppose it's the case that somebody has been a perfectly squared away officer for 20, 22, 24 years. And then they get to these senior ranks and then they fail. And if you read the Navy Times or the other services' papers, the ways they fail are often so bizarre that if they weren't so tragic, they'd almost be funny. I mean, just how in the world could they possibly do these things, right? You all know the kinds of cases I'm referring to. And so if it's the third case, that is, they've been fine for a long time and then they get to these ranks and they suddenly fail spectacularly, that would suggest there's something about the nature of the environment that we're putting them in that is so disorienting for them, or so different from what they're used to doing, that it leads to failure. And there's a ton of empirical evidence that people who are confused and disoriented don't tend to act really well under those circumstances. So again, it's an empirical question what the proportions of these three causes are.
And right now, nobody can answer the question of which of them it is. But for my intellectual purposes, I'm interested in the third possibility. Does everybody understand where I'm going here? Let me stop. Any questions, even at this stage? This can be a dialogue; I'm happy with that. This is work in progress for me, so this is tentative. Okay, and the other reason why is there's this new branch of philosophy, my discipline, called, guess what, experimental philosophy. And this started with a bunch of young philosophers, mostly in their 30s, who had the bright idea: I wonder if you could do any kind of empirical research that would be relevant to philosophical questions. Who knew? Nobody in philosophy has asked that question for 100 years in the English-speaking world, because we put our feet on the chair and drink a cup of coffee, and we think. And then we write our little articles, and that's how you do philosophy. And you don't even need a window; just your pen and paper will do. And so these guys started doing some empirical research. And so I got the opportunity to spend last summer at the University of Arizona, where I was the old guy with a bunch of these young experimental philosophers, but I got dipped shallowly in the experimental philosophy pool and thought, this is interesting. I wonder where this goes. On the one hand, it's kind of cool. On the other hand, in many cases, you kind of think these are like research psychologists operating without a license, right? Because they don't really have the training in the empirical research disciplines that a research psychologist would have. So you've got to be a little nervous about the methodology sometimes, but that's a scholarly cavil, okay? Let's start with what I think is the way military people usually think about ethics. This is grounded in ideas of virtue and in the concept of integrity.
Actually, it's occurred to me as I listen to military people talk about ethics, there are really three words you guys like: character, integrity, and professional. And those can be used in all-purpose kinds of ways, right? So let's talk a little about the intellectual background of that way of thinking about ethics. First, there's a problem with the word virtue, at least in my mind. I mean, virtue sounds to me kind of quaint and Victorian, okay? I don't know if it has that ring to you, but that's how it seems to me. So we've got to have a little fun with Greek to figure out what the word virtue has historically meant in this literature. It comes from the Greek word arete, which means the functional excellence of a thing. So the Greek philosophers, Plato, Socrates, Aristotle, all of them always operate the same way. The first question is, what is this thing for? And then, how do you determine whether it does what it's for well or badly? Okay, so in Plato, there's a line in the Republic where he talks about virtuous pruning hooks, right? And that makes perfect sense in Greek. You can be a virtuous pruning hook because you got the top rating in Consumer Reports. You do what pruning hooks do really well, right? And so the word virtue means the functional excellence of the thing, given the kind of thing it is. Okay, that's the old definition. So the military virtues are the functional excellences that are necessary for military people to do military things well and effectively. Take, for example, the virtue of courage. Aristotle says courage is the mean between the extremes of cowardice and foolhardiness. You can go wrong in either direction, and it's very hard; you can't write a formula for what this is, but generally we know it when we see it, when we see a truly courageous person. And sometimes we confuse the foolhardy with them, but usually not for too long, because they don't tend to hang around that long.
So you need these excellences to be able to do your job. So that invites the question: well, how do we think we come to have these virtues, these excellences, these aretai? Well, it turns out that the answer Aristotle gave 2,500 years ago is basically the military answer. And we've been wedded to this answer ever since. And as I lay it out, and as you think about how military training works, it'll occur to you that even if you've never heard of Aristotle, and certainly if you've never read a word of him, you are all closet Aristotelians; you just didn't know it, okay? What Aristotle says is, look, you come out of the womb with a bunch of capacities. You have capacities for athletic activity, you have capacities for intellectual activity, you have capacities for musical ability, you have capacities for artistic ability. And all of these, he says, come to you neither by nature nor contrary to nature. You don't get the virtues by nature, because you don't come out of the womb with these functional excellences, but it's not contrary to nature either, because you have the capability built in. It's just a question of whether it's gonna be developed or not, right? So some of these capacities will be developed and some won't. How do you do it? He says, well, how do you become an excellent harp player? By playing the harp. On the other hand, he says, you can play the harp a long time and be really crappy at it too, right? So if you're gonna get to be an excellent harp player, unless you're just incredibly naturally talented, at some point you're probably going to need a teacher, a tutor, a mentor, someone who coaches you on this, and that coaching is initially gonna be kind of painful because it's gonna push you out of your comfort zone, right? The example I give to my Stockdale classes: you know, when I was a kid, I took piano lessons for a while. Obviously, I never got very good at it, but I remember being confused about why my teacher really cared about the fingering of Row, Row, Row Your Boat, right?
And it just made no sense to me. I thought, if I hit the right notes in the right sequence, who cares about the fingering, right? Well, what my teacher knew that I didn't know, and I never went far enough to care, is that if eventually I wanted to play Franz Liszt or something, the fingering was gonna matter, you know? Because most human hands can't get to all those notes if they're not really picky about how they do it. So, he says, you'll develop these; some of them you'll develop poorly, and those will become vices, bad habits, and some of them you'll develop well, and they will become your functional excellences, your virtues. And he says the interesting thing about it is, you do it by repetitive activity, usually under some kind of supervision to make sure that you're doing it well, that you're getting it right. So think about what you do (I use this example only in theory) if you have a bad golf swing: you go hire a pro to tell you what's wrong with your golf swing, right? You've gotten yourself habituated to a way of doing it which consistently slices, and your chances that you'll fix this yourself are pretty small. You need somebody who can watch what you're doing and tell you, here's what you're doing wrong, right? That's the expert who helps you with this. So think about how military training works. You do the activity over and over and over again with supervision and feedback, right? And at the end of that process, if all goes well, it becomes quite automatic, right? You call it muscle memory. You can rely on this to happen. So when you add all that up, all of this habitual activity forms a bunch of habits, which are these inbuilt, routinized ways of doing things that are now familiar to you. So what was once perhaps painful or uncomfortable now becomes routine, and perhaps, he says, if it all goes really well, it even becomes pleasurable, right? You actually take pleasure in the activity.
Even though when you first started doing it, it might have been unpleasant or at best neutral. And you add up all these habits and you get something called character, right? So character is the sum of all these habits that you've built up. And then we label as integrity being consistent with that formed character, okay? Again, I'm gonna stop, because this is kind of important. Is everybody following me so far? This is just Aristotle 101 so far. Okay, any comments, questions? All right, so when we use the word integrity in the military, what are we implying? We're implying that there is a stable set of virtues that are reliable across different environments and contexts, right? So people talk about this in terms of, if I can look myself in the mirror and I have integrity, then I'm good, right? Character, we talk about the same way. And then we have this all-purpose term, professional, which is meant to incorporate a set of virtues, a set of aretai, specifically appropriate to the nature of the activity of the military profession. And those are the virtuous versions of that. So this all adds up to the character assumption. And this is a quote from a philosopher who's one of the founding figures of the experimental philosophy movement. So let me just let you read that for a minute. I had the pleasure of spending some time with John in Arizona, which was very useful to me. Okay, everybody had a chance to read? Now notice the title of John's book, Lacking Character. What he's out to show in his book is that this is false. This is not true. I can prove to you that this is not true. Okay, I realize this is very counter-cultural for this audience, right? That's why I'm being tentative about it. But the longer I read this stuff, and I've been at it for about a year, the more convinced I am that it's not true, and that our belief that it is has bad effects. So that's what I'm gonna try to talk about. So what if this is all based on false assumptions?
What if the whole idea of a fixed character is empirically false? What happens when we say, you're all people of good character, therefore we can completely trust you in any environment, you'll be good to go, and we don't need to worry about you? If the character thing is not true, but we're relying on it, what follows? We are setting ourselves up for moral failure. We're setting ourselves up for moral failure because we're relying on something that turns out to be empirically not as reliable as our assumption suggests it is. So the last big question, the so-what question, is how would we change our approaches to training, and to thinking about allocation of responsibility and so forth, if we included these situationist factors in our calculation? And as you'll see at the end, I've talked myself into a quite philosophically confused model on that last point. So that's where you've gotta really help me out when you get to it. I'll say more about that in a minute. Okay, so most of you are probably familiar with this famous social psychological experiment, but I wanna play it for you anyway. This is a relatively short clip. I have two clips today; one is slightly longer. Hi, I'm Chris Cuomo, and welcome to our primetime webcast, a look at one of the most shocking experiments of the last 50 years, literally. Imagine this scenario. You go to a prestigious university to participate in a learning and memory experiment. When you arrive, you discover that the teaching instrument is this machine, which seems to give electroshocks to a man on the other side of the wall. As you move up the scale, he begins to scream out in pain. The experiment requires that you continue. The experimenter pressures you to go on. Would you agree to continue? 45 years ago, Dr. Stanley Milgram came up with this experiment to test whether people would blindly follow the orders of an authority figure.
He found that two-thirds of his subjects were willing to give the most dangerous shock on the machine. We teamed up with Dr. Jerry Burger, a social psychologist at Santa Clara University in California, to see whether people have changed since then. Wrong, 90 volts. The typical response is to turn toward the experimenter and, if not say something, at least give a look that says, what should I do? In our new experiment, how many people would agree to follow the orders of an authority figure? Ow, that's incorrect. 39-year-old Troy Shasker is an electrician. He's been paid $50 to participate and told that the money is his to keep, even if he quits the experiment early. He's worried about the dangers of the electroshock machine. Wow, I don't think, I don't think I should shock him that hard if he really does screw up. That'd be a severe shock there. Yeah, there are 25. In the room next door, Troy watches as the learner gets strapped into his chair and really gets nervous once he hears him say this: I should probably bring up, a couple of years ago at Kaiser, they diagnosed a mild heart condition. I'm really not too worried about it; it's not that serious. Well, you should know that while the shocks that we'll be using today may be painful, they're not dangerous. Milgram intended that scripted exchange to set up a conflict in the subject's mind, a choice between the health of the learner and the authority of the experimenter. Number one. Then the test begins. Blue: boy, girl, grass, hat. The learner must decide which of the four words is the correct match. At first, everything goes smoothly. Correct. I was confident that he was doing really good at first, and then it started looking bad. Then at 75 volts, Troy hears the first sign of trouble. Soft: rug, pillow, hair, grass. Incorrect. 75 volts. I could actually hear him next door going, ow. And he kept getting things wrong. Incorrect. 90 volts. At 105 volts, he's clearly uneasy.
I got a little moist on my forehead there. I wasn't comfortable. I can't tell you why I listened to him and kept going. I should have just said no. The correct word was duck: rock, house. At 150 volts. Incorrect. 150 volts. Will Troy listen as the learner begs him to stop, or will he follow Brian's orders? The correct word was white: horse. Next item. The next item is sad. He obeys the orders. Face? Why didn't you stop? The strap, I saw him getting strapped in, and they were just like little, I mean, if he was in that much pain, he could have just torn himself off. Why are you putting it on him and not you or the experimenter? I was just doing my job. I was doing what I was supposed to do. 75 volts. So I guess the influence of having the conductor of the experiment right there next to me telling me to keep going had a lot to do with it. For the past 30 years, there have been severe restrictions on using humans in social psychology research. To avoid putting subjects under too much stress, Dr. Burger made a significant change to our experiment. In this experiment, you stopped at 150 make-believe volts. In Milgram, they went much higher. We stopped for ethical reasons. We couldn't put people through the agony that Milgram's participants went through. I told you I had heart trouble. My heart's starting to bother me now. Are there clues that indicate whether certain people might be more compliant with authority? Wrong. 90 volts. When you were watching, how good were you at guessing, oh, this person may go, or this may not? It was impossible to tell. I tried to guess. I tried to look for signs, body language, anything, to try to guess who's going to continue and who's going to stop. And that tells me that it's not that there are certain kinds of people who are obviously different from the rest of us. It tells me that probably all of us are capable.
Thanks for watching our primetime webcast, and be sure to watch again next week at abcnews.com. Okay, so here's the data using the original Milgram voltages, which go up to 450. At 450, the learner has passed out and they're continuing to shock him. So these are the percentages of people that would do it. Now, what do you think that goes to show? 65% went all the way to the max voltage, right? That's the extreme power of authority. And notice that, in fact, in this situation, there's very little authority, right? They've never met this experimenter before. He's just a guy in a white lab coat. They're told in advance, even if you quit, you get paid, right? So there's very little reason a person couldn't just say, I'm out of here, right? Compare that to a military environment in terms of the power of authority, right? If the authority of a white lab coat on a guy you just met will get you to do that, dot, dot, dot, right? You extrapolate it on out. Okay, most of us, presented with the Milgram experiment, myself included, would say, I would never do that. I mean, I just wouldn't do it. I might agree to the experiment, I might do the first few shocks, but once the person's obviously in distress, I just wouldn't do it. But the data tells me I'm almost certainly wrong about this, right? I am almost certainly self-deluded about that. And so if I'm gonna think about my own moral behavior in a realistic way, I've gotta take on board data like this: that I don't know myself nearly as well as I'd like to think, or that I'm not nearly as noble as I'd like to think when it comes to this kind of stuff. Put me in the wrong environment, and the data, at least in this case, suggests I probably would do it. Here's some more evidence. I love this one. This was done in the 70s, when there were still a lot of pay phones around. And so the researchers arranged to have a person walk behind a person just getting off a pay phone and drop a pile of papers.
The question was, would the person getting off the pay phone help the person who dropped the papers pick them up, or not? Notice the data. The only experimental intervention was sometimes putting a dime in the coin return of the phone. If you put a dime in the coin return, 87.5% of the participants helped. If you didn't put a dime in the coin return, 4% helped. 87.5 versus four. And again, the intervention is a dime. So does anybody have a hypothesis about why something intuitively, absolutely trivial, like finding a dime in a coin return, would so decisively sway these numbers? Yeah, give it a shot. That seems to be the best hypothesis. Did everybody hear it? People are just a little bit happy. I mean, really trivially a little bit happy, right? People who are trivially a little bit happy are more likely to do it. Or take this one they did at Princeton Seminary. They assigned the seminarians to write a talk about the Good Samaritan story in the Bible. Remember the story, where Jesus tells about the member of a hated ethnic group, the Samaritans, who's the only one, after all the supposedly good people go by, who stops and helps the guy who's been beaten up by the roadside. And at least one takeaway from the story is that it's important to help people regardless of whether they're in your group or not in your group, things like that. So, two groups of seminarians. One group was told, walk across campus at your leisure and give your talk this afternoon. The other group was told, it's really important you be there on time; you've got about 15 minutes to get across campus; make sure you're not late. Seminary students, same seminary, presumably roughly the same values. The unhurried participants stopped 63% of the time when they encountered an actor pretending to be a person in great distress, but the hurried participants stopped only 10% of the time. So that would suggest time pressure, a sense of urgency, will lead you to fundamentally different behaviors.
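For anyone who wants to check that the dime result isn't just noise: the talk gives only the percentages, 87.5% versus 4%, so in the sketch below the group sizes (16 and 25) are illustrative assumptions chosen to be consistent with those percentages, not figures from the talk. A standard two-proportion z-test then shows the gap is far larger than chance would produce:

```python
from math import sqrt

# Helping rates as quoted in the talk. The sample sizes are illustrative
# assumptions consistent with the quoted percentages (14/16 = 87.5%, 1/25 = 4%).
helped_dime, n_dime = 14, 16
helped_no_dime, n_no_dime = 1, 25

p1 = helped_dime / n_dime            # 87.5% helped after finding a dime
p2 = helped_no_dime / n_no_dime      # 4% helped with no dime

# Pooled two-proportion z-test: is the difference bigger than chance?
p_pool = (helped_dime + helped_no_dime) / (n_dime + n_no_dime)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_dime + 1 / n_no_dime))
z = (p1 - p2) / se

print(f"{p1:.1%} vs {p2:.1%}, z = {z:.1f}")  # |z| > 1.96 means p < .05
```

Even with samples this small, the z statistic comes out above 5, well past the conventional 1.96 cutoff, which is why a dime in a coin return counts as a real situational effect rather than sampling luck.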
If I asked you, in calm reflection in Sunday school class, about the Good Samaritan story, you'd probably all say, oh yeah, yeah, we would do that. That's what we would do. But the data suggests: not true. Not true, and certainly don't count on it. Okay, now this one is a little longer. I love this guy. He's an Israeli guy who teaches at Duke. This is a TED talk, so it's the usual, what, 18 minutes or so. But it's so good that I want you to see the whole thing, because I've learned a lot from his stuff. So here we go. Oh, I've messed up. What have I done? I was trying to get full screen, but that failed. I'll just try it again. I want to talk to you today a little bit about predictable irrationality. And my interest in irrational behavior started many years ago in hospital. I was burned very badly. And if you spend a lot of time in hospital, you'll see a lot of types of irrationalities. And the one that particularly bothered me in the burn department was the process by which the nurses took the bandage off me. Now, you must have all taken a band-aid off at some point, and you must have wondered what's the right approach. Do you rip it off quickly? Short duration but high intensity? Or do you take your band-aid off slowly? You take a long time, but each second is not as painful. Which one of those is the right approach? The nurses in my department thought that the right approach was the ripping one. So they would grab hold and they would rip, and they would grab hold and they would rip. And because I had 70% of my body burned, it would take about an hour. And as you can imagine, I hated that moment of ripping with incredible intensity. And I would try to reason with them and say, why don't we try something else? Why don't we take it a little longer, maybe two hours instead of an hour, and have less of this intensity? And the nurses told me two things.
They told me that they had the right model of the patient, that they knew what was the right thing to do to minimize my pain. And they also told me that the word patient doesn't mean to make suggestions or to interfere or... And this is not just in Hebrew, by the way; it's in every language I've had experience with so far. And there wasn't much I could do, and they kept on doing what they were doing. And about three years later, when I left the hospital, I started studying at the university. One of the most interesting lessons I learned was that there is an experimental method: that if you have a question, you can create a replica of this question in some abstract way, and you can try to examine this question and maybe learn something about the world. So that's what I did. I was still interested in this question of how do you take bandages off burned patients. So originally I didn't have much money, so I went to a hardware store and I bought a carpenter's vise, and I would bring people to the lab and I would put their finger in it and I would crunch it a little bit. And I would crunch it for long periods and short periods, and pain that went up and pain that went down, with breaks and without breaks, all kinds of versions of pain. And when I finished hurting people a little bit, I would ask them, so how painful was this? Or, how painful was this? Or, if you had to choose between the last two, which one would you choose? I kept on doing this for a while, and then, like all good academic projects, I got more funding. I moved to sounds, electrical shocks, I even had a pain suit that I could use to get people to feel much more pain. But at the end of this process, what I learned was that the nurses were wrong. Here were wonderful people with good intentions and plenty of experience, and nevertheless they were getting things wrong, predictably, all the time.
It turns out that because we don't encode duration in the way that we encode intensity, I would have had less pain if the duration had been longer and the intensity lower. It turns out it would have been better to start with my face, which was much more painful, and move toward my legs, giving me a trend of improvement over time; that would have also been less painful. And it also turns out it would have been good to give me breaks in the middle, to kind of recuperate from the pain. All of these would have been great things to do, and my nurses had no idea. And from that point on, I started thinking, are the nurses the only people in the world who get things wrong in this particular decision, or is it a more general case? And it turns out it's a more general case. There are a lot of mistakes we make. And I want to give you one example of one of these irrationalities, and I want to talk to you about cheating. And the reason I picked cheating is because it's interesting, but also it tells us something, I think, about the stock market situation we're in. So my interest in cheating started when Enron came on the scene, exploded all of a sudden, and I started thinking about what is happening here. Is it the case that there are a few bad apples who are capable of doing these things, or are we talking about a more endemic situation, where many people are actually capable of behaving this way? So, like we usually do, I decided to do a simple experiment. And here's how it went. If you were in the experiment, I would pass you a sheet of paper with 20 simple math problems that everybody could solve, but I wouldn't give you enough time. When the five minutes were over, I would say, pass me the sheets of paper and I'll pay you a dollar per question. People did this. On average, people would solve four problems, and I would pay them $4 for their task. Other people I would tempt to cheat. I would pass them the sheet of paper.
When the five minutes were over, I would say, please shred the piece of paper, put the little pieces in your pocket or in your backpack, and tell me how many questions you got correct. People now solved seven questions on average. Now, it wasn't as if there were a few bad apples, a few people who cheated a lot; instead, what we saw was a lot of people who each cheat a little bit. Now, in economic theory, cheating is a very simple cost-benefit analysis. You say, what's the probability of being caught? How much do I stand to gain from cheating? And how much punishment would I get if I get caught? And you weigh these options out, you do the simple cost-benefit analysis, and you decide whether it's worthwhile to commit the crime or not. So we tried to test this. For some people, we varied how much money they could get away with, how much money they could steal. We paid them 10 cents per correct question, 50 cents, a dollar, $5, $10 per correct question. You would expect that as the amount of money on the table increases, people would cheat more, but in fact, it wasn't the case. We got a lot of people cheating by a little bit. What about the probability of being caught? Some people shredded half the sheet of paper, so there was some evidence left. Some people shredded the whole sheet of paper. Some people shredded everything, went out of the room, and paid themselves from a bowl of money that had over $100 in it. You would expect that as the probability of being caught goes down, people would cheat more, but again, this was not the case. Again, a lot of people cheated by just a little bit, and they were insensitive to these economic incentives. So we said, if people are not sensitive to the economic rational theory explanations, to these forces, what could be going on? And we thought, maybe what is happening is that there are two forces. On the one hand, we all want to look at ourselves in the mirror and feel good about ourselves, so we don't want to cheat.
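The "simple cost-benefit analysis" from economic theory that Ariely describes can be written out explicitly, which makes it easier to see what the shredder experiments were testing. A minimal sketch (the payoff, detection-probability, and punishment numbers below are illustrative assumptions, not figures from the experiments):

```python
def expected_value_of_cheating(gain, p_caught, punishment):
    """Standard rational-choice model: you keep the gain if you get away
    with it, and pay the punishment if you are caught."""
    return (1 - p_caught) * gain - p_caught * punishment

# Prediction 1: cheating should look more attractive as the per-question
# payoff rises (10 cents up to $10), holding detection risk fixed.
payoffs = [0.10, 0.50, 1.00, 5.00, 10.00]
values = [expected_value_of_cheating(g, p_caught=0.2, punishment=5.00)
          for g in payoffs]

# Prediction 2: cheating should look more attractive as the probability of
# being caught falls (half-shredded sheet, fully shredded, bowl of cash).
risks = [0.5, 0.2, 0.0]
values_by_risk = [expected_value_of_cheating(1.00, p, punishment=5.00)
                  for p in risks]
```

Both lists come out strictly increasing, so the model predicts more cheating with bigger payoffs and lower detection risk. That is exactly the pattern the experiments failed to find: observed cheating stayed flat as the stakes rose and the evidence disappeared, which is why Ariely rejects this model in favor of the fudge-factor account.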
On the other hand, we can cheat a little bit and still feel good about ourselves. So maybe what is happening is that there's a level of cheating we can't go over, but we can still benefit from cheating at a low degree, as long as it doesn't change our impression of ourselves. We call this the personal fudge factor. Now, how would you test a personal fudge factor? Initially, we asked, what can we do to shrink the fudge factor? So we got people to the lab and said, we have two tasks for you today. First, we asked half the people to recall either 10 books they read in high school or the Ten Commandments, and then we tempted them with cheating. It turns out that the people who tried to recall the Ten Commandments, and in our sample nobody could recall all ten, did not cheat at all when given the opportunity. It wasn't that the more religious people, the people who remembered more of the commandments, cheated less, and the less religious people, the people who could remember almost no commandments, cheated more. The moment people even tried to recall the Ten Commandments, they stopped cheating. In fact, even when we gave self-declared atheists the task of swearing on the Bible and then gave them a chance to cheat, they didn't cheat at all. Now, the Ten Commandments are hard to bring into the education system, so we said, why don't we get people to sign an honor code? So we got people to sign, "I understand that this short survey falls under the MIT honor code," and then they shredded the paper: no cheating whatsoever. And this is particularly interesting, because MIT doesn't have an honor code. So all this was about decreasing the fudge factor. What about increasing the fudge factor? In the first experiment, I walked around MIT and distributed six-packs of Coke into refrigerators. 
These were common refrigerators for the undergrads, and I came back to measure what we technically called the half-life of Coke: how long does it last in the refrigerators? As you can expect, it doesn't last very long; people take it. In contrast, I took plates with six $1 bills on them and left those plates in the same refrigerators. Not a single bill ever disappeared. Now, this is not a good social science experiment, so to do it better, I ran the same experiment I described to you before. A third of the people, we passed them the sheet and they gave it back to us. A third of the people shredded it, came to us and said, Mr. Experimenter, I solved X problems, give me X dollars. A third of the people, when they finished shredding the piece of paper, came to us and said, Mr. Experimenter, I solved X problems, give me X tokens. We did not pay them with dollars; we paid them with something else. And then they took this something else, walked 12 feet to the side, and exchanged it for dollars. Think about the following intuition: how bad would you feel about taking a pencil home from work, compared to how bad you would feel about taking 10 cents from a petty cash box? These things feel very different. Would being a step removed from cash for a few seconds, by being paid in tokens, make a difference? Our subjects doubled their cheating. I'll tell you what I think this means for the stock market in a minute. But this did not yet solve the big problem I had with Enron, because at Enron there's also a social element. People see each other behaving. In fact, every day when we open the news, we see examples of people cheating. What does this do to us? So we did another experiment. We got a big group of students for the experiment, and we prepaid them. Everybody got an envelope with all the money for the experiment, and we told them that at the end, we would ask them to pay us back the money they didn't make. 
Okay, the same thing happens when we give people the opportunity to cheat: they cheat, just by a little bit, all the same. But in this experiment, we also hired an acting student. This acting student stood up after 30 seconds and said, I solved everything, what do I do now? And the experimenter said, if you've finished everything, go home. That's it, the task is finished. So now we had an acting student who was part of the group. Nobody knew he was an actor, and he had clearly cheated in a very, very serious way. What would happen to the other people in the group? Would they cheat more or would they cheat less? Here is what happens: it turns out it depends on what kind of sweatshirt he's wearing. Here's the thing. We ran this at Carnegie Mellon in Pittsburgh, and in Pittsburgh there are two big universities, Carnegie Mellon and the University of Pittsburgh. All of the subjects sitting in the experiment were Carnegie Mellon students. When the actor who got up was a Carnegie Mellon student (he actually was a Carnegie Mellon student), part of their group, cheating went up. But when he was wearing a University of Pittsburgh sweatshirt, cheating went down. Now, this is important, because remember, the moment the student stood up, it was clear to everybody that they could get away with cheating, because the experimenter said, you've finished everything, go home, and he walked out with the money. So it wasn't so much about the probability of being caught again; it was about the norms for cheating. If somebody from our in-group cheats, and we see them cheating, we feel it's more appropriate, as a group, to behave this way. 
But if it's somebody from another group, these terrible people, I mean, not terrible in that sense, but somebody we don't want to associate ourselves with, from another university, another group, all of a sudden people's awareness of honesty goes up, a little bit like in the Ten Commandments experiment, and people cheat even less. So what have we learned from this about cheating? We've learned that a lot of people can cheat, and they cheat just by a little bit. When we remind people of their morality, they cheat less. When we get a bigger distance from cheating, from the object of money, for example, people cheat more. And when we see examples of cheating around us, particularly from a member of our in-group, cheating goes up. Now, if we think about this in terms of the stock market, think about what happens. What happens in a situation where you pay people a lot of money to see reality in a slightly different... Okay, I'm not gonna go on with the stock market example. It's very interesting, and Ariely has several of these TED Talks, so if you're interested in them, this is the guy, and I recommend you go watch him. But just off the top of your heads, what are the takeaways from that series of experiments that Ariely did? Let's start with the sweatshirt experiment. What does that tell us that's applicable to military organizations? Yeah, use the mic if you can, please. Good, anybody else? So think about this for a minute. If you were interested in the Commandant's attempts to deal with the sexual assault problem, and you were gonna take this kind of research on board, it might lead you down a different kind of path for thinking about what you would wanna do about it. Anybody got a thought about that? Because what we're gonna do is put people in rooms like this and show them a lot of PowerPoint slides, right? You know that's coming. But if we worry about Ariely's problem very much, what would you suggest? If there is a fix, what is it? 
Come on, guys, any number can play here. Yeah, please. Good, there was a hand back here somewhere? Yes, please. Does everybody see that the bottom-line answer to the question would be, "we don't do that"? I mean, when you've got a culture where people say, we don't do that, then the misbehavior is made the out-group thing, right? That's where you've got the Ariely stuff working for you. Because it's about the sweatshirt, right? Is this our team or is this not our team? Are these our guys or are they not our guys? Also, it was a little subtle, but did everybody understand that little experiment where he separated the cheating from the actual handling of money by just 12 feet? I mean, that's a pretty cool one too, right? The minute you make it even slightly more abstract, less obvious that you're cheating, your probability of doing it goes way up, right? And so insofar as organizations create that kind of distance... So I think this is really important, and the reason I'm spending your time on this is because, as future commanders, if you take any of this seriously, far more important than anything you say to your people at a formal gathering is gonna be the kind of climate you create around what's tolerable behavior. And once it's real clear that if you wanna be part of us, this is how we are. And by the way, what do we know about 19-year-olds and uniforms? They really do wanna be part of a group and dress alike and march in line, right? I mean, that's kind of what they wanna do. So insofar as you control that mechanism, you control something far more powerful than you think. Well, okay, now it seems to me, in a way, what's weird about this is we already know this, right? This is why I thought the Commandant's talk was interesting, because we literally talk out of both sides of our mouths about this problem. On the one hand, we give the sermons about integrity and character and imply that if you've got those, you're okay, you're good to go, there's no problem, right? 
And in the very next breath, we talk about why things go bad in poorly led units, units that have poor morale, units that don't have good unit cohesion, right? So how do we connect the dots between these two ways of looking at the problem? And how do we do it in a way that still allows us to assign moral and legal responsibility but is also empirically accurate about what's going on? So, oh, sorry, there's a great book. If you haven't read it, I highly recommend Black Hearts: One Platoon's Descent into Madness in Iraq's Triangle of Death by Jim Frederick. This was recommended to me by General McMaster in the Army. It's an account of the Army unit that, over a period of time, formed a plan to rape an Iraqi girl, murder her family, burn the house, and cover up the story. It's a horrific book, but what I like about the book, "like" in scare quotes, is that it cuts the salami very thinly. It shows how these guys got there. And the point of the story is that it didn't happen in one bad evening, right? It happened because a lot of things were going wrong, a lot of people in their chain of command knew things were going wrong and failed to take effective action. Some people even tried to do some things to prevent this, but they were ineffective, and what you got was this atrocity, right? So thinking about atrocity cases will get you there. That would suggest that if we're to think about this accurately and not just rhetorically, the hard problem is this: how do we balance blaming individuals for their misbehavior, which we've gotta do, right, with examination of the situational factors that the leaders who put them there created? And that's important both to assign the blame to the individual bad actors appropriately, but also, arguably, to broaden our sense of who's responsible for what. Take Abu Ghraib as an example. If you know anything about the Zimbardo prison experiment, in which a bunch of, does everybody know this experiment? 
Zimbardo put a bunch of Stanford undergraduates in the basement of a university building, fellow undergraduates, college students, and assigned one group to be prison guards and one group to be prisoners. He had to terminate the experiment early, after six days of a planned two weeks, because the guards had become so abusive to their prisoners. Now think about that: Stanford undergraduates, dealing with other Stanford undergraduates, in a role-playing exercise. So if you know that, what's the probability that a bunch of ill-trained National Guard MPs left alone in an Iraqi prison are gonna misbehave? I would say it's somewhere close to a hundred percent. And if you tell me, well, you didn't see that coming, then at best you're ignorant, right? At worst, you're self-deluded. I mean, that's almost certainly gonna happen. Now, the philosophical problem here: Zimbardo actually testified at the Abu Ghraib trials. My friend George Mastroianni, who's a psychologist at the Air Force Academy, wrote a really excellent article about Zimbardo's testimony. It's in Parameters, the Army War College journal, from a couple of years ago. He basically chronicles how Zimbardo ties himself completely in knots, unable to assign ethical or legal responsibility to anybody for anything, right? Because the danger of the situationist account is that it could so wash away all versions of personal responsibility that we're just left to throw up our hands and say, oh, it's all the situation. And that can't be right. But on the other hand, just pretending that situational factors play no role can't be right either. And that's the philosophical muddle I've steered myself nicely into the middle of. And I'll try to give you some preliminary thoughts about ways you might throw a rope to the shore, but I'm by no means done with this, because you can see it's a very hard problem. 
Anybody have any thoughts about the problem before I proceed? Everybody understand the problem? Okay. Well, first, blinding flash of the obvious: character is not as reliable as the way we talk about it. Period, full stop. It's just not true. That's an empirical, demonstrable fact, and if you doubt me, just go read some of the moral psychology literature, come back, and we'll talk, because the experiments are just overwhelming on this. But here's the more important point. Continuing to talk as if it is, continuing to talk as if character and integrity are as reliable as the rhetoric around them tends to suggest, is actually dangerous. It is dangerous because it leads us to think that because we've done character education, for example, people are good to go. I used to teach at the Air Force Academy, and there's something every year called the National Character and Leadership Symposium, which is a two- or three-day event. And I'm not exaggerating: what happens at this event is that cadets and other college students are exposed to basically preaching by beauty queens and football coaches and all kinds of people, basically exhorting them to be good. It always struck me that this is a colossal waste of time, money, and effort. Does anybody believe that anybody will be decisively made a better human being for any extended period of time by being exhorted for a brief period of time? I don't think so. At best, exhortation would work if you're part of an organized community that hangs together for a long time and keeps the same message coming at you regularly, right? Maybe that, but certainly not one-off talks by one person or another. The other thing that's really important about the moral psych literature, and I can't stress this enough, is that most of these experiments involve interventions that, if I described them to you, you'd say, that is so trivial, it couldn't possibly affect the way people behave. 
You just would not believe that something that small could affect others, or you. But think about the phone booth experiment. A dime: 87.5 percent versus 10. Look at that swing in behavior, based on a dime. So that suggests something for when you are leaders. One of the things I like to talk about with John and Admiral Kelly at the major command course is how military organizations handle failures. There's what I call the holy trinity of solutions: you fire the leadership, you mandate new training, and you issue a new policy. And somehow those things, in some combination, are supposed to be the universal fix for every failure. But if we think about it, it's pretty clear, we already know, that isn't true, right? The example I like to give is from when I worked for the Army. General Shinseki came in as Chief of Staff, and he called Carlisle and said, I need 10 colonels to study the readiness reporting system in the Army, because the one thing I'm sure of is that I have no idea how ready the Army is today. And why is that? Because the culture of the Army then would not allow anybody to report below C2. So if you're a lowly O3 company commander and you falsely report your C3 unit to be C2, are you lying? Well, in the ordinary sense of the word, of course. If intentionally and deliberately stating something you know to be false on an official report is not lying, what would be? On the other hand, that would be a totally boneheaded way of looking at what's going on, right? Because the reason the captain is reporting falsely is that the culture won't allow an honest report. It's a system effect that drives the behavior. So unless somebody with some stars on their shoulders fixes the system, the behaviors are not gonna change, right? So I invite you to think about that as you rise in rank, because you start controlling systems. 
And so when confronted with failure, you might at least take an extra tick before you resort to the Holy Trinity solutions, right? Ask yourself, what's driving this behavior? And do I own anything that's driving this behavior? You may know that it's a system that's driving the behavior, and you may not own enough to fix it. I mean, that's certainly true. Although that would be part of your job, I think, in terms of feeding it up your chain of command, trying to get your chain of command to come to terms with the system failure, right? But, you know, going back to the sexual assault question, it seems to me we're confusing two very different things. I mean, the general point is that everybody, and especially young people, is very interested in sex. This is not news. And consensual sex is gonna go on no matter what, and some of it's gonna be injurious to good order and discipline, and you're gonna have to deal with it, and that's that, okay? Sexual assault is not that. It bears no resemblance to that. And so there's no reason to merge those conversations as if we're somehow talking about the same thing, right? So it seems to me that when you go back to the sweatshirt problem, one solution is, you know, how do we treat each other as shipmates? How do we treat each other as airmen, as soldiers? In a well-run unit, that level of respect for each other would prohibit abusive behavior. You're not gonna prohibit consensual sex; you're gonna have to deal with it as a discipline matter, okay? But let's take that off the table as a separate problem from what we're talking about. So this is my point about the Holy Trinity. If you really are interested in good behaviors, and not just trotting out the rhetoric of integrity, it would behoove us to spend a lot of time studying and attempting to control environments, and to better understand how these factors drive the behaviors that we're seeing. 
Because we frame the question in terms of Aristotelian virtue, it kind of doesn't invite us to look empirically at this side of the problem. That's really the burden of my talk today. We need to supplement, not replace, character talk with context talk, since we know that context is far more decisive than we normally think it is. So even at the level of moral responsibility, we don't necessarily wanna focus only on the individuals who did the observable bad act. For example, in the law, if I put you in an environment that is foreseeably likely to cause you to fail, then some of the burden is on me, right? I should have foreseen that this would go badly, and I'm responsible for that. So here's the philosophical problem that remains, and if you've got any thoughts about how to square the circle here, please let me know. In the moral psych literature, they talk about whether it's a bad apple problem or a bad barrel problem. You know, is it just a few bad apples, or did you put these apples in a really bad barrel, where practically any apple is gonna go bad? Hard to know, but there are some really bad barrels out there, right? And we would do well to think about where the notably bad barrels are. And if we absolutely gotta put people into them, then what can we do to prepare them? And I'm here to suggest that just talking about their integrity and character probably is not gonna be sufficient. Probably is not gonna be sufficient if it's a really bad barrel, because all of us are subject to this. So I think there's a vast area for empirical research here, and that sounds weird coming from a philosopher, but the longer I think about it, the more I think we really don't understand this very well. What goes on in the environments we're putting people into, and how do we better understand that? 
Now, as you probably know, the working group, now signed off by the CNO, has been developing this leader development continuum, identifying what we think people should, to use the old Army phrase, be, know, and do at various levels of rank. It seems to me this is a perfectly appropriate way to integrate our ethics considerations with the leader development continuum: what level of self-reflection, self-criticism, and thinking about system effects needs to kick in at each level? Because you don't own many systems as an O3; you start to own systems as an O6, and you certainly own systems as an O7 or O8. And if we know that's important, then we should think about it. And we certainly know that as you advance in rank, your situation changes across the spectrum. That was the point the Commandant was making this morning about people getting this inflated sense of self-importance, or his example of memento mori, remember that you are mortal. It'd be pretty hard to remember you're mortal, I think, in some of these environments. I remember one of my favorite Army generals, Walt Ulmer, a retired lieutenant general, told me a story about it. He went to his retirement ceremony, and when it was done, he walked out with his wife and got in the back seat of the car, and his wife turned to him and said, you're gonna have to drive, Walt. And he went, oh yeah, I haven't driven myself anywhere in quite a long time. So, okay, that's my talk. Thanks very much. Thank you.