In 2001, Karl Weick, distinguished professor of organizational behavior and psychology at the University of Michigan Business School, did it again. His newest co-authored book, Managing the Unexpected: Assuring High Performance in an Age of Complexity, explains the concepts and strategies behind high reliability organizations. These are the organizations that must manage unexpected threats and, therefore, cannot afford to make mistakes. Our country's wildland and prescribed fire and fire use forces, along with flight deck crews on aircraft carriers, air traffic control systems, and hospital emergency departments, are among these vital organizations. Fellow organizational psychologist and University of Michigan professor Kathleen Sutcliffe co-authored this important book. Together, Weick and Sutcliffe are heralded for helping develop the concept of high reliability organizations. So how do we manage for the unexpected? What concepts, tools, and skills do we need, both as organizations and as individuals? In a special Managing the Unexpected workshop with prescribed fire and fire use practitioners, Weick and Sutcliffe explained how best to apply these high reliability organizing principles.

If you think that you face a reliability problem, and you solve it, and then you move on to something else, that's the big error. You have to redo this stuff over and over. It collapses on you day after day. It has to be redone. You don't put reliability in the bank. And so the things that we want to describe and talk about today, with a variety of organizations, are things that you're going to have to just keep doing, keep paying attention to, over and over again. I'm going to be talking a little bit about the Columbia shuttle disaster, and you remember when that episode happened, the headlines said, uh-oh, here are echoes (key word: echoes) of Challenger. They did it again at NASA: had the Challenger accident, had the Columbia accident, the two things parallel. You've had the same experience: Mann Gulch, and echoes of Mann Gulch in South Canyon, same time in the afternoon, same configuration, and so forth. The point is, these kinds of things we want to talk about, how you deal with the unexpected, what reliability is all about, what goes into it, have to have as a component continuous monitoring, continuous processing, continuously redoing it. And the worst thing you can do is let complacency creep in. But it's going to be tempting, because you're going to say, ah, we've got it, we're successful, now we've got that one, let's move on to something else. That's how you get entrapped on some of these reliability issues. Think about it as a challenge of understanding. If you don't understand some of this stuff, please ask. Maybe there are parts of it that are hazy for us, and between us we can figure it out, or between us we can say we don't need that kind of an idea and move on to those that we do need. Think of it as a challenge of observation, in the sense that we need to really understand how Cerro Grande came to happen. How did it turn out that way? They didn't intend it that way. How did that happen? What were the decisions along the way? Back it up. How did we get into that kind of position? And then finally, what can we do to lock in these kinds of resolves to change, to do it slightly differently on a permanent kind of basis, so we keep that capability for reliable performance.
Now I'm going to turn this over to Kathy, who is going to set the stage for us on reliability issues and how we're going to handle things today.

I want to kind of set the stage for where we're going today. When you think about this work that we've done over the last few years, basically it grew out of a series of discussions that Karl and I had, and a long-term research project in which we were trying to understand how all organizations can be more reliable. And when we think about reliability, reliability is just this idea that we want to make sure there's a lack of variance in performance. So how can organizations, over and over and over, achieve high performance, especially in the face of trying conditions? We personally believe that all organizations need to be concerned with reliability. But we think the organizations that really need to worry about reliability are those that operate in trying conditions, in unpredictable environments, in environments in which you can't really anticipate all the surprises that can come up. So basically, that's what we're going to be talking about today. We know that some organizations are better able than others to manage reliability, and we're going to be building on those organizations today. But it's not simply through attention to safety that they do it, okay? These organizations that we're going to be talking about (I'm going to use the acronym HROs instead of always saying high reliability organizations), they don't do it well only because they're concerned about safety. They do it well because they abhor the mis-specification, the mis-estimation, and the misunderstanding of things. And that seems a little awkward or complicated, to say that they abhor mis-specifying, misunderstanding, and mis-estimating. But they put a high degree of value on trying to get it right. They try to anticipate all the surprises that they might come up against. One of the things they know, though, is that they can't anticipate everything. And so what they do is try to organize in a way that helps them pick up on weak signals that things are not unfolding in the way they're expecting. So we're going to be talking about that way of organizing. Now let me say just a few words about the kinds of organizations we're going to be talking about. The examples we're drawing on are examples of high reliability organizations. They're organizations that seem to remain productive, sometimes innovative, and relatively error-free in spite of facing very trying conditions continuously, day after day after day. Now the term high reliability conveys the idea that high risk and high effectiveness can coexist. The kinds of organizations we're talking about are nuclear power plants and naval aircraft carriers, and we put you in there: wildland firefighting crews, hospital ER and intensive care units, investment banks. These are organizations that really have to remain relatively error-free. Otherwise they can go down the tubes easily, and you all know what happens with that. They're organizations that have a different set of priorities, okay? They have a different set of priorities, and they organize in such a way as to have an intense commitment to alertness. Basically, this intense commitment to alertness is what we call mindfulness.
And when we talk about the term mindfulness, we think that a commitment to mindfulness really is a hedge against the unexpected. So today we're going to talk about how you achieve that. During our discussion today, we're going to be talking about these HROs. We're going to pay close attention to how they organize, and we're going to pay close attention to how they struggle for alertness. The basic message is very simple. It's that these organizations mindfully update through a series of practices and through an infrastructure that they develop. And these practices focus on failures, on simplifications, on operations, on resilience, and on expertise. One of the things we know about high reliability organizations is that they really try to anticipate surprises, as we've said. So they do a lot of scenario building. They have tons of plans. But at the same time, they know that plans can sometimes get them into trouble, because plans do have drawbacks. Some of those drawbacks that I've always thought are important are the kinds of things you see here. Plans can lead to mindlessness. In fact, we talk about the idea that plans can contribute to the fallacy of predetermination. Plans contain many kinds of expectations. When you create a plan, you assume that the world is going to unfold in a particular way. Well, sometimes that leads to a fallacy of predetermination: we do think the world is going to unfold in a particular way. And because we believe that, we tend to see what we believe. Humans are not necessarily fact seeking, as Karl was saying earlier; we're belief seeking. So we tend to see what we believe. Basically, plans embody what we expect to happen. And so they influence what people see, what they choose to take for granted, what they choose to ignore, and the length of time it takes for them to recognize that small problems are getting out of control. So that's one drawback to plans. The other drawback to plans is that they really do restrict our attention to what we expect, and they limit our view of our capabilities. You can think about it as the old adage: when you have a hammer, everything you see looks like a nail. When you have a plan, you have a tendency to see only the ways of acting that are consistent with the plan. So a heavy investment in plans is important, and HROs certainly make that investment, but they also recognize that plans have a lot of drawbacks. In part, a heavy investment in plans can restrict our sensing to the kinds of expectations that are built into the plan, and it can restrict our responding to the kinds of actions that were also built into the plan. So it restricts our actions to the things we think we really know how to do. To be mindful is to be concerned with plans, but also to be aware that plans sometimes are not going to work out the way we think, and so to develop an infrastructure that allows us, our organizations, our units, to pick up on small signals that things are going wrong.

Something happens. We ask you what caused that event. The overwhelming tendency of people, when they look for a cause, is to pick a cause that they personally can do something about or that they know how to change. That's not necessarily the most crucial kind of cause.
So when you're doing any kind of inquiry or investigative work, when you say these are the things that caused that event, ask yourself the question: did I stop there just because I can do something about that, or because I know what to do about that? If I knew how to handle some of these other things, I might put the cause somewhere else. It's a tendency; I do it, and you may or may not do it. But it shows up oftentimes in investigations. And it's a good little thing to file away in the back of your head, just as a mindful moment, to try to say: I want to do a little bit better analysis, because I may have stumbled onto a cause just because I think I can do something about it. If I know how to change culture, then I'm going to back it up until I see that culture is the problem, because culture is what I know how to change.

This is from Patrick Lagadec, the French expert on crisis management, talking about what happens when a crisis hits. He makes the point that the ability to deal with a crisis situation is largely dependent on the structures that have been developed before the chaos arrives. The event can in some ways be considered as an abrupt and brutal audit. At a moment's notice, everything that was left unprepared becomes a complex problem, and every weakness comes rushing to the forefront. This is part of a mindset we want to take to Cerro Grande tomorrow, but it's partly a mindset that you want to take back home at the same time. Because whatever the weak links are in your organization, when you put pressure on them, that's where it snaps. That's where it goes haywire. Those are the places where you need more learning, and stronger learning, and over-learning, in order to seal some of those things in place. If you want a kind of principle, if you have to do a quick read of an organization, it's hard to do better than the following. If you go into an organization under stress and you're trying to figure out how it's operating, take everybody who matters, or whoever you're watching, and see what their position was before the position they are now in. That is how that organization, by and large, is likely to be functioning. Take the organization, take all of the people who are there, pull them down one level, the level at which they were before they got to their new position, and that's the way that organization is functioning. It'll make sense of it surprisingly quickly for you. And all it is, and it's not trivial, is another example of people under pressure falling back on ways of acting that they are more familiar with and that they've done for a longer period of time. You can see it at South Canyon. Remember, Don Mackey was the jumper in charge because he was the first one out of the airplane. Some of the comments and descriptions about Mackey later in the afternoon had him acting less like the crew chief; you remember what he was doing was spending a lot of time helping people sharpen their saws, which is more like somebody who is actually on the crew than the crew chief. And Blanco, who was the IC for quite a bit of that event, was, as the afternoon went on, more likely to keep his crew of 20 people around him rather than being attentive to all of the other 40 people he was in charge of that afternoon. Not surprising. Mackey had been cutting line longer than he had been a crew chief. Blanco had been a crew chief longer than he had been an IC. Get under pressure, fall back on an earlier set of tendencies. How do you deal with that?
We'll be talking about newer behaviors, things that are more novel to you, and the importance of practicing them beyond the point of getting them right, beyond the point of getting a grasp of them. A lot of you, under training budget pressure, basically train people until you say, oh, now they've got it; good, now we can go on to something else. That's really vulnerable, because if they just barely know how to do the new behavior, the moment you put pressure on them, they're going to fall back past it to the earlier ways that you were trying to train them away from. One of the only safeguards you've got is over-learning: over-practicing it, doing it beyond the point of grasp. It takes time, and it'll probably take some money to do it. But if it's a behavior that really matters to you, it's one you need to keep your eye on. So as a kind of organizing principle, keep this as a summary, if you will, of what all of us are up against. It's hard enough to learn new stuff, but it's even harder to hold on to that new stuff. So we want to build in some capability for over-learning. And we want to watch all of these different kinds of events for what people are going to fall back on when they're put under pressure. You want to have that in mind, because you're going to have to make some statements of priority about which actions you want to be sure people reliably produce under pressure. Those are the ones where you need to focus your attention on the learning. And you're going to have to give up some stuff in the process. But that's a pretty shrewd call. It's an important call, and it's one that can make a difference in your unit.

Weick and Sutcliffe emphasize that these same five key processes, or characteristics, surface again and again in high reliability organizations. A preoccupation with failure. A reluctance to simplify. A sensitivity to operations. A commitment to resilience. And a deference to expertise.

Kathy mentioned that in chapter four of the book, we've got several different scales by which you can audit your own organization, to see how you stack up on things like how you handle failure, simplification, operations, resilience, and expertise, and how you come out on mindfulness. These are a couple of the sample items that tap into how an organization copes with failure. High reliability organizations tend to regard a close call or a near miss as a kind of failure that reveals potential danger, rather than as evidence of their success and their ability to avoid danger. That's a really good item for discriminating among organizations. Jesus, we just had a near miss. We really had a close call. Listen to what they say next. Shows we're pretty strong, pretty successful, in good shape. Or: boy, we are really vulnerable. We saw safety in the guise of danger, or we saw danger in the guise of safety. You can go either way, but that interpretation makes a huge difference. When you get an organization that treats its close calls as a sign of trouble, a sign of vulnerability, a sign of danger, you're closer to acting like the high reliability organizations do than if you get the interpretation going the other direction. NASA around this time had a pretty strong record of treating near misses as a sign that they were in good shape. You can reconstruct several kinds of decisions, several ways in which they thought about their culture, that illustrate that characteristic. So it's something you want to listen for in your own organization.
It's paying attention to these small cues. Think about any kind of wasting disease, a disease that over a period of time gets worse and worse. Simple principle, but boy, it's a great guideline: in the early stages of a bad disease, it's hard to detect that you've got it, but it's easy to cure if you do. The disease develops, gets worse and worse and worse. Now it's easy to spot, but it's hard to cure. So as much as possible, you want to back up. You want to get early cues that the system is unhealthy, that there's a small anomaly, that something's out of place here. How does the system treat those things? We ought to be able to get a feel for that tomorrow at Cerro Grande. You ought to be able to reconstruct it for your own organization: when do you think you've got trouble? When do you think you've got an issue that's turning into a problem that might turn into a crisis? High reliability organizations pick that stuff up early. Earlier, Tim Sexton asked you whether you're a high reliability organization or not, and everybody kind of looked at their neighbor: beats me. In one way you are, and you've got a leg up, in my estimation, over NASA. NASA, you'll remember, had the Challenger disaster, and had all kinds of investigations and cases written up about it, not one of which has found its way into any training at any level at NASA. You don't suffer from the same kind of thing. You have cases, investigations; things went bad, you wish they hadn't gone bad, you learn some lessons, or maybe you're struggling to figure out what the lesson is, but it gets pumped back into the organization. No more South Canyons, although I'm not quite sure what that means, but no more of those. And so you try to figure out some kinds of things that are there. No more Cerro Grandes, okay? You're digging into that, and you're not afraid to look at it. The investigations may not be terrific, and there may be a need for much more competency in how you actually conduct and write up an investigation, but at least some of that information, some of that history, is getting pumped back into the organization. What kinds of cues do you find are diagnostic for you? What are good cues that a fire is more active than you normally would have expected, or seems to be spotting with more intensity than you had anticipated, or is more active at a different time of day? Maybe those are rotten cues for something that's about to escape. Maybe they're good; maybe you've got some other, better ones. But this would be a great chance to pool some of those kinds of indicators that suggest to you that a system is getting into trouble.

Let's look at the reluctance to simplify. This is the second characteristic. When you're auditing your organization, you're looking for what happens by way of getting different points of view expressed, so that people have different ways of getting on top of a situation or of interpreting it. In some organizations, it's pretty likely that people will take things for granted and not question things very much. And even when they do question them, they may all come up with the same kinds of interpretations and the same kinds of explanations. Think about the way of operating that says, I'll believe it when I see it. A lot of people are strongly evidence-based, data-based, and so forth; that's the way they work. But it's just as often the case that the opposite is true: I'll see it when I believe it. You go into a situation expecting certain things, and there they are. That's the way we work.
If that's the case, then the better and richer the set of ideas we carry into a situation, the stronger the position we are in to see more of the things that are happening there. So in many ways, what we're trying to do today is just seed a lot of possibilities in your head, so that when you look at Cerro Grande, and when you look at your own unit, more things stand out for you than might be seen otherwise.

Remember, we talk about sensitivity to operations, the idea being that you want to pay attention to what's happening here and now, because you want to forestall little things growing bigger. So you want to pay close attention to what's going on. I think, as the person in the back said, what we try to do is really think about the scenarios, what could go on. We're really paying close attention to what's happening today. Being sensitive to operations means, as I just said, that you put a premium on real-time information. You know how the systems work. You can make small adjustments to things that are not working correctly, before they grow bigger. And the big issue is that they're talking all the time. What I want you to take away from this are two ideas. One is that by being sensitive to what's going on here and now, in this mundane moment, you can really keep small events from growing bigger. The other thing I want you to take away is that organizations are built through communication. They're maintained through effective communication. Without that, in a way, you don't have an organization. So you have to pay close attention to communication. I've heard you talking about that a lot today. If you're thinking about operations-sensitive leadership, you want to make it really clear to people that they need to speak up. You know, knowledge is something that occurs between people. It's not something that's just up here in your head; it occurs between people. You want to encourage others to speak up. You want to speak up and ask questions. You want to check for comprehension, obviously. You want to be aware of how you react to pressure, and tell others, as Karl was saying with respect to the airline pilots: the better pilots are distinguished by the fact that at the beginning of a flight they say, look, I know I'm bad under pressure, so if we get into a jam, make sure you talk up; even if I stop talking when we get into a jam, say something if you need to. Verbalize your plans. Reduce pressure by changing the importance of the demands relative to your abilities, though that's something you might not have too much control over. And over-learn new routines. So those are the kinds of things you can think about with respect to operations.

The idea is that HROs do try to anticipate and prepare for surprises. But they know they can't anticipate everything, because it's an unknowable, unpredictable, and changing world, and you can't foresee everything. So you want to develop your skills for resilience. Many of you may know about resilience just because we talk a lot about that word, especially lately; there's a lot of literature developing on resilience right now. It used to be thought that resilience was a special capability that individuals had: you either were born with resilience or you weren't. Now we know that that's not correct, and that resilience, in fact, is a process, and it's a process that occurs over time.
And it is a process that occurs as a consequence of people developing their competencies, developing their efficacy, the sense that you will be able to handle a situation if it comes up, and the continuous kind of learning and growing that we want to do. So when you're thinking about resilience, you want to be thinking about developing generalized resources for coping with and responding to situations that come up, okay? We're responding to change. You want to develop deep knowledge. You want to develop capabilities for swift feedback, to figure out what's happening and how we can adjust what we're doing. You want to foster learning. You want to foster, obviously, accurate and speedy kinds of information. And you want to develop skills at recombining different resources, because you won't really know in advance what you'll need, and you sometimes have to get comfortable with improvisation.

Finally, I've talked to you about the fact that in high reliability organizations, and you'll see it clearly this afternoon, it's something you want to watch for, a phrase that shows up over and over in this literature is migrating decision-making. What they mean by that phrase is that in high reliability organizations, the decisions migrate. They drift. They move to the people with the most expertise to make the decision, not necessarily to the level in the hierarchy that is officially tasked to make it. What is striking in the NASA Columbia disaster is the fact that expertise was not recruited on almost any of these dimensions. People say, with a bit of understatement, and they may say it of the Park Service or Forest Service too, that NASA is not a badge-less society. Who you are, what level you're at, really determines everything, including what you're going to decide, and that works against decisions moving around to the level of expertise. But notice: they didn't go to the people who were experts in imaging. They didn't go back to the people who had built the crater model to see whether it could be modified in any way to help them out. They really didn't listen to the engineers, who were highly worried under these conditions. Management sought out people who agreed that this was no big deal, listened to them, and made their decisions on that basis. And they had basically lousy intelligence about the kinds of images they could have gotten. This, for me, is the essence of the investigation board essentially saying that for this one operation, NASA behaved in a relatively mindless fashion. The quote in the report says: shuttle managers did not embrace safety-conscious attitudes. Instead, their attitudes were shaped and reinforced by an organization that, in this instance, was incapable of stepping back and gauging its biases. Bureaucracy and process trumped thoroughness and reason. Thoroughness and reason are some of the things behind how people treat failure, how they treat simplification, how they treat operations, how they treat resilience, how they treat expertise. Notice this key piece right here; this is the beauty of what you're doing this week: stepping back, gauging your biases, and then resolving to do something about at least one of them as a starting point.

The final category we've talked about is, of course, this deference to expertise. What we know is that especially in high-tempo times, when things are really heating up, these HROs shift decisions away from the hierarchy, away from formal authority, toward the person who has the capability to answer the problem.
So they have what we call fluid or flexible decision-making structures, and they don't have a fixed central player who thinks that he or she knows everything. Decision-making migrates to the person who has the expertise to make the decision at hand. And of course, you might think that decision-making only migrates down, but sometimes it has to migrate up as well, because people who are trying to make decisions down here don't have the big picture; they have to go up. One of the problems with deferring to experts is that experts are sometimes subject to what we call the fallacy of centrality. And the fallacy of centrality is this: the idea that if I don't know about it, it must not be happening. So when somebody asked the question this morning, how do you deal with people who overreact or bring stuff to the expert's attention, how do you deal with those people? I guess as the expert, I would be asking myself, am I subject to the fallacy of centrality? Do I think that just because I don't know about it, it must not be happening? Experts oftentimes overestimate the likelihood that they would surely know about something if it were taking place. One example of this, and Karl has written about this quite a bit, is that in the 1940s and 1950s, pediatricians and radiologists were seeing radiographs of little kids who seemed to have all these broken bones, broken bones that had healed. And in fact, they named it brittle bone syndrome, because pediatricians and radiologists couldn't imagine that people would intentionally be hurting their children. It wasn't until about 1960 that a social worker was added to one of the teams investigating these situations. Social workers know how to deal with the community, they know how to deal with families, they have more ideas about what goes on in families. And in fact, the battered child syndrome was discovered in 1960 as a consequence of adding that social worker.

Kathy, I'm wondering, what's an HRO's general take on punishment after somebody or an organization has screwed up?

Well, they try not to punish the individual. I mean, the whole thinking, and somebody talked about the Swiss cheese model, is the idea that generally there's not one single cause of accidents. So in many organizations, they try not to punish the individual at the sharp end of the stick. Of course, that doesn't always happen. But for example, in the hospitals that I know about, say emergency departments and intensive care units, they're trying to understand the systemic components that contribute to errors rather than punishing a single individual. Now, there are political consequences and other consequences to that, so it's sometimes difficult to implement, but I think organizations are really trying to get away from punishing individuals. That's not to say there aren't times when there is malfeasance or some actual bad behavior. Some people have written about that; in those cases, of course, you have to punish people. But for events that occur that were unexpected, where one person wasn't the cause, they're generally trying not to punish people.

I was just going to say that the whole issue of continuous feedback, I thought, was pretty important.
I attended a leadership class recently, and that was something that was brought up as very important to do: provide continuous feedback to your employees, whether it's positive or negative, and not just focus on the negative when it occurs. And I think in the US Forest Service, in my agency anyway, I don't think we always do that in a really continuous form. I mean, there are a lot of people I talk to who don't get any feedback at all from their bosses, none.

I think you have hit on something that's really critical, and that is this issue of feedback. It is the case that people don't get enough feedback, and when they do get feedback, it's oftentimes negative feedback. So this continuous feedback is crucial, and you do want to also talk about the positive. I was talking to somebody at the break about the fact that there's a lot of research going on right now on positive psychology and what's called positive organizing. People who are looking at positive psychology have developed a theory called the broaden-and-build theory: when people experience positive emotions, they are more likely to broaden and build new ways of thinking. So experiencing positive affect is very important.

We've been talking today about the importance of failure, and in your book you talk about how it's okay to be a pessimist in that regard. But that runs counter to organizational values, where we want people to be optimistic and we want them to figure out solutions and not just bring problems. So I see that as a conflict. Just kind of a tension.

Tension, yeah. How would you answer that, Karl?

Part of the failure point is essentially that you want people to be wary. You want them to be on their toes. You want them to be not complacent. But that doesn't mean necessarily that you would be preoccupied with just, oh, you screwed up again, you screwed up again, you screwed up again. It's more the: what am I taking for granted? Should I keep taking that for granted? So it isn't always just negative. It's whatever helps you stay on top of things, on top of details, more alert to what goes on. I think the big thing that worries us is just that you have a success, you say, I've got it, and then you move on, and then you get complacent about it, and that's what does in most of these high reliability organizations. So in some ways, we and they overcorrect, and you say, don't think you're having a good day, or if you're having a good day, watch out, in order that people just stay on their toes in general and don't get cocky. That's what we mean by the liabilities of success. You find organizations that have a series of successes, and boy, they start to tune out data, the environment changes on them, they miss the change, and then they get in deep trouble. That sequence is sort of the root sequence that we're after in high reliability organizations. And finally, before something unexpected happens, you may want to ask yourself: can I see signals of failure, and do I know how to make sense of them? How differentiated are the labels that I'm applying to a situation? When you see a situation, do you just automatically label it as, oh, well, I've seen this before, without really taking the time to analyze it a little more deeply? Am I aware of the unfolding situation? Am I really paying attention to the context, the here and now, paying enough attention to what's going on in front of me?
Do I have the skills to make do, or do we have the skills and capabilities to make do, and who knows how to do what? Those are the kinds of questions you might want to use, either in situations where you have a failure, or when you're facing some kind of unexpected situation, or before something unexpected happens. So you can think about those kinds of questions.

The only thing I would add, and it's a perennial difference between us, is that I think all this stuff starts out with one person at a time. So from my standpoint, if I became more aware of the failures in my activities, of some of the kinds of things that I tend to lump together and maybe should split apart, of whether I spend more time thinking about the future and the past rather than the present, of how I handle bouncing back from setbacks, whether I have a tough skin or not, and of what I'm good at and what I'm lousy at, and if I start working on those things, it may have an effect on my immediate associates, and out of that, we may be able to build a group. So for my money, this kind of stuff works for an n of one on up.

Adopting the high reliability organization principles into the wildland fire organization is no easy task. But if we are to successfully meet the wildland fire management and hazardous fuel challenges that face us, we must continue to improve our skills in managing the unexpected. Karl Weick and Kathleen Sutcliffe have opened an important door for us. They have shown us the tools to use, and now we must all learn how to use them.