Virtually every problem we encounter out in the world can ultimately be traced back to a socio-cultural one. If you dig far enough into the water crisis, environmental degradation, inequality, or cybersecurity, you will find that it is not really about lack of water, lack of land, lack of money, or computer code, as it may appear. It is really about people: how they see the world, and how our thinking constrains us to a certain subset of solutions. All of these systems that we might be interested in changing are created by us, and ultimately every problem can be traced back to the human condition. No matter how big and bad these systems may appear, this is ultimately about us and how we choose to build our world. In this respect, one of the biggest mistakes we make is getting too focused on the technical dimension of a system and forgetting about people. We think that cybersecurity is all about code and computers, and we forget there are real people with interests and motives behind those computers. We think that transport is about cars and roads, when really it is a socio-technical system. We think that food is about farms, tractors, and produce, when really it is a socio-ecological system. Probably our biggest blind spot today is people. By adopting an analytical approach, we try to shift everything into the technical realm: we try to do away with the humans in the system by assuming that they are rational. This blind spot creates a huge new space for innovation and potential for systems change, but as always, before we can use it, we have to understand the system we are dealing with. This requires some form of stakeholder analysis, which simply means understanding the people in the system. We have to understand how the culture and social structure of an organization work to understand its potential for change.
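Stakeholder analysis of this kind can be sketched in code. The following is a minimal, hypothetical illustration using the common power-interest grid model; the stakeholder names, scores, and the 0.5 thresholds are invented for the example and are not part of any formal method described here.

```python
# A minimal sketch of stakeholder mapping as a power-interest grid.
# All stakeholder names and scores are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Stakeholder:
    name: str
    power: float     # ability to influence the system, 0..1
    interest: float  # stake in the outcome, 0..1

def quadrant(s: Stakeholder) -> str:
    """Classify a stakeholder into the classic power-interest quadrants."""
    if s.power >= 0.5 and s.interest >= 0.5:
        return "manage closely"
    if s.power >= 0.5:
        return "keep satisfied"
    if s.interest >= 0.5:
        return "keep informed"
    return "monitor"

stakeholders = [
    Stakeholder("city council", power=0.9, interest=0.4),
    Stakeholder("local residents", power=0.3, interest=0.9),
    Stakeholder("contractor", power=0.7, interest=0.8),
]

for s in stakeholders:
    print(f"{s.name}: {quadrant(s)}")
```

The grid is only a starting model; as the next paragraphs argue, the harder part is not placing people on a chart but questioning the assumptions we make about their rationality and motives.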
Assuming things about people is one of the most likely places we can stumble in our endeavors. As always, naivety does not help. Systems analysis is about a critical, detailed assessment of what exists, not about how we might like things to be. Assessing people and their motives in an objective light is sometimes the hardest thing to do. We are prone to preconceptions that people are generally good or generally bad, and those preconceptions shape our assumptions, which then get factored, unquestioned, into our understanding of the system we are dealing with. We need to try to get beyond this through a critical assessment. Stakeholder analysis is the process of mapping out the different individuals and organizations that have a stake or interest in a given project or system. This information is typically used to assess how the interests of those stakeholders should be addressed. Stakeholder mapping is the process of laying out these various stakeholders through the use of some model. However, most traditional stakeholder analysis focuses on the individual actors within the model and assumes a degree of rationality in their behavior. We assume that people make decisions somewhat rationally and independently, which is often not the case. We typically think that it is the parts that create the system. We go into an organization, look at the particularities of its members, and think that if they are good people, we will get good outcomes. When we hear about a politician or CEO involved in some scandal, we say, what a terrible person. But what we often fail to see is the invisible set of incentive structures acting on those individuals. The basic premise in systems thinking is that systems dynamics create the behavior of the system. Donella Meadows puts it like this: the number one really important systems insight is that a system's behavior comes out of the system itself.
It is the system's interrelationships and its goals, not the elements, the people, the actors in it. The question is what is wrong with the system, not what is wrong with the people in it. In this context, this means that it is the incentive structures within an organization that create the behavior of its members. This is a basic idea in game theory: we study the structure of incentives that the actors are operating within as a game. In a general sense, we are trying to see where the forces and stresses lie in the system. If we take a mechanical system like a chair, we will see that it has a particular structure designed to channel and disperse a certain physical force placed on it. We can see how the gravitational force exerted by someone sitting on it is channeled through the structure of the system. There are critical points where a lot of the force is being borne and other areas where there is none. The same is true for institutions: there are forces of incentives and responsibility. Some people are heroes who defy the incentive structures placed on them, but most people are not; they will follow the course of least resistance. This was recognized long ago by the ancient Greek philosophers, who looked around them and saw many different cultures: the Spartans, who were warlike and prayed to a certain set of gods; the various tribes with their customs; and the Athenians themselves, who had their own beliefs and culture. What was of interest to them was that everyone seemed to conform to their own local culture. It was obvious from this that each person is not individually making a choice about the best god to believe in or the best cultural habits; everyone is simply acting according to the local incentive structures acting on them. We may find the occasional person like Socrates who questions and challenges their culture and goes against the local incentives.
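The game-theory point above, that incentive structures rather than individual character produce behavior, can be shown with a small worked example. This is a sketch using the standard prisoner's dilemma; the payoff numbers are hypothetical and chosen only to illustrate the mechanism.

```python
# A minimal game-theory sketch: two identical agents in a prisoner's
# dilemma. The payoff numbers are hypothetical; the point is that the
# incentive structure, not the character of the players, produces the
# outcome: defection dominates even though mutual cooperation pays more.

# payoffs[(my_move, their_move)] = my payoff
payoffs = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def best_response(their_move: str) -> str:
    """The move that maximizes my payoff, given the other player's move."""
    return max(["cooperate", "defect"], key=lambda m: payoffs[(m, their_move)])

# Whatever the other player does, defecting pays more. So any agent
# that simply follows its local incentives defects, and both players
# end up with 1 each instead of the 3 each that cooperation would give.
for their_move in ("cooperate", "defect"):
    print(their_move, "->", best_response(their_move))
```

The structure of the game, not the goodness or badness of the players, determines the equilibrium; changing the outcome means changing the payoffs, which is exactly the systems-thinking claim being made here.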
These people are important, but they are rare. Many institutions over time come to do things that are very different from what they say they are doing. Russell Ackoff gives a number of examples, saying that universities and colleges preach that their principal value and function is the education of students, and that's absolute nonsense. You can't explain a university's behavior if you think it's about the education of students. What it's about is providing the faculty with the quality of work life they want. Teaching is the price they have to pay, and like any price, you try to minimize it. A corporation says that its principal value is maximizing shareholder value. That is nonsense. If that were the case, executives wouldn't fly around on private jets and have Philippine mahogany-lined offices and the rest of it. The principal function of those executives is to provide themselves with the quality of work life that they like, and profit is simply a means that guarantees their ability to do it. So if we are going to talk about values, we have to talk about values in action, not in proclamation. Systems should be judged by their behavior and outcomes, not by their rhetoric. People say what they are objectively incentivized to say; they do what they are subjectively incentivized to do. That is to say, we all find ourselves in roles as part of objective systems of organization, such as a bishop in the Catholic Church or the mayor of a city. In our interactions with others, we are incentivized to play that role. But when we are not interacting with others, we experience a different set of subjective incentives. Because our societies are populated by closed organizations, this creates the possibility of a gap between the two. Thus, with any closed organization, one has to understand the distinction between these two sets of incentives and how they shape the difference between what people say and what they will do.
Donella Meadows states it clearly when she says, purposes are deduced from behavior, not from rhetoric or stated goals. Changing the incentives of agents is a hugely powerful leverage point, but what really matters are the local subjective incentives, not necessarily the global objective incentive systems. One has to keep in mind the difference between the two, or one's interventions can easily become distorted, as captured in the term Cobra Effect. The Cobra Effect occurs when an attempted solution to a problem makes the problem worse because of perverse incentives. The term originated in an anecdote from the British Empire in colonial India. The British government was concerned about the number of dangerous cobra snakes in Delhi, so it offered a bounty for every dead cobra. Initially, this was a successful strategy, as large numbers of snakes were killed for the reward. Eventually, however, enterprising people began to breed cobras for the income. When the government became aware of this, the reward program was scrapped, causing the cobra breeders to set the now worthless snakes free. As a result, the wild cobra population increased further. The apparent solution made the situation even worse due to a failure to trace through the consequences of the incentives being introduced. The Cobra Effect illustrates why ethnography is so important in systems analysis. Ethnography is an approach where the researcher attempts to observe an organization or culture from the point of view of the subjects of the study. Ethnographic studies help us to understand the system from the perspective of an individual acting within it. This helps us understand the subjective influences, experiences, and incentives that the individual is under locally, thus working to mitigate the mismatch between subjective and objective incentive structures.
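The mechanism behind the Cobra Effect can be traced through with a toy simulation. All the numbers below are hypothetical, invented purely to illustrate the dynamic: the bounty reduces the wild population while quietly building up a bred stock, and scrapping the bounty releases that stock into the wild.

```python
# A toy simulation of the Cobra Effect. All parameters are hypothetical;
# the point is the mechanism: a bounty creates a perverse incentive to
# breed cobras, and scrapping it releases the bred stock into the wild.

def cobra_bounty(initial_wild=1000, years=5,
                 kill_rate=0.3, breed_per_year=400):
    """Return the wild cobra population after the bounty is scrapped."""
    wild = initial_wild
    bred = 0
    for _ in range(years):
        wild = int(wild * (1 - kill_rate))  # bounty hunting kills wild cobras
        bred += breed_per_year              # breeders raise cobras for reward
    # Bounty scrapped: the now worthless bred cobras are set free.
    wild += bred
    return wild

# With breeding, the final wild population exceeds the initial 1000;
# with no breeding (no perverse incentive), the bounty works as intended.
print(cobra_bounty())
print(cobra_bounty(breed_per_year=0))
```

Running the two cases side by side shows why tracing the consequences of an incentive matters: the identical policy succeeds or backfires depending entirely on the behavior it induces in the people subject to it.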