In this section of the course, we'll be using models derived from complex adaptive systems theory to try to interpret social phenomena. Complex adaptive systems can be understood as a special class of complex system that has the capacity for adaptation. When we use this paradigm, we're essentially looking at social systems as an environment within which many different agents are acting and reacting to each other's behavior as they adapt and evolve over time. A good example of this would be the world of organized crime, where we have a social system consisting of law enforcement agencies and criminal networks, each with counteracting agendas, acting, reacting and adapting to each other's behavior and creating a highly dynamic system. The idea of adaptation is central to this whole paradigm. We can define adaptation as the capacity for a system to change its state in response to some change within its environment; the system does this in order to optimize its state within that environment according to some metric. The agent has some value system, meaning it can define a set of states and ascribe a value to each of them, with some of these states being better and some being worse. We might be talking about a trader within a financial system trying to make more money, a government negotiating a trade agreement, a politician trying to get elected or criminals trying not to get arrested. All of these are examples of agents with a value system, operating within some environment and searching for an optimal solution according to some set of criteria. That environment is changing periodically, and the agents have to adapt by finding new responses to these changes. As such, we can understand the process of adaptation as a search over many different possible solutions in order to find the most effective one given the environmental conditions.
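To make this idea of adaptation as search a little more concrete, here is a minimal sketch in Python. The one-dimensional state space and the payoff function are entirely made up for illustration; they simply stand in for an agent's value system over its possible states.

```python
import random

def fitness(state):
    # Toy payoff function standing in for the agent's value system
    # (hypothetical: the best state is arbitrarily placed at 7).
    return -(state - 7) ** 2

def adapt(state, steps=100, seed=0):
    """Adaptation as local search: try a nearby state, keep it if it pays better."""
    rng = random.Random(seed)
    for _ in range(steps):
        candidate = state + rng.choice([-1, 1])  # a small change of state
        if fitness(candidate) > fitness(state):  # keep only improvements
            state = candidate
    return state

print(adapt(0))  # the agent climbs step by step toward the optimal state, 7
```

The point of the sketch is only that "adapting" here means repeatedly sampling nearby states and keeping whichever one scores better under the agent's own metric.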
The most coherent and robust formal model that we have for understanding this process of adaptation as a search is what is called a fitness or adaptive landscape, a well-established mathematical model that we can use to describe complex adaptive systems. A recent paper summarizing the literature on the fitness landscape model within the social sciences describes the model as follows: at first sight, fitness landscapes provide a visual representation of how an agent of any kind relates to its environment, how its position is conditioned by the mutual interaction with other agents, and which possible routes towards improved fitness there are. The allure of the fitness landscape is first and foremost that it represents a complex story about adaptation and fitness in one coherent image that helps summarize the many aspects of these processes in an accessible way. So there are a number of different parameters that our adaptive landscape model needs to capture. Firstly, we need to define a parameter for how good or fit any solution is. Every fitness landscape has to have a well-defined fitness measure telling us which way is up and which way is down; the higher up this parameter, the more effective the solution and thus the better the payoff for the agent. Next, we need two more parameters in order to create a two-dimensional space within which to place our different solution types, with similar solutions placed in proximity to each other within the space. As an example, we might think about using this model to represent a military campaign. If we had two strategies based predominantly on airstrikes, we would put them in proximity, while other strategies based on ground forces would be clustered in a different location on the landscape.
So when we put these three parameters together, we have a three-dimensional space where the horizontal axes tell us the type of solution we are using and the vertical axis tells us how effective that solution or state is. Now, for any application of this model, the different locations on the horizontal axes will have different payoffs ascribed to them. Some will be better than others; thus each one will have a certain elevation based upon its effectiveness. When we map out all of these elevations, we get a landscape inside our model, representing the solution space for that particular environment. And now we can put our agents into this landscape. These agents might be countries within the international political environment, their elevation representing their capacity to influence the global political system, with those that have similar political regimes and ideologies in proximity to each other. Or, as a more concrete example, we might be modeling the different drug cartels in Mexico, where their control over territory and resources would be their elevation within the landscape. Agents are then trying to reach higher elevations and higher payoffs within this landscape, but typically they do not have a global vision of the entire landscape. They do not know if they're on a global optimum or just a local one. We do not know, if we break up with our partner, whether we'll find a more suitable one in the future. We do not know, if we overthrow the current political regime, whether the next one will be any better. Thus at any given time agents face a choice between exploiting their current position and investing resources in exploring for better options. So this adaptive landscape represents the different types of environments that agents are operating within, and these different environments can span from the very simple to the very complex.
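The exploit-or-explore choice on such a landscape can be sketched in a few lines of Python. The landscape values, positions and probabilities below are invented purely for illustration: there is a local peak the agent can reach by small improvements, and a higher global peak it can only find by occasionally exploring.

```python
import random

# A rugged landscape over discrete positions: a local peak at index 2
# (payoff 5) and a global peak at index 6 (payoff 8). Values are invented.
landscape = [1, 3, 5, 3, 2, 4, 8, 4, 1]

def search(pos, explore_prob, steps=1000, seed=1):
    """Exploit by hill climbing; with some probability, explore a random position."""
    rng = random.Random(seed)
    for _ in range(steps):
        if rng.random() < explore_prob:
            # Exploration: a long jump to a random part of the landscape.
            candidate = rng.randrange(len(landscape))
        else:
            # Exploitation: a small step to a neighboring position.
            candidate = min(max(pos + rng.choice([-1, 1]), 0), len(landscape) - 1)
        if landscape[candidate] > landscape[pos]:  # move only if the payoff improves
            pos = candidate
    return pos

print(search(0, explore_prob=0.0))  # pure exploitation: stuck on the local peak (2)
print(search(0, explore_prob=0.2))  # with exploration: finds the global peak (6)
```

An agent with no global vision cannot tell from position 2 that a higher peak exists; only by paying the cost of exploration does it ever discover it.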
On the simple end of the spectrum, we're dealing with a context that is static in nature and has limited interdependencies. On the complex end of the spectrum, we're dealing with environments that are dynamic in nature, consisting of many interdependent, interacting parts. I'll now describe in more detail what this means by going over four qualitatively different types of adaptive landscapes, starting with the simple and moving to the complex. The simplest environments are static in nature and consist of the fewest interacting variables. As an example, we might think about an absolute monarchy or dictatorship where all social, economic and cultural institutions are controlled and held constant through the political hierarchy. Within such an environment, everything is in relation to one political institution; simply succeeding within that single organization is enough to achieve global success. Or, as another example, we might think about some homogeneous cultural system that defines clearly what is considered right and wrong and, from this, one single correct way to live one's life. These are examples of linear socio-cultural environments that would give the landscape a single dominant peak: one optimal solution that is well defined. Because of this, the agent needs only to follow some linear optimization algorithm. If we now increase the complexity by turning up the number of equally viable solutions, we will get a landscape that has many different peaks, and agents now have to invest a certain amount of time searching for the optimal position. As an example of this, we might think about a young person who has completed high school trying to choose which university to go to. They will be trying to optimize over a number of different variables: cost of tuition, location, facilities, college ranking, etc.
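This kind of static multi-variable optimization could be sketched as a weighted score over the options; the university names, scores and weights below are entirely hypothetical.

```python
# Hypothetical options scored on several interacting variables (0-1 scales).
universities = {
    "A": {"tuition": 0.9, "location": 0.4, "facilities": 0.6, "ranking": 0.5},
    "B": {"tuition": 0.3, "location": 0.9, "facilities": 0.8, "ranking": 0.9},
    "C": {"tuition": 0.7, "location": 0.7, "facilities": 0.7, "ranking": 0.6},
}
# How much the student cares about each variable (weights sum to 1).
weights = {"tuition": 0.4, "location": 0.2, "facilities": 0.2, "ranking": 0.2}

def score(name):
    """Overall fitness of an option: the weighted sum of its variables."""
    return sum(weights[k] * v for k, v in universities[name].items())

# Because the landscape is static, one up-front search settles the question.
ranked = sorted(universities, key=score, reverse=True)
print(ranked)  # several comparably good peaks, with one slightly highest
```

Because none of these variables change over time, the search only ever needs to be run once.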
And because of all these interacting variables, we will get a number of different viable solutions, giving the landscape a number of different peaks: a rugged landscape. But in this situation, the variables are not changing over time, so the student could invest quite a bit of time and resources in researching all of the factors involved to find whatever they consider to be the optimal choice. Although this environment may present complicated problems, in that there are a number of interacting variables, it is still a relatively simple environment. If we now allow the different interacting variables to adapt and change over time, we have a landscape where actors are acting and reacting to each other's behavior, constantly adapting. And it is out of this interdependence and adaptation that we get a landscape where the peaks and valleys move up and down over time. An example of this might be the current international political environment as we move into an increasingly multipolar world. With the rise of China and the other emerging economies, we are no longer in an international environment dominated by the homogeneous Western ideology of the Bretton Woods institutions, but increasingly have many more actors, both public and private, each with their own strategies and interests, constantly acting and reacting to each other. And this means the end target is constantly changing: any solution that is effective now may cease to be effective when others adapt to it, which will once again alter the payoffs on the landscape as it moves up and down over time. Lastly, this whole complex adaptive social system of agents acting and reacting is receiving some set of input values from external sources, whether this is the natural environment or the technological infrastructure of that society. A major change in these input values can cause the whole landscape to transform.
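A dancing landscape of this kind can be sketched by letting the payoff depend on time as well as on the strategy. The drifting peak below is an invented stand-in for the adaptations of other agents; nothing about it comes from a real model.

```python
def payoff(strategy, t):
    # Hypothetical time-varying landscape: the peak drifts as other
    # agents adapt, so the best strategy depends on when you ask.
    peak = t % 10
    return -abs(strategy - peak)

strategies = range(10)
best_now = max(strategies, key=lambda s: payoff(s, t=3))
best_later = max(strategies, key=lambda s: payoff(s, t=8))
print(best_now, best_later)  # the optimum has moved: 3, then 8
```

A one-off search is no longer enough here: whatever was optimal at one moment can be strictly worse once the landscape has shifted, so the agent must keep searching.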
In such circumstances, we are no longer talking about the agents acting and reacting to each other, but instead about the whole topology of the landscape transforming. This would be like a paradigm shift within science or culture, where the whole landscape gets changed. We can think about the paradigm shift in our culture as we moved into the modern era: everything got recontextualized within a scientific and materialistic framework. With this cultural paradigm shift, virtually every social and cultural institution within the entire landscape had to reinvent itself within this new context. Education, governance, work: everything got redefined, and those institutions that weren't redefined have slowly lost relevance. This is a long-term systemic change, where we're no longer talking about adaptation but instead about evolution. To take a more contemporary example, we could think about the rise of machine learning and mass automation. Machine learning in many different areas is making the basic processing of information a commodity. We as human beings no longer have a monopoly on basic knowledge and information processing activities, which is a paradigm shift: we have for millennia had an uncontested monopoly on these activities and, through it, control over our environment and all other creatures, but this is rapidly changing. Within this context of a systemic transformation, we are no longer merely competing with each other to maintain relevance; the whole context is changing, and the whole system of society has to evolve in order to maintain relevance within this new context. From this we should see that the different environments require different types of adaptation. Within the first, simple environment, agents only need a relatively simple linear form of adaptation, which is really an optimization algorithm.
The second environment is again algorithmic in nature, but it requires a greater investment of computation, as it is no longer a simple trade-off between two variables but a number of different interacting variables. In neither of these first two environments does the system really have to adapt. It simply has to make an initial investment of resources exploring the environment before converging on some optimal solution, and it can then remain there because the landscape is not changing. Here the process of exploration and adaptation is transient in nature: we only have to do it for a period of time before the system can settle into some basin of attraction. The aim of the game here is to find the best solution and then keep using it. You don't really have to adapt; you become the biggest fish in the pond so that no one can affect you, or the single superpower in a unipolar political environment with resources significant enough that you don't have to adapt to what others are doing. When the landscape is changing in response to everyone's actions, this actually requires adaptation: you have to keep continuously responding to what others are doing. This is like being in a multipolar international political environment. No one is big enough simply to ignore what others are doing; there are enough major players that any one of them changing its state will affect the entire landscape to a greater or lesser degree. When the entire landscape is subject to systemic change, the agents must be capable of changing their entire functionality in order to capture and transform whatever new resources may become available within that environment. This requires them to be able to go through the process of evolution, which is simply a more long-term and fundamental form of adaptation.
In order for agents to change their entire functionality and evolve, they must maintain a stock of redundant diversity within the system, so as to be able to foster, grow and develop long-term solutions in response to major long-term changes. Strategies that work well in one environment may well fail within another, and this is often a limitation to long-term sustained development. An agent may adopt a strategy that works well within a simple environment, and this success enables them to develop into a more complex environment, wherein they keep applying their previous strategy, which prevents them from adopting one more appropriate to the new context. Here we can identify that success often creates a positive feedback loop, such as we previously discussed with the phenomenon of irrational exuberance: some initial success makes the agent overconfident in their strategy and drives them forward into a more complex environment where their previous strategy may be inappropriate, but the positive feedback loop of irrational exuberance limits their capacity to recognize this and change accordingly, giving us unsustainable development. This might be cited as a form of self-organized criticality. As Albert Einstein is often quoted as saying, we can't solve problems by using the same kind of thinking we used when creating them. As an example of this, we might think about how our industrial technologies and solutions, developed when we had a much lower ecological impact, have taken us into a much more complex environment where we are for the first time significantly altering Earth's core regulatory systems, such as the climate and polar ice caps. Solving problems within this broader, more complex environment requires a form of collaboration that our industrial systems of organization, such as the nation state, which previously may have worked well, are not well designed for.
The point to take away here is that strategy is context dependent. Complexity is a fundamental parameter of a system: when we turn it up or down, we can expect strategy to change fundamentally, requiring a greater or lesser capacity for adaptation. In this module we've been looking through the lens of adaptive systems theory to see what insight it can offer us on macro-social phenomena. Firstly, we talked about adaptation as a process through which an agent tries to change its state in response to some change within its environment, doing this in order to optimize its position and payoff within that environment. We gave a quick outline of the adaptive landscape, which can be used as a formal model for representing whole complex social systems consisting of many interacting agents, both on the micro level of individuals and on the macro level of interacting organizations. We talked about the degree of complexity of an adaptive landscape as a key parameter: as we turn it up, we go from a linear environment with a single solution, to multiple solutions, to a dancing landscape, to an evolving topology representing an open system. Finally, we looked at how agent strategies need to change fundamentally in response to these changes in context, as they go from simple algorithms to adaptation and evolution.