In this module, we will be talking about the theory of far-from-equilibrium self-organization. We will first discuss the concepts of order and randomness in terms of symmetry and information theory. We will then talk about complexity as the product of an in-between or phase transition state. And finally, we will discuss the term edge of chaos and talk about how self-organization is thought to depend upon noise and random fluctuations in order to keep generating variety.

Far-from-equilibrium self-organization is a model that describes the process of self-organization as taking place at a critical phase transition between order and chaos, when the system is far from its equilibrium. But let's start by talking about organization. Organization is an ordered structure to the arrangement of elements within a system that enables them to function. As such, we can loosely equate it to the concept of order. Both order and organization are highly abstract concepts, neither of which is well defined within the language of mathematics and science, but probably the most powerful method we have for formalizing them is the theory of symmetry.

The theory of symmetry is an ancient area of interest within mathematics, originally coming from classical geometry, but within modern mathematics and physics it has been abstracted to the concept of invariance. In this way, symmetry describes how two things are the same under some transformation. So if we have two coins, one showing heads and the other tails, simply flipping one of the coins over will give it the same state as the other. Thus, we do not need two pieces of information to describe the states within this system: we can describe it in terms of just one state and a flipping transformation that, when performed, gives us the other state. Now, say that instead of two coins we had an apple and an orange. Well, there is no transformation we know of that can map an apple to an orange.
They are different things. There is no trivial symmetry or order between them, and thus we need at least two distinct pieces of information to describe this system. This second system requires more bits of information to describe its state, so we can say it has a higher statistical entropy. The point to take away here is that we can talk about, and quantify, order and randomness in terms of information theory. Ordered systems can be described in terms of transformations, which we encode in equations: ordered systems are governed by equations, whereas random systems are not. Because there is no correlation between the states of the elements in a random system, such systems are instead described by probability theory, the branch of mathematics that analyzes random phenomena.

Complex systems are by any definition non-linear. Complexity is always the product of an irreducible interaction or interplay between two or more things. If we can do away with this core dynamic and interplay, then we simply have a linear system. If the system is homogeneous and everything can be reduced to one level, it might be a complicated system, but it is certainly not a complex system. Thus, one of the main findings of complexity theory is that complexity is found at what is sometimes called the interesting in-between. If we take some parameter of a system, say its rate of change or its degree of diversity, and turn this parameter fully up, what we often get is randomness, continuous change without any pattern. If we turn it fully down, we get complete stasis or homogeneity, with very stable and simple patterns. With too much order, the system becomes governed by a simple set of symmetries; with too much disorder, the system becomes random and subject to merely statistical regularities. It is only in between that we get complexity.
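The claim that order and randomness can be quantified with information theory can be made a little more concrete with a toy sketch. The following Python example is purely illustrative (the sequences and the function are our own, not from the lecture): it computes the Shannon entropy, in bits per symbol, of a fully ordered sequence and of an evenly mixed two-symbol sequence. The ordered sequence, like the two coins related by a flip, needs no extra information per symbol; the mixed one needs a full bit for each.

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Shannon entropy in bits per symbol: H = -sum(p * log2(p))."""
    counts = Counter(sequence)
    n = len(sequence)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

# A highly ordered sequence: one repeated symbol, fully predictable.
ordered = "AAAAAAAAAAAAAAAA"

# An evenly mixed sequence of two symbols: one bit needed per symbol.
mixed = "ABABBABAABBABABA"

print(shannon_entropy(ordered))  # 0.0 bits per symbol
print(shannon_entropy(mixed))    # 1.0 bit per symbol
```

In this measure, the apple-and-orange system is like the mixed sequence: with no symmetry relating its elements, each one costs additional information to describe, which is what we mean by higher statistical entropy.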
On either side of this in-between, there is a single dominant regime or attractor that will come to govern the system's behavior. It is only when a system is far from its equilibrium, away from one of these stable attractor regimes, that we get a phase transition area representing the interplay between the two regimes. In this space, the system is much more sensitive to small fluctuations that can take it into either basin of attraction.

This phase transition area is also called the edge of chaos. The phrase edge of chaos was first used to describe a transition phenomenon discovered by computer scientist Christopher Langton. Langton found a small region in the space of cellular automaton rules conducive to producing cellular automata capable of universal computation. At around the same time, physicist James Crutchfield and others used the phrase onset of chaos to describe more or less the same concept. In the sciences in general, the phrase has come to serve as a metaphor for the idea that some physical, biological, and social systems operate in a region between order and complete randomness or chaos, where complexity is maximal.

The edge of chaos concept remains mainly theoretical and somewhat controversial, but it is often posited that self-organization and evolution can only really happen in this phase transition space. There may be a number of different interpretations for why this is so, but one way of understanding it is that self-organization requires entropy and evolution requires variety. Contrast this with external intervention: we can take a well-ordered system and simply reconfigure it by transferring energy to it from some external source, going from one ordered regime to another with no need for entropy to enable the process; we simply need some input of energy. But as we know, self-organization does not happen in this fashion.
It is internally generated at the local level, and this process requires the presence of entropy and randomness, so that elements are available for reconfiguration into a new regime through feedback loops that originate as weak signals or fluctuations.

A number of researchers have posited theories around this process of self-organization far from equilibrium. The principle of order from noise was formulated by cybernetician Heinz von Foerster in 1960. It notes that self-organization is facilitated by random perturbations, noise, that let the system explore a variety of states in its state space. A similar principle was presented by Ilya Prigogine as order through fluctuations or order out of chaos. Researcher Per Bak also looked at this phenomenon in terms of what he called self-organized criticality, the mechanism by which complex systems tend to maintain themselves on this critical edge. Many of these theories speak to the need for both entropy and variety if the system is to keep adapting and evolving over a prolonged period of time.

In summary, we have been talking about the process of self-organization taking place far from equilibrium, at an interplay between order and chaos: a theory that may have a lot of conceptual substance to it, but remains a long way from a proper formulation. We talked about how order and randomness can be described in terms of symmetry and information theory. We looked at complexity as a phenomenon that arises at the interplay between different entities or regimes. Finally, we discussed the term edge of chaos and the role it is thought to play in the process of self-organization.
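As a closing illustration, the order-from-noise idea can be sketched with a toy simulation. This is our own minimal sketch, not anything from von Foerster's work: the landscape, function, and parameters are all hypothetical. A purely deterministic local rule gets trapped in the first stable configuration it finds, while random perturbations let the system explore its state space and settle into a deeper, more ordered configuration.

```python
import random

# A hypothetical 1-D "landscape" of states: lower values are more stable.
# There is a shallow local minimum at index 2 and the global minimum at index 7.
landscape = [5, 3, 2, 4, 6, 3, 1, 0, 2, 5]

def descend(start, noise=0.0, steps=2000, seed=1):
    """Local descent with optional random perturbations (the 'noise').
    Returns the most stable (lowest) state index visited."""
    rng = random.Random(seed)
    pos = start
    best = pos
    for _ in range(steps):
        if noise > 0 and rng.random() < noise:
            # Random fluctuation: jump to a random neighbouring state.
            pos = max(0, min(len(landscape) - 1, pos + rng.choice([-1, 1])))
        else:
            # Deterministic local rule: move to a lower neighbour, if any.
            neighbours = [p for p in (pos - 1, pos + 1) if 0 <= p < len(landscape)]
            lower = min(neighbours, key=lambda p: landscape[p])
            if landscape[lower] < landscape[pos]:
                pos = lower
        if landscape[pos] < landscape[best]:
            best = pos
    return best

print(descend(0, noise=0.0))  # deterministic descent halts at index 2, the local minimum
print(descend(0, noise=0.3))  # fluctuations let the walk cross the barrier toward index 7
```

Without noise the rule settles into the nearest ordered regime and stays there; with noise, the same rule can escape that basin of attraction and find a more stable one, which is the intuition behind order from noise.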