Hello, and welcome back to Beyond Networks: The Evolution of Living Systems. We're still on our journey through the philosophical motivations for this course. In this module, we've looked at how we should refocus from what we know to the unknown, from the island of knowledge to the ocean of ignorance surrounding it. We talked about how questions are more important than the facts we already have. Then we went into the basics and a brief history of process philosophy. Today we will use some of these concepts to look at how we get from the island of knowledge into the ocean of the unknown, how the process of doing science itself operates, and how that interacts with what I told you about scientific perspectives in the previous module.

In particular, I want to remind you of this wonderful quote by Nicholas Rescher: the processual nature of knowledge reflects the fact that our thought about the real things of this world presses outward beyond the limits of any restrictive boundaries. It's a never-ending quest. I gave you this picture, this metaphor from Stuart Firestein's book Ignorance: How It Drives Science, of ripples in a pond that keep on expanding. But the actual expansion of science, of course, is not nearly as smooth and regular as that. We've also talked about this already, when we discussed the second most famous philosopher of science you may know after Karl Popper, Thomas Kuhn, who described how science normally proceeds by puzzle solving within a common paradigm, but every once in a while you get a big paradigm shift, after which nothing is the same. People ask new questions and really see the world with new eyes. That makes it very difficult to talk across such revolutions, and this is Kuhn's concept of incommensurability.
So we actually transform our view of the world as we make progress in science. This is a very uneven dynamic, which reminds us, if we're biologists, of the concept of punctuated equilibrium in evolution, also called "punk eek," or "evolution by jerks," especially by people who don't like it very much. The idea of punctuated equilibrium is that if you look at the fossil record, you see long, long periods in which nothing much happens, punctuated by tumultuous episodes of very rapid change: radiations of species, big extinctions, and so on. People have been puzzling over the reasons for these fast-slow dynamics in evolution; we'll come back to that. What I want you to notice right now is that normal science resembles evolutionary stasis, while scientific revolutions resemble fast extinction and radiation. So we can ask whether it is worthwhile to look at the development of science through evolutionary principles. And of course I'm not the first to suggest that. If you think about it, Karl Popper's own work, the idea that we more or less randomly propose hypotheses that then get tested and sometimes refuted, already resembles an evolutionary process. This is the foundation of the work of the American social scientist Donald Campbell, who took Popper's work and turned it into something he called evolutionary epistemology. His article of that title, "Evolutionary Epistemology," appeared in a Festschrift for Karl Popper called The Philosophy of Karl Popper, published in 1974.
In this paper, Campbell reviews work he had done since the 1950s, in which he argues that by randomly proposing conjectures, hypotheses, we create an intellectual version of blind variation. Remember, Popper doesn't tell us where these ideas come from, so we consider them random, and also blind with respect to how well they depict the truth about reality. We talked about that in the module about perspectives. On the other hand, when we test those hypotheses, refutation of a hypothesis is almost like the selective elimination of a species in evolution. So there are very striking parallels. Campbell already noted that this applies to all human knowledge, including intuitive knowledge, and that our own cognitive processes adapt in this way, not just in science but generally. Now, I'm always wary of metaphors, so we have to be careful in transferring concepts from biological evolution to the progress of science, but there are some really interesting parallels to note here. When I say we should be careful: a few other people worked on this topic and took it to some extremes. A famous example is Rupert Riedl, founder and former chair of the theoretical biology department in Vienna and founder of the Konrad Lorenz Institute for Evolution and Cognition Research. In 1984 he published a book called Biology of Knowledge: The Evolutionary Basis of Reason, inspired by Konrad Lorenz's work on the evolution of our own cognitive abilities; he developed a theory very similar to Campbell's ideas at the time. The idea was then taken to an extreme by David Hull. I have to show this book here, because it's called Science as a Process.
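Campbell's blind-variation-and-selective-retention scheme is easy to sketch as a toy simulation. What follows is my own illustrative caricature, not Campbell's model: "conjectures" are just numbers, the "truth" is a hidden target, variation is blind noise with no foresight, and refutation discards the worst-fitting conjectures.

```python
import random

def bvsr(n_generations=200, pool_size=20, seed=0):
    """Toy blind-variation-selective-retention: candidate 'conjectures'
    are numbers, 'fitness' is closeness to an unknown target, and
    refutation eliminates the worst guesses each generation."""
    rng = random.Random(seed)
    target = rng.uniform(0, 100)                      # the unknown 'truth'
    pool = [rng.uniform(0, 100) for _ in range(pool_size)]
    for _ in range(n_generations):
        # Blind variation: mutate surviving conjectures at random,
        # with no knowledge of which direction is 'true'.
        pool.extend(c + rng.gauss(0, 5) for c in list(pool))
        # Selective elimination: discard the worst-fitting conjectures.
        pool.sort(key=lambda c: abs(c - target))
        pool = pool[:pool_size]
    return target, pool[0]

target, best = bvsr()
print(abs(target - best) < 1.0)  # blind variation plus selection homes in
```

Note that no single step "sees" the target; only the elimination step does any work, which is exactly Campbell's point.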
Of course, I love this title. I'm not sure I love the book as much, because it takes an extremely selectionist approach to how our scientific theories evolve. You will see during this course that I'm somewhat critical of extreme selectionist approaches in biology, so I'm also a little suspicious of his approach here, but it's a great book title for sure. We're not going to go further into this; I encourage you to read it and form your own opinion.

But if we get back to Campbell, we also get back to Wimsatt, and we can draw a bridge from Popper via Campbell to Wimsatt, because Wimsatt bases a lot of his work on robustness on the kind of work Campbell did. Campbell has a very central concept in his theories called multiple determination, which is where the term "robustness" in Wimsatt's theory comes from. Remember, robustness is the criterion by which we judge the trustworthiness of our knowledge. Things are robust if they are accessible, detectable, measurable, derivable, definable, producible, and so on, in a variety of, and this is important, independent ways. The more independent confirmation we have, the more we trust the knowledge we gain. By doing all these activities, we refine and adapt our knowledge to the reality we are confronted with; we are trying to develop more robust knowledge. And here is something interesting, this one not from Wimsatt's book Re-Engineering Philosophy for Limited Beings: our cognitive capabilities and our institutions, Wimsatt says, are no less re-engineered, an interesting word, than our biology, than our bodies that evolve by selection and other processes, or than our technology, which undergoes a kind of technological evolution as well; just look at how cars or planes have changed over time. And he says all these things have something in common.
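The logic of multiple determination can be made quantitative with a small toy calculation; this is my own illustration, not Wimsatt's or Campbell's, and it assumes the different ways of accessing a result are genuinely independent. If each independent method can spuriously confirm a claim with some probability, the chance that all of them mislead us at once is the product of those probabilities, which shrinks fast.

```python
def false_confirmation_prob(error_rates):
    """Probability that several INDEPENDENT detection methods all
    spuriously agree: the product of their individual error rates."""
    p = 1.0
    for e in error_rates:
        p *= e
    return p

# Hypothetical methods, each wrong 10% of the time: one method leaves a
# 1-in-10 chance of error, three independent methods roughly 1-in-1000.
print(false_confirmation_prob([0.1]))            # → 0.1
print(false_confirmation_prob([0.1, 0.1, 0.1]))
```

The multiplication only holds under independence, which is exactly why Wimsatt stresses that the ways of access must be independent of one another.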
So our bodies, our technology, and even our theories about the world have something in common, and that is their structure, which is basically a structure of layered kludges and exaptations. What does he mean by this? I think this is fantastic, so we'll go into these two words a little bit. The first term is the kludge. It was widely used, especially in early computer engineering, for the sort of programming you did very quickly, the kind that garnered comments like "I could tell you how this works, but I would have to kill you afterwards." A kludge is a workaround; it doesn't solve the problem in a straightforward way. It's a quick and dirty solution: clumsy, inelegant, inefficient, difficult to extend, and hard to maintain. You've seen such technology. We have examples of this in our evolved bodies, like the recurrent laryngeal nerve, which wraps around the aorta. In giraffes, it has to go from the head all the way down into the body of the animal and back up to innervate the muscles of the larynx. So this is an evolutionary kludge.

And our scientific theories, we've said this before, and this is the central insight, are just like that. We are tinkering. We are using what we have, and we are building heuristics, kludges, quick and dirty solutions that work, not theories that have the value of absolute truth. And exaptations: this is an extremely important concept, developed by Elisabeth Vrba and Stephen Jay Gould in a fabulous 1982 paper. In fact, I think it may be the single most important thing that Stephen Jay Gould contributed to the world, and to evolutionary theory in particular. An exaptation is a trait, or actually the term is also used for the process, whereby a trait becomes used for a function other than the one it was originally selected for. Think about bird wings.
Probably the first wings were flappy outgrowths used for thermoregulation, a membrane between the forelimbs and the body of the animal. Then gradually these animals managed to start gliding, and selection switched from thermoregulation to the function of flying. But the original selective pressure on this trait was probably not for flight. That's an exaptation. What exaptation allows you to do is work with the stuff that's lying around. Evolution is a tinkerer, as François Jacob famously said: it takes whatever it has lying around and uses it for whatever it needs. And no less so if we think about the evolution of our own knowledge; this is how it's structured. So we shouldn't expect a clean, unified view of the world; we should expect a mess of local, piecewise approximations, as Wimsatt calls them, the different perspectives that are lying around and that sometimes cooperate and sometimes contradict each other. Through this interplay, science makes progress. And we can learn a few more things from these parallels. To summarize the argument, and you may have to watch this again, it's not intuitive, but it's extremely deep, an amazingly powerful insight: Wimsatt is saying that our scientific knowledge behaves just like any other complex adaptive system. We can draw parallels from the evolution of other complex adaptive systems, such as organisms or the technological systems we develop as human beings, to the evolution and progress of our own knowledge. This is amazing.
In particular, and I can't go too deeply into this because we're limited on time, there are two properties of complex adaptive systems that are probably universal across all domains, and they will come back many times during this course. The first was suggested by the pioneering complexity scientist and economist Herbert Simon in 1962, in a wonderful little paper that you should read. Everything in the world is connected; remember Wimsatt's biopsychological thicket. Everything is causally connected, everything influences everything. But some things are more connected than others, like Orwell's Animal Farm: some animals are more equal than others. In Simon's picture, the world is a large matrix of interactions in which most of the entries are very close to zero, and in which, by ordering those entries according to their orders of magnitude, a distinct hierarchic structure can be discerned. So we may be living in the middle of this causal thicket, this mess, but there's hope, because the thicket is somehow modular, and it is possible to distinguish the important influences in a complex system from the unimportant ones. What this means for science is that some connections we make are more robust than others, and those, of course, are the ones we retain. This is an argument against astrology, for example. Someone who believes in astrology can say your relationships are influenced by the planets and the stars, and that is hard to deny: there is some gravitational influence, and you cannot exclude the possibility that it changes something. But you can say: look, the influence of those factors is probably a lot smaller than the influence of your partner on the relationship. This is what is meant by distinguishing the large entries from the small entries in the matrix of interactions that makes up reality.
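Simon's picture of an interaction matrix with mostly near-zero entries can be sketched directly in code. This is a hypothetical toy example of mine, not from Simon's paper: keep only the "large entries" of the matrix, and the modules fall out as connected components of what remains.

```python
import random

def modules_from_interactions(strength, threshold):
    """Keep only the 'large entries' of an interaction matrix and
    return the connected components that remain -- the modules."""
    n = len(strength)
    strong = {i: {j for j in range(n)
                  if j != i and strength[i][j] >= threshold}
              for i in range(n)}
    seen, modules = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:                      # depth-first search
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(strong[node] - comp)
        seen |= comp
        modules.append(sorted(comp))
    return modules

# Hypothetical 6-element system: elements 0-2 and 3-5 interact strongly
# within their group, only weakly across groups (near-decomposability).
rng = random.Random(1)
M = [[0.0] * 6 for _ in range(6)]
for i in range(6):
    for j in range(6):
        if i != j:
            same_block = (i < 3) == (j < 3)
            M[i][j] = rng.uniform(0.8, 1.0) if same_block else rng.uniform(0.0, 0.05)

print(modules_from_interactions(M, threshold=0.5))  # → [[0, 1, 2], [3, 4, 5]]
```

Thresholding at 0.5 separates the orders of magnitude, and the hierarchy Simon describes becomes visible as two near-independent modules.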
We'll encounter this again when we look at the structure of living systems: they are modular for exactly the same reason, and we'll examine why all complex adaptive systems seem to have this sort of modularity in a future module of the course. The course itself is modular, as you've noticed. The other common feature of complex adaptive systems, including our theories, is called generative entrenchment, a term that Wimsatt himself developed, and a very interesting one. He says that as our scientific knowledge evolves, some robust heuristics become the basis of others. Remember, because of this decomposability we retain some heuristics and not others; those that are robust are more likely to be retained, though it's not guaranteed. Some of them stick around for longer, and as they do, other heuristics come to depend on them; people start building on the robust heuristics that are already around. We hope those really are robust, because if they're not and they come crumbling down, all the heuristics that depend on them fail too. As more and more heuristics accumulate that depend on one particular heuristic, it becomes more and more difficult to change it, simply because too much depends on it. This is what Wimsatt calls generative entrenchment: such foundational heuristics become generatively entrenched. Generative entrenchment is a feature not only of scientific insights, of course, but of certain components of any adaptively evolving complex system. So it applies not only to our foundational theories about the world, which become harder and harder to change as we build on them, but also to things like specific regulatory genes in animals. Think of the Hox genes, if you're a biologist: they are highly conserved across all the animal phyla. They are generatively entrenched; if you change them too much, too much depends on them and everything fails.
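Generative entrenchment can be illustrated with a toy dependency graph; the heuristic names below are made up for illustration, not taken from Wimsatt. The entrenchment of a heuristic is simply how many others transitively build on it, and thus how many would fail if it were abandoned.

```python
def entrenchment(dependencies):
    """Count, for each heuristic, how many others (transitively) build
    on it. The more dependents, the more generatively entrenched it is.
    dependencies[h] = set of heuristics that h directly builds on."""
    dependents = {h: set() for h in dependencies}
    for h, bases in dependencies.items():
        for b in bases:
            dependents[b].add(h)

    def all_downstream(h, seen=None):
        seen = set() if seen is None else seen
        for d in dependents[h]:
            if d not in seen:
                seen.add(d)
                all_downstream(d, seen)
        return seen

    return {h: len(all_downstream(h)) for h in dependencies}

# Hypothetical body of knowledge: 'foundation' is built on by almost
# everything, so abandoning it would topple four dependent heuristics;
# 'fringe' supports nothing and can be dropped cheaply.
deps = {
    "foundation": set(),
    "method_a": {"foundation"},
    "method_b": {"foundation"},
    "model_1": {"method_a"},
    "model_2": {"method_a", "method_b"},
    "fringe": set(),
}
print(entrenchment(deps))
```

The asymmetry is the whole point: two heuristics can be equally old, but only the one that others have built on becomes hard to replace.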
So they become indispensable. And if a component that is not robust becomes such an entrenched factor, that's a very bad thing, because eventually it comes crumbling down, and that's a real problem. This is one of the main bases of fragility in complex systems, to come back to the topic of how evolution learns by making mistakes. Okay, so we're pulling together very many things very quickly here. All I wanted to say is that we can think of science, of our scientific theories, as a sort of evolutionary process involving selective pressures on theories and heuristics, which are maintained, retained, or refuted. There are certain parallels to organic, biological evolution that are interesting to look at. Basically, there is a type of evolutionary dynamic at work here, and what Wimsatt means by re-engineering is that these systems of heuristics, these scientific theories of ours, can adapt to their context, their environment, just as an organism adapts to its context; and therefore we can make progress in science just as evolution can lead to adaptive outcomes. But let's move back to Thomas Kuhn, because we haven't quite covered everything he had to say. Apart from his very famous book on scientific revolutions, he wrote another famous piece called "The Essential Tension," and it also has something to do with the progress of science. It is about a tension inherent in the dynamics of doing research yourself; it influences your own research strategy. As a scientist, he says, you're in constant tension because of this dynamic between normal science and revolutions. Normal science is common and productive; revolutions are rare and rarely succeed. So you are stuck in a tension between the productive tradition you were educated in and risky innovation, the risks you want to take to get out of that tradition. And we need both of these, interacting with each other.
The parallel is this: the productive tradition corresponds, of course, to puzzle solving in normal science, and risky innovation corresponds to revolutionary science. New paradigms ask us to see the world in a completely different way. If you translate this into modern vocabulary, there's a beautiful book I recommend, very easy reading, called Algorithms to Live By, by Brian Christian and Tom Griffiths. One chapter in that book is about a computer science problem known as explore/exploit, and it corresponds exactly to Kuhn's essential tension in science. Computer scientists try to figure out your best strategy when, for example, you're gambling: hardly anybody does this in Vegas anymore, but people used to pull these one-armed bandits, and computer scientists developed strategies for when it is best to switch between different machines with different odds. So it's about risk-taking. You can derive some mathematical insights, I should no longer use the word "truths" here, about when it is good to switch strategies and when it is good to stick with what you know. If you have a lot of certainty, if there is a high penalty on taking risks, if you have a lot of pressure and no time, then you should exploit; and you can see immediately what I am alluding to. In modern academia you have exactly this situation: everybody is just exploiting what they already have. But as we've seen before, and as Thomas Kuhn argued, you need both the risk-taking and the puzzle solving of normal science to get ahead, and this is a problem. On the other hand, exploiting is not unreasonable, especially if you have found a gold mine somewhere: exploit it. As you get older in life, you become more conservative because you're switching from exploring to exploiting. That's fine.
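The explore/exploit trade-off has a standard toy formalization, the multi-armed bandit. Here is a minimal epsilon-greedy sketch of my own (one simple strategy among many, not the specific algorithms discussed in the book): with probability epsilon you explore a random machine, otherwise you exploit the machine with the best observed payout so far.

```python
import random

def epsilon_greedy(arm_probs, epsilon, pulls, seed=0):
    """Epsilon-greedy play on one-armed bandits with hidden payout
    probabilities: explore a random arm with probability epsilon,
    otherwise exploit the arm with the best observed win rate."""
    rng = random.Random(seed)
    counts = [0] * len(arm_probs)   # pulls per arm
    wins = [0] * len(arm_probs)     # payouts per arm
    total = 0
    for _ in range(pulls):
        if rng.random() < epsilon or not any(counts):
            arm = rng.randrange(len(arm_probs))              # explore
        else:
            rates = [w / c if c else 0.0 for w, c in zip(wins, counts)]
            arm = rates.index(max(rates))                    # exploit
        payout = rng.random() < arm_probs[arm]
        counts[arm] += 1
        wins[arm] += payout
        total += payout
    return total

# Two machines with hidden payout rates 0.3 and 0.6. Pure exploitation
# (epsilon=0) can lock onto the worse machine forever; a little
# exploration usually finds the better one and then sticks with it.
print(epsilon_greedy([0.3, 0.6], epsilon=0.1, pulls=5000))
```

With epsilon=0 the strategy is maximally conservative, and with epsilon=1 it never settles down; the interesting regime, in science as in gambling, is in between.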
You can say that as a scientific discipline matures, it can do that as well for a while, but if it gets stuck in this dynamic, that's a real problem. You always think: okay, young people don't vote conservative, so conservatives will die out in the future. But that's not true. Why? Because young people turn into conservative old people all the time. And this is a rational strategy, as shown by this work in computer science. It has a lot to do not only with how we do science but also, of course, with how organisms choose strategies in evolution; again, there's a big parallel here. You can apply this elsewhere too: there are entire branches of industry nowadays that are dying, and you can recognize a dying industry by the amount of exploitation it is doing. I need not point out that when I was young, we did not have nearly as many movie sequels as we have today. The movie industry still has increasing revenues, but it is switching from explore to exploit. And because of this pervasive pressure that we have everywhere in society, and especially in academia, everything is switching, and so we've stopped exploring in science. This is something I want to address here. Maybe what I'm trying to do is a bit crazy, but I'm trying to break out of the exploitation dynamic we're stuck in right now. I think we need to relearn to explore. So what we're doing here is taking unusual perspectives in order to explore, because that is intrinsically important for a healthy dynamic of scientific inquiry. We've now brought together the two topics: applying process thinking to progress in science itself, and connecting it to what I said about perspectivism in the previous module. This was a rather superficial tour, but I'll provide you with the reading materials you need to go further into this topic if you want.
What we'll do now is slowly segue back from this excursion into the abstract philosophical principles that motivate the course toward applications to our scientific view of reality. Free exploration has something directly to do with the main criticism this course makes: if you consider the universe a mechanism, a machine, then everything is determined, including the future, and so you don't need to explore. If you have this old-fashioned naive realist view of science, you can say: we have a working scientific method, we just apply it until we know everything about the universe. We just need to measure the hell out of it and we'll understand everything. There is nothing new under the sun, to quote my favorite book in the Bible, Ecclesiastes. You should read it; it's very interesting. So the idea is that if we could only measure the current state of the universe with unlimited precision, we would know everything about its past and its future. This is clearly absurd, and if we adopt the process view, we get out of this problem; we move away from the naive realist view. To go back to Rescher and a wonderful little book he wrote called Unknowability: he makes a very simple argument there, one he also makes, in less detail, in his book on process metaphysics. If we could predict discoveries in detail in advance, then we could make them in advance. Think about it: it is logically impossible to predict future breakthroughs like that, and the same applies, of course, to technological innovations, insights, whatever you may come up with. So the transformative learning experience we want to go on in this course has to be completely open. It is incompatible with the closed Laplacean view of a universe that is understandable in its entirety, determined, and fully knowable.
This is the myth of Laplacean omniscience, as Rescher called it. Rescher goes on to say, and this is a beautiful quote: the future of science is an enigma; innovation is the very name of the game; not only do the theses and themes of science change, but so do the very questions. So if we want to make progress in science, we not only need more data, more evidence; we need to ask different questions, questions that are relevant in our current context. The whole meaning crisis I was talking about at the very beginning of the course is caused by our current perspectives no longer being adequate for the problems we are facing today. I think it's extremely important, especially if we consider the perspectival nature of scientific knowledge and its evolutionary dynamic, that we start opening our minds, playing around with new ideas, and taking different perspectives again, even though we are under a lot of pressure to produce within the current paradigm. In this spirit, I'll present tomorrow a very short lecture that builds the bridge between process thinking and biology, and then we'll start moving into the world of biological systems and biological models. I hope you stay tuned and join me again tomorrow when I talk about process biology. Thanks for listening. Bye now.