Welcome to the TSVP talk this afternoon. Today's speaker is Dr. Mog Stapleton. I'm very, very happy to introduce her, because we are both members of the TSVP program. When I came here, around August, I was very lonely. Her number was there, and she helped me all the time, right? I mean, until now, of course, yes. She got her PhD in cognitive science working with Andy Clark, back in 2012 I think, and she has also been working with Evan Thompson. And this tells me something: Evan Thompson and Andy Clark are very much philosophers of AI and artificial life, and that is very characteristic of the artificial life research area, which I think you are also in. Artificial life is different from biology because we have philosophers. So the bridging between philosophers and computer scientists or physicists has been very, very important for shaping up artificial life studies. And the philosophers, Evan Thompson, Francisco Varela, Humberto Maturana, Andy Clark, she knows everybody, right? So I'm very much looking forward to what she's going to say today.

Cool. Thank you, Takashi. And thank you so much for stepping in to chair me, because Tom unfortunately has COVID. So yeah, thank you. Okay, so the aim of the talk is to give a brief overview of traditional embodied cognitive science, or what I call traditional embodied cognitive science; the people involved in it don't consider it traditional. Then I'm going to point towards some aspects of the body that have traditionally been factored out but may in fact be highly relevant, and then work towards considering how reframing our assumptions might impact how we do cognitive science. So, first of all, traditional embodied cognitive science. But before I talk about that: what is cognitive science?
So actually most of you in the room are cognitive scientists, and it probably doesn't need saying, but for those of you who aren't: cognitive science is kind of unique among the sciences in that it isn't a single discipline. And it doesn't really have a single research project or a single research aim that all people who identify as cognitive scientists agree on. Instead it's made up of several disparate disciplines. So the first picture there is from the 1978 proposal of the research program for cognitive science. This isn't really when cognitive science started. It really started arising in the 1950s, with researchers from these disciplines, especially from neuroscience, artificial intelligence and psychology. But in the 1970s it was formalized: this is a research program, we're going to label it cognitive science, and they created a journal, Cognitive Science, and the Cognitive Science Society. So in the 70s linguistics started taking a very important role, and philosophy and anthropology have sort of always been in and out, very important, both with researchers from those disciplines asking the questions specific to those disciplines within cognitive science, and with various researchers in the other cognitive sciences, so psychology, artificial intelligence, neuroscience, having an interest in philosophy and in engaging with philosophers in order to try to answer the questions that they're interested in. Things have moved on a little in the present age. Recently psychology has been dominating cognitive science, and there's a little bit of backlash about that just now. But the only main difference is that education has started to be included as one of the cognitive sciences in addition to the others.
So cognitive science isn't just a concatenation of all of those disciplines; rather there's a sort of Venn diagram where people from all of those disciplines can meet. However, that doesn't mean that if you're doing cognitive science you need to be literate or an expert in all of these disciplines. Gentner has a really nice way of phrasing how she takes cognitive science to be. She gives this metaphor of a multilingual set of people who are gathered to solve a common problem. It's unlikely that the six languages will evolve into a new combined language; rather, each person does their best to become bi- or trilingual so that they can learn from others. And she says the most productive interactions are likely to be dyadic or at most triadic, and which ones take off can't be predicted in advance; every now and then some group will hit on an area in which enormous progress can be made, possibly leading to a new subfield. And apart from these big breakthroughs, little gems of insight will come floating along at more regular intervals, including disagreements. So discovering that a neighbouring field has made assumptions that contradict one's own can be quite enlightening. I take it that this is quite different from the other sciences, and it makes cognitive science quite unique. So what brings these disciplines together? They have a shared concern in understanding cognition, mind, intelligence, knowledge and learning. Sometimes these terms are used interchangeably and sometimes people want to tease them apart and be more precise. So the key research questions of cognitive science are: how do natural cognitive systems work? How can we model the relevant aspects of cognitive systems? And how can we create artificial systems that implement cognitive functions? And for theoretical or philosophical cognitive scientists like me, I think you can unpack these into two related questions.
So on the one hand, what is special about natural cognitive systems? And on the other, if we were to build an artificial system that's genuinely cognitive, what would we need to implement? So traditional cognitive science tends to make the assumption that cognition is computation over representations, and has the attitude that the mind is to the brain as computer software is to computer hardware. And this is pretty much the same as claiming that cognitive processes are autonomous from their neural implementation, so they could in principle be implemented in a different system. And we see this in current research: the models of cognitive functioning in artificial cognitive systems research are neurally inspired. Neural networks, for example, the name is a bit of a giveaway, are neurally inspired, but they're heavily abstracted from the morphological and biological body, and divorced from the world in which cognitive systems have evolved, in which they are immersed, and in which their cognition matters to the systems themselves. So Andy Clark has called this traditional view of cognition the brainbound view. And he phrased it like this: the non-neural body is just the sensor and effector system of the brain, and the rest of the world is just the arena in which adaptive problems get posed and in which the brain-body system must sense and act. So on this brainbound view, we do of course need a body, but bodies are just for keeping the brain alive and for providing sensory input and motor output. We're essentially just brains in bodily vats. So if this brainbound model is right, then cognition is done by the brain and is only instrumentally dependent on the body and the world. And this is the assumption, I think, behind the talk of brain transplants into other bodies or into robotic bodies that you see in the media, and also behind the idea of uploading consciousness to the internet.
Because if you can separate cognition from its implementation, then as long as you find some other means of implementing it, in principle you could have your consciousness, your cognition. So what is non-trivial embodiment? The term embodiment crops up in lots of places and means a lot of different things in different areas. It covers an especially diverse range of theses in analytic philosophy, phenomenology and continental philosophy, gender studies, sociology, linguistics, psychology, neuroscience, cognitive science, and artificial intelligence and robotics. But I think the uniting factor of what I'm calling non-trivial embodiment is that the body matters to cognition, to perception or to experience. And we can cash this out in three ways. How does the body matter? The body might matter to the way that we think; it might matter in thinking, so the body is part of what we use to think and perceive; or the body might matter for thinking, so the body is the structure through which we think and perceive. And I'm going to skip the first one and go straight to the second: the body matters in thinking. So Andy Clark has proposed what he calls the 007 principle, adapting James Bond. He says: in general, evolved creatures will neither store nor process information in costly ways when they can use the structure of the environment and their operations upon it as a convenient stand-in for the information-processing operations concerned. That is, and this is the 007 principle, know only as much as you need to know to get the job done. So what does this mean? It means that we have lazy brains. We do as little information crunching as possible, and we offload everything else onto the body and the world. So this 007 principle is a response to the kind of cognition-first thinking that was responsible for developing robots like ASIMO.
So this is a really old robot now, from 1996, but it's worth watching nevertheless. I think this is an example of a cognition-first design. The robot is doing an incredibly complex task, and it's doing a pretty good job of it: it's sensing and it's moving out of the way. But it's doing this by sensing, doing some complex calculations, and then sending information to its limbs to do what it needs to do. And the body that's not sensing, the body that's just moving around, is just receiving those instructions and moving, and the dynamics of that particular body have to be cancelled out. So compared to humans, there's a huge energy expenditure, and the really complex joint-movement planning and actuating is a really impressive thing to design. Compare this to these passive dynamic walkers. Here we have none of that. Nothing has been programmed into it; it's literally just its mechanisms. And it's got, I think, quite an astounding gait. When you watch these videos for the first time, I think you're really quite shocked at how natural it looks, even though it's just a set of metal put together. So here's another one. This one is not on a slope, so unlike the last one it isn't using gravity. They don't have any actuators; that's the point of them being passive dynamic walkers. So that one, I think, needs a little tap to get going, she gives them a bit of a push, but then even on the flat they can keep going with a remarkably natural gait. I'll just show you one more, passive dynamic running, which I think is especially impressive. So this is the kind of movement where, if we were taking the cognition-first approach, we would think: okay, we need to somehow program in the way that the joints are going to move, how much the joints need to move at a particular speed, and how to counteract the other one. Okay. So these walkers use passive dynamics.
They don't need motors or controllers, and they do this by making use of their morphology: the shape of their body, the placement of their joints, and the weight of their components. This is what Rolf Pfeifer has described as morphological computation, and it's one of the things he's really famous for. This is his book, How the Body Shapes the Way We Think, where he describes the approach to robotics in his group in Zurich. The idea of morphological computation is that the details of embodiment take over some of the work that would otherwise have to be done by the brain or the control system. So they use these passive dynamics and the morphology when possible to reduce computational effort. It's not that you have to reduce everything to morphological computation, but as much as possible you use it. There's a really good example of this; if you want to see the full explanation, Rolf Pfeifer gives it in his TEDx talk. Here, Mike Rinderknecht has evolved these little robots so that all of them are just using two simple controllers, and yet they move in 43 different ways. Some of them, and I've skipped right to the end, move in particularly interesting ways, ways that make us feel that they are somehow animal-like; sometimes you can even recognize a kind of animal in them. This little guy is especially interesting, because you'll see that they change his morphology a little bit more and he becomes what they call crazy bird. So this is just from having changed the little robot's shape; there's nothing else different in these. Well, that's not quite true: in the TEDx talk Pfeifer explains that as well as changing his shape, they've added some little rubber onto his feet.
So you can get even more interesting behavior just as a result of adding rubber. Okay, so if you're interested in Rolf Pfeifer's approach to embodied AI, I think it's worth looking at this creation that he and Josh Bongard, his co-author, set up quite a long time ago now, perhaps in 2012 or before. It's a collaboration between a bunch of different universities, and they do a shared lecture course. These used to be available on YouTube, but they seem to now just stream on Facebook; this is the Facebook page, so you can follow them there. They're halfway through the semester just now, so they've got another few quite interesting lectures coming up, and you can see the list of the different partner sites. Okay. So, the morphological computation idea. Andy Clark thinks that these principles of morphological computation, or this 007 principle, are the same for cognition, not just for things like walking. So he calls this view extended. And the upshot: the actual local operations that realize certain forms of human cognizing include inextricable tangles of feedback, feed-forward and feed-around loops, loops that promiscuously criss-cross the boundaries of brain, body and world. So this is the 007 principle at work in perception and thought, the idea that you need to know only as much as you need to know to get the job done. One classic example that's often used, it's not usually a dog and a frisbee, it's usually a fly ball in baseball. We might think, taking a cognition-first approach: how do we work out how to catch this ball? I don't really know what a fly ball is, but it's a ball going through the air that I need to run and catch. So we might assume that our brains are having to do some really complex calculations. How fast is the ball going?
How fast do I run, and at which angle? Where is it going to come down? Physics stuff that I can't do, and that dogs definitely can't do. So we might assume that somehow our brains can do this even though I can't do it explicitly. But it turns out that you don't actually need to do much of that, because there's a really easy heuristic: you keep the ball in your line of sight against the horizon and run so that it stays steady there, or something like this. And if you hold your hand out and you just run like that, then you can catch it. So what we think of as a kind of cognitive task, perceiving this ball in order to be able to catch it, can be reduced to a combination of my body in space and the way that I run in space. So the explanation of a task which we might have thought should be an explanation of stuff going on in the head actually might need to start including our body and our environment, or at least how we act in that environment. And there are similar examples, like calculating. When we're little, we learn to use our fingers for counting, or we use an abacus. But when we put the abacus away, people still continue to gesture; they continue to use these gestures, offloading, in Andy Clark's terminology, part of the cognitive process onto either the instrument, the abacus, or the gesture. So you're not having to do it all in your head; rather, you're doing it as this criss-crossing of body, brain and world. So Andy gives a really nice example of the tuna. I'll just quote him here, because he has a very nice turn of phrase. He talks about tuna and dolphins being mavericks of maneuverability and paradoxes of propulsion.
So it's estimated that the dolphin, for example, is simply not strong enough to propel itself at the speeds it's observed to reach. In attempting to unravel this mystery, two experts in fluid dynamics, the brothers Michael and George Triantafyllou, have been led to an interesting hypothesis: that the extraordinary swimming efficiency of certain fishes is due to an evolved capacity to exploit and create additional sources of kinetic energy in the watery environment. So these fishes exploit aquatic swirls, eddies and vortices, the whirls in the water, to turbocharge propulsion. But in addition to this, the fish actively create these vortices and pressure gradients. They also use their tails to flip the water, which creates one of these vortices that they can then push off from, a bit like you push off at the end of a swimming pool, or the way, if you used to pretend to be a mermaid in the swimming pool, you create a certain force with your flippers that you can then push against. Aided by a continuous parade of such vortices, it's even possible for a fish's swimming efficiency to exceed 100%. So this is the lesson of minimal robotics for cognition: the co-evolution of morphology and control in a particular environment is a golden opportunity to spread the problem-solving load between brain, body and world. This changes from seeing the body and the world as problem spaces to seeing them as a problem-solving resource. But why is this embodied cognitive science? The lesson of morphological computation was: if you take a task problem such as walking, on the brainbound approach the controller is needed for joint-movement planning and actuating, like ASIMO. But on the morphological computation approach, the controller only needs to actuate, and the rest is done in virtue of the morphology and exploiting the passive dynamics. So why is this cognitive? Because even though it turns out that this task doesn't need complex planning, we used to think it did.
And more importantly, even if you don't want to call adaptive tasks like walking cognitive, it looks like we can use these same principles to explain how we solve problems that we do think of as higher cognitive functions. Like perception as active sensing rather than representing the world, which is computationally heavy, and doing things like long multiplication or counting using pen and paper or an abacus or gestures instead of mental arithmetic. I forgot to do a shout-out for this. This is Andy Clark's book, Being There. It's actually really old now, from 1997, but it's still, I think, one of the best introductions to embodied cognitive science there is. It's super clear, super accessible, and the Japanese translation was done by none other than Takashi Ikegami, who is here. And they just brought out a paperback edition, so I'll sell that to you. So the idea of this embodied or extended view, versus the brainbound one, is that this kind of exploitation of our bodies and worlds is pervasive. It's not only for simple adaptive behaviors, and we've seen that even these, for example walking, are not actually so simple, but also for higher cognitive processes. So the traditional embodied or extended cognitive science project is to see how much of these processes is actually explained by morphology or dynamics or exploiting our natural niche and creating new niches to exploit, and therefore how much less computation the brain or the control system actually has to do, even when achieving super high-level cognitive processes. And this is an open question, right? There's lots of work going on in cognitive science on this. Okay. So in traditional embodied cognitive science there's a sort of glaring omission: the physiological body. They don't really talk about the physiological body at all.
They're always talking about the body in terms of its morphology, its shape, or in terms of its ability to sense the world and to act in the world. So the physiological body on this approach is really still just the vat that's keeping the brain and the morphological body alive. There's no mention of affect or emotions. And this is, I think, coming from a science which up until the late 20th century avoided engaging with the topics of emotion and consciousness. So there was, until pretty recently, an assumption that emotions weren't necessary for cognition and consciousness, and that surely we're more rational without emotions. And I think we see this if you watch Star Trek. So Spock was supposed to be this amazingly rational character. But of course, if you know your Star Trek, this is completely the wrong example to use, even though it's the one that is often used in the literature, because Spock was Vulcan and Vulcans do have emotions; they just learn to suppress them at a young age. So a better example is Data from Star Trek: The Next Generation, who is an android and is designed to be a completely rational creature, an artificial system. But of course, the writers decided to play with what would happen if we put an emotion chip into Data, because of course that's how it works: you can just have your rational system and then just add emotions in. And we get some quite exciting responses, also, I think, perpetuating the idea that somehow emotions make you somewhat irrational or a bit contradictory. Anyway, Data ends up not liking it, so I think they take out the chip. It doesn't last; I can't remember exactly how long, but not very long. They don't deactivate him; they just take the chip out. So this idea that emotion is separate from reason, and that emotion might even hinder reason, I think was really quite a strong way of thinking.
It was a typical way of thinking up until probably when Antonio Damasio popularized the idea that emotions are perhaps necessary for good rational decision making with his book Descartes' Error. And since then he's published a whole bunch of books on the importance of emotion for decision making, but also looking into other more pervasive internal feelings and their contribution to consciousness. So one of the main examples that he talks about, which comes up in most neuroscience and psychology textbooks, is the example of Phineas Gage, who got a tamping iron blown through part of his brain, I think through the prefrontal cortex, amazingly survived, even though this was in 1848, and amazingly remained a reasonably intelligent person. And yet his personality changed quite a lot, and he wasn't really able to do good rational decision making anymore. So Damasio is famous for cashing out these kinds of examples, and he talks about other examples that he's worked with. He cashes them out in terms of rational decision making needing what he calls somatic markers; the somatic marker hypothesis is one of the things he's most famous for. The idea is that certain affects, certain internal feelings, get attached to our cognitive processes, and as I understand it these basically help people solve a localized frame problem: they're able not to get overwhelmed with all of the different possibilities, but to see what the relevant possibilities and the relevant actions are in this context. So Damasio really inspired my entry into cognitive science. What I wanted to do was to work out: okay, what kind of contributions is our internal body making to cognition and consciousness? And if standard cognitive science, and traditional embodied cognitive science, is leaving this out, is that warranted? Or should we actually be looking at the internal body and including it in our embodied cognitive science?
So in these two main papers I cash out some of these ideas, and I'm going to very quickly outline the main ideas in this one, "Leaky Levels and the Case for Proper Embodiment", just to give you a bit of background on the way that I'm thinking about bodily information being important for cognition and consciousness, and so you can see where I'm going with one of the projects that I'm going to be doing here at OIST. So in those papers and in my PhD thesis, I argue for the thesis of proper embodiment. And that is a combination of two independent theses, which I think belong together. So: internal embodiment, which is that the internal, gooey body matters to cognition and consciousness in a fundamental way; and particular embodiment, which is that the particular details of our implementation matter to cognition, and which you can cash out as a kind of microfunctionalism. And the idea is that if you put these two together with traditional embodied cognitive science, so I have no intention of arguing that traditional embodied cognitive science is wrong in any way, but rather just that it's missing a really important aspect, then you get what I call proper embodiment. So, internal embodiment. Internal embodiment is the thesis that the internal, gooey body matters to cognition and consciousness in a fundamental way. But what is the internal body? If you look at the different sensory systems of the body as categorized by Sherrington, you see teloreception, so vision and hearing; proprioception, the sense of where the body is in space, and kinesthesia, the sense of bodily movement; exteroception, our sense of the external world, so touch, temperature and pain; chemoreception, so smell and taste, where we're coming into contact with chemicals; and interoception.
And this is the sense that until very recently, really until maybe the last 10 or 15 years, was mostly ignored, but there's been a big boom in research into this area. So interoception is the sense of the internal body. If you look at it in terms of the peripheral nervous system, this is the afferent aspect of the autonomic nervous system. So the interoceptive system gives you information about the heart muscle and other smooth muscles, so not skeletal muscles, which come under the somatic nervous system, but things like the stomach, intestines, blood vessels and bladder, and the exocrine glands: sweat glands, saliva glands, stomach, liver, pancreas. So you might be able to see why I talk in terms of gooeyness: the interoceptive system is really sensing the gooey body. So Bud Craig, who was one of the principal researchers in interoception, has argued that interoception should be redefined as a sense of the physiological condition of the entire body, and not just the viscera, to include things like pain, temperature, itch and sensual touch, because these are all mediated by the same lamina I spinothalamocortical system as visceral information. And he calls these, pain, temperature, sensual touch, et cetera, homeostatic emotions. They're feelings which have an inherently motivational aspect, and they ground what he calls homeostatic behavior. Homeostatic behavior is behavior that's necessary when the autonomic systems on their own aren't enough to keep the body at the right balance to allow the organism to survive. So the idea, and this is from Craig, is that the pleasantness or unpleasantness of the sensation is a corollary of the motivation itself, such that at the extreme of unpleasantness the discomfort grows until it becomes an intractable motivation: even though it's not painful, you must respond if you're to stay alive.
So interoception is the sense of the physiological condition of the body, how the body is faring, and this information is inherently motivational in virtue of co-activating motor systems as part of the interoceptive loop. And this information plausibly grounds our feelings of affect and value. So I think this is all cool, and I was super excited when I started learning about interoception, about halfway through my PhD, but why is it interesting for cognitive science? At least two of the reasons it might be interesting are that interoceptive information may be involved in vision, and that it may be involved in perceptual phenomenology. Barrett and Bar, for example, argued in a really, really nice article, "See It with Feeling: Affective Predictions During Object Perception", that when we perceive an object, the brain quickly makes an initial prediction about that object using low spatial frequency information, and the details are filled in by memory. So given the overall gist of a situation or object in context, the brain is left to predict what the details might be, given previous knowledge. Since this was published, predictive processing has had a huge surge in popularity, to some people's excitement and some people's un-excitement. There are some controversial aspects; some people think it might be trying to explain too much. But that doesn't mean that we should throw the baby out with the bathwater. So Barrett and Bar say that when we perceive an object, the brain quickly makes an initial prediction about that object using low spatial frequency information, and then the details are filled in by memory, given the overall gist of the situation. To explain how this works, they use the analogy of the Dutch style of painting in the 16th and 17th centuries.
First, the gist of a situation is sketched, and then over time, through recursive application of ever smaller dabs of paint, a detailed picture emerges. These recursive and ever finer dabs of paint correspond to the recursive predictions that are generated as a result of errors in the predictions of sensory states. And their thesis is that object perception arises partly as a result of predictions about the value of the object to the agent. The previous knowledge that's used by the brain to predict the details is encoded in sensorimotor patterns, which are stored for future use. And importantly for us here just now, these sensorimotor patterns involve internal sensations, including autonomic and endocrine information. So these patterns are sensory in the fullest sense of the term: they not only involve external sensations and their relations to actions, but also internal sensations from organs, muscles and joints, and how external sensations have influenced these internal sensations. So we know that the connections between these various brain areas give us reason to believe that representations of internal bodily, autonomic and endocrine changes are part of visual processing, right from the stage at which the gist of the situation is being processed by the frontal systems. So even perception at this really low-specificity, just-a-gist stage has an affective flavour that helps code the relevance or the value of the perceived object. And this is quite different from the idea of adding a somatic marker, and it's quite different from the idea that you can just put in an emotion chip: affectivity is involved right from the beginning. Okay. So, the other reason, related to what I've just been saying and drawing on the same ideas: the second way that affect is relevant for cognition is that it's involved in perceptual phenomenology. So it seems that perception is always affective.
I think looking at this quotation from William James gives us quite a good sense of this. He says: 'Conceive yourself, if possible, suddenly stripped of all the emotion with which your world now inspires you, and try to imagine it as it exists, purely by itself, without your favourable or unfavourable, hopeful or apprehensive comment. It will be almost impossible for you to realise such a condition of negativity and deadness. No one portion of the universe would then have importance beyond another; and the whole collection of its things and series of its events would be without significance, character, expression, or perspective. Whatever of value, interest, or meaning our respective worlds may appear imbued with are thus pure gifts of the spectator's mind.' You can see the same idea in Barrett and Bar's paper, in their model, when they say: 'When affect is experienced so that it is reportable, it can be in the background or foreground of consciousness. When in the background, affect is perceived as a property of the world, rather than as the person's reaction to it. Unconscious affect, as it is called, is why a drink tastes delicious or is unappetising, why we experience some people as nice and others as mean, and why some paintings are beautiful while others are ugly. When in the foreground, affect is perceived as a personal reaction to the world, so people like or dislike a drink, a person or a painting. Affect can be experienced as emotional, such as being anxious about an uncertain outcome, depressed from a loss, or happy at a reward.' Their model provides good reason for thinking that interoceptive information provides a constant source of affective information, which is experienced pre-reflectively. So it's not the object of attention, but rather that which is shaping our perception of the world, and thus pervades our experience, even when we're not in an emotional state.
Damasio has also argued that interoceptive information integrates with information from the exteroceptive senses at early stages of processing, though he has a different model. He thinks this is because in the superior colliculus there are three varieties of maps, visual, auditory and somatic, and they're in spatial register. This means that they're stacked in such a precise way, I'm not a neuroscientist, so I don't actually know which way the maps stack, but they're stacked in such a precise way that the information available in one map for, say, vision corresponds to the information on another map that's related to hearing or body state. There's no other place in the brain, he says, where information available from vision, hearing and multiple aspects of body states is so literally superposed, offering the prospect of efficient integration. The integration is made all the more significant by the fact that its results can gain access to the motor system via nearby structures such as the periaqueductal gray, as well as the cerebral cortex. So we see strong connections between Craig's work, Barrett and Bar, and Damasio. But is it just that this kind of affective information shapes how we experience the world, without really affecting our cognition as such? Perhaps not. For example, Michael Geuss and colleagues have looked at how fear can actually change how wide you judge a gap to be: not only did people in a fear state judge the gap to be wider, but when they were then asked to cross it, they would also step wider. That's pretty interesting, because there are some visual illusions that are quite well known in cognitive science, like the classic one with a coin or disc where you judge it to be bigger or smaller than it really is.
I forget exactly how it goes, but when you go to pick it up, your fingers adjust to the right size anyway. So this is really interesting: affect can really change both your perception and your action, so it's really involved in that cognitive processing. So the moral of this section, and there's one more section which I might try to rush through, is that in natural cognitive systems like ourselves, having an internal body shapes consciousness and cognition, even when the interoceptive or affective information is unconscious or pre-reflective. And the information that feeds into cognition and consciousness is imbued with a natural value, in terms of value to the physiological system. An important implication of this is that to create an artificial system that's genuinely cognitive in the ways that we are, we might need to implement some kind of functionally equivalent internal body. This is what I argued in my 'Steps to a Properly Embodied Cognitive Science' paper back in 2013, and which Kingson Man and Antonio Damasio are making some steps towards in their 2019 paper, 'Homeostasis and soft robotics in the design of feeling machines'. But it's still at a very, very nascent stage. This is still not the orthodoxy, this is not where most cognitive science is going, but it's really interesting that there are steps being made towards it. Okay, I will just give you a gist of the particular embodiment stuff. The idea here is that the particular details of our implementation matter to cognition, which implies that any functionalization of the substructure of cognition would need to be at a fineness of grain that functionalizes these details. This is what I call nano-functionalism.
In my paper and in my thesis I explored two proofs of concept, both I think from Sussex, which we have some people here with connections to: GasNets in evolutionary robotics, and evolved hardware. So let me very briefly give an overview of Adrian Thompson's work on evolved hardware, and I'll see what I can do. I know there are a lot of people in the room who know this kind of research a lot better than me, so it probably doesn't matter if I just give a really brief overview. Typically in evolutionary robotics, algorithms are evolved in simulation and then transferred to hardware, and what Adrian Thompson is quite famous for in the field is evolving algorithms straight onto the hardware. He takes a large silicon chip (a field-programmable gate array) that is essentially empty, so that using a computer you can configure the switches in the array to create a physically real electronic circuit, and he used this array to control a real-world robot, using evolutionary algorithms to configure the switches. He was able to evaluate the circuit based on its performance on a task in the real world, modifying it based on evolutionary algorithms until the performance was satisfactory. We don't really need to know the details, and I'm not really technically minded, so what I'm really taking is the gist of his experiment anyway. And the gist is something really, really surprising. Despite this chip being a digital chip, and despite the experiment being to evolve a recurrent network of logic gates, the gates in the chip weren't used to do logic. Instead, evolution used whatever behaviour these fine-grained groups of transistors happened to exhibit when connected in arbitrary ways. A quarter of the cells could be observed to be contributing to the behaviour, but some of those weren't actually connected to the main part of the circuit.
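To give a feel for the evolutionary loop Thompson used, here is a minimal stand-in sketch of my own. The names and numbers are hypothetical, and crucially `measure_fitness` here is a toy function, whereas in Thompson's experiments fitness was measured from the physical chip's real behaviour on the task, with no simulation or model of the components:

```python
import random

random.seed(1)

N_SWITCHES = 64  # configuration bits for the (stand-in) reconfigurable chip

def measure_fitness(config):
    """Stand-in for measuring the real circuit's task performance.
    In Thompson's setup this was a physical measurement, not a model."""
    target = [i % 2 for i in range(N_SWITCHES)]  # arbitrary 'good behaviour'
    return sum(c == t for c, t in zip(config, target))

def mutate(config, p=0.02):
    """Flip each configuration switch with small probability p."""
    return [1 - bit if random.random() < p else bit for bit in config]

# simple (1+1) evolutionary strategy: keep a variant iff it behaves at least as well
config = [random.randint(0, 1) for _ in range(N_SWITCHES)]
for generation in range(2000):
    variant = mutate(config)
    if measure_fitness(variant) >= measure_fitness(config):
        config = variant

print(measure_fitness(config))  # close to N_SWITCHES after evolution
```

The design point the sketch makes is the one in the text: the loop only ever observes overall behaviour of the configured circuit, so any physical quirk of the medium that improves the measured behaviour gets exploited, whether or not a human designer could have predicted it.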
That should be surprising, right? What happened was an exploitation of the physical characteristics of the chip. This enabled the system to evolve solutions which have greatly decreased computational complexity compared with traditionally designed, what I've been calling cognition-first, algorithms. So here the line between algorithm and implementation has been blurred. If this is right, then it's no longer a trivial matter to take an algorithm evolved on one particular piece of hardware and just implement it on another piece of hardware: it is an algorithm, but it's so entwined with the implementation that you can't just port it, because it's using the particular details of the implementation. So Adrian Thompson compares design by humans with design by evolution, and I think this first picture is a pretty good example of what I've been calling cognition-first. In design by humans, we think about how we think things would process things, how we would design things. Design by evolution, in contrast, just proceeds by taking into account the overall behavioural effect of variations. This tinkering actually uses whatever it's being implemented in; it doesn't separate out algorithm and implementation. He puts it quite nicely: in evolution there's no analysis, simulation or modelling, so no constraints need to be placed on the circuits to facilitate these. 'Evolution proceeds by taking account of the changes in the overall behaviour as variations, usually small, are made to the circuit structure. This means that the collective behaviour of the components can be freely exploited without having to be able to predict it from a knowledge of the individual properties.'
'Evolution can be set free to exploit the rich structures and dynamical behaviours that are natural to the silicon medium, exploring beyond the scope of conventional design. The detailed properties of the components and their interactions can be used in composing the system-level behaviour. It takes considerable imagination to envisage what these evolved circuits could be like: the kinds of systems we're familiar with, for example digital, discrete-time, computational, hierarchically decomposed circuits, are but a subset of what is possible.' To summarize, Adrian Thompson has shown that the more efficient design solution can be to utilize the properties of the hardware, and that this is what evolution does. So it's no longer clear what's algorithm and what's implementation. It can all be specified as an algorithm, but as an algorithm specific to this particular implementation, and this no longer sounds like the algorithms of traditional cognitive science. The story for particular embodiment is that the solutions that have evolved to make us the flexible, adaptive, neurally plastic cognitive systems that we are are likely a result of the exploitation of our particular embodiment over evolutionary and developmental time, and plausibly even over the time of the plastic changes that underpin new learning. So what I'm calling nano-functionalism is this: because the line between algorithm and implementation for a cognitive system is much blurrier than we normally suppose, if you're going to build a truly cognitive artificial system, then it might have to have functional versions of much of our own very fine-grained implementation. So interoceptive stuff, gooey stuff, gassy stuff (that's from the GasNets idea). Okay, so traditional embodied cognitive science, and this is almost done.
On the traditional embodied cognitive science approach, some of the computational work essential to cognition can be partially offloaded to, or realized by, bodily processes and structures external to the central nervous system. Cognition is thus embodied or extended so that it encompasses parts of the body, and plausibly also those parts of the non-biological world that support the appropriate offloading of computations. But this means that as far as traditional embodied cognitive science is concerned, the body qua body, like our bodies, doesn't actually play a special role: only the body in virtue of its ability to be a vehicle to offload cognition onto. And the result is that, although research in this paradigm is based on the role of the body in cognition, the body really isn't the important factor. So what I'm moving towards is what I'm calling a body-first approach to cognitive science, which is not supposed to be in opposition to a cognition-first approach. I think there's loads of really good research that can be done under that; it's rather a call for taking, at the same time, a different approach, to see what falls out of it. The body-first approach is, instead of assuming that you can abstract away from all the biological bodily processes, to assume that you can't. Of course, to do science you have to abstract, that's normal, so you can factor some of these out for the purpose of a particular explanatory project, and then shove them back in. And when you are factoring them out for the purposes of a particular explanatory project, you make explicit what is being abstracted from and why. So how would this change how we approach cognitive science? We would make the assumption that there likely is a proper bodily contribution, rather than assuming that there isn't. Or rather, we should make that assumption.
So when we're doing experimental research on cognitive functions, we should consider carefully how much of the generic physiological activity we should also be measuring, to look for correlations, instead of just assuming that there won't be a bodily contribution. You can't look at everything all meshed together, that's too much, you have to abstract away; but by assuming that there really might be these contributions, we should look for them. Okay, so finally, what am I doing here? Everything I've shown you so far has been previous work, some of it from quite a while ago. But I think recent research in neuroscience and neurophysiology seems to support this idea that the substructure of cognition in evolved systems might, at least sometimes, be at a much finer grain than previously supposed. There's recent research showing, for example, the role of glial cells in neural computation, that different parts of the neuron might contribute to computations, and that molecular signalling might be involved in neural computations rather than just modulating them. And it might also involve processes of the body proper too: for example, different phases of the heartbeat cycle might affect some kinds of information processing, immune activity has effects associated with changes in cognitive processing, and plausibly even microbiome differences, some people argue, are involved in cascades that affect cognitive functioning. So I want to explore what bodily parts and processes cognitive scientists have traditionally factored out, because they're either deemed irrelevant to cognition or considered to be mere implementation, that might in fact be relevant to understanding and modeling cognition. And then to work out how best to conceptualize this for cognitive science, rather than, for example, neuroscience.
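As an illustration of what looking for such correlations could amount to in practice, here is a small sketch with entirely synthetic data: the systole/diastole split, the effect size, and the reaction-time numbers are all hypothetical, invented just to show the shape of the analysis, not taken from any study:

```python
import random
import statistics

random.seed(0)

# Hypothetical trial records: cardiac phase at stimulus onset plus reaction time.
# In a real experiment the phase would come from an ECG aligned with the task log.
trials = []
for _ in range(400):
    phase = random.choice(["systole", "diastole"])
    base_rt = random.gauss(0.450, 0.050)            # baseline RT in seconds
    rt = base_rt + (0.020 if phase == "systole" else 0.0)  # built-in toy effect
    trials.append((phase, rt))

# Bin trials by the bodily variable we would otherwise have discarded
by_phase = {"systole": [], "diastole": []}
for phase, rt in trials:
    by_phase[phase].append(rt)

for phase, rts in by_phase.items():
    print(phase, round(statistics.mean(rts), 3))
```

The methodological point is just that the physiological channel has to be recorded alongside the task data before any such split is possible; if it is factored out at collection time, the question cannot even be asked.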
Do we change what we take 'cognition' to refer to? Will it change how we view ourselves as cognitive creatures? And if we're going to build artificial systems that are genuinely cognitive, how much of the biology of evolved cognitive systems will we need to implement? Do we need to move from soft robotics to a gooey robotics? I'm here until the end of March 2023, and one of the things I'm now going to start doing is to work on an opinionated interdisciplinary review paper on this topic. So if anyone here has any suggestions of research from your field of expertise that you think I should know about, whether it's in line with some of the stuff I've been talking about or contradicts it, which is just as helpful and useful, I'd be really grateful to learn of it. Finally, thank you for your attention, to the OIST TSVP program for giving me the opportunity to visit, to everyone in the embodied cognitive science research group for including me in their activities, and to Takashi for introducing me. Thank you. Thank you, wonderful. Yes, can we take questions? I'm here till the end of March, so it doesn't have to be now. Oh yeah, no one likes these questions. Anyway, we can take a few questions and comments. Please go on. I have a lot of questions, but I will ask one question. You mentioned consciousness once, or maybe twice, in this presentation. So what is the relation between cognition and consciousness? And does all this mean that if we want artificial systems that are really clever, they almost need to suffer, and feel pain, and all this stuff? Yeah, that was a mistake to include it even twice. I usually try to avoid talk of consciousness, because, I mean, what do we even mean by consciousness? Yeah, I think...
So I think in general it's better not to talk of consciousness, it's just really too messy. I talk about experience instead, which I think is slightly better, partly because it's not like a thing that something might have, but rather maybe a thing that things do. So the question is, if we evolve these kinds of properly embodied systems, or what I think Damasio calls homeostatically oriented systems, I forget his exact term, so his idea is putting some kind of feeling into it. Oh, I think what you mean is, are they going to be aware that they're aware? I think maybe they might have a kind of sentience, they might be aware, but if we're going to make them aware that they're aware, a lot of other stuff is going to have to be piled in. Do I think we can do it? I kind of hope not, because probably we don't want to. You might differ on this, but I don't want us to create artificially conscious creatures. There's already suffering in the world, why give more? Okay, questions? Yeah, please. Hello. So, in the artificial intelligence community, the biggest problem that we're trying to solve is generalization. You train on some data, and the problem is usually whether the learned algorithm can be generalized to another context. So I'm wondering whether you have any idea about how embodied cognitive science can give an insight into this generalization problem. Also, another related problem is that the body's condition changes almost constantly. Like, you've been talking for almost an hour, and your vocal cords literally change their condition within, I don't know, ten minutes or something like that. But cognition is almost invariant.
You can maintain cognitive focus for almost an hour to give this nice talk. So I'm wondering, if our cognition forms a loop with our body, how would you explain the discrepancy? Yeah, I guess I don't think there is exactly a discrepancy. I do think that there are contributions that our body is making to our cognition all of the time, and so we do need to bring time into our consideration of cognition. But I guess what your question also brings out is what we're talking about when we're talking about cognition. In one sense, yeah, I was cognitive over that whole period, I was able to talk, to think in a performance mode rather than a reflective mode, so I was a working cognitive system. But the different cognitive tasks that were going on might have been affected by the differences. So, like some of this research on the heartbeat: during the diastolic phase you process certain things differently. So there are contributions that the body is making that may actually affect some of our cognitive processing that we're not taking into account, or rather that we're just not assuming happen. Pinning down exactly what's going on in there is, I think, still a project that needs to be done. But I think maybe we shouldn't assume that, just because I'm standing here talking at you, there weren't changes in my cognition over the period, reflecting maybe fatigue or anxiety or whatever. Your first question was generalization. If I understand the question right, I guess my approach is that maybe we shouldn't be looking to generalize too much, but rather to be
looking more at particular embodiments, and so not assuming that that much can be generalized. Of course, you can still make abstractions about what's going on, but that doesn't mean that you can then take that abstraction and implement it in another system. So it's not generalizable in that way; rather, the relevant processes are really tied to our evolved, particular embodiment. So, you mentioned that our heartbeat actually impacts some aspects of our cognition, which is absolutely true. And our body movement: it has been reported that how rapidly we move our body also changes time perception. Absolutely, and some feeling states, and maybe other kinds of perception, I'm thinking of Sarah Garfinkel's work, but it's not strong in my memory just now; I think this happens in various ways. But there are invariant aspects of cognition, right? So there are definitely invariant aspects of cognition that don't depend on your current heartbeat. I'm wondering how you would distinguish between the relatively invariant cognitive functions and the type of cognitive function that is truly affected by the body's conditions. Yeah, I think they need to be mapped out as to which are and which aren't, and I probably don't want to make such a strong claim that particular embodiment is relevant for all cognition. I think it's worth mapping out where it is and where it isn't, and what contributions it does make. Does it still make contributions to
the kind of cognition you're talking about over time? And should we still be understanding it as affecting, or even constituting, that cognition when those changes don't affect it right now, but do affect it over time? Yeah, I think you're probably right, but that just needs mapping out, to see where we should make which claims. Yeah, thank you. So you were saying gooey robotics, a transition from soft robotics to gooey robotics? Yeah, which I think is where Damasio is going with the feeling machines, though perhaps not as radically as I was suggesting, you know, a moving oil droplet. Oh yeah. And I was thinking that maybe this can be the next generation, because the big problem is that we cannot yet build, you know, genuinely cognitive systems, right? So I was just wondering. And I also think the computer is not a good metaphor for our cognitive system, so if you can come up with something like a new metaphor, that would be awesome. Yeah, that's a good comment, thank you. Okay, so are there any other questions? Okay, yeah. Sorry, just a comment, it just occurred to me. I learned not long ago that bacteria have sensors that point outside the cell, but they also have sensors that point inside the cell, on the inside. And there are studies into behaviour that depends on what happens inside the cell, so their kind of gooey bodies may be important for bacterial cognition too. Yes, okay, cool, thank you. And that makes sense, because otherwise why would they go towards sugar when they're depleted of it? Typically they can sense sugar directly, like they'll have a sensor directly for sugar, but they also have sensors for generally sensing things that are good.
Because they can sense their own internal metabolites as well, so it's kind of a bit of both. Please find that reference for me. Thank you. Okay. Any other comments or questions? No? Then okay. Okay, again, thank you very much.