Welcome to this new Integrative Research seminar. Today we have Paul Verschure. I think Paul does not need any introduction; he has been here for almost 11 years now. Joana just told me the story that you arrived here with several trucks of material in the summer of 2006. And you have been here doing so many wonderful things, training a lot of students, especially master students. I think this story is not ending but starting a new chapter, which Paul will explain to us at the end of the talk, or during the next year. So Paul, thank you for accepting the invitation, and please enlighten us. So yes, SPECS will be moving. We are now at the point of moving to IBEC, the Catalan Institute of Bioengineering, which will be a whole new adventure, and I will say a few more words about that at the end of the talk. But it is all about the strategic placement of your research, right? Where do you want to go? How do you want to organize yourself? Where do you have the most traction for the things you want to do? These are very strategic considerations. I also want to highlight Living Machines, the annual conference that we organize. Anna Mura is very much behind that; this year it is at Stanford. A lot of what I am talking about is also expressed in this annual meeting, in which we try to build a real community of researchers interested in bringing technology and biology together, because we also collectively believe that this is very much where the future will lead us: a more biologically grounded technology. Of course, the work I am talking about is the result of many people in SPECS, and I am grateful to all of them. This was our last rafting trip. All the work we do is very collaborative and involves many people from many different backgrounds. It is not just me; I just talk about it.
So the key to the work we pursue is really how you build the sort of multidisciplinary interaction that is required to make progress in this field. It is with that philosophy in mind that I developed the CSIM master, which has been running here for the last 10 years, which Anna is now coordinating, and which has been very effective. Many of the people in this group in SPECS were themselves CSIM students, so you see it is a great way into your future. Here are two proposals we submitted yesterday. First, the quadruped HyQ built by IIT in Genoa, the Italian Institute of Technology. This is a platform that competes with BigDog from Boston Dynamics. You might have seen the sexy videos; they do not write a single paper about this stuff, but they show you lots of sexy videos. Darwin Caldwell and his people at IIT have built, if you want, a competitive European platform, HyQ. We have now written a proposal about controlling these kinds of highly compliant, highly articulated robots in more realistic scenarios. This project is led on the science side by Ivan Herreros in SPECS, with of course a bunch of other people involved in preparing it. A second proposal, also with IIT, is focused on the WALK-MAN robot. The WALK-MAN is, I think, the most advanced humanoid robot you can find on this planet: a highly compliant structure, built also for search and rescue operations. And the reason these things are humanoid is that in search and rescue, after an earthquake for instance, you often have to operate in environments that are structured for humans, so you need the dimensions that humans have. This is quite an issue in that domain. But what is our contribution? We like to think about how you control these systems. Building them is cool, and we are actually going to have a tail, a fully continuous robotic tail for stability and pitch control.
We are going to have a highly articulated soft robotic arm, wrist and hand for the manipulation of objects. But the real challenge is how you control these robots, and that is where our interest comes in, because we are thinking about how a robot can learn to control itself. The point is, as you will see later, that the standard right now in search and rescue robotics, which is actually the most demanding kind of task you can imagine, is teleoperation. There is no autonomy. I will say more about that later. Here you see the WALK-MAN in action. And for the WALK-MAN, our target is artificial general intelligence. This is a big discussion right now in the field of AI. You might be aware of the NIPS conference taking place in December, which is really the main meeting place for people working on, let us say, novel neural-network-based machine learning approaches towards general intelligence. What we push in this proposal is the basic assumption that to achieve general intelligence, meaning a robot that understands the physics of the world and a robot that understands the psychology of humans, you actually have to be embodied. That is also what sets us apart from our competitors. Why we think this is important is illustrated here; you might have seen these videos, but this is the state of the art right now in the field: the DARPA Robotics Challenge, and the many different ways in which it can go wrong. So even though these robots are very advanced, very complex mechatronic systems, in terms of their operation they are actually still not that great. And the big problem is that many of these robots are teleoperated. You see that here: this is how these robots are actually operated, humans behind screens controlling the robot. And one of the main problems is that the humans get disoriented, so they do not really fully understand the details.
There is not enough sensory information about what the robot is doing, and then the operator makes the wrong decision. You saw the robot that just falls over as it tries to turn the valve. The human operator does not understand that the robot is not really holding the valve, so he changes the posture and the forces to turn the valve, but there is no valve to turn, and the robot just falls over. What this shows you is that for these kinds of complex robots, what is really required today is more autonomy in their operation, both in terms of decision making and in terms of skills. If you leave it all to human planning, things will be extremely slow; that is one problem, but the error rate is also very high, about 30% to 50% in these kinds of tasks. And this is why it is not only of scientific interest to build architectures for general intelligence or for control; these are also very fundamental problems in applied domains where you want to see robots do better. In parallel to that, here are two proposals we submitted in February and March. This year we have submitted 15 proposals in total; we are a proposal machine, but that is what it takes nowadays. We are thinking about how to have closed-loop whole-brain models that automatically perform diagnostics on, and intervention in, patients with different kinds of neuropathologies. Think about stroke. Stroke prognostics is a massive problem: we do not really know what is going to happen to a patient a year after their stroke. Will they have, say, aphasia, chronic pain, depression? It is unknown, and we believe that one way forward is to start modeling the brain of the patient at an individual level, and to use that model to make prognostic predictions and also to guide interventions. This is something we are already developing together with our partners at Vall d'Hebron, Hospital de l'Esperança and Hospital del Mar here in the city.
And in parallel to that, we are also developing and testing more integrated measurement systems that put the patient in the loop of automated control systems that try to optimize their well-being after, for instance, a stroke or Parkinson's disease. So the research we are doing in SPECS is expressed in these different domains that might look very separate, but actually they have much more in common than you might think at first glance, because in the end we are asking: how can we live a happy life with machines? How can we think about improving well-being with machines? That is, in terms of applications, where we are going: how can machines help us make a better life and a better quality of life for the humans, and the non-humans, in it? In terms of the application areas, whether it is disaster intervention or brain health, in the end we are all talking about how we improve the human condition with technology, because we believe this is an essential target we have to have. And secondly, the starting point for us is always the brain. Here is a view of the human brain: about one and a half kilograms that fits in your skull, it burns about 20 watts of energy, less than this light, and it can do all sorts of amazing things, like sitting in your chair and listening to me speak. But what is common for us across these different domains is that we look at the brain as a control system. That makes us different from most other people in the field, who think about brains as information processing systems. We think that view is not necessarily wrong, but it is not necessarily the only view you can have on systems like the brain. So our view on the brain and brain-like systems is always from the perspective of control. That means that if I understand the basic operating principles of brains, this gives me the power to control; if I understand the basic operating principles of brains, I can repair them.
This was my challenge to all the theoreticians in the field: if you believe you have a theory of the brain, please go to the clinic and fix the brain. If you are not able to have impact in the clinic, maybe your theory does not have so much value. Given this control perspective, you can quite easily start to see the commonalities, because in both cases you talk about control structures: either manipulating them, as in repairing a brain, or mimicking and emulating them and using them to control artificial systems. So that means that in some sense we do not care so much about the brain as a physical system; we care more about the brain as a functional system, as a psychological system, that gives rise to action selection, cognition, perception, consciousness, memory, learning, attention and so on. These are the functions you try to understand; these are the functions you try to emulate. But brains have evolved along very unique principles that we do not fully understand, and they are extremely powerful. Take the stupid robots you saw earlier: that is the state of the art, and at best people will show you a video from the one time it did work. So there is a long way for us to go in those fields. The same holds for brain repair: we are not doing very well, we are actually doing very badly. So there are a lot of principles hidden in this structure that we do not understand, and that is now becoming our challenge. If you look at the evolution of life forms on Earth over time, something really interesting happened about 500 million years ago: the Cambrian explosion, when all the current body plans that we know about, and their affiliated brains, suddenly emerged on the scene very rapidly. So the idea, and this is a hypothesis, is that at this point in time some common design principles evolved that are still hidden in our own brains, and that we have to try to identify and extract.
This raises the fundamental question that we deal with in our research. If I look at all these brains here, these are all mammals, from the smoky shrew to the human: are these brains organized around similar design principles, or are they all built on different principles? You can even go a step further and say: if I now compare this to an insect brain, and I will say more about this later, if we go to an invertebrate brain, is that suddenly a completely different universe? Or do they also share design principles with vertebrate brains? As a starting point, and to keep things simple, I think it is fair to start from the assumption that there is a common set of principles, and to try to identify those. If that does not work, we can start to think about, let us say, subcategories of brains that are qualitatively different. Some examples. This is work from Nick Strausfeld. Nick Strausfeld is a speaker at BCBT, our annual summer school that we run exactly in this lecture theater for two weeks; I will highlight all the speakers we have had here whenever I refer to their work. Nick and his co-workers were the first to show that brains, as organic tissue, leave traces in fossils. Here we look at the fossil of a little shrimp, estimated to be 520 million years old, so right in the middle of the Cambrian explosion. What Nick has done, which is really fantastic, and actually the first talk he gave about it was here, just before it came out in Nature, is to show that these brains leave residues. He reconstructed that brain and showed that if you look at a present-day little shrimp, this one here that lives in Southeast Asia, people eat it down there, the main structures of these brains are highly conserved.
What Nick shows with this piece of work, which is very impressive because he had been cleaving thousands of rocks until he finally had the right cleavage angle to reveal the whole structure along the longitudinal axis, so it was like a casino for fossil hunters and he got lucky, is an important story: apparently, for this shrimp brain, the central structures have been highly conserved over 500 million years. Another example is from Sten Grillner, another speaker we had here, who is the world's expert on the lamprey. The lamprey also emerged very early; it is traced back to the Cambrian explosion as well. What Sten proposes is that if you look at the main structures that control action selection in the lamprey, you see very stereotyped circuits that run between cortical areas, subcortical areas, and the spinal cord; we will not worry about the details now. Sten proposes that this is a central action selection system that, in this very primitive vertebrate, the most primitive vertebrate we know, was controlling whether it would turn left or right. So it is a competitive action selection network. But then, Sten proposes, through evolution you start to co-opt these same mechanisms. You can co-opt an action selection system to select memories, or to select percepts, and then you call it attention. So now you see that you have one basic circuit template that you can co-opt for different modalities and different functions. That is how we can think about the conservation of these principles in the vertebrate brain. Another example is from Leah Krubitzer, who has also been a co-organizer of some of our summer schools, and who proposes a very modular evolution of, in this case, the mammalian neocortex. In color, she indicates different modalities and how they evolved as we go from the common ancestor, via the platypus, to the human brain.
And her proposal, and she has a lot of neuroscience and developmental neuroscience behind it, is that brain evolution is modular. That would mean that if you evolve new effector systems, limbs, or new sensor systems, the expanding cortex simply inserts, if you want, modules into the structure. Okay, so we can also think about how, from a neurogenesis perspective, such an evolving brain can conserve these basic operating principles. And here is another piece by Nick and also Frank Hirth, who was here last year, who have shown that both at the genetic level, doing genetic fingerprinting of neurons in the mammalian brain and the invertebrate brain, and at the level of the basic wiring templates between them, there are deep homologies between insect brains, invertebrate brains, and mammalian brains. That would again support the idea that common design principles even hold across these different species, or taxa. So let us say the assumption of common design principles is reasonable; there is data that supports it. But now, how do we study them? How do we extract them? How do we validate them? That is really our mission in life, and that is where we believe theory is very important: theory helps us to put these principles together. People like Frank, who studies the basal ganglia and cortex as an experimental neuroscientist, or Leah Krubitzer, or Sten Grillner, are always directly observing just small parts of those circuits; they manipulate small parts of the circuits. So how do you then actually describe and validate your ideas about the whole system, everything together? Of course you can say: I just stick enough electrodes into a brain, and then I can measure from all of them. And that is also what people do. But at some point it is not the same brain anymore.
And you are also drowning yourself in huge amounts of data. So it is one way forward, but not the only way forward. What we have been emphasizing very much are system-level models of the brain that bring all these pieces together. This is the game that we want to play. For a long time now, I have been advancing such a theory, called Distributed Adaptive Control. At the top conceptual level, DAC basically gives you a matrix description of how brains are organized. Essentially it says: look, we have a body. Bodies have sensors and effectors, and they have needs, right? You need oxygen, you need glucose, you need carbohydrates and so on to stay alive, to keep this body going; these are given with the body that evolved. And from the perspective of control this is the starting point: control has a plant to control, they are coupled, and you cannot decouple them. Then we have a reactive layer. It automatically links sensory states to behavioral patterns; think about defending yourself, or freezing when you hear a loud noise. These are predefined behavioral patterns, set up by certain drives, that already maintain a basic interaction with the world. But now comes the new insight that we brought to this discussion: it is not only that you have reflexes; reflexes inform learning. That happens at the adaptive layer, where, as soon as I hear a loud noise and get scared, maybe whatever else is going on in the world is informative, and I should learn about it. That is very much the role of the adaptive layer, where perception is created from sensation. You have to get to interpreted sensation, right? This is what we have to learn: to interpret the world, to say this is a chair and this is a pointer. This is not equipment you get at birth. The same holds for action selection at the adaptive layer.
You might come pre-equipped with a whole behavioral repertoire at birth, but you have to fine-tune it to the details of your body and the details of your interaction with the world. That is also called action shaping, and it works via value systems. And then lastly, in some sense, you can say: now I have a state space. I have a state space of the world, I have a state space of myself, my own values, and I have a state space of my actions, and I have learned all of this. Now, given those states, I can start to form plans for action; I can organize these states along my goals. So I have sequential memories that can give me policies for future action. Now, across these layers, we have columns. If you look at the brain in detail, you have structures that are all dedicated to sensory processing, or states of the world. Secondly, you have a whole set of systems that only process states of the self, like your state of hunger, or your wish to become director of the department once Miguel gives up on that, and so on. Oh, don't do it, he says. And lastly, you have a whole column that is purely about action and motor control. So this is a first-order perspective you can have on any brain, and it gives you a framework to think about how control systems are coordinated and organized. So, here we see the robot. Oh no, this is the iCub robot. Here we go. "Next finger when I move it. Please, thank you. Now I know when I am touching an object with my index finger." This is a project we just finished, called What You See Is What You Did. In this case the human, or the robot: "iCub, what is this object?" So it is learning about itself, and it is learning about the world, by interacting with humans using both non-verbal communication and natural language. So this is the first order in which we test this adaptive layer, right?
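The layer-and-column organization just described lends itself to a small sketch in code. The following is a minimal toy rendering of the three DAC layers, reactive, adaptive, and contextual, under invented names; it illustrates the idea only and is not the SPECS implementation.

```python
# Toy sketch of the three DAC layers (invented names, not the SPECS code).

class ReactiveLayer:
    """Predefined sensor -> action reflexes, e.g. freeze on a loud noise."""
    def __init__(self, reflexes):
        self.reflexes = reflexes                  # dict: stimulus -> action

    def act(self, stimulus):
        return self.reflexes.get(stimulus)

class AdaptiveLayer:
    """Learns new cue -> action associations when a reflex fires
    (reflexes inform learning: 'action shaping')."""
    def __init__(self):
        self.learned = {}

    def learn(self, cue, reflex_action):
        self.learned[cue] = reflex_action

    def act(self, cue):
        return self.learned.get(cue)

class ContextualLayer:
    """Stores goal-directed sequences of (state, action) pairs as plans."""
    def __init__(self):
        self.plans = []

    def store(self, sequence):
        self.plans.append(list(sequence))

    def recall(self, start_state):
        for plan in self.plans:
            if plan and plan[0][0] == start_state:
                return [action for _, action in plan]
        return None

reactive = ReactiveLayer({"loud_noise": "freeze"})
adaptive = AdaptiveLayer()
contextual = ContextualLayer()

# A reflex fires; the adaptive layer associates the co-occurring cue with it;
# the contextual layer can later replay the whole episode as a plan.
reflex = reactive.act("loud_noise")
adaptive.learn("dark_room", reflex)
contextual.store([("dark_room", "freeze"), ("door", "exit")])
```

The point of the sketch is only the division of labor: hard-wired reflexes at the bottom, learned associations shaped by those reflexes in the middle, and goal-organized sequences on top.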
You learn the state space; that is what we are doing here, the robot is learning the state space. Then it can go further. "iCub, what is this object?" So it is saying: this is a duck. "This is a duck. I have understood: this is a duck. I get it, this is a duck." Next step up, we can do action recognition. "How do you call this action?" "You push the brain with your left hand." So Maxime asks how you call this action, and the robot answers: "You push the brain with your left hand." "The brain with the red bone?" Nice. "You want the brown circle." You might find this boring, but what is completely amazing here is that the robot invariantly detects the movement: the human is pushing, not pulling or grasping. So it is an invariant detection of the action; plus it knows the verb that goes along with that movement; plus it has labeled the object. So in this case, this whole architecture I just sketched out is really learning a linguistic labeling of the world, and also using it coherently, in a grammatically structured fashion. "iCub, what then?" On top of that: "What happens next?" "My reason." "Why is that?" "Because I take octopus." "What happened then?" "iCub, what then?" "What happened finally?" "Why do you have the octopus? iCub, why?" "I grasp the octopus because I take octopus." "Why is that?" "Because I want octopus." So this is a major breakthrough in robot history; this will be remembered forever: the first robot that says it did something because it wanted to, because it really wanted to, because of an internal drive system. What actually happened is that the robot was asked by another human to touch the octopus, which is this red object here. The robot has this drive to follow instructions from humans, because it wants to interact with humans: if humans say something, it will try to execute it. But all these experiences are stored in the robot's autobiographical memory. It stores all these events.
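The autobiographical memory being described can be caricatured in a few lines of code: events are logged together with the reason that caused them, and repeated "why" questions walk back from the human instruction to the internal drive. This is a toy sketch with invented names, not the WYSIWYD implementation.

```python
class AutobiographicalMemory:
    """Toy event log: each event links an action to a reason that caused it."""
    def __init__(self):
        self.events = []                      # (action, reason) in temporal order

    def record(self, action, reason):
        self.events.append((action, reason))

    def why(self, action, depth=0):
        """Explain an action; asking again (higher depth) digs one level deeper,
        chaining from the instruction back toward the internal drive."""
        reasons = [r for a, r in self.events if a == action]
        if not reasons:
            return None
        return reasons[min(depth, len(reasons) - 1)]

memory = AutobiographicalMemory()
# Both the human instruction and the robot's own drive are logged as causes.
memory.record("take octopus", "human asked me to")
memory.record("take octopus", "I want to interact with humans")

first = memory.why("take octopus")            # the immediate reason
deeper = memory.why("take octopus", depth=1)  # a repeated "why" digs deeper
```

Asking once returns the instruction; asking again returns the drive behind it, which is the "because I wanted to" moment from the demo, reduced to its simplest possible form.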
The robot knows the events of the world, how they were interpreted, and what it did. And then you can use language to have a discussion with the robot about what it did and why it did it. Now, she asked the robot why it took the octopus; it did not want to do that. But then she asks again, so it starts to dig deeper into its autobiographical memory and ends up saying: well, I wanted to. That was a major breakthrough for us and for the robot generations of the future. So, basically, this project, which builds on, let us say, six years of work in humanoid robotics, all of it done while we were here, maps this whole architecture onto a control structure for humanoid robots, following all the integration principles of the overall theory. This is a real-time control system; it runs as about 50 different processes on a cluster of machines. And if you want, this is really the state of the art in human-robot interaction: there is no other group in the world that can beat us on this level of sophistication of human-robot interaction and the real-time capabilities of the robot. There is plenty that goes wrong, and there are many things we can improve, but we have come a long way. Behind the language generation system are, for instance, reservoir computing networks; this is work very much done by Peter Dominey in Lyon, who has shown that using these recurrently coupled neural networks you can build very reliable sequence processors. These are the networks we use to grammatically structure the linguistic expressions, and also for the perceptual parsing the robot does on natural language. But the discovery here was that, initially, we believed that a robot could report to humans by just labeling its memory structures.
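The reservoir computing networks mentioned above can be sketched minimally: a fixed random recurrent network whose state depends on the order of its inputs, which is what makes it usable as a sequence processor. This is a generic echo state network sketch with made-up parameters, not Dominey's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 3, 50

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))      # fixed random input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))        # fixed random recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1: echo property

def run(sequence):
    """Drive the reservoir with a token sequence; return its final state."""
    x = np.zeros(n_res)
    for token in sequence:
        u = np.eye(n_in)[token]                   # one-hot encode the token
        x = np.tanh(W_in @ u + W @ x)             # recurrent state update
    return x

# The same tokens in a different order leave the reservoir in a different
# state: the network encodes serial order, not just content. A simple linear
# readout trained on such states can then do grammatical sequencing tasks.
s1 = run([0, 1, 2])
s2 = run([2, 1, 0])
```

Only the readout is trained in this scheme; the recurrent weights stay fixed, which is what makes these networks cheap and reliable as sequence processors.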
But we saw that this labeling approach became very incoherent, and what we have now seen is that to get coherent expression from these kinds of robots, you have to think about what we call a situation model, which already gives you an initial, if you want, theory about what humans do and how humans communicate. So this is now, again, an empirical hypothesis that Peter is testing in the lab, putting humans in fMRI scanners. So you see this recurrent coupling between the synthetic and the empirical domain. So effectively, if you want to control robots, in the end what you are building is a synthetic psychology, and that also shows you, as a methodology, that this is actually doing what I wanted to do: we want to understand the functional properties of brains, which are all psychological features; learning, memory, attention are the psychological properties of brains, and that is, in the end, what we are emulating in these robots. Except we are saying that you need to embody those things in a comprehensive and valid way. This has been done through a whole set of different projects that are still running or have recently finished. The projects running right now are socSMCs, on socializing sensorimotor contingencies, WYSIWYD, What You See Is What You Did, and my ERC project on consciousness. In parallel, other people in the lab have been mapping the whole theory to the brain, because I can give my framework behavioral validity by putting it on robots, and it might look reasonable, but how do you test it? The approach we have taken is always to say: this overall integration framework, as such, is not "true". It might work, it might give you proof of concept, but we have to validate it against the real brain. That means validating it against the physiological and anatomical structures you find in brains, so we take specific predictions from this framework and we test them, and I will give you an example of that. Over the years we have published many papers doing exactly that.
A recent one was by Giovanni Maffei, who is sitting there, and Diogo is also here, though a bit late, but that is not uncommon; there we brought a lot of these subsystems together. And that, again, is many years of work: you first develop these components and then you have to integrate them. So we are advancing this whole program of understanding the brain along two tracks: integrated, system-level, behavior-oriented models, and, in parallel, very much neuroscience-oriented models. And all these models have been tested on different kinds of robots. We use navigation robots, we have operating robots, we have dancing robots, we have music; we have a noisy set of videos. But just to show you: in all cases we do this behavioral validation of models with robots, because brains control action. So if you want to build a brain model, you have to link it to the constraints that come from being in the real world and acting in the real world. If you believe in information processing, you can do, let us say, some information theory and make pretty pictures in MATLAB; but if you talk about action, it means real-world, real-time, embodied, and you have to deal with all the dirty and challenging aspects of actually doing that. So let us see how we test some of these hypotheses. This is a visual depiction of how the theory proposes that decisions are made in the brain. Again, this is not necessarily true; this is just the way we think about it. Basically, we propose that as the robot acts in the world, and you earlier saw the robot talking about its autobiographical memory, it stores events and actions as conjunctive representations. This is a fundamental assumption. We say: look, whatever you do, you are always chunking together your actions and states of the world. That is the primitive representational state. They are not separate; they are integrated from the beginning.
So these chunks go into a transient short-term memory buffer, and as soon as I reach a goal state, this little flag here, I copy the whole buffer into a long-term memory system, retaining the order of the sequence. Now, once I have that sequence, I can do two things. I can ask which of the sensory states I stored, the red circles, match what is going on in the world; that means I recognize something. But when I execute actions from memory, I also make predictions about the future of the world: I have forward models, I make predictions about the world, and I exploit these predictions to sequence, or chain, through these memory systems. What we have shown is that from a Bayesian perspective this is optimal, so in that sense it is good, and it is able to control the robots you saw earlier. But what about these conjunctive representations? That is a fundamental assumption, and one of the things we wanted to test against the brain. So we went to the hippocampus. The hippocampi are like two little bananas that sit here in the temporal lobe, on the side of your brain. Epilepsy patients often have problems around this area, and if you get Alzheimer's disease this is often the first area to go. And why would we look at the hippocampus, which of course has been well described since Ramón y Cajal? It is because of our friend John Lisman, a close collaborator and also a speaker at BCBT, because John had the idea that maybe we would find a trace of these conjunctive representations in this structure. So what do we know about the hippocampus at a global level?
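The sequence-memory scheme described above, conjunctive (sensation, action) pairs held in a short-term buffer and copied in order to long-term memory when a goal state is reached, then used for both recognition and forward chaining, can be sketched as follows. This is a toy illustration under invented names, not the actual model code.

```python
from collections import deque

class SequenceMemory:
    """Toy sketch: conjunctive (sensation, action) chunks buffered in STM and
    copied, in order, into LTM when a goal state is reached."""
    def __init__(self, stm_size=10):
        self.stm = deque(maxlen=stm_size)     # transient short-term buffer
        self.ltm = []                         # list of (goal, ordered sequence)

    def step(self, sensation, action):
        self.stm.append((sensation, action))  # conjunctive chunk, never separate

    def goal_reached(self, goal):
        self.ltm.append((goal, list(self.stm)))   # copy buffer, retain order
        self.stm.clear()

    def recall(self, sensation):
        """Match a stored sensory state (recognition) and chain forward from it
        through the remembered actions toward the stored goal."""
        for goal, seq in self.ltm:
            for i, (s, _) in enumerate(seq):
                if s == sensation:
                    return goal, [a for _, a in seq[i:]]
        return None

mem = SequenceMemory()
for s, a in [("corridor", "forward"), ("junction", "left"), ("cue", "forward")]:
    mem.step(s, a)
mem.goal_reached("reward")
```

Recognizing "junction" then retrieves the rest of the stored action chain leading to the goal, which is the chaining-through-memory idea in its simplest form.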
First, the hippocampus is linked to the cortex, which sends inputs over the perforant path to the dentate gyrus; it is called the dentate gyrus because along the longitudinal axis it looks a bit like teeth, not completely regular. From there, the mossy fibers send information to cornu ammonis 3, the CA3 area, which is, if you want, the core memory reservoir of this structure. That projects to CA1, which you can think of as a sort of readout system: it reads out the memory reservoir and sends information back to the cortex, closing the loop. And John was saying: look, maybe in this loop we can find evidence for these conjunctive representations. Well, what does the hippocampus do? Here we have animals in a T-maze; this is work by Adam Johnson and David Redish. This rat is supposed to get a reward on either side of the T-maze, and the rat does something completely amazing, and it just happened. I don't know if you noticed; did you notice anything strange?
Well, it is this: on lap 4. The first laps are more ballistic, the animal just goes left or right, but on lap 4 it really stops; it looks left, it looks right. That is called vicarious trial and error, and it is as if the animal is visually inspecting the environment to understand where it has to go, because it is gathering evidence. This behavior was already described in the 1930s by Tolman, the cognitive behaviorist, who identified it first; he is also the person behind the notion of the cognitive map of the hippocampus. He was really thinking about animals not as automata responding to stimuli, but as autonomous cognitive entities, as agents. So he was already speculating then that this might have something to do with internal simulation of the task. And that is the amazing thing that David Redish established, because here they are measuring from the CA1 area of the hippocampus, and every little rectangle you see flashing is the response of one specific neuron in that structure. This work is already 10 years old now, but it is completely amazing. The circles give you the position of the rat, and this measurement is taken exactly at the point where the animal stands still, looking left and right. What you see are so-called forward sweeps: when the rat stops here, you see neurons correlated with this position in space become active, and subsequently neurons related to positions further along the track become active, before the animal is physically there; only after that does it make its decision. So this is seen as a form of mental time travel: the hippocampus is simulating future states to inform behavioral decisions. We want to understand exactly how that works, and whether it can inform us in any way about this fundamental assumption about conjunctive representations: does the brain form conjunctive representations or not? So over the last 10 years we modeled all these different structures, from the inputs in the
entorhinal cortex to the dentate gyrus, CA3, CA1, etc., and here you see some examples. This is one of our mobile robots; here we simulate the famous grid cells, discovered by Edvard Moser, also one of our speakers. He got a Nobel Prize for that work; he'll be speaking again this September. What is so special: here you see the response of a single grid cell, now in simulation, but you have multiple response fields in the environment. They like multiple x, y positions, arranged in triangular shapes with different orientations and spacings. Grid cells project their information onto place cells, further down in CA3 and CA1; those really like one location only, and that's why they're called place cells. So we simulate those cells; we know how to do that, we know how to run them in real time. And now we're going to test this idea that John came up with. If you look at the inputs to this hippocampal structure, there's something really outstanding anatomically; that was John's point. The inputs come from two layers, if you want, in the cortex that projects into the hippocampus: a layer at the outside, lateral, and a layer towards the inside, called medial. The neural responses in these areas do something very distinct. Neurons in the lateral area at the outside are more linked to states of the world, like sensory events, olfactory events, sounds and so on, while the neurons that are more medial are the famous grid cells, the cells you just saw; they like position. So John was saying, look, maybe this is indicative of the integration of sensory states and action states, conjunctive representations, in the hippocampus. So let's see if this is true. That's what we did: building a model of this whole structure, building on our previous work. We captured all the basic anatomy, as far as it is known, and the physiology.
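To give a feel for what such a simulated grid cell looks like, here is a minimal sketch, not our actual model: the standard three-cosine idealization of a grid field, in which summing three plane waves rotated 60 degrees apart produces exactly the triangular lattice of multiple firing fields just described. The arena size, field spacing, and phase are illustrative assumptions.

```python
import numpy as np

def grid_cell_rate(x, y, spacing=0.5, orientation=0.0, phase=(0.0, 0.0)):
    """Idealized grid-cell firing rate at position (x, y), in metres.

    Sum of three cosine gratings rotated 60 degrees apart: the peaks form
    a triangular (hexagonal) lattice with the given field spacing and
    orientation, i.e. the cell 'likes' multiple x, y positions.
    """
    k = 4.0 * np.pi / (np.sqrt(3.0) * spacing)  # wave number for this field spacing
    rate = np.zeros_like(np.asarray(x, dtype=float))
    for i in range(3):
        theta = orientation + i * np.pi / 3.0
        u = np.cos(theta) * (np.asarray(x) - phase[0]) + np.sin(theta) * (np.asarray(y) - phase[1])
        rate = rate + np.cos(k * u)
    return np.clip(rate / 3.0 + 0.5, 0.0, 1.0)  # squash into [0, 1]

# firing map over a 1 m x 1 m arena: multiple discrete firing fields appear
xx, yy = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
rate_map = grid_cell_rate(xx, yy, spacing=0.4)
```

A place cell, by contrast, would be a single bump at one location; summing many such grid cells with different spacings and phases is one common way to obtain place-like responses downstream, though the model discussed here is considerably richer than this toy.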
convergence rates of cells and so on were all captured by César Rennó-Costa. But now you need a benchmark: how do you validate such a model, how do you test this idea? Well, there was actually a conundrum in the field at that time, not resolved until we finished that model, and it was called rate remapping. Why this was a conundrum: people believed that this memory system of the brain, the hippocampus, works like an attractor memory, like a Hopfield network. But if you have an attractor memory system, you have rapid transitions: if I move from attractor A to attractor B, for some time I am in the basin of attraction of one or the other, and at some point I just flip over into the other one. Gradual transitions don't exist in an attractor memory system. But people did experiments, like the Mosers and others, where they smoothly varied the environment, so if you want, going from one attractor to the other attractor with steps in between. If you now look at the correlation of the population, meaning you measure from all the neurons in an area, whether the input area or CA3, this memory structure, and you look at the correlation between this vector of responses across the different environments, then if it's an attractor memory the correlation should stay the same for a while and then show an abrupt transition. But that's not what they found: in both structures, the memory reservoir and its input, you actually have a gradual transition. What this looked like from our perspective is: as I change the world, I change the sensory information to this memory system, and that modulates the response gradually, not rapidly. In other words, action from the grid cells and sensory information from the lateral entorhinal cortex are merged in this memory system in conjunctive representations. So we ran the model, and what we show here is that if you mix these two inputs, sensory input versus action input, with a mixing factor of 30/70, that
means 70% is sensory and 30% is action-related information, you can exactly recover the correlation structure of this population vector. This confirms that when a neuron fires, when a neuron spikes in this area, it's not telling you one thing; it's telling you two things at the same time. It's conjunctive: it tells you something about a sensory event in a certain location. So then the Mosers, and Edvard will be talking about this in September I guess, he was already on the shortlist for the Nobel Prize by then, so I don't think it was this experiment that did it, but he directly tested our prediction. The prediction is: if you lesion this sensory area, then this rate remapping should be reduced. So that's what he did. In blue we have the control animals as we morph the environment, and in red you see animals where this area that provides the sensory information is lesioned. And indeed you see that this decline of the population vector correlation is diminished: if you remove the sensory information, this effect of the conjunctive coding is minimized, confirming our hypothesis. So here you see how we go full circle: we had an abstract system-level model, tested on our robots, that made this fundamental assumption about conjunctive representations; with that we went to a computational neuroscience approach, looking at the hippocampus; and then it was tested in the lab by Edvard and his co-workers. So this is César; here we're playing in Razzmatazz, you might not believe it, but we did, and this is André Luvizotto, who also worked on this project; they're great guitar players. So now we can go a step further. We understand the conjunctive representation in the system, but what we want to explain is this VTE behavior, the vicarious trial and error: can the robot do the internal simulation, can we account for these internal simulations with this model? This is the work by Giovanni and Diogo; this is their favorite robot, and we have a running commentary.
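The morphing benchmark just described can be caricatured in a few lines, on random response vectors rather than our actual model, so every number here is an illustrative assumption: a pure attractor readout holds the population vector correlation flat and then collapses abruptly at the attractor boundary, while a conjunctive code, here a 70/30 mix of smoothly morphed sensory drive and cue-independent grid drive, degrades gradually, which is the rate remapping signature.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 500
sens_a, sens_b = rng.random(n_cells), rng.random(n_cells)  # sensory drive in env A and env B
grid = rng.random(n_cells)                                 # action/grid drive, cue-independent

def popcorr(u, v):
    """Correlation between two population response vectors."""
    return float(np.corrcoef(u, v)[0, 1])

morphs = np.linspace(0.0, 1.0, 11)   # environment A smoothly morphed into B
ref = 0.7 * sens_a + 0.3 * grid      # conjunctive population response in environment A
gradual, abrupt = [], []
for m in morphs:
    sens_m = (1 - m) * sens_a + m * sens_b
    # conjunctive code: 70% sensory, 30% action; tracks the morph gradually
    gradual.append(popcorr(ref, 0.7 * sens_m + 0.3 * grid))
    # pure attractor: population snaps to whichever stored pattern is nearer
    abrupt.append(popcorr(sens_a, sens_a if m < 0.5 else sens_b))
```

`gradual` declines smoothly across the morph sequence, while `abrupt` stays at 1.0 and then drops in a single step, which is what an attractor memory predicts and what the recordings did not show.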
I think we're speeding up, aren't we? So here we have the trained agents. Now it's checking the environment, it's taking in the perceptual evidence, and it says, OK, I like red Lego blocks, so that's where I'm going, and it gets to work. OK, how does that work? Well, as I showed you earlier, we have been mapping the whole DAC theory to the brain in great detail, with special emphasis on the cerebellum, hippocampus and prefrontal cortex. So the question now becomes: if I embed my hippocampal model, as I just showed it to you, in this broader control structure, can it really serve this purpose of internal simulation? Does the robot then also perform internal time travel, does it imagine its world to inform planning? Here we have the naive robot, driven by the model you just saw combined with additional models of the prefrontal cortex, cerebellum and the other structures, and this is the expert robot. This is the home base, these are the rewards it likes. At first it's rather lost, it doesn't yet know how to plan this behavior, and later it gets rather efficient at it. So here's the environment: the robot starts in the home base and visits different positions that I indicate with these colors, P1, P2, P3. Then we can go to its memory reservoir, where all the neurons are laid out on a map to represent them: these are the neurons that respond in position 1, these are the neurons that respond to position 2, and these are the neurons in its memory reservoir that respond to position 3. Now the question is: are these cells being addressed in some systematic way, through the lateral coupling that they have, to perform the sweeps that David Redish is talking about? And that's exactly what we show here: if I start from these initial positions, marked here by the response of my place cells, and propagate activity through this memory reservoir, triggered by the goal positions, then the subsequent memory response dynamics in the map trace out almost linear trajectories that take you from the initial positions to the goal positions.
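As a cartoon of that mechanism, and only a cartoon, here is activity spreading through a laterally coupled place-cell sheet: a bump of activity at the current position is repeatedly blurred through local lateral connections and gated by a broad goal cue, and the decoded activity peak then sweeps almost linearly from the start position to the goal without the agent moving. The sheet size, the 3x3 coupling, and the hand-made Gaussian goal cue are all illustrative assumptions, not the model's actual learned connectivity.

```python
import numpy as np

SIDE = 40                                  # place cells laid out on a SIDE x SIDE sheet
ys, xs = np.mgrid[0:SIDE, 0:SIDE]

def bump(cx, cy, sigma=2.0):
    """Gaussian bump of activity centred on place cell (cx, cy)."""
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

def sweep(start, goal, steps=60):
    """Propagate activity from `start` toward `goal`; return the decoded peak per step."""
    act = bump(*start)
    goal_cue = bump(*goal, sigma=8.0)      # broad input injected at the goal
    path = [start]
    for _ in range(steps):
        # lateral coupling: each cell averages its 3x3 neighbourhood...
        act = sum(np.roll(np.roll(act, dx, axis=1), dy, axis=0)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)) / 9.0
        act = act * goal_cue               # ...gated by the goal cue
        act = act / act.max()              # keep activity bounded
        py, px = np.unravel_index(np.argmax(act), act.shape)
        path.append((int(px), int(py)))    # decoded (x, y) of the sweep
    return path

trajectory = sweep(start=(5, 5), goal=(32, 30))
```

The decoded peak steps from (5, 5) toward (32, 30) along a nearly straight line, the kind of sweep described above; in the real model the propagation runs through learned CA3 lateral connections rather than a hand-made cue.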
So we can recover these sweeps, and we can exactly recover the kind of time travel we see in the rat. That means that underlying the sweeps David Redish already revealed 10 years ago are actually these conjunctive representations, chunked together in the CA3 memory reservoir and coupled through their lateral connections, and this model has pulled that out very distinctly. There are many other predictions that came out of this model that we are still following up; it's been a very exciting piece of work, but there is still a lot to do. Now, another piece of our work, and this is very preliminary: this is work that Daniel is doing, and Giovanni and Diogo and Ricardo and many other people in the lab are involved. We're bringing this hypothesis, this set of ideas on the brain and especially the hippocampus, to the clinic. This is the work we do with Rocamora and Conesa at Hospital del Mar. They have epilepsy patients; here we see Diogo with one of the epilepsy patients and the experiments we run. We now enter the clinic with the idea: OK, let's try to understand how this mental time travel of the hippocampus could engage with the rest of the brain, and how this is modulated by the state of the agent, for instance when I'm actively exploring, like the rat or the robot, versus when I'm just passively exposed to a world, as Daniel did in this case. So we have this patient; here you see the implants of the electrodes. We can have close to 200 contacts; in this case, Daniel, how many is it? 100? OK, so 100 active contacts. This is what Daniel built: exactly what you saw the robot do earlier, right? Someone is navigating in space, and in certain locations we give stimuli. So you see what we're doing: we're pushing both these channels, of sensory information and action information, because we know what they do in this system. Then what Daniel does is look at, of
course, performance: how well do people remember these kinds of locations? This is work in progress, so what I'm going to show you is extremely preliminary, but it's indicative of the approach we take towards these questions and of the kinds of results we're getting. This is a very coarse representation of all these contacts that we have in the brain, and we just look at their correlation, simply in terms of the amplitude of the response. It's a very primitive way to look at this, and we are way beyond that by now, but it gives you a bit of a feeling for how we go about this problem. Here we're looking at when people have agency, so they are controlling their actions, versus no agency. The yellow spots show a higher correlation when the human is in charge, when you have agency, and the blue spots show what happens when you have no agency. The main observation, and this is being elaborated now, is that we see a very different kind of coupling of this whole memory structure we just looked at to the rest of the brain under agency conditions as opposed to non-agency conditions: a very effective coupling of this memory system to the rest of the brain when you're acting, and a decoupling when you're not acting. That is, roughly, a top-level interpretation of it. This is another way in which you can represent the data: this is our brain cube environment, which also runs in our eXperience Induction Machine. Here you see the electrodes in the brain of the patient; we can replay all the data we have recorded, together with the stimulation conditions, and here we also display the main network that jumps out at the level of the correlation between these contacts in the brain, which we can then of course inspect and analyze further. This is work in progress, but you see the trajectory we're following: there's a general hypothesis on the structure; we test it in robot experiments, computationally; we test it against the animal literature.
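Schematically, that amplitude-correlation analysis looks like the following, on synthetic data rather than patient recordings, which I cannot reproduce here: per-trial response amplitudes at each contact, correlated pairwise in the two conditions, with the difference matrix showing where coupling between the memory-system contacts and the rest of the brain increases under agency. The contact count, the split into "memory" versus "other" contacts, and the effect sizes are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N_CONTACTS, N_TRIALS = 12, 400
MEMORY = slice(0, 4)             # pretend contacts 0-3 sit in the memory system
OTHER = slice(4, N_CONTACTS)     # remaining contacts, rest of the brain

def response_amplitudes(agency):
    """Toy per-trial response amplitudes: under agency the memory-system
    contacts share a common drive with the rest of the brain; without
    agency they are decoupled from it."""
    amp = rng.standard_normal((N_CONTACTS, N_TRIALS))
    shared = rng.standard_normal(N_TRIALS)
    amp[OTHER] += 0.7 * shared
    if agency:
        amp[MEMORY] += 0.7 * shared
    return amp

corr_agency = np.corrcoef(response_amplitudes(agency=True))
corr_passive = np.corrcoef(response_amplitudes(agency=False))
# positive entries: contact pairs more strongly coupled when the patient acts
diff = corr_agency - corr_passive
memory_to_rest = diff[MEMORY, OTHER].mean()
```

In the real analysis the contrast is computed per electrode pair and then inspected anatomically, which is what the yellow (stronger under agency) and blue (weaker) spots encode.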
and now we're in the clinic, testing it on what we get from human patients, which is fantastically complicated and extremely informative, because what it looks like is that everything we thought we knew about the brain is actually out of the window: brains, at least human brains, seem to operate very differently than we had expected. So that's cooking right now. And actually at Del Mar we're now moving towards going single-cell. These are local field potentials, the averaged responses of groups of cells; in a few months' time we're going to have really single cells to measure from in these structures, and that will again open a whole new universe of analysis, which will be absolutely fantastic. So time-wise we're not doing great, but we made some progress. I don't know who's running the show here; does anyone run the show when Miguel is not around? I'll take the questions. OK, so how much time, when do you want to start with questions? We started a bit later, you see, but I don't want to overdo it. OK, now the point is: what we really looked at so far is the interaction between an agent, an organism, and a physical world, and that's only part of the story, because I'm not talking to the chairs, even though it might feel like that to you. I believe I'm talking to other agents, to other humans, and I think that's again a whole different ballpark, a very different set of processes we've got to think about, and that's also where my consciousness story comes in. Because look at what happens in the Cambrian: here, the most sophisticated animals were tiny worms, OK, and by here all the body plans we know had evolved, together with their brains, or their precursors, like for instance the lamprey that you just saw. So what's special about this, why did this happen so quickly? Well, in my mind, and that's also how the DAC theory is evolving, what happened is that at this point in time agency became a problem: at this point
in time, brains had to deal not only with chairs and projectors but also with other agents, and that requires a very different kind of operation. Why is that? Because other agents are pursuing their own goals, but they don't advertise them to you. You want to get out of here and have lunch, right? But you're not telling me that in any way; you're not advertising these internal states. So this is the whole problem of other agents: we have to infer what their states are. In order to adapt your behavior you must have a model of other agents; in other words, when I'm in an environment with many agents, I must run many models in parallel. And that's where, for me, consciousness becomes an issue, which was the second part of the talk, which we will keep very short. Because, OK, agents run on the basis of internal states and in terms of hidden norms, right? No one will jump on a chair right now and sing whatever song you like to sing today; it's just not happening, because we follow certain norms. But the norms are also not advertised; they're implicitly present in our behavior. We have to infer them; we have to have theories about our culture. And actually this goes back to what we earlier called the situation model. I told you the robot uses a situation model to structure its narrative, and the situation model is actually structured around these norms, because it tells you how we communicate with each other. OK, so let's skip the bonobos for now, and the bees. The core theory we are pursuing around consciousness is that consciousness is the process that helps you to optimize this norm extraction, and I will not go into the details of it. While here are Francis Crick and Gerald Edelman: I worked with Gerald Edelman, who was a complete bastard, but also a brilliant bastard, and I learned a lot being there, good and bad. He was of course in competition with Francis Crick, who was across the road at the Salk Institute; I was with Terry Sejnowski, where Francis Crick was also hanging around a lot.
The idea of Edelman was that to understand the brain and consciousness we need theory, and I'm on that side; I'm with Edelman on that. Francis Crick was exactly the opposite: he was saying no, no, all we need to understand the brain is data; what we need is the neural correlate of consciousness. If you want, that's behind all these approaches where people just collect a lot of data about the brain and then hope that the correlations will tell them something about the processes going on in such a brain. But as you can see, we are very much on the theory side of things, like Edelman. Edelman was also the very first to introduce robots as science tools in neuroscience, in the late '80s; that's also why I went there at the time. If you can find it, there is a very nice BBC Horizon documentary made in 1993 about this whole group of people trying to realize and test Edelman's theories. Anyway, I will do this quickly. There are a number of theories of consciousness, ranging from embodiment to sensorimotor coupling to sensorimotor predictions to differentiation and integration to a global workspace. Most of the people listed here have also been speaking at BCBT over the years. But actually all of this is already accounted for in the theory I just showed you. Take this hippocampal structure we looked at: well, it's embodied; it does sensorimotor stuff, conjunctive representations; it makes predictions, mind travel; and it does integrate, right? It integrates lots of information in its memory reservoir, which you can consider a global representation of states of the world. So the dominant theories seem to be missing the point; they are not making sense yet. But we have a suggestion here, together with Patrick Haggard, who will be speaking in September at BCBT, and it is this: if you look at conscious states that are reportable and action states that are measurable in the brain physiologically, they don't line up in time. So if humans report they will
move, and that's what Benjamin Libet discovered in the early '80s, and you measure the motor potentials with EEG across the motor cortex, you see the motor potentials already start to change long before the report, and the difference is up to half a second. So then the statement is: well, if consciousness is so slow, it can be of no use, because it doesn't relate to behavior. And then people like Dan Dennett would say, you see, it's an epiphenomenon, and people like Daniel Wegner would say, you see, there is no free will, because it runs behind, it's delayed. And that's what the DAC theory has made sense of, and I will just give you a quick view of that. The point is, we have this unified scene of consciousness that evolves with time; it's continuous, the stream of consciousness as William James calls it, but it's delayed with respect to real time. How do we make sense of that? And why is most of this neural hardware that we have also outside of this window of consciousness? That means the actions we generate are not being tracked consciously; we are not aware of what we do in real time. Why would that be? Well, conceptually, and this is what we have been testing in many different ways, and we can go through that in some other talk, this exactly fits the whole problem of making sense of a world that is filled with intentions. Because intentions don't advertise themselves to me: I have to infer them, and I have to infer them in parallel. But if I have a parallel set of forward models, I still have to optimize them relative to my own behavior, which is singular. So there is this singular representation of these virtualized world states: I imagine that you're here, I imagine you're conscious, I imagine you're intelligent, I imagine that you understand what the hell I'm saying; but these are hypotheses about the world, not physical fact. This is exactly what consciousness does: it is the hypothesis that allows parallel intention tracking. But parallelization requires, again, valuation: I must test that my models that are
running in parallel are correct, that they are valid. So how do I do that? Well, we have one example: we built a neuroprosthetic system for the cerebellum, and if I were to build a human cerebellum with my neuroprosthetic chips, the stack would be about 150 kilometers tall. That gives you a feeling for this optimization problem: you have parallel forward models that you have to optimize. How do you do that? It's a problem of massive complexity. So the counterpoint I'm making here is that consciousness is a system that helps you to monitor your real-time performance. Real time is all parallel and subconscious; the conscious world state is delayed on purpose, because it's a monitoring and valuation system that helps you to optimize your parallel control. That means if you speak about free will, also free will for robots, it's not free will as in real-time control; it's free will with respect to future control. This is the big distinction between the extant theories of consciousness and what I'm proposing, and this is what we're also testing on our robots. And now to jump ahead a long way, because I had a lot of really convincing evidence, so all of this is correct, and now we're going to miss it because the projectors just didn't do their job as they should. So this is the idea: I project myself into the future, and that's also what we're testing on these social robots. That's why we work with humanoid robots: to be social with other humans they must do intention tracking; they face the same problems. So we have a model of that, extensions of the DAC theory. We are also working on linking the whole-brain models that we built, which I showed you, to these humanoid robots, because I believe the brain is a controller, and if we're going to control robots of the same complexity as brains, the tools we use for this will be converging: the
tools we use to analyze the real brain, to simulate the real brain, and to control complex robots like the Walkman I showed you earlier are going to converge; they're going to be the same set of tools, and that's what we have been building towards for the last years. Now we look at big-data problems in this field as simulation-based, or model-based, integration of complex data. So this is the future; this is what is in those proposals I started out with. Overall, we have the theory; we applied it in neurorobotics, as you saw, and in neuroinformatics, as you saw, in analyzing complex brain data; we applied the theory in neuroprosthetics and in neurophysiology, on our epilepsy patients; and we translated the theory to the clinic. By now we have treated over 800 patients with the Rehabilitation Gaming System, which is the best neurorehabilitation system you can get for functional recovery after stroke anywhere in the world. That's work done also with Belen and Claudia and Martina and many others here in the room, and we're also spinning this off in our company, Eodyne, which is taking off now. We have translated the theory to neuroaesthetics: this work in neurorehabilitation came straight from exhibitions we have built. Doing art is very useful to test ideas that feed back into your science, so we do that. Neuroeducation: there are also Vicky and Maria, I don't know where they're hiding, they just came back. We're running experiments continuously in a number of schools here in Barcelona, bringing robots based on the DAC theory to the classroom. Not to teach kids about robots, I don't care about that; we want to improve education in general. Because I was complaining about robots, I was complaining about the health system, and I can tell you the education system is also broken, and we have to bring technology to primary and secondary schools to really improve the quality of education. Right now they're great as youth prisons, but kids don't learn a lot; we have to improve that. So we're doing that. The kids that we now look at
in our experiments, 200 or something? Oh, they went back to school, OK. Anyway, so this runs. And then neurosurgery, where we're bringing all this infrastructure we've built for brain analysis to the surgery room, trying to build real-time data capture and intervention systems for neurosurgery and epilepsy intervention. So: eudaimonia with machines, the good life with machines, is what we've been trying to realize over the last 10 years that we've been here at UPF, and we've made quite a bit of progress on that. The RGS is one of the examples; well, we can skip that now; it is being commercialized, we have traction in the market. And how I see the future, which also relates a bit to this move to IBEC: I think these challenges we're facing in the human condition are very deep, OK, and I feel it's important that scientists, researchers, tune themselves to those challenges. There are two forms of science. One form of science is: you have a bunch of people in a room and they all try to be the smartest guy in the room. It's very boring. I worked with Nobel laureates, we have them coming here, so I met lots of smart people; it's not a very interesting game. And yes, of course, this is a way to survive; they review your papers, they review your grants; that's the commercial activity we're in. But I think the days of Galileo are over. We really have to rethink how we define ourselves as researchers and scientists, and I believe we have to redefine science as changing and improving the human condition. That's what this is all about, in my opinion; that's why we are in classrooms, that's why we are in clinics. Because I believe we have to develop a whole new generation of technology, you might call them humanitarian technologies, that directly impact the improvement of human well-being. This is often not where the money is, or the Science and Nature papers, but I think if science does not prove itself in the real world, we're going to die a slow death, because we're just irrelevant. OK, and the move to IBEC
helps me to push that program on a larger scale, and to have the political support to elevate that ambition; so that's why I decided to move. For instance, you can look at the Sustainable Development Goals of the United Nations. This is what the countries of the world have identified as the main challenges for humanity, and that's where we want to contribute, because we believe we can do better science than what I showed you by actually solving real problems; solving real problems feeds back into your basic science, as long as you solve them based on first principles. And that's what specs is all about. Thank you very much. Hi, regarding the consciousness findings, how do you, perhaps I missed that, how do you feed them into your model? At the top level? The sensory and the reactive level is at the bottom, so do you have to change your whole model a lot? No, I really see it as an additional transient memory buffer that exists at the contextual level. Remember we talked about autobiographical memory, which is a top-level memory system; I see it as very closely affiliated with that. So we're modeling that, Sork is modeling it and Jordi is also involved; we're looking at how very core brain systems around the thalamus and the cortex maintain such a transient memory buffer. So it really resides at this contextual layer; that would be the current idea. We don't have to mess up the whole architecture to solve it. Is it basically more of a logical architecture, mapping different functions to different regions of the brain, or is it also a physical architecture? It's a physical architecture, absolutely, because, as I showed you, we bring this down to real brain mechanisms. Also with consciousness we do work, Martina is doing that in particular, on neglect patients; these are deficits of consciousness that we try to model in terms of these transient memory systems, as in what Sork is doing. So it's a
real thing. Oh, Martina, I'll show you, actually. So it's not just a logical architecture or a proposal; it's really thinking about mechanisms. And the fundamental idea is that as soon as you wake up, the thalamocortical system becomes active and starts to resonate, and in this resonant structure of the thalamocortical system sits a transient memory: you switch off the thalamus, the memory is gone. This is now this virtualization memory of consciousness, I believe. We have fMRI work to support that, we have psychophysical paradigms to support that, and we are running experiments on these epilepsy patients, which is what Ricardo is doing with Diogo, to test exactly that idea. But indeed, you're right: if we had to throw away the whole architecture to accommodate this component, there would be a problem with the theory, and we must be willing to consider that option too; I agree with you. However, then of course you can still say, yeah, but how about the hard problem: does the robot really feel, does it really feel what it is to be awake and conscious? Well, at this stage, no; at this stage the robot doesn't really feel, because we still have not managed to realize this transient memory structure in these robots. But if we have it, the robot will feel; that will be the proposal. Sure, you can object to that; that's right. I call this qualia parsing: basically it means we can interpret subjective states, but we can only do that in artificial systems, because to interpret subjective states you must have a full record of all the experiences of the system, and that's not possible with biological systems, but with artificial systems it is. So you're right, it's a matter of time, but that's an assumption; so let's see, you can prove me wrong, that'll be good. No? OK. So in terms of some of these concepts that I see over there on the screen, like peace, justice and equality: most of those things sound quite abstract to me, and probably to many humans, so how do you think that you can merge, how you can make a
robot, well, how can you try to merge robotics and these kinds of abstract things that maybe we don't even understand properly? Well, first, you said they're too abstract to you and probably to everybody else; that's quite a generalization, right? To you it's abstract, and that's fine, I agree with that. Psychologically we often use this other-as-self model: we always interpret the other in terms of our self, but it isn't necessarily a correct generalization. Actually, we just had a conference at Amrita University in India exactly on this issue. If you think about peace and justice: a massive problem in peace and justice missions is observation and monitoring. We don't have enough humans to monitor whether the rebels in the east of Ukraine are shooting or not, and when we send humans there, they get killed; a few days ago a car of European observers drove onto a landmine and they were killed. OK, machines can do this really well, efficiently; they can monitor. So we have a huge contribution to make to justice and peace. You can take your pick on any of those items. Poverty: poverty is a huge problem in the developing world, largely because there is no, let's say, local economy. Amrita University tries to contribute to that by going to small villages in India and teaching, especially, women; this is also part of women's empowerment, which is the gender equality item here. They teach groups of women to construct toilets, and this is a brilliant idea, because you solve a hygiene problem, you give people skills, and then they start their small businesses. But we can go beyond that: you can bring technology to these sites for people to improve education, health, what have you, without elaborate training. But then you have to build business models around that; you need models of sustainable exchange to make it work, you need to build economic activity around it, and technology can be a real part of that. So at Amrita they built a very simple kind of rice
planting robot, which really increases people's crop production, and that has a massive impact on poverty. So I think the problem is more with our imagination, or in this case with your imagination, than with intrinsic properties of the challenges we're facing; it's up to us to be willing to use our imagination to have an impact there. OK, that makes sense, but so far you accept some of those values as we accept them, or as we think they are right now; do you think that we could take something from robotics, from these fields, to better redefine or understand how we perceive those things, and maybe perceive them in another way, to change the system in some sense? Absolutely, and that's really a good point, because we already see it now, right? Automation and robotics are challenging very fundamental values of our society. Why do we have the discussion on equal income, or universal income? Why do we have that discussion right now? Because automation is taking over jobs, and we already know that, certainly for Spain, certainly for the younger people, many of them will never have a job. So this is the impact of automation and robotics in a negative way, but it challenges us to rethink the structures in which we deploy these technologies. So yes, we face these challenges right now. And you can also think, of course, about enablers: how can we, for instance in India, and we have looked at the same in Brazil, use robots for education? Not to teach people about robots, but to teach people about physics and mathematics and biology and so on, as mediators, to improve teaching quality, because very often in these small communities teachers are not well educated enough to really help these kids make progress, and we can use technology for that. Think the same thing about these remote villages in the third world: people have diseases as well, but there is no expertise to help them with diagnostics
or interventions, and technology and robots can do that. Think about search and rescue. When Fukushima exploded (I actually know the person who is running the whole program of salvaging, or rather isolating, this plant), the Japanese government had invested billions in robotic technologies, but nothing worked. They couldn't send anything into this blown-up nuclear power plant to clean it up: the cables were too short, the robots would get stuck, they did not have the situational awareness, and so on. There are huge challenges in that whole domain, also for search and rescue, environmental monitoring and so on, where technology can really help us. So yes, absolutely.

And is there an application for these robots in justice? Let's say we send a robot, I don't know, a Robocop.

This is really an important point, because what you see now, if we go in this direction, is that we cannot say anymore that we are just engineers building something. There is an intrinsic ethics to what we do, and we all have to be part of the discussion, including the researchers and engineers who give rise to these technologies. So there is a very important discussion to be had there. Actually, the European Parliament has now taken the initiative to start a new agency for the regulation of robots, because they believe this is also a political discussion they have to have. On the other hand, they are proposing that robots can have personhood, which I think is a mistake. But you're absolutely right, this is a key discussion we have to have. The most important thing is that we have to change the game there as well, because if you ever talk to ethicists, and I do, they always say: I'm not going to give you any advice on what to do, I cannot be normative, we will just observe, and when it goes wrong we have to figure out how to fix it. That's a problem. We have to rethink how we approach our ethics in a more proactive way, and there are no frameworks for that. So
there is a very important discussion to be had across technology, science, and the humanities to really think more deeply about these frameworks, and I really believe that we, as the researchers giving rise to these kinds of technologies, must be much more active in that discussion.

So far, in this self-learning process of the robot, is this something it will learn, what is wrong and what is right? I mean, as a baby, when you grow up, you learn that some things are permitted by our society or not.

So, the robot. Actually, these were the slides I skipped over quickly, but they're important. If you look at bonobos, we should all be bonobos: they have a great life, but unfortunately their habitat is rapidly disappearing. These are matriarchal societies, and they essentially resolve all their conflicts with sex. Frans de Waal, who is one of the main primatologists investigating different kinds of primate cultures, if you want, makes the argument that these animals organize their morality around two fundamental principles: empathy, the other like self, and reciprocity, you scratch my back, I scratch yours. He has done many observations showing that if any member of these groups violates those moral principles, they're out, they're in trouble. So you see, there are innate moral principles; they're not learned. These monkeys don't write them down in their statutes or something like that, right? And part of my theory of consciousness is to show that the human brain is unique in the sense that we can rewrite these moral priors. This makes it so powerful and so dangerous. That's part of the theory: basically, I'm saying that cognitive structures can start to overwrite these innate moral principles. That makes us very flexible, but it also means that humans, according to my hypothesis, can be tuned to virtually any moral system, and that makes us very, very dangerous. And that's why reciprocity and empathy do not hold in human environments perfectly, and that's the problem.

Sure. I would like to
thank you very much. You're welcome. Alright, now there's an overdose of cheese, I'm told. Alright, thank you very much.