So, I've been giving public lectures about robotics for probably fifteen years now, and early on I realised that people kept asking me the same question: how intelligent, exactly, are intelligent robots? The subtext, of course, is always: and should we be worried? It's a good question, and this talk tries to address it, at least in the first part. That's the rough outline. The first part of the talk is about what we mean by robot intelligence, and perhaps what we don't mean by it. The second part shows some experimental work I've been doing over the last five years or so which, taken together, amounts to experiments in artificial theory of mind. You might be interested in that too, Elliot.

So, can a machine think? Well, what do we mean exactly by thinking? When most people think about thinking, they typically picture Rodin's Thinker. We tend to regard thinking as a reflective, introspective activity, the very thing you might be doing right now: wondering what on earth this guy is talking about. But actually the picture on the left is just as much thinking as the picture on the right. When you make a cup of tea you're doing a lot of thinking. Most of it, of course, is subconscious; you don't have to think very hard, it's routine activity. But here's the interesting thing: in sixty-odd years of artificial intelligence, we've made plenty of AIs that can do the kind of thinking on the right. AIs that can play chess, or really quite difficult games like Go, better than any human: superhuman AIs, doing one thing. Yet we cannot make a robot that could go into your kitchen and make you a cup of tea. That robot simply doesn't exist. I call it the Winfield tea test, my alternative, if you like, to the Turing test. So thinking is a lot more than the reflective thought of Rodin's Thinker.

So how intelligent are our intelligent robots? We have a kind of folk sense that intelligence goes from not very much to a lot: a general sense that a cat is smarter than a crocodile, and a crocodile in turn is smarter than a cockroach. That might be true, but I'm not sure it's the right way to think about it. And of course we believe ourselves to be the smartest thing on the planet; I'm not entirely sure about that, given the state we're making of the planet and leaving it for our children. Now, where do robots fit? Here is my estimate of where your robot vacuum cleaner fits: nowhere near as smart as a cockroach. In fact I would judge it to be a little smarter, perhaps, than an E. coli. So really not very smart at all. What about this robot? It doesn't matter about the sound on the movie. The point is that if you saw that robot across the other side of a crowded room, this room for instance, you would probably imagine it was a person. Only if you actually interacted with the robot would you realise that it's not; it's a robot. So how smart is that robot? Well, I would estimate probably not much smarter than your washing machine.
And in a sense that reflects a real ethical problem: we can build robots that look very much like humans, but we cannot build the AI to match human intelligence; we're far, far from that. So we can build the bodies but not the brains. I call this the brain-body mismatch problem, and for that reason I don't believe we should be building android robots, robots that look like humans, at least not until we can give them human-equivalent AI.

OK, so that linear scale from not very intelligent to very intelligent is, I think, wrong. I believe there are four kinds, or categories, of intelligence, and I'll very briefly introduce them. The first is what I call morphological intelligence. This is the kind of intelligence you get from having the kind of body that you have, the kind of morphology; morphology is just a fancy word for shape. And it's clear, isn't it, that you and I would have difficulty swinging through the trees, brachiating, like that monkey. We certainly couldn't fly, at least not without a machine to fly in. We just don't have the physical body to do it. In morphological intelligence there's a kind of trade-off between the body and the computation you would have to do if you didn't have the right kind of body. What I mean is this: that little walker, David Buckley's walker, has no brain. It can walk because it has the right physical machinery; it has springs, and it is designed in such a way that there isn't a computer controlling the walking gait at all. The Mars rover has been designed deliberately with the rocker-bogie arrangement of wheels so that it can drive over rocks with no computation at all, zero computation. The machinery, the morphology of the robot, allows it to negotiate obstacles in a way that would otherwise need a lot of computing, if you built it with legs, for instance. So I think that's an interesting, important and often overlooked kind of intelligence.

Another one that's often overlooked is swarm intelligence. Swarm intelligence is the kind of intelligence we see dramatically in social insects; termites, for instance, are remarkable. The picture in the middle is a murmuration of starlings. What we understand about swarm intelligence is that there is no hierarchy. In a termite mound, for instance, there's no brain termite directing the actions of all the rest. Each termite acts completely autonomously and has no idea what it's doing, but the net effect, the emergent, self-organising property of all those hundreds of thousands of termites, is the nest. And here's a bunch of robots that my student, Jan Deere, built. These robots are moving towards a beacon, an infrared beacon over here, so you can't see it. No single robot can get to the beacon on its own; without a number of robots they wouldn't reach the beacon at all. And of course people say to me, well, how does the robot know when it's finished? The answer is it doesn't, because it doesn't know what it's doing in the first place. And that's not only true for social insects and swarm robots; it's also true for every cell in your body. Our cells are, in a sense, a colony; that's the word that's often used.
So we are a joined-up colony, or at least we hope so. The point is that we too are aggregates; we're multicellular organisms, and there's a lot of emergence, a lot of self-organisation, going on inside our bodies as well. So that's my second kind of intelligence.

My third kind, individual intelligence, is perhaps more obvious and less controversial. Here's the thing. The little girl on the left is playing with blocks, and she's learning all kinds of things automatically. It's a process of self-discovery: she's figuring out on her own, individually, how to place those blocks and so on. The robot on the right is doing something very similar. The big difference is that the robot on the right required a team of about a dozen computer scientists, over four years or so of a significant EU-funded research project. At the end of that project, yes, the robot is pretty impressive in what it does, but it only does that one thing. The point, of course, is that the little girl, while she's learning how to stack blocks, has also been learning how to speak, how to play with other children, how to dress herself. Lots of other things have been going on automatically, without anyone having to program her in the way we have to program the robot. So we're very, very far behind: that little robot on the right is the state of the art in individual learning in robots, and we really are very far behind even the kind of learning capability that a young child has.

The final and fourth kind of intelligence I want to suggest is what I call social intelligence, and it's by far the most powerful. This is the kind of intelligence that allows us to learn from each other socially. The most fundamental kind of social learning is imitation, and human beings are extraordinarily good imitators. I think imitation is so powerful that without it I doubt we would have culture and civilisation; imitation is right at the root of all of it. It's actually very hard to do imitation with robots, but I'll turn to an experiment we did a couple of years ago a bit later in the talk.

So we have these four kinds of intelligence. Now let's put them all together. If we draw them on a graph like this, with four axes, I'm deliberately drawing the morphological and swarm intelligence axes vertically, because I think there's some equivalence there, and the individual and social intelligence axes certainly go together nicely on the horizontal. Think of the point in the middle as zero on all of them: if you have more of a particular kind of intelligence, you go further up that particular axis. If we then plot animals, and even a plant, because plants are actually intelligent as well, then I think a crocodile, for instance, has a reasonable amount of morphological intelligence, quite a lot of individual intelligence, there's learning going on, and possibly a little social intelligence. Crocodiles certainly look after their young and at least show their young something; I may be wrong about that. But I don't think there's any swarm intelligence: crocodiles don't herd or flock or shoal or any of those things. If we look at an ant, then of course you have an enormous amount of swarm intelligence, and a lot of morphological intelligence, more, I would suggest, than a crocodile.
Because an ant, even though it's only that big, has an extraordinarily complicated body, and many of the body's capabilities don't need a brain at all. It's well known that you can cut the heads off some insects and they'll still run across the floor, which demonstrates morphological intelligence in a rather gruesome fashion. Ants do have some individual intelligence; there is evidence that ants learn individually. But we don't think ants teach each other anything, so on that particular scale, social intelligence, they score zero. The plant, this is the mustard plant, certainly has some morphological intelligence, because it can move towards sunlight, open and close its petals, things like that. And interestingly there's some evidence of swarm intelligence, in the sense that plants, as has only recently been discovered, can give off chemical markers that signal to other plants that there's a predator about, that kind of thing. Now, this is all very hand-wavy, and I'm not going to apologise for that, because it's just an idea. And of course we'd like to think that we humans are off the scale as far as individual and social intelligence are concerned, and we undoubtedly are. But we're only average in terms of morphological intelligence; I don't think we have as much as an ant, for instance. And we certainly don't have anything like as much swarm intelligence as the social insects. We have some, because we do show crowd behaviour, for instance, but we don't build buildings in the way that termites build termite mounds.

Now, if we ask where robots fit on this graph, it's very revealing indeed. I won't go all the way round the graph, but the thing I want you to notice is that none of those robots has more than two kinds of intelligence, none of them. They all have some degree of morphological intelligence; actually that's not quite true, the e-pucks in the bottom right have essentially zero morphological intelligence, they're very simple robots. The other thing I should point out is that if you were to superimpose this graph on the previous one, all of those measures would sit right in the middle. In other words, the quantitative levels of these kinds of intelligence are tiny, even for these pretty advanced robots, compared with most animals. After all, robots haven't had to survive hundreds of millions of years of evolution, as many animals have.

So I've been talking quite a lot about individual kinds of intelligence, and I've made the point about narrow intelligence. What about general intelligence? Well, the term artificial general intelligence is often used to mean, if you like, human-level intelligence. Humans, not uniquely among the animals on the planet, but rarely, are able to generalise. What we're capable of doing is learning something in one domain and then transferring that understanding into a completely different domain. We do it all the time in language: language is a superstructure of metaphors, and a metaphor is, in a sense, a word for the ability to shift a frame of reference from one set of ideas to another. Another word for that is creativity; creativity often comes about when you move frames of reference.
Arthur Koestler, in his wonderful book The Act of Creation, said the difference between humour, a joke, and creativity is the difference between ha-ha and a-ha. It's the same mechanism. I recommend that book.

So where are we? Well, we don't have artificial general intelligence. I think we are many, many decades, if not hundreds of years, away from being able to build it: an AI that is as capable as you and I, or even as capable as our young children, if you have young children, or grandchildren. Now, there are essentially three ways of getting from where we are now, narrow artificial intelligence, to general AI, and I think Data from Star Trek is a really good thought experiment for what a generally intelligent robot might be like. The three approaches are these. I'll start at the bottom: the bottom one is what's sometimes called whole brain emulation. In other words, reverse-engineer human or animal brains and then build an artificial version of that wiring, the connectome, as it's sometimes called. The big problem with whole brain emulation is this: ask yourself, what's the most complicated animal on the planet for which we've done that so far? The answer is C. elegans, the nematode worm. The nematode worm has exactly 302 neurons and about 3,000 connections between them, and it's the only animal for which we have the complete wiring diagram. So we are able to do whole brain emulation for the nematode worm. It doesn't get us very far, because the nematode worm is not very smart; you can't be very smart with 302 neurons. So I think whole brain emulation is hopeless, frankly.

Another approach is to evolve your intelligence: to use a process akin to artificial selection, Darwinian selection, to artificially evolve a more intelligent thing. I think it's an interesting approach, but I wrote a paper a couple of years ago, in 2014 I think, on the energy costs of evolution, and I think the amount of energy that would be required to artificially evolve something as smart as a human, even if it were technically possible, is so colossal that it's out of the question. You would have to give up the entire energy output of the whole of humanity for a number of years just to power the machine doing that evolution. So that's a problem.

So really we're only left with one approach, which is to design it, to design this artificial general intelligence. The problem is we don't know how to design it. People often say, well, we have lots of computing power. That's true, but just having a lot of computing power isn't enough. It's like having lots of beautiful Italian marble: imagine you had 150,000 tonnes of Italian marble, would you be able to build a cathedral? No, because just having the raw material isn't enough; you need the design, and the know-how to transform that large amount of raw material into a cathedral. The same is true for artificial intelligence. We just don't have the architecture. And I think the gap between where we are now, narrow AI, and where we'd like to be, artificial general intelligence, if indeed we would like to be, and that's another question, is about the same as the gap between present-day spacecraft engine technology and the warp drive. It's about that kind of gap.
And I choose that analogy deliberately, because some physicists think that warp drives, faster-than-light travel, are just about theoretically possible. So I'm careful to choose an analogy with something that's just about theoretically possible, maybe, because I think artificial general intelligence is like that. OK, I'm going to skip that slide because I'm a little bit behind time.

So what I want to do now is show you some experimental work that I've been doing over the last four or five years in artificial theory of mind. Theory of mind is really important to humans. It's the facility we each have to infer the beliefs and intentions of others like us, and it's a critical function; without a working theory of mind you have real problems, and there's reasonable evidence that autism, for instance, involves an impairment of theory of mind. I have a paper which came out a few months ago called Experiments in Artificial Theory of Mind. This is my working definition of theory of mind; the problem is that there are hundreds of definitions, so I had to choose one: the ability to explain and predict the actions both of yourself and of other intelligent agents. And the working hypothesis behind the work in the paper is that simulation-based internal models, and I'm going to tell you what that means, allow us to build artificial theory of mind.

So in a sense this is theory-based, and this may surprise you: there isn't one theory of theory of mind, there are several. Curiously, the first is called the theory theory of mind. The theory theory basically says that the reason you can read other people's minds is that you know the rules: you have a kind of innate sense of how other people might be thinking, and you don't actually have to do very much inference. The simulation theory posits instead that because you have a model of yourself inside yourself, you can use that model to put yourself in other people's shoes: you can use the internal cognitive apparatus you have for modelling yourself to model others, others like you, conspecifics. This is particularly attractive to me, because for a number of years I've been putting simulations of robots inside those robots, and that's what I call a simulation-based internal model.

And this, for the geeks among you, is the block diagram, the architecture of the approach. The thing on the right, from sensory input down to motor commands, is what the inside of a robot normally looks like: it senses the world, then it chooses an action, action selection, which is what all of us do all the time, we have to select the next thing we do, just as I'm selecting the next word I speak, and then it enacts that decision. What we've done is to put in parallel with that something we call a consequence engine, and it allows the robot to predict the future. It has a simulation here: this little bit is a simulation of the robot itself, and this is a model of the world right now.
So it's a simulation of both itself, its body, that's the robot model, and its brain, that's the robot controller, and of the world. It's simulating its own body, its own controller, its own mind if you like, and the world around it. Within that simulation it loops through all of its next possible actions, all of the things it could do next, and looks into the future to try to predict what would happen for each of those next possible actions. It's a bit like playing draughts: you try to predict what would happen for each of the moves you could make, and then you choose the move that you think gives the best outcome in terms of playing the game. So this is our consequence engine. It's a piece of cognitive machinery for predicting the consequences of next possible actions. It was only after I'd been working on this simulation-based internal model for a couple of years that I realised it could be the basis for an implementation of the simulation theory of mind, because essentially this is what we have: a robot that has a model of itself inside itself, and that means it can also model other robots like itself. Not other humans, but other robots like itself.

So we've done a number of experiments. This is one of them; we call it the corridor experiment. Here we have a corridor, and the blue robot is trying to get to the end of it, but the corridor is full of red robots milling around all over the place. The blue robot has a simulation of itself and of all the other robots inside itself, and it's using that to keep away from them. It's like having a very large personal space; the blue robot doesn't want to get anywhere near any of the red robots, so it will even back away, as you can see. Those little filaments, the yellow ones and the light blue one, are essentially the blue robot's predictions of where the other robots will go next, and its decision about where it's going to go next, that's the light blue. So you can, in a sense, see inside the brain of the robot. The red robots don't have this internal model; they're just bouncing around like billiard balls. And just to show you the same thing with real robots: this is the corridor with the intelligent robot and the non-intelligent robots, and this one has to get to this end of the corridor. (I'm having a few problems with the video tonight... ah, there it is.) Keep your eye on the one at the left in the middle, now heading towards the middle. It hasn't really had to do much avoiding until now, but the way is pretty much blocked, as you can see. Eventually, eventually, he makes it. I shouldn't be calling it a he, of course; it's an it. Robots should not be gendered. So there is the corridor experiment with real robots.

Now, here's the thing. I said to my students, wouldn't it be interesting if we had two robots, each with a simulation of itself and of the other robot, walking towards each other like pedestrians on a pavement? You've all experienced that thing where you walk towards someone and need to avoid them: you step to your right, the other person steps to their left, and you're still in the same problem.
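Just to make that consequence-engine loop concrete, here is a minimal sketch in Python. I should stress that this is an illustration of the idea, not the code running on our robots: the motion model, the scoring, and all the names (step, simulate, choose_action, SAFE_DISTANCE) are invented for the example.

```python
# A minimal, illustrative sketch of a consequence engine.
# Every name and number here is invented for the example; it is not the robot code.
import math

SAFE_DISTANCE = 0.5   # the robot's large "personal space", in metres

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def step(pos, velocity, dt):
    """Crude internal model of a body: move with a constant velocity."""
    return (pos[0] + velocity[0] * dt, pos[1] + velocity[1] * dt)

def simulate(my_pos, action, others, horizon=10.0, dt=0.5):
    """Run the internal model forward for one candidate action.
    'others' is a list of (position, velocity) pairs for the other robots,
    predicted here with the simplest possible model: they keep going as they are."""
    closest = float("inf")
    t = 0.0
    while t < horizon:
        my_pos = step(my_pos, action, dt)                      # model of myself
        others = [(step(p, v, dt), v) for (p, v) in others]    # model of the world
        closest = min([closest] + [dist(my_pos, p) for (p, _) in others])
        t += dt
    return my_pos, closest

def choose_action(my_pos, goal, others, candidate_actions):
    """Loop through all next possible actions, predict the consequences of each,
    and pick the one with the best predicted outcome."""
    best, best_score = None, float("-inf")
    for action in candidate_actions:
        end_pos, closest = simulate(my_pos, action, others)
        score = -dist(end_pos, goal)              # prefer ending nearer the goal
        if closest < SAFE_DISTANCE:               # heavily penalise near misses
            score -= 1000.0
        if score > best_score:
            best, best_score = action, score
    return best

# Candidate actions: stand still, or move at 0.1 m/s in one of eight directions.
actions = [(0.0, 0.0)] + [(0.1 * math.cos(a), 0.1 * math.sin(a))
                          for a in [i * math.pi / 4 for i in range(8)]]
others = [((1.0, 0.2), (-0.05, 0.0))]             # one "red robot" coming our way
print(choose_action((0.0, 0.0), (2.0, 0.0), others, actions))
```

The real robots run a full simulator in that inner loop rather than the toy constant-velocity model here, but the shape of the computation, simulate every candidate action, score the predicted outcome, act on the best, is the same.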
And I wondered whether we would get that same pedestrian dance with our robots. The answer is yes, you do, which is amazing. I think this is the first time that that little pedestrian dance has actually been demonstrated with robots. What we find is that in between one in ten and one in five runs we get the thing on the right, which shows the paths of the two robots doing a little dance, whereas the rest of the time, four out of five or nine out of ten runs, something between those, we get the picture on the left, where the robots just pass each other nicely, in the way that you and I mostly pass each other nicely on the pavement. And I'm hoping we have a little picture here of the kind of run where they pass each other nicely. The dotted lines are the paths each robot is testing and predicting, and the little spiky lines going out are all the next possible directions it could move in. So again, you can really see what's going on. And here is the case where they do a dance; in fact we've caught them right in the middle of doing it, look. The interesting thing is that they each have to turn about five times in order to resolve the conflict, as it were. And I think I've even got a video of that, if it works. So, yes, this is normal passing, our little e-puck robots again, all good. And then the dance: they both go in the same direction... Actually that particular run was resolved relatively quickly, so it was a fairly short dance rather than a great big long one, but you get the idea. Really interesting. So, theory of mind: those robots are, in a sense, using theory of mind on each other, and sometimes getting it wrong, which is exactly what humans do.

Now, I mentioned imitation. I don't have any movies here, I'm afraid, which is good because there's less to go wrong. We've used exactly the same architecture, the simulation-based internal model, to show how robots can imitate each other, but, importantly, imitate not the actions but the goals. In other words, robots can infer each other's goals, which is a really powerful form of imitation. It's hard to say whether any animals imitate goals, but humans are certainly extraordinarily good at it. Typically, as infants, we imitate the actions of somebody until we reach about eighteen months old, and at about eighteen months we start to figure out: actually, you're not really intending to do that, this is what you really intend to do. Here's a famous experiment, by the child psychologist Meltzoff, I think. There's a big button on a table, and if you hit the button it lights up. There were two conditions. In one, the adult, the mum if you like, has her hands on the table and hits the button with her head. The young children, pre-eighteen months, would do the same, but post-eighteen months they would actually hit the button with their hands.
Except that if mummy keeps her hands behind her back, then the post-eighteen-month children will also hit the button with their head. The theory goes that the difference is this: if you have your hands on the table, you're signalling that your hands are available, and the child thinks, well, your hands were available, you could have used them to press the button, so I'm going to use mine. If the hands are hidden, the child is thinking, well, she didn't have her hands available, so maybe I won't either, and hits the button with its head. A very famous experiment. I won't go through the detail, but what we've done is to demonstrate exactly that, not with buttons on a table, of course, but with robots moving towards a goal position. In the middle column the blue robot is moving towards a goal position, but it's going the long way round, via the intermediate position labelled F on the little graph, even though it could have gone directly. The imitating robot, the red one, infers that the detour must be important, and therefore itself goes via that intermediate position. Whereas in condition three the blue robot cannot go directly to the goal position, because there's something in the way; so the red robot infers that it doesn't have to go via the intermediate position, because the blue robot had no choice but to go around. I think it's a really powerful example. I should say that the red robot has this simulation-based internal model, so it's able to model what it thinks the blue robot is doing. I'm using the word 'think' in a very loose and unscientific way, of course; the robot isn't thinking at all.

And that brings me towards the final experiment I want to show you this evening, which is rather relevant, perhaps, because, as Geraint said, I'm Professor of Robot Ethics, and one of the things I'm doing in my work in robot ethics is trying to figure out whether you could make ethical robots. I have to say that most of my work in robot ethics is about how human roboticists should be ethical, but it's nevertheless interesting to ask whether a robot itself could be ethical. The answer, in fact, is yes. This work started with a simple thought experiment. Imagine you're walking along the pavement and you see somebody else heading towards a hole in the ground, maybe someone peering at their smartphone, not looking where they're going. Of course that never happens in real life, does it? Now, I suspect you would intervene to stop that person falling into the hole. Why is that, or rather, how is that? It's not just because you're a good person. It's because you also have the ability to predict the future, in a trivial kind of way: you can predict that the person peering at their smartphone will fall into the hole if they don't stop looking at it. And you can do even better than that: you can also predict whether or not you can intervene. If you're close enough you might be able to rush over; if you're too far away you can shout, or wave your arms, or something like that. So you can not only predict the consequences of what they're doing right now, the consequences for them.
You can also predict how best to intervene. OK, so let's now imagine that this isn't you but a robot: a robot observing this human being, this time carrying a briefcase. I couldn't find an image with a smartphone; actually, this image appeared in a newspaper when the work was picked up by the press. The robot here, and I'm simplifying, has four next possible actions. A is stand still; that's always a possible action, do nothing. B is turn to the left, in which case the robot is perfectly safe but the human, of course, is not. C is go straight ahead, in which case both the human and the robot plunge to their doom in the hole in the pavement. Whereas if the robot chooses action D, moving towards its right, then it might, with luck, intercept the human, and a gentle collision with the human is of course much better than the human falling into a deep hole in the pavement. If we express that as an ethical rule, this is what it looks like: if, for all of the robot's actions, the human is equally safe, then just do the safest thing for the robot; you don't need to worry about the human. Otherwise, output the robot action that gives the least unsafe outcome for the human.

Now, when I was thinking about this, I was rather embarrassed to realise that this ethical rule is in fact Asimov's first law of robotics. Asimov's first law states that a robot may not injure a human being or, through inaction, allow a human being to come to harm. The important part is "through inaction". So Asimov's first law essentially says: do no harm, but also intervene in order to prevent harm from occurring, which is essentially what I've just described in my ethical rule. Rather embarrassing, because in robotics Asimov's laws are not taken particularly seriously. They should be, I think, because Asimov for sure established the principle that robots should be governed by principles; of course, the laws were a fictional device, and the wonderful stories Asimov wrote are mostly about the conflicts, the dilemmas, between different rules, between obeying a human and keeping a human safe, for instance.

So we did a couple of series of experiments. The first were with these e-puck robots that you've seen already, and you can see the setup up there. We have this arena, and you should ignore the fact that it looks like a football pitch; it's nothing to do with robot football. There's a hole in the ground, which is that square yellow patch. It's not a real hole, it's a pretend hole; we don't want to dig holes in the lab floor. The H robot, H for human, is heading towards the hole: that's the hapless human not looking where it's going. The A robot, A after Asimov, is the robot with the simulation-based internal model plus the ethical rule. It's the same simulation-based internal model; we've just added the ethical rule, which determines how the robot chooses the particular action it's going to take. And in fact there are more than four actions: we have around thirty next possible actions, and you might be interested to know that we're evaluating the potential consequences of all thirty of them every half a second, with each evaluation looking ten seconds into the future.
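Expressed in code, that ethical rule is only a small extension to the action-selection loop sketched earlier. Again, this is an illustration in Python, not our actual implementation; the danger scores, the tie-break, and the names are invented for the example.

```python
# Illustrative sketch of the Asimov-style ethical rule.
# Names and numbers are invented for the example; this is not the robot's code.

def ethical_choice(outcomes):
    """outcomes maps each candidate action to a pair of predicted danger levels,
    (robot_danger, human_danger), produced by the consequence engine.
    Lower is safer; 0.0 means perfectly safe."""
    human_dangers = [h for (_, h) in outcomes.values()]

    # IF the human is equally safe whatever the robot does...
    if max(human_dangers) - min(human_dangers) < 1e-6:
        # ...THEN just pick the action that is safest for the robot.
        return min(outcomes, key=lambda a: outcomes[a][0])

    # OTHERWISE pick the action with the least unsafe outcome for the human,
    # breaking ties in favour of the robot's own safety.
    return min(outcomes, key=lambda a: (outcomes[a][1], outcomes[a][0]))

# The four actions from the slide, with made-up (robot_danger, human_danger) values.
outcomes = {
    "A: stand still":    (0.0, 1.0),   # robot safe, human falls in the hole
    "B: turn left":      (0.0, 1.0),   # robot safe, human falls in the hole
    "C: straight ahead": (1.0, 1.0),   # both plunge to their doom
    "D: intercept":      (0.2, 0.1),   # gentle collision, human saved
}
print(ethical_choice(outcomes))        # -> "D: intercept"
```

The tie-break in favour of the robot's own safety is my own addition for the example; the rule as stated on the slide only distinguishes the two cases.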
So in each second of real time we're simulating 600 seconds into the future: thirty actions, each looked at ten seconds ahead, twice a second. And we're doing this in real time, which is tricky stuff. This is where I'm extremely grateful to my students, because I couldn't code this; it's hideously complicated. Think about it: you've got a simulation of yourself inside yourself, and you're wiring it up to your sensors and your motors so that the simulation is running continuously and is actually able to intervene and moderate your actions. The second set of experiments, by Dieter, used these NAO humanoid robots.

So let me show you the experiments. Here's a trial with the e-pucks, where we have one H-robot and one A-robot. You can see H move; A notices at that point that H is heading toward danger; there's a gentle collision, all's well, and the A-robot then proceeds to its destination, itself of course also having to avoid the hole in the ground. And it works. This is speeded up, and it works every single time, without fail. I'll show you the same experiment with the NAO robots too. Here the blue robot is the ethical robot and the red one is the human; danger is top right, and the destination for the blue robot is at the bottom, in the middle. You can see that the blue robot has correctly prevented the red robot from heading toward danger. This particular paper was published last year: an architecture for ethical robots inspired by the simulation theory of cognition. So there's not only a simulation theory of mind, there's a more general simulation theory of cognition, which I think is very powerful.

OK, so after we did the previous experiment, the one with the e-pucks, and realised that it works every time, I thought, well, it's a bit boring, isn't it? How can we publish a paper that just says, we built this and it works? Let's make it more interesting and add a second human: two humans heading toward the hole at the same time. And this is the trial that shows that. I believe this is the first time ever that a real robot has faced an ethical dilemma. So here we are. Let's try and save this one... ah, no, too bad: both H-robots plunge to their doom in the hole. When we were running these experiments we thought, hang on, something's not right here. Now, that one works: did you see that the A-robot saved the H-robot at the top, but then turned back to try and save the other one and couldn't? There wasn't time. So the A-robot is definitely doing its best, but it's clearly not working very well. In fact, in around 30, maybe 35, runs the A-robot saved both H-robots twice, saved one of them about 14 times, and about 15 or 16 times saved neither of them. So it performed extremely badly. The question is: if we know the A-robot can reliably save one of them, why doesn't it just do that? The answer is actually very simple, once we sat down and thought about it. The reason the A-robot performs so badly is that it's recalculating its decision every half a second. In other words, we've made not only an ethical robot but also a pathologically indecisive robot, which is clearly a very bad idea. Imagine yourself in that situation: I'll save that one; no, no, I'll save that one; I'll save that one. You're clearly not going to save anybody.
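Here's a toy sketch, in Python, of that dithering, together with one possible way of damping it. It's purely illustrative, not the code from the experiments: the urgencies, the noise, the margin, and the names are all made up. Two targets look almost equally urgent, the predictions jitter slightly from one half-second cycle to the next, and a selector that simply re-decides every cycle tends to flip back and forth, while one that only switches when an alternative looks clearly better holds its choice.

```python
# Toy illustration of decision dithering versus a "sticky" decision.
# All numbers and names are invented; this is not the code from the experiments.
import random

random.seed(1)

def noisy_urgency(true_value):
    """Predicted urgency of saving a target, as re-estimated each half-second."""
    return true_value + random.uniform(-0.05, 0.05)

def run(selector, cycles=20):
    targets = {"H1": 0.70, "H2": 0.72}   # two humans, almost equally at risk
    choice, switches = None, 0
    for _ in range(cycles):
        estimates = {name: noisy_urgency(v) for name, v in targets.items()}
        new_choice = selector(estimates, choice)
        if choice is not None and new_choice != choice:
            switches += 1
        choice = new_choice
    return switches

def indecisive(estimates, current):
    """Re-decide from scratch every cycle: pick whichever looks best right now."""
    return max(estimates, key=estimates.get)

def sticky(estimates, current, margin=0.1):
    """Keep the current choice unless an alternative is clearly better."""
    best = max(estimates, key=estimates.get)
    if current is None or estimates[best] > estimates.get(current, 0.0) + margin:
        return best
    return current

print("indecisive selector switched", run(indecisive), "times")
print("sticky selector switched    ", run(sticky), "times")
```

The margin is what makes the second selector sticky: the current choice has to be beaten by a clear amount before it is abandoned, but it can still be abandoned if circumstances genuinely change.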
So clearly a real robot with this kind of decision-making process needs to have some kind of sticky decision: a decision that, in a sense, has a half-life, but can still be changed, because we do want the ability to change our minds. Imagine you're in a situation where you decide to save one person, but then she notices the hole herself and stops walking towards it; you would then want to change your mind. So the decision should be a little bit sticky, but still changeable. Now, of course, I think there are other good reasons why we shouldn't be building robots that solve ethical dilemmas at all. We have another paper called The Dark Side of Ethical Robots, where we argue that ethical robots may not be a good idea after all; maybe we can talk about that in the Q&A. And I will just finally show you that essentially the same problem happens with the NAO robots: the blue ethical robot goes, shall I save this guy? No, I'll save the other guy. So that phenomenon of being indecisive, of dithering, happens with the NAO robots as well.

So, Geraint, I think that brings me to the end. Here are some general conclusions. First, I think that until we have a clear understanding of what intelligence is in plants and animals, the quest to create AI, and I don't just mean general AI, I mean all AI, is ad hoc. The fundamental problem, and of course the AI people don't like it when I say this, is that we have no general theory of intelligence. So trying to build artificial general intelligence is a bit like trying to understand the nature of matter with a particle accelerator but without a standard model, which would be absurd. In other words, we have no theory underpinning all of the work that's going on in AI; we have fragments of theory, but certainly not a general mathematical theory equivalent to the theories we have for quantum physics. And my second bullet: I think that simulation-based internal modelling is a really interesting and powerful approach to solving all sorts of problems. I've shown you safety, imitation and ethical robots, even though I don't think ethical robots are a good idea. So, at that point, Geraint, I'll say thank you very much indeed.