Your talk here at the Human Futures event at the university is about what is called responsible AI. But before we go into the details of what it is and what it entails, I would like you to take a seat in a time machine and tell me what kind of world you see in front of you in, let's say, 15 or 20 years. What do you see as the biggest threats of AI on the one hand, if we get it completely wrong, and what are the greatest opportunities of AI if we get it right?

Okay, that's an interesting question, thank you. The biggest promise of AI is that it will enable us to make better decisions. It is a technology which can make decisions, based mostly on data, and it's up to us to decide which decisions we want the machines to help us with, and how we are going to deal with the decisions that the machines can take. In the best possible world, these decisions help us deal with climate change, help us deal with societal differences, ensure that everybody is included, and give us a world in which sustainability and well-being are leading for people and environment. As for the worst case: I don't think at all that it's the killer robots.

It's not the Hollywood scenario?

No, no, no. Not in 15 to 20 years, definitely not; I don't think they will ever be there. The worst-case scenario is that we simply miss the opportunity and end up in a world in which AI is not really being used to help us deal with the problems that we really need to solve.

Okay. We'll talk more about the possible misuses, and possibly also some of the existential risks that some of the debaters in this area have discussed. But I want to discuss with you some of the moral problems, the moral issues of AI, and also the societal implications. I think we need to start out with a few definitions to establish some common ground here: a definition of AI.

Yeah, exactly.

So how do you define artificial intelligence?
Because I think there are some misunderstandings of it, and some people tend perhaps to confuse or conflate artificial intelligence with artificial consciousness, for instance. So what is artificial intelligence?

I have two definitions for it. The one which is most related to responsible AI is the definition of AI as a socio-technical system: a system in which we have a lot of software and hardware components which are capable of some intelligent behaviour, but those components are not alone. They are part of a social and institutional structure in which people, organizations and institutions have as much or more of the responsibility for the whole system as the components. And what is this intelligent component? It is a component which is artificial, so in one way or another it lives mostly in a computer, and it is able to exhibit the property of autonomy. By that I mean that the system doesn't only act based on the direct input of the user, but can proactively take some actions. It is a system that is adaptive: it has the ability to sense the world, and based on its understanding of how the world changes, it will also change its decisions and its actions. And moreover, it is a system that is interactive: it doesn't exist in a vacuum, it exists in an environment in which it interacts with us, people, and with other systems. These three things combined are what I would say are the properties of an artificial intelligence system.

That's a pretty complicated definition, isn't it? Is that necessary? Can you say it in one sentence, for instance?

The only thing I can say in one sentence is that AI is not simple, and we shouldn't try to make it simple, because it's not.

But then how do you define responsible AI, presuming that there is also something called irresponsible AI?

Responsible AI, like I said, is why I come with this definition of AI as a socio-technical system.
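The three properties named in this definition (autonomy, adaptivity, interactivity) can be made concrete with a toy agent. This is purely an illustrative sketch; the thermostat example and all names in it are mine, not from the interview:

```python
class ThermostatAgent:
    """Toy agent illustrating the three properties from the definition above.

    - interactive: it senses and acts on an external environment
    - adaptive:    it updates its internal picture of the world from readings
    - autonomous:  it decides to heat or cool without a direct user command
    """

    def __init__(self, target):
        self.target = target    # goal set once by the user
        self.estimate = None    # learned picture of the world

    def sense(self, temperature):
        # Adaptive: blend each new reading into a running estimate.
        if self.estimate is None:
            self.estimate = temperature
        else:
            self.estimate = 0.8 * self.estimate + 0.2 * temperature

    def act(self):
        # Autonomous: chooses an action proactively, based on its own state.
        if self.estimate is None:
            return "wait"
        if self.estimate < self.target - 0.5:
            return "heat"
        if self.estimate > self.target + 0.5:
            return "cool"
        return "idle"
```

For example, an agent targeting 20 degrees that has sensed 18 degrees will choose to heat on its own, without being told to. Of course a real AI system is far richer than this; the point is only that autonomy, adaptivity and interactivity are concrete, testable properties rather than anything mystical.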
So the first condition for responsible AI is realizing that the artifact, the artificial component, is not responsible; we as people, as institutions and as societies are the responsible components. A responsible system in this sense is a system that is aligned with our values and our laws, so it is legal and ethical. But it is also a system which is reliable from a technical perspective: it doesn't crash unexpectedly, and it is able to deal with changes without completely stopping. And moreover, it is a system which is designed to be beneficial in one way or another; again, the definition of beneficial will change with whoever decides on it, but the system itself should be designed for that. So it's ethical, legal, reliable and beneficial.

Yeah. But let's then go into the first topic of AI that I want to discuss with you: some of the moral questions. For thousands of years we have discussed, and haven't really been able to agree upon, universal values and universal ethics. So how could we possibly agree on how a self-driving car, for instance, should calculate its actions when something happens in front of it? Should it have an Aristotelian ethics, should it have a utilitarian ethics, and so on? Because when we really get down to the practicalities of it, we do need to have some set of values in the car for it to decide. Don't we?

I don't know. Do you drive?

Yes.

How many times did you get into a trolley problem over the years?

I never had it. But I'm sure that if I was in a trolley problem, I wouldn't know what would happen. It would be totally accidental.

So we shouldn't put any ethics into it?

Yes, that's probably the answer that you are looking for. Indeed, we don't agree on ethics.
We don't agree on which ethical system or ethical theory is the one that should be chosen. This is fundamental. When I talk about ethics, and especially in relation to artificial intelligence, I always say that ethics is not the decision; ethics is the ability to reason about this type of decisions. And I'm not a philosopher, so I feel free to say all kinds of strange things about ethics which philosophers would probably not agree with. But for me, the fundamental issue when we talk about ethics and artificial intelligence systems is the idea that ethics is about the ability to think about the issues, and not so much about the results which come out of that reasoning. Therefore it is not something which is easy, or maybe even desirable, to implement into a system, and certainly not into systems like the ones we have now, which are data-driven. For such a system to be able to take a decision on this type of issue, it would have to have had enough examples of how people react in trolley problems, and then use that as a basis for correlating with the situation it is in.

That is true, but I suppose we still need some kind of operationalization of decisions in an AI system.

Yes, but it's about what type of decisions we want the car to take. Are those the ethical decisions? We probably don't even think that we want the car to decide where to take us. We want the car to decide about the route, about the speed, about dealing with the traffic signs, or traffic restrictions because of roadblocks or whatever. But you wouldn't be comfortable with a car which, when you sit in it, decides where it will take you, even though we have enough data to know where you usually would want to go: to work, to home, to the shops. That data we have. So the issue of self-driving cars and the ethics of self-driving cars starts with the realization that we cannot control all
decisions. As persons, we would probably take a random approach to it, or just freeze, and we don't know what would happen. So we also need to trust systems to act sufficiently aligned with the ways that we act, without having this need for a 100% guarantee of what the system is going to do. And if we do want that guarantee, then again it is our responsibility, it is the socio-technical system: the best decision that we can take is to forbid self-driving cars. Those are the decisions we can take, and then we are sure that the self-driving car will never get into a trolley problem.

That's true, but that's not the way we are heading, is it?

I don't know. AI is not a magic thing which comes out of the sky and happens to us, where all we can do is accept it. We are taking the decisions, and we are taking them as we speak. If we really don't want trolley problems and cars solving trolley problems, then, as is the case at this moment in many countries, we don't allow these cars, or we only allow them on certain types of roads where we know that there are no chickens and old ladies and whatever other surprises appearing in front of the car.

But isn't there some kind of inherent difference? You have the example of the caretaking robot and the decision-making for a caretaking robot, and then a self-driving car, where lives are at stake in a completely different way than with a caretaker robot.
I mean When you talk about the importance of responsibility and accountability and transparency, it seems to me that those three concepts are not very well compatible with Random randomness random decisions They are if we know that the decision is going to be taken randomly So it's transparent that it's random that we know that there is a system of accountability around it Which puts the responsibility for whatever action the car takes Somewhere where it can be handled not with the car the car is not responsible for what it does We need this social technical system around the car who does take the responsibility Take the example of a dog if you want a dog you the dog might be kind of random random behavior and Might randomly for whatever and never did it before but might attack a kid in the park and Heard the kids the responsibility is yours. So it's a the system is random Let's say he's not accountable because the dog cannot put the dog in prison or whatever But we do have a system of responsibility. 
We should look at healthcare robots or at self-driving cars much more along these lines. Yes, the system might do things which we don't really understand, but we very well understand where the responsibility lies, and that is the type of approach we have to look at.

Okay. Let's move on to some of the more political challenges of AI. Do you think there is enough focus on the legislation and the regulation of AI internationally at this time?

Yes and no. The awareness is definitely there, and it is rising. I don't think that there is any country or any area in the world at the moment in which people are not aware that there is a need to think about legislation and regulation of AI. From east to west, people are aware of that, and that is something which we wouldn't have had two years ago. So this discussion around regulation and legislation for AI is definitely growing. What there is not enough of is concrete steps. Going from being aware to defining some strategies or principles about what we want AI to look like is one step; moving from these principles and guidelines and documents to something which really is actionable is another step, and that step has not been taken enough yet. I would definitely say that in Europe we are quite at the forefront of this awareness and of acting on the awareness, but still there is not enough concrete action.

But Europe is only a very small player in this, isn't it? It seems to me sometimes that there could be, or maybe there already is, an arms race going on between the US and China, and you also have these big tech companies like Google and Facebook, and there is so much economy at stake here. So do you think that legislation will just come too late, exactly because there are these potential economic benefits for the first one who really develops some good AI?

It's a very good point. Take the issue of the race, and the definition of a race.
That's easier than the definition of AI: a race has an end point and a given direction. This so-called race has neither. There is no end point, so when is anyone going to win? We don't know. And we don't really know which direction we should take. We see that both China and the US are following this direction of data-driven AI, which is in a sense an extremely problematic approach to AI, because it is a brute-force approach. Yes, it is giving great results at this moment, and the whole reason we are talking here today is because of these advances in AI along the data-driven approaches. But at the end of the day it is not the most sustainable, because it requires huge amounts of data and huge amounts of computational power, at a moment in which we are all extremely aware of climate change and of the effects of what we do on the environment. This is probably not the most environmentally friendly approach to AI. Having said that, it also means that we need to understand that there are other approaches to AI, maybe not the most sexy at the moment, maybe not the most explored at the moment, but approaches which can inherently solve this type of problem. As for the fear of missing out: I think that the only way we all miss out is when we really do it wrong, and do it in a way that is not sustainable, that is not environmentally friendly, and that at the end of the day is not aligned with our human and environmental well-being. To avoid that, I think we need to take a much more cautious approach than just blindly running behind those who are already running, without really having a good idea of the direction they are going.

That awareness that you see in Europe and in the European political environment: do you see anything that confirms your beliefs
when you look to China or to the US?

Yes. Like I said, they are very much looking at what we are doing in Europe. They are very much aware that there are problematic results in the things that they are doing. Even the large companies are asking: give us the regulation, because then we know where we stand. And at the end of the day, there is no business model in irresponsible or unethical AI, because that is not what people will want. AI is often compared to oil, and we are at this moment in very serious trouble in the world because of the approach we took to the use of oil. So we should learn from the bad things we did with oil, and take those lessons to what we do with AI. Going slow and looking at what we are doing is probably not a bad strategy. I would rather have that than moving fast and breaking things, because we cannot unbreak them.

No, that's true. But then another political, or societal, challenge is the idea of AI systems and robots taking over many jobs: low-skilled labour, but also high-skilled labour like the work that doctors and lawyers do, and perhaps even computer programmers.
I heard a computer programmer saying that in 20 years, computers will be much better at writing their own code than he would be at coding, so we will need fewer people to code the AIs. And we have someone like the Israeli historian Yuval Harari, who has talked a lot about the huge useless class of the future. Is that a worry we should have with AI?

Yes, we should definitely be worried, or concerned, about the way that AI is going to affect our societal constructs and arrangements, and labour is indeed one of the large ones. If you would give this question to a machine learning system, which means you would ask the question and give it all the data from the past, the answer that the system would give you is: yes, there will be a transition period in which things will be difficult, but after that transition period, life gets better for all of us. That is what we have seen in the last 200, 300 years with all the industrial revolutions. So this is the answer that the AI system would give you, because based on the data and the experience from the past, this is what we know.

But people have two qualities, don't they, as a workforce: they have their hands, their pure power, and they have their minds. What happened in the first industrial revolution was that the machines took over the manual labour. But now, if you have computers also taking over quite a bit of the intellectual work of humans, then what should we do? What should computer scientists like yourself do in 20 or 50 years?

Myself, in 20 years I will probably be retired, so I'm out of the problem. No, I'm just joking.
Yes, machines took over our hand labour, and they will definitely, and already are, taking a lot of our mental labour. But still, there is a need, an interest and a possibility to work with your hands. Nowadays, especially in the west and in the richer countries, you see a return to doing things yourself and using your own capabilities, increasingly so, in Europe but also, like I say, in the most industrialized countries. With the mind it is similar: the machines are going to take some of our brain work, but mostly they are going to take the brain work which is repetitive, calculating the same things again and again. They will not, at least for many years, take away our creativity and our power of enjoying thinking and enjoying doing; those are two things which machines are not going to take away. Another thing which is very important to realize is that AI machines are able to answer questions, to give answers; they are not able to ask questions, they are extremely bad at it. That job, that area, is really where we as people will be the differentiating factor, and that is where our jobs are going. The medical doctor is indeed going to use increasingly many tools to support the diagnostics, to support the operations, and so on. But those are tools; the decision, the questioning, the interpretation stays for a large part with the doctor. My father was an engineer. He used to build houses and bridges and so on, and he always calculated square roots by hand, because that is the way he was taught, and that is the way he was certain that the result was correct.
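A square-root-by-hand procedure like the one described here can be sketched as Newton's iteration, also known as the Babylonian method: guess, then repeatedly average the guess with the number divided by the guess. This is an illustrative example, not something from the interview:

```python
def sqrt_by_hand(x, tolerance=1e-12):
    """Approximate sqrt(x) by Newton's (Babylonian) iteration."""
    if x < 0:
        raise ValueError("square root of a negative number")
    if x == 0:
        return 0.0
    guess = x if x >= 1 else 1.0
    # Each step replaces the guess with the average of guess and x/guess;
    # the error roughly squares (and so shrinks) on every iteration.
    while abs(guess * guess - x) > tolerance * x:
        guess = (guess + x / guess) / 2.0
    return guess
```

A handful of iterations gives full floating-point precision; the same scheme works with pencil and paper, which is essentially what the engineer was doing.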
None of us nowadays calculates like that; I don't even know how to calculate a square root by hand. But as an engineer you still need to know when to use a square root. So it is not so much the tools or the functions which are going to be replaced; it is the decision and the questioning power which we really need to keep for ourselves.

So you don't subscribe to the idea that AI will make humans obsolete?

No. It will make us better humans.

Okay, then finally, let's talk a little bit about the idea of existential risks, if you're okay with that. No? You're not okay with that? Let's try it anyway. Because when you talk to people about artificial intelligence, most people don't really have an opinion at all. Perhaps they smile a bit awkwardly and say this is just fantasy, something Hollywood invented; I think most people don't really care. Then a few people have perhaps heard Elon Musk say something about it, and also someone like Stephen Hawking said that AI could be the last invention of humans. And then we also have, perhaps like yourself, a group of scientists and AI nerds and futurists that are, well, sometimes perhaps a bit uncritically, excited about the prospects of what AI can do, and for them, worrying about AI is like worrying about overpopulation on the moon. Are you part of the last group?

I'm probably more in the last group than in the first group, but I don't think we shouldn't worry about overpopulation on the moon.
Who knows, maybe we will need to consider that for our future. So indeed, we should be aware of all the potential risks that AI brings. But what we have to realize is that it is actually not AI that brings the risks. We are the highest existential risk to ourselves, and AI is one of the tools, like many other tools, like many other actions that we take, which can potentially contribute to accelerating the risks that we are creating. But AI itself is not that, I don't think, and I have studied AI for many years. It is not the case that an AI system, because it becomes intelligent and is able to solve more and more cognitive problems, will necessarily evolve into a situation in which it also starts wanting to solve problems. It just does what it is built to do.

So you don't think that a very intelligent system will somehow, at some point, start to...

No. It will have goals about the very specific type of things that it is trained to do, and the goal of surviving, which is very much associated with these ideas that the robots will come and take over whatever we are doing, is not a goal that necessarily arises from the capability of playing chess or of driving autonomously through a city. There is a computer scientist and philosopher in the UK who says that the most likely thing these systems will develop is the capability to not care. Who cares? Why bother? That would be the most logical step for the system to develop, not the capability of wanting to take over from us. Why bother? It's too much work.

But isn't that because you think that for the next 10, 15, 20 or 50 years, who knows, we will only have something like, I think it is called, narrow artificial intelligence, and not something which in the community is called general artificial intelligence, or an intelligence explosion and things like that? Is that just totally sci-fi?
AGI is not sci-fi. AGI, artificial general intelligence, is the ability to solve problems across a whole range of domains, all domains, let's say. But it is still a cognitive ability, and human intelligence is much more than cognitive abilities. By improving the cognitive abilities of a machine, there is no reason to assume that the other types of human intelligence are also going to be developed. If anything, probably less so: they will become this type of idiot savant, which only knows about solving cognitive problems but doesn't really have other types of intelligence. So from AGI to superintelligence, there are no scientific steps that can really lead from one to the other. And next to that there is another issue, which is especially relevant for the type of computing that we use nowadays, which is based on Turing machines, a very specific way of computing, of calculating. There are mathematical limits to what can be calculated in this way. So with Turing machines, like the ones we have now and will have for quite some time, we can be sure that there is a limit to the type of problems that can be computed. Of course, that might not apply once we have quantum computing or bio-based computing, but with the machines as we see them now, there are limits to what can be done.

So you can still, in the future, have a very, very intelligent machine that can do a lot of things, but as long as it doesn't have any consciousness or self-consciousness, it will always serve humanity?

I wouldn't dare to go into the discussion of consciousness, because that's a different issue. What I do say is that consciousness doesn't arise from being intelligent. You can be extremely advanced in cognition, like these machines will be, but that is no guarantee that consciousness as we understand it in people or animals will arise from it.

Thank you very much.

Okay. Thank you.
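The mathematical limit on Turing-style computation mentioned above is Turing's halting problem: no program can decide, for all programs, whether they halt. A minimal sketch of the classic diagonal argument, expressed in Python as an illustration (this is my own addition, not part of the interview):

```python
def make_counterexample(claimed_decider):
    """Given any function that claims to decide halting, build a program
    that the decider must get wrong (Turing's diagonal argument)."""
    def tricky():
        if claimed_decider(tricky):
            while True:      # decider said "halts", so loop forever
                pass
        # decider said "loops", so halt immediately
    return tricky

# Any fixed answer is wrong on its own counterexample. Here is the
# "always answers loops" case, which we can safely run:
def says_never_halts(program):
    return False

t = make_counterexample(says_never_halts)
t()  # halts immediately, so says_never_halts was wrong about t
```

The symmetric case, a decider that always answers "halts", would send its counterexample into an infinite loop, so we do not run that one; the same trap defeats any more sophisticated decider as well, which is the heart of the proof.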