I have the pleasure and the honor to introduce to you Virginia Dignum. She is a full professor at Umeå University in Sweden, at the Department of Computing Science, where she holds the Chair of Social and Ethical Artificial Intelligence. She is still associated with the Faculty of Technology, Policy and Management at Delft University of Technology in the Netherlands. Virginia's research focuses on value-sensitive design of intelligent systems and multi-agent organizations, in particular on the ethical and social impact of artificial intelligence systems. She has published more than 180 peer-reviewed articles, a number one can only blush at, and has edited and written several books. She was awarded the prestigious Veni grant by the Dutch Organization for Scientific Research for her work on agent-based organizational frameworks. But Virginia Dignum is not only a highly successful academic researcher, author and convener of large conferences. She is also unusually active in practical regards, communicating her call for responsible AI to different communities. On the one hand, as a member of the Executive Committee of the IEEE Initiative on Ethics of Autonomous Systems and various other associations of European researchers in AI and robotics, she has worked on changing the self-understanding of the researchers in these areas, in computer science and in robotics. On the other hand, which is much more difficult, I can imagine, as a member of the European Commission's High-Level Expert Group on Artificial Intelligence, the European Global Forum on AI and the Foundation for Responsible Robotics, she has been trying to alert politicians and policymakers to the risks and the opportunities of artificial intelligence. One of the distinctive, and I think very sophisticated and productive, aspects of her approach is that she considers artificial intelligence not to be a technology but a socio-technical system, which includes the producers and the users of the technology: the people designing and using certain algorithms are included in what we should describe as an AI system. So both as a researcher and as a public communicator, Virginia Dignum is one of the internationally most visible lead figures for the cause of responsible use of artificial intelligence, someone with the very unusual double competence of being able to speak competently as a computer scientist but also as a humanist. We are therefore very fortunate indeed to have her with us today as a speaker, and I'd like to ask you to welcome with me Virginia Dignum.

Thank you. Thank you for your kind words. If you ask me how I got there, I don't know, it just happens. But thank you very much, and thank you all for being here. I always like to start by thinking about what AI is, and then I will also say what AI is not, and from there we can try to set the stage for what we are talking about. If we look at how the media, researchers and different fields and sectors talk about AI nowadays, and this is something which has changed quite a lot over the last 7 to 10 years, you can talk about AI as being a technology. At this moment, what people understand by AI technology is mostly this possibility, these powerful techniques that we have, to identify patterns in very large amounts of data, which is what most of the machine learning and deep learning techniques are about.
These are techniques which are for a large part stochastic and non-deterministic, which is also exactly why all this discussion about responsibility and ethics and the impact of AI is happening now, and was not here 10 years ago, because the techniques were different. Many people also think about AI as just the next step in all this digitization, which leads to the idea that actually everything is AI. It goes so far that I know of a department of computing science somewhere in Europe which had, for something like 50 years, a section on artificial intelligence, and which closed it last year, because they thought: we are all doing AI, so why should we have this one section called AI as if the rest of us were something else. With this idea you can include anything you want: IoT, cyber, robotics, whatever; everything is AI, which also makes it very difficult to understand, to regulate, and to assess the impact. The old-fashioned way of looking at AI is as a field of science: the field in which we try to understand human intelligence by building artificial models of parts of it. That is the tradition in which I was educated; I have worked in AI for more than 30 years now, and this is the background for the diehards and the old people in AI. And then of course we have the Hollywood and newspaper view, the sensationalist view: AI is an entity, some kind of magic, it comes from the sky, no one really knows where it starts or where it goes, it will take over the world, it knows everything, and we just have to wait and hope that it is all going to work well. Of course, depending on which of these interpretations of AI we take, the idea of regulation, legislation, or impact is completely different. So let's go back to what AI could be. We talk a lot about algorithms, and indeed AI uses algorithms, but any computer system uses algorithms. We all use algorithms for the most normal things: every time you bake an apple pie, you are using an algorithm. The recipe for the apple pie is the algorithm which leads you from some inputs to a possibly successful result. As you probably know, because I assume you have all baked some pies, with apples or whatever, the end result is not only determined by the algorithm; it is also determined by who bakes it. If I bake the apple pie or Jamie Oliver bakes it, the results are probably quite different. And one thing I also like to stress: it also depends on the choices we make for the ingredients. If I use organic apples and free-range eggs, or if I use margarine and very cheap apples, the results are different, and the impact of those results is different. In AI it's the same: the data, the type of technologies, the type of computational approaches that we take also affect what the system is doing. So if we go back to what an AI system is in terms of its properties, and mostly the properties which are of concern for the issue of impact and regulation of this type of system: it is indeed a computational, artificial system which has some capability of autonomy, and here I don't mean at all the philosophical meaning of autonomy, but just the capability to act without a direct input from the user every time.
It has some capability of adaptability, and that doesn't necessarily mean learning, it doesn't necessarily mean machine learning techniques, but it is able to sense the environment and change its decisions based on that sensing. And it doesn't do that in a vacuum: it does it in an environment in which we, other machines, other people and our institutions are present. That is why, when we talk about the regulation or the impact of this type of system, we cannot think only about the technical system itself; we need to think about the socio-technical system around it. It's about the people who build it, the people who use it, the people who allow or don't allow these systems to be used in which ways, and about all who are directly or indirectly affected by what the system does. So we cannot really talk about regulation of AI as if we were regulating a piece of software; no, we are regulating this whole socio-technical system around it. What is also very important to realize is that AI is not intelligence. It is very, very far from intelligence. The systems that we are mostly concerned about in terms of impact at this moment are great at identifying patterns in data, in text, in video, in images. They are decent enough at extrapolating those patterns to new data, as long as the new data is not very far from the data they learned from in the past. And we can build those systems to take actions based on these patterns. What the system doesn't understand at all is the meaning of the pattern. So you can build a system to identify cats in pictures, and we do that; cats and AI are somehow forever related, and we are very good at identifying cats in pictures. We are also very good at identifying cancer cells in pictures, and all kinds of other things in pictures. What the system will never be able to do, at least not with the techniques we are using here, is to tell you what a cat is. It will not even be able to understand that the cat is somehow related to a mammal, or what the difference is between a cat and an elephant or a chicken. It will not be able to tell you that cats usually have four legs, or that cats reproduce. All the things we have about the meaning of concepts, the things that make us intelligent, are things which these machines cannot do. It will identify wolves in pictures based on the number or percentage of white pixels in those pictures, because it happens that most pictures of wolves are taken in the snow. So it has nothing to do with the wolf; it has a lot to do with a certain type of image and with our own biases in the pictures we feed the machine to tell it what a wolf would be. And that makes these systems extremely easy to play with or to interfere with. You have probably seen this type of example: you change some pixels in a picture, in a way where we cannot really see the difference, but the machine right away starts identifying the panda as a gibbon. You change a picture slightly, and it is hardly changed, and that vulture is classified as something completely different. You change the zoom, and something which was never a hot dog is suddenly associated with hot dogs. There is a whole field of study nowadays, adversarial AI, which does nothing but try to trick these systems and find out where they fail.
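To make the pixel-perturbation trick concrete, here is a minimal sketch of one well-known way such inputs are constructed, the fast gradient sign method. It assumes a generic PyTorch image classifier `model` that returns logits; the function name and the epsilon value are mine, chosen for illustration, not anything from the talk.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Nudge each pixel slightly in the direction that increases the
    classifier's loss. The change is usually invisible to a human,
    yet it can flip the predicted label (panda -> gibbon)."""
    # image: tensor of shape (1, C, H, W) with values in [0, 1]
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # One tiny step per pixel, in the worst direction for the model.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

The wolf-in-the-snow and panda-to-gibbon examples make the same point: a change too small for us to notice moves the input across the statistical boundary the model actually learned, which has little to do with what a wolf or a panda is.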
What we build is extremely brittle. Those examples are probably just fun, but this one is different, and this happens: these are physical changes in the observations of the machine. And if we are regulating self-driving cars, and we know that this type of misinterpretation can happen, we should think about what exactly we are regulating and how we want the self-driving car to deal with this type of problem. So responsible AI is basically thinking about what we are doing with these technologies. It is about understanding that we can potentially do a lot with them, but should we do it? And who is going to decide? Is it me as the computer scientist? The developers sitting somewhere in Silicon Valley or in China, building systems for Alibaba or for Google? Which values are we going to consider? Are we at all thinking about which values we want these systems to adhere to, and whose values? How should we build these systems to deal with dilemmas? We can build systems to improve privacy, but by doing that we might be diminishing their capability to deal with security. Who is going to take these decisions, and how? These are not decisions that machines are going to take; these are decisions we have to take. So responsible AI is about building systems that are aligned with law and with ethics, whatever that might mean, and machines that are reliable: they don't crash, they behave the way they are supposed to, they don't hang, they don't start calling you at inappropriate times, and they are somehow beneficial. And again, what we mean by beneficial or ethical is what the discussion should be about. It is mostly about realizing that the machine is the artifact. The machine is not responsible; the hammer is not responsible for what we do with it. We are the ones who set the purpose for these machines, we decide the purpose of these algorithms, we are the responsible part. So the social part of the socio-technical system is the one with the responsibility. And this is in some way increasingly realized by a lot of different organizations across the world, starting with the European Commission, of which, as Joanna just said, I am a member of the expert group; actually, they are meeting at this moment, and I am here and not in Brussels, where they were expecting me to be. That one is top-down: the European Commission decided we want guidelines about what it means to have trustworthy AI. The middle one, by the IEEE, is bottom-up: a whole bunch of around 500 researchers and practitioners around the world who have combined efforts to decide how we would like to design systems which are ethically aligned. And the OECD one is a collaboration between many different countries. But those are not alone. By the last count, done in a paper which Ellen at least is linking to, there are 84 of these sets of principles, and more are coming all the time. Each country is coming up with an AI strategy, with guidelines to define what type of AI we do or do not want in our country. All kinds of organizations are doing it, and they all come up, again and again, with very nice and very neat proposals. They are not so different from each other. It seems that there is no single issue which appears consistently in all 84, but the main ideas are quite similar. Of course we want the systems to be robust.
Of course we are concerned with human rights, with transparency, with fairness, with all kinds of things. There is very little in these principles that we can disagree with. There is also very little from these principles that we can really implement. And that is where the issue is: we are endorsing these things, the countries and the organizations are endorsing them, it gives very nice pictures and very nice conferences and events, but what are this guidance and this regulation for? What are we regulating? Go back to my first slide: are we regulating the technology, or the magical entity? After we endorse them and say yes, we are behind these things, we are still very far from ensuring that systems are compliant or that these principles are operational. What does it mean to have a fair system? How do I define fairness in a way that I can program into a computer? Those are the issues we have to discuss. Fortunately, most of these groups are aware of that, so the European Union just last week came out with at least an initial discussion on how to regulate these things in terms of very concrete legislation; the OECD is building an observatory to see how things are changing and evolving; and the IEEE is developing standards which try to define, for very specific contexts, what we mean by transparency in that context, so that we have a definition against which we can check things. And why is all this being done? Basically because we all expect that AI is the technology that will enable us, as humankind, to make better decisions. That is what AI can do: process a lot of possible options, come up with some classification and structure over those many options, and in many cases even do a kind of what-if analysis of whether we should choose this or that. So helping us make decisions is what AI can do. And by doing that, there is also the issue, which all of us are aware of at this moment, that we need to understand the impact of these decisions. Especially because we know that a lot of the technologies we are using at this moment are technologies we don't really fully understand, or at least we cannot always link input to output in a deterministic way. So that is the issue. We have this hope that we will get better decisions; better for whom, and who decides what is better, that is the first discussion. The other one is that we are doing this much more on a gut feeling: it worked well in the past, in very limited and narrow sectors, fields and domains, but we have this gut feeling that yes, this is what is going to help us make better decisions, that the complexity of many issues in the world nowadays requires some support for the decisions we make. But what are the decisions we would like these systems to make? Just imagine a self-driving car, and think about what type of decisions you would like the self-driving car to make. Any ideas? What should the self-driving car be deciding on? Someone suggests that if it has to choose, for instance, between an old person and a child, it should decide which one. Wow, that's... How many times... You drive? You drive yourself. How many times have you had to take that decision? Fortunately, never. No, okay, good. So probably the car will never really take those types of decisions either.
But that thing is going to move; it is here now and it is going to move, so it is going to have to decide. What other types of decisions should the car be taking? Another suggestion: it should choose the route where the least noise is created for the people living along the way. Good, okay, that's already complex enough. Before it gets there, it needs to take simpler decisions: stop and go, left or right, maybe taking into account the conditions of the road; less noise for some, but maybe that causes more noise for others; and if there is a road block, you want the car to stop and take another route around the road block, and so on. So in most cases, except indeed for the example of the old lady and the small child, these are extremely operational decisions, and those are the decisions we expect the car to take. Most of us, however, wouldn't be comfortable with the car taking a decision like this one: the car telling you to go to the gym instead of driving you. We could build this; it is as easy to build as deciding on stop and go. We have enough data, and the car can calculate the weight of the person: the moment you sit down, it can know whether your weight has been growing or not, and decide, okay, if your weight is growing, then probably you should go to the gym. So this is technically possible to build, and we even have the data that enables us to do it, but this is never the kind of decision that we want or expect these systems to take for us. Operational decisions, yes; determining our own goals, in most cases, no. There are of course situations in which we might want that, but this is where we have to start thinking about what the decisions are. And yes, we are very concerned and very obsessed with the chicken and the old lady and who is going to be killed first, and indeed that is part of the potential concerns about how we build cars, but that one is very easy to solve.
If we really are concerned about that, the easiest way to solve it is to forbid self-driving cars on our roads; then they are not there, and they will never kill chickens and old ladies. That is possible; it is another decision we have to take. Do we allow them? Are we comfortable enough, do we trust enough, that the system is going to deal appropriately with unknown situations? Do we trust what the safeguards around it are? If the car indeed does something which is not what we expect, and kills the old lady or the chicken or whatever, and we are not happy with that, do we have safeguards, and do we have ways to deal with the blame? We cannot blame the car; well, we can blame the car, put the car in prison, but it doesn't really matter. Do we have those safeguards? It is the same as with a dog: you trust your dog enough to let it loose in the forest nearby, but still, for whatever reason, the dog might bite a child or something like that. It happens. It is not the dog who is to blame; the responsible one, the one who is going to deal with the consequences, is you as the owner of the dog. So we have to look at these systems not so much as reasoning, responsible entities, but as acting entities for which we trust that there are enough safeguards to deal with what they do. And when should the car be deciding; in which situations do we want the car, or AI, to be deciding? It is not just about trolley problems. Trolley problems, and I suppose you are much better philosophers than me, which I am not at all, are thought problems: situations which want us to think about the reasoning about ethics. And that is what I often say: for me, ethics is not so much the result; it is the fact that we are able to reason about situations in which we don't really have a clear-cut answer.
I don't really care what the result of that deliberation is; it is about the reasoning and the thinking about what is right and what is wrong, when, for whom, and so on. So it is not just about these trolley problems; there are many other issues we should think about. It is about how we choose between the different properties of these systems. If I am a doctor in a hospital and I am given the option of a system which gives 95% accuracy in identifying cancer cells in an image but gives no explanation whatsoever of why it identifies something as a cancer cell, or a system which gives me 80% accuracy but explains every time why it identifies something as a cancer cell, these are the types of decisions we have to think about. Which one do we want to use? We want both, of course; at the end of the day we would like full accuracy and full explainability, but that is not always possible. A question from the audience: is it usually harder to make it accurate if it also has to explain? With the techniques we use nowadays, yes, it is harder to get accuracy if we also have to explain, and it is definitely much harder to compute. So another question is: which explanation am I willing to have, and for how much energy consumption? Is 1% more explanation worth 20% more energy consumption? Those are the issues we should really be solving before we get to trolley problems. But then the question might be: who decides what accuracy is; when is something accurate or not? Nowadays, accuracy in these systems is mostly defined in terms of percentages of true positives and true negatives: how many of the things the system classifies as being what you are looking for indeed are, and how many of the things it classifies as not being that thing indeed are not. This is the main definition of accuracy used in these systems, and it already has a very big blind spot with respect to the false negatives and false positives, which, and I think I have examples of this later, can be quite problematic in some settings; see the sketch below. Still, all these things have definitions behind them. Another interesting question from the audience: who decides what explainability is? That is a very good one; I will come back to it on another slide. Accuracy can be measured with many different metrics, and deciding on one is definitely a normative question. Explainability is relational: it depends on the user. But there are ways to measure explainability given a certain user and a certain type of question; people in human-computer interaction do a lot of this type of measurement. But again, it is a matter of which of these many definitions we are going to take. Those are the decisions that should be taken before we go into trolley problems.
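Since the talk leans on the true-positive/true-negative definition of accuracy, a small worked example may help show how accuracy can hide exactly the errors a doctor would care about. The function name and the screening numbers are invented for illustration:

```python
def metrics(tp, fp, tn, fn):
    """Accuracy alongside the error rates it can hide."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "false_positive_rate": fp / (fp + tn),  # healthy cases flagged anyway
        "false_negative_rate": fn / (fn + tp),  # real cases that are missed
    }

# Hypothetical screening set: 1000 images, of which 50 actually show cancer.
# The classifier misses 20 of those 50, yet accuracy still looks excellent.
print(metrics(tp=30, fp=10, tn=940, fn=20))
# accuracy = 0.97, false_negative_rate = 0.40: 40% of real cancers missed
```

A single accuracy number of 97% sounds reassuring, while the same confusion matrix shows the system missing 40% of the actual cancers; which metric matters is the normative choice the talk is pointing at.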
It is also about, for instance, using chatbots. It is possible to build chatbots, by voice or by image, that look exactly like a person. I was actually at the BBC yesterday, and they have synthetic news readers; it would take you a long time to identify that they are synthetic. There might be very good reasons to use this type of system, I don't know, but when, and who decides what those reasons are? And how much do we want to be bothered with getting, all the time, the information "you are now speaking to a machine; do you agree, click one; do you not agree, click two", and those kinds of things? How much information do we want, when, and again, who decides? How much does the use of these systems interfere with our potential for self-determination? And again, the question of jobs: what is efficiency? How do we measure efficiency of production, of jobs, of the economy, and how relevant are the metrics we use for that in a world in which a lot of the manual and simple mental activities are going to be replaced by machines? Are we measuring efficiency in terms of well-being, in terms of environmental benefits, or in terms of profit? Again, this is a discussion which is not for the machines, and it is probably not only for us in this room, but for the economists and politicians of this world. We are measuring these things, and there is all this discussion about the AI race in which China and the US are winning, and by that people mean that they are at this moment potentially making more money than organizations in Europe. But is that how we want to measure it? And they are doing it by moving fast and breaking things; is that the way we want to follow in that race? And then again: which values, and whose values, do we want? We can look at societal values, and there are lots of studies about societal values, but whom are we going to involve? We can look at laws, we can look at all the ethical theories and ethical studies in philosophy to identify which values are potentially relevant to include in these systems, but deciding which ones we take means we have to say how we make choices and trade-offs. And indeed we do this often; we have been relying on artificial systems to do things for us and to decide for us for quite some time. In democratic societies like here, or most of Europe, one of those artificial systems, artificial because it is built by us, is elections: the way we decide to vote and how to combine those votes. This is artificial, fully determined by us, and we all know that the results of an election are for a large part determined by the way we count the votes. Trump being president of the United States at this moment has to do with the way votes are counted in the United States: if they had been counted by a simple majority of voters, he would have lost the election. Still, we take these as democratically acceptable ways to vote. It also has to do with the questions we ask people, and Brexit is an example in which the question was probably not the best one for the result you want to get. So we have to take into account how we count the votes, which question we are asking, how much people really care about the question, and how involved people are in the problem, so that they can have an informed and reasoned opinion about the situation. There are many referendums all over Europe and in the United States where we couldn't really care less, so we just go there and vote something, and that's it. All those things have influence. Because we have these systems, the way we design our democracies influences how our societies are, and the way our societies are influences our democracies; and we can expect the same of decision-making systems that take decisions for us, and I am not saying only in political situations, but in all types of decisions.
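A toy calculation of the point that the counting rule, not just the votes, can decide the winner; the districts and numbers here are invented for the example:

```python
# Same votes, two counting rules, two different winners.
# Each tuple is one district: (votes_for_A, votes_for_B, seats).
districts = [(60, 40, 1), (55, 45, 1), (10, 90, 1)]

popular_a = sum(a for a, b, s in districts)          # 125 votes for A
popular_b = sum(b for a, b, s in districts)          # 175 votes for B
seats_a = sum(s for a, b, s in districts if a > b)   # A wins 2 districts
seats_b = sum(s for a, b, s in districts if b > a)   # B wins 1 district

print("Popular-vote winner:", "A" if popular_a > popular_b else "B")  # B
print("Seat-count winner:  ", "A" if seats_a > seats_b else "B")      # A
```

The same ballots elect B under a simple majority and A under a district-based count; the aggregation rule is a design choice we made, which is exactly the sense in which elections are an artificial system.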
Another thing: we can in principle ask whether a certain decision or a certain statement is socially accepted, morally acceptable, or legally allowed. The problem is that in many cases these are not aligned: something can at the same time be illegal but ethical, or socially accepted but not ethical. Moreover, these things are not static; they change. The way we as societies thought and think about slavery has changed over the space of 200 years. The way we think about capital punishment changes even today, depending on where you are. The way we think about animal rights, about the environment; all kinds of things can change. So it is very difficult to build a system to be aligned with our values when we know that our values can be interpreted in many different ways; and moreover, we can think about this for each value separately, but when we put them together, things get even more complex. So again, and I am a computer scientist, so at the end of the day I would rather sit down and program these things, it is not so much about the decision that I am going to program; any implementation is a normative stance on a whole field in which the point is the deliberation, not the decision. We have to be aware that when we implement whatever type of ethics into a system, we are taking a very normative decision that this is the way we are going to deal with it, and the best we can do from an engineering perspective is to be explicit about it: this was the decision I took. Say I want to build a system to be fair, and I suppose all of you would agree that these systems which take decisions for us should be fair. What is fairness? Some of you may think of fairness as: if we divide resources equally, it is fair; or: if we give everybody the same opportunities, it is fair; and there are many more definitions. If I now implement these, not in a computer but with boxes, as in the well-known picture, I can get quite different results. Okay, in that picture, equal opportunities turns out to matter much more than equal resources, but that is in that very specific situation; and even there, because the system is supposed to stay in place for quite some time and the environment changes, maybe later the fence disappears and we have to think again, or maybe the bigger guy gets old and then he needs a box to sit on because he gets tired. So even when we take one decision to define fairness in a certain way, it needs to be robust enough to keep up with changes in the environment. What we are doing now in my group is first trying to make these steps as explicit as possible, and then looking at the system assuming that it is indeed a black box: we don't really need to, or cannot, know how the system is built, but we can run simulations on the system to understand whether or not its inputs and outputs are aligned with this very explicit definition of how I decided to build fairness into the system; a small sketch of that idea follows.
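Here is a minimal sketch of that simulation idea, using the boxes-and-fence picture as a toy allocation problem. All names, the height encoding, and the probing strategy are invented for illustration; it is not the group's actual tooling.

```python
import random

def equal_resources(people):
    """One fairness definition: everyone gets the same number of boxes."""
    return {p["name"]: 1 for p in people}

def equal_opportunity(people):
    """Another definition: boxes go to whoever needs them to see over the fence."""
    return {p["name"]: max(0, 2 - p["height"]) for p in people}

def black_box_check(system, fairness_definition, trials=1000):
    """Probe an opaque allocation system with random inputs and test whether
    its outputs match an explicit, chosen fairness definition."""
    for _ in range(trials):
        people = [{"name": f"p{i}", "height": random.randint(0, 2)}
                  for i in range(3)]
        if system(people) != fairness_definition(people):
            return False  # found an input where the system deviates
    return True

# Treat one definition as the opaque system under test, check it against the other:
print(black_box_check(equal_resources, equal_opportunity))  # almost surely False
```

The two definitions are both plausible readings of "fair", yet the check fails almost immediately: being explicit about which definition we chose is what makes the black-box test meaningful at all.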
You can do responsibility in roughly three ways. We can look at the processes by which we build the systems; that has nothing to do with the result of the system, but we ensure that we build systems in a way that is justifiable and acceptable. We can try to build systems that are ethical by design, so we put the ethics inside the system, with the conditions and caveats I have just described. Or we can look at the social part of the socio-technical system, and ask how we educate the stakeholders and how we ensure that the institutions around it take responsibility and are accountable for the decisions that are made. Because, going back to these systems: if we assume the technical system is autonomous, we need somewhere to put the responsibility for that autonomy; if we assume the system is adaptable, we need transparency about what is happening, and when and why the system is adapting; and if the system is interacting with us, we want to have somewhere the accountability for its actions. So we get to what we call ART: accountability, responsibility and transparency; also because building this is more art than engineering. There was a question there: how do you differentiate between accountability and responsibility? Exactly: one is forward-looking and the other is backward-looking, to put it quickly. Can we build the car with the system inside, to make it ethical by design? Yes, we would like that, and then it would always decide between chickens and dogs and old ladies and so on; but I think I have already given enough arguments for how unlikely that is. We can look at the environment, the socio-technical part of it, and here a lot of work is being done, for instance the European guidelines. The issue here is that AI is complex, difficult, not something which is easy to define or to understand; we can try to make it as easy as possible to understand, but it remains a very complex system applied in a very complex environment, with a lot of interactions. But we have lived with complex systems for a long time, and most of the time we trust those systems. Even after the Boeing cases, we probably all still feel safe enough to take another airplane, even though we know things can go wrong. We all drive cars around. There are all kinds of complex systems with which we are comfortable enough to live and interact. Why? Because we have some trust that the system does what the box says it should do, because we have enough safeguards which can deal with the cases in which things go wrong, and because we know that there are groups and agencies that check that things are indeed doing what they are supposed to do. Can anyone tell me if this is a free-range egg? I suppose it is also possible here to buy free-range eggs in the supermarket, probably of 20,000 different kinds. How do you know? You trust; you trust some kind of sticker that someone put there. Of course these things can go wrong, and we all know there are cases in which someone makes awful eggs look like free-range eggs. But the point is not that; of course, the worst players in the whole system are us people, because we are all the time trying to rig the systems. The point is that I don't need to know what makes an egg free-range, and I don't need to know how you check whether an egg is free-range or not. I can trust whoever put the sticker on the egg: that person, first of all, is allowed to do that by the government; for whatever food is in the supermarket here, there is a guarantee from the governments which lead our countries that what we buy is edible, that we will not die directly from eating those things. So there is a minimum level of quality, and then there are the certifying agencies: someone allows these agencies to operate, and their people go around checking chickens and eggs; they have expertise about chickens and eggs that I don't have, and I trust them enough that they can check these things.
We don't have this type of institution around AI at all; anything possible or impossible can come at us, except for the GDPR, but that is much more about the data aspects than the systemic aspects. We have no idea what those systems are doing. We have no idea how the systems were trained: were they trained with decent and properly obtained data, or were they trained with some dataset that the programmer who was building the code found on the internet and that looked good enough; it was not about chickens, it was about elephants, but anyway, they're animals, so let's train the free-range-egg system with elephant data. And this happens every day; it happens because the data about elephants happens to be much easier to get. And when it is not that, what we are doing is this: we have at this moment, as we speak, hundreds of thousands of people sitting in Romania, in the Philippines, in rooms roughly the size of football fields, looking at images on the internet and classifying those images. Because to train these systems, we need to have the picture and the label. Without that, well, there are other ways to do it, and I am being a bit black and white here, but the large majority of the systems we are using today need labelled data to learn from. So we need 20 million pictures, of which some are labelled as having a cat and others are labelled as not having a cat, and those people are sitting there now, as we speak, doing nothing else than this, for a few cents an hour, literally, and in many cases classifying things which they have never seen in real life. Try having people in Nigeria classify moose or reindeer; the quality might be rather different. Those are the issues we have to look at.
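A minimal sketch of what "the picture and the label" means in practice; the file names are invented, and the learner is deliberately trivial, just to show that the human-provided label is the thing the whole pipeline depends on:

```python
from collections import Counter

# Supervised learning needs (input, label) pairs; the labels are what
# the annotators described above produce, image by image, by hand.
labelled_data = [
    ("img_000001.jpg", "cat"),
    ("img_000002.jpg", "not_cat"),
    ("img_000003.jpg", "cat"),
    # ... in practice, millions more human-labelled pairs ...
]

# A deliberately trivial "learner": it only looks at the labels, so the
# best it can do is predict the most common one. Anything smarter needs
# features extracted from the images, but its dependence on the quality
# of the human labels stays exactly the same.
most_common_label = Counter(label for _, label in labelled_data).most_common(1)[0][0]
print(most_common_label)  # 'cat'
```

If the annotators have never seen a moose, or were given elephant pictures instead, that error is baked into every pair, and no amount of training sophistication downstream removes it.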
There are other alternatives; I think I have referred to them already. One of them, and I think the most important, is really realizing that by accepting and expecting systems that take decisions for us in an increasing number of areas, from deciding which eggs to buy, to deciding whether or not you get a mortgage, to deciding which schools your children go to, and all the other types of decisions which are already made this way nowadays, we really need to fundamentally change our education. It is not about educating children, or people, to know the answers; the system knows the answers much better than us and can find many more possibilities to evaluate. It is about learning which questions to ask, and how to ask them. So we really need to change our education, for all of us. And that is good news for all of you who are not computer scientists: it is not about teaching people computer skills, it is about teaching people social science skills, teaching people creativity, teaching people how to think and what it means to think. So your jobs are much safer than mine; be happy. Of course people need technical or computer skills, as much as we need to know how to write, but that is a basic skill. We don't need everybody to be computer scientists, and we don't need everybody to be programmers; in fact, programmers are probably among the easiest to replace by machines, because machines can program better than us. But the humanities, the thinking, the creativity, the understanding of what it means to be human: these are the types of skills that we really, really need. Most of this is in the book that I just wrote. And who is developing AI at this moment? The numbers speak for themselves, and that is also a concern; we really need to change these numbers. It is not because the young white males don't want to do things right; it is just that the group is not diverse enough, and each type of people, colours, genders, backgrounds and so on, will help to create the diversity that we need. Like I said about the people sitting in Nigeria classifying moose: it is not because they don't want to classify moose, it is just that they don't know enough about moose. So we really need to look at who is at this moment building, developing and thinking about AI, and this goes back to the education issues. I think that was more or less my presentation, and now we have time for questions and discussion. Thank you.

Thanks a lot, very interesting presentation. You had this ART thing, with transparency as one of them, and I was also thinking about the car example, because I am pretty sure that of course we need to design the cars in ways that they don't run old ladies over constantly, but when we have done that, I am pretty sure the biggest problem with self-driving cars will be that there will be so many of them, or some other problem that we haven't thought about yet, because they are not here yet. Nobody foresaw the problems that we now have with social media when the internet was invented, right? So in that sense I am curious: what kind of transparency can we think of that allows us to, for instance, see why am I getting this on Facebook, or how is the system manipulating me? Which is basically what I need to know, because if I really want to be concerned with fake news or Trump or Brexit or whatever, I need to understand how the system is working, and the system won't tell me as it is currently, also because of business.

Good. There are at least two main ways to look at transparency. One is the transparency of the system in terms of how it gets to a result given a certain input. That is technically very challenging, because the techniques we use are for a large part stochastic and non-deterministic, so it is extremely difficult to really link input and output; that is the transparency most people are looking at, what they call opening the black box. What I think is the most important type of transparency, and probably also a much easier one to achieve, is transparency about the process: how did we build this system, which options did we take into account, which decisions did we take, and who was involved in the development, the deployment, the research, the training, the buying and selling, all of it. This process-based transparency is extremely important; it is what will give us the safeguards and the trust that there is someone behind this thing, some human or institution that I can ask to be accountable for what the system is doing, and it doesn't necessarily require opening the black box. We deal every day with other kinds of black boxes: organizations are typically black boxes, but we do have ways to contain or constrain the activities of those societal black boxes, and we need to take a similar approach here; a rough sketch of what such a process record could look like follows.
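One possible shape for such process-based transparency, sketched as a data structure; the fields and the example values are invented here, echoing examples from the talk, not any existing standard:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessRecord:
    """Process-based transparency: not how the model maps input to
    output, but who did what, with which data, along the way."""
    purpose: str
    data_sources: list = field(default_factory=list)    # where training data came from
    design_choices: list = field(default_factory=list)  # options considered, decisions taken
    people_involved: list = field(default_factory=list) # developers, deployers, auditors
    accountable_party: str = ""                         # whom to ask when things go wrong

record = ProcessRecord(
    purpose="triage support for radiology images",
    data_sources=["hospital archive, consent obtained"],
    design_choices=["chose 80%-accurate explainable model over 95% black box"],
    people_involved=["development team", "clinicians", "ethics board"],
    accountable_party="hospital medical board",
)
```

Nothing in this record opens the black box; it is the egg-sticker move, making it checkable that someone stands behind the system and how it came to be.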
That doesn't mean we shouldn't also strive to build systems which are more transparent, and we do have the techniques to build such systems, but those techniques are not very efficient. We can go back to the symbolic type of AI, which is extremely transparent, there is no opaqueness there, but the accuracy and efficiency of those techniques is much lower. What we see increasingly in AI research is people trying to combine data-driven and model-driven approaches, exactly to compensate for the lack of transparency of these black boxes. Some of them we cannot open at all, because of national security, or business models and business secrets, or because they are too complex and the effect is not worth the cost of understanding the system. So there are many reasons why we may not be able to open the algorithmic black boxes, but we can still have this process-based transparency.

So I am quite interested in this transparency question as well. As you mentioned the transparency on the process level: when we build machine learning systems, we reason at the level of the population, so we make choices that work well within a statistical mindset. But it seems like a lot of our ethical issues are on the level of the individual: we want to know why did the car do this thing at this time. So how do we resolve the problems in reasoning; how do we get from population-level thinking to individual-level thinking?

Very good question. I am not really sure I know how to resolve it; if I knew, I would do it. But I think it is again an issue of trust: in which situations and for which decisions are we comfortable taking this population-level approach? For some situations and decisions this works; maybe classifying cats in pictures and that type of thing works sufficiently well there. If we really want to go to individual decisions, and the effects of decisions at the level of individuals, then we definitely need another approach to AI; we cannot do that with the approach we have now. So it is again a dilemma: how much do we want to profit from the type of decisions we can take with these systems, and then be transparent about it, saying these are the decisions I can take and these I cannot; and if you really want the other type of decisions, we need another type of AI.

I was thinking about things that are going under the radar and already have an enormous influence. There is a great public outcry, and rightfully so, when one or two persons get killed by autonomous cars, compared to how many people we could perhaps already save if we had more autonomous cars driving, at least in places where the weather conditions would make them efficient. On the other hand, you have a huge cultural and financial influence in the way machine learning or artificial intelligence affects people's lives: in the way we consume culture, in particular with children who engage with monopoly-like media such as YouTube and are influenced by its choices in ways that parents cannot completely follow, why does this pop up again; or people in the financial system, taking loans, getting health insurance, who are met with a system they cannot understand, which takes some parameters, comes up with a number, and leaves them unable to have a meaningful conversation with the party in question. Are we too unconcerned about that at the moment? Because that is actually having a great influence on people's lives.

Good question.
A lot of what the media are doing is not necessarily AI, and that doesn't mean we shouldn't regulate it, but I think that discussion can be separated from the regulating-AI discussion; indeed, we can do much more about the media and those types of things without really needing to go into the AI. In the YouTube example you give, the main influence of AI on these systems is in the recommendations it gives you of what to watch next: based on what you have watched so far, you probably want to watch this next one. But that is only a small part of the whole influence that YouTube has on children, along with the types of content and so on. So I think the discussions need to be kept a bit separate, or at least we need to understand that we are dealing with an even more complex situation, in which many more technologies are involved. I think there is indeed a big resistance from governments in general, maybe less outside Europe than in Europe, to regulating things, and there is a big push from the other side. And the biggest thing we are not regulating is the large tech companies, which have an influence and a reach bigger than any of our governments. The annual revenues of Google are probably larger than those of Denmark, or around that, and they have an impact on many more people than the government of Denmark. You can vote for your government; at least you have a say there. You cannot vote on what Google is doing or not doing. This type of political power outside the normal or traditional political power, which we are really seeing happen in the large companies, is again not necessarily something to do with AI, but it is something we should be very concerned about, and it is again a discussion we need to have in parallel to the AI discussion. But I think there is a lot that can be regulated about AI. I wouldn't so much go about regulating the technologies themselves, but really look at the applications in which we see AI being applied, and at which combinations of techniques and applications need to be at least flagged as potential risks.

So, I have been coordinating a big project on robots in Europe, and I see quite a lot of overlap, not just because robots are also connected with AI, but also in some of the issues. One of the issues we found was that it was really, really difficult for engineers, programmers if you like, to say: what I am doing is actually superfluous; I can make this robot, but is it actually necessary, do we actually need it for this task, are there better solutions? So we came in as anthropologists and played the bad cops, in a way, and said: why do we need this robot here; couldn't you have a better, smarter and cheaper solution? And the answer was no, because this is innovation and we have to do it, because we have to try these things, because that is what drives the world forward. So I just wonder what you would say to that, and then I have one more question.

Indeed, we engineers build things because we can; that is what drives engineers. We can do it, and we can explore, and only after we see that we can will we think about whether we should have done it. That also has a lot to do with education, with the way engineers are educated: it is about identifying novel ways to apply certain techniques, without thinking too much, or being given enough background, about understanding the impact and about what other options you have. That is indeed something I see a lot, and it is a lot of what drives AI too; and it is good for innovation in a sense, because you try new things.
But what we should have much more of is a differentiation between this kind of explorative approach to identifying new paths, and the step of doing it while these things are being used in real life; some kind of sandboxing might be a way to do that.

I agree completely, but I also learned a lot from the engineering approach, because I learned that even though today it might be a better solution to have persons do something, maybe tomorrow it won't be. But I am just thinking: when we are talking about decision making, and it is not just the AI making decisions but also humans, we tend to prioritize technological solutions. In robotics, where I was, we were thinking that we could do more by making it a combined issue, asking what humans can do better than machines in these innovations.

That is true, and one of the reasons is that the field is not diverse enough; the other reason is that we give more value to technical innovation than to societal innovation, for whatever reason. But as you can see from the presentation I gave, a lot of it is questions for us people, and I give the same type of presentation to engineers, exactly because we need to start asking the questions before we do things.

That is why your work is really appreciated, thank you. But one more question I had is about another thing we saw in robotics: in the actual implementation of these technologies, a lot of things happen to adjust to the technology, so the physical environment has to change. We followed experiments with self-driving cars, and we don't want either the old lady or the child to be hurt, and the way to solve that problem is simply to exclude the humans, take them out of the equation. Some of the engineers we spoke to even said that very directly: let's get rid of the humans, they are spoiling the system. So I wanted to ask you about that: where are the limits there?

Yes. Indeed, the worst thing that can happen to an engineer is to find a user; that is something we don't want, users are kind of scary things, we want to have them far away. Now, taking the joke out: indeed, it is easier to adapt ourselves to the systems than to adapt the systems to ourselves, because we as people are very adaptable and very resilient, and we adjust to change much more easily than the systems do, and with less effort for the engineers. And it is not always bad to exclude the users or the people. For instance, in the north of Sweden, people are creating fully automated mines, because you don't want to have people inside the mine; it is not really of this time anymore to force people to go down a mine and work there in the dark, breaking iron out of the walls. You can do that without the people there; of course, there is then the whole issue of the miners who will no longer be miners, but in some cases you still need them: in most of these mines they work with joysticks, it looks like a mining game, you sit in an office and you drive the machines inside the mine with a joystick. So in some cases it is not bad to exclude the user; we do have to be able to understand why and when, and in which cases the user is really relevant. Another thing, somewhat related to what you say and to some other questions that have come up, is this idea of wanting to know that we are talking to a machine, that we are interacting with a machine, that we understand what the machine is doing.
If at one end we are extremely resilient, at the other end we are also very lazy. We don't want to be bothered all the time by the machine telling us: now I am going to do this, are you okay with this or not; now I am going to read the news to you, are you okay with this or not. So it is extremely difficult to find the right type of interaction, one in which you still have enough understanding of what the machine is doing, and of the fact that you are interacting with a machine, without us becoming so lazy that we just say "I consent and agree with whatever" because we don't want to be bugged every time about what is happening. That is one extreme, and the other extreme is when we try to take people out of the equation. It is a difficult balance, and again one which you cannot solve from an engineering perspective alone; we really need a multidisciplinary approach to these issues.

Thank you very much for your talk. Sometimes there is a lot of stress on the dangers of AI, and I have been looking at healthcare and AI, and I found that in some cases it is very banal: they don't really care about the AI, because they just discard it if it doesn't work, if they think their own judgment is better. Sometimes it is helpful, sometimes it is useless, so there is something very trivial about how they go about AI. So what I would like to ask is: where do you find the two or three most pressing cases or areas where we need to look at AI as a danger? Leaving aside the driverless cars, because we know about those.

So, the real dangers of AI can come from our interactions with AI. The self-driving cars or the autonomous weapons are the examples that always come up. I don't see those as the biggest dangers of AI, because that is typically what we people do with whatever technology we have: we use it and exploit it for whatever evil reasons we might have, so the blame is not on the AI but on the way we use it. I think the biggest danger of AI concerns self-determination and our capacity to think for ourselves. The doctor is now still aware that she can discard the results that come from the AI diagnostic systems, because those systems are still quite separate from the normal interactions and the normal process. But once these things become an integral part of the process, it becomes much more difficult to understand when the system is making a decision and when I still have the capability to make a decision myself. This loss of self-determination, of the ability to reason for ourselves and to say "I don't agree with the system, I am going to do something different", will make things more difficult. A doctor who is able to say "no, I am not taking the decision the machine gives me, because I have a gut feeling that it is not right" will be increasingly difficult to find. And that might be fine if we can really build systems which are accurate in all types of situations, but in many cases, especially at the fringes of these systems, people still have this capability to be resilient and to find creative ways which the machine doesn't have.
Going back to the engineering example: I am an engineer, and I don't know how to calculate a square root by hand anymore, which is maybe fine, because the machines do it accurately all the time, so I don't need to. I did learn how to do it in school; nowadays children don't learn anymore how to calculate a square root by hand. The point is that we still need to know when to apply a square root, and this type of knowledge is what we need: not so much the technique, but the questions you have to ask to decide that now you really need to calculate a square root.

You said that artificial intelligence isn't intelligent, and in a way you convincingly demonstrated it through those examples. But then I am wondering whether artificial intelligence isn't, or is, intelligent, because I am thinking of Yuval Noah Harari's Homo Deus, in which he remarks that what is new about artificial intelligence is that it is separated from consciousness, and that might be the reason why it doesn't seem intelligent. So maybe, in a way, intelligence is stupid, and it only becomes intelligent when it is combined with consciousness?

Maybe; I don't know, you are the philosopher, not me, so I won't go into that one. But it is more than that. My claim that it is not intelligent has nothing to do with consciousness; it is just the fact that most of the systems we are using nowadays are purely based on correlation. All these systems can do is correlate data. Any intelligent being, from an animal to a baby to Harari himself, has other types of cognitive abilities which are part of cognitive intelligence, even leaving consciousness aside: we use abstraction, we use causality, we use all kinds of other techniques to be cognitively intelligent, which the machines lack. This doesn't apply to all types of AI techniques, but the ones that we see at this moment as the most successful, the ones which are applied again and again in all the examples I have given, are purely correlation-based, and that is just a small part of what we can do as people, even taking consciousness out of the equation.

Many of your examples were AI based on images, but there is a difference depending on what kind of data we are talking about. How do the challenges differ depending on whether it is images, text or numbers?

In terms of the application of correlation techniques, it doesn't really differ much. In terms of the impact, it depends a lot on the application or the domain in which we are using these systems.

One of your examples, at the beginning, was about adversarial use of AI or machine learning. Could you come up with an example of that for text-based machine learning; what would that be? Fake news. How does that work? By inserting certain pieces of text into another text, which will lead you to be convinced that it is something different. In terms of sound or audio: these systems speak like a person, but they are not a person; there was the example of Google Duplex some time ago, in which the system even has the kind of pauses and interruptions that we have in the way we talk. And that doesn't necessarily have to be wrong.
Hi. I actually have a question more on the research side of things. What do you feel has already been researched enough in AI safety research? As you said, there are something like 84 sets of guidelines, and while they are not quite the same, they contain very similar ideas. What new can researchers bring to this whole regulation effort?

We do have 84 of those documents, but we don't really know yet how to apply them. So that is the first thing: how do we move from very nice words to concrete recommendations or concrete requirements for the systems we build? And then, how do we evaluate and check that those things are actually done? It is very nice to sign a statement saying "my systems will be fair," but how do I then verify whether the system is fair or not? That is the kind of research which really needs to be done, and again it cannot be done from an engineering perspective alone; it is genuinely multidisciplinary research. An issue which we don't understand nearly well enough is the impact of all these systems, not only on the way we ourselves reason, but also on the ways our societies are going to evolve and change, and how we include that understanding, which again comes much more from the humanities and the social sciences, in the design of the systems. How do we include this awareness, and the methods we use to understand impact, in the design itself, really closing the loop? Because in most development what we see is that engineers and technologists come up with things, drop them into the world, and only then do the social scientists and humanists start looking at the impact. Very little is done to close this loop; it runs one way: the engineers build more stuff, dump it in the world, and you study it and look at what happens. Instead we should talk together: we know what we can expect to happen, we have the techniques, the theories and the methods to look at it in a principled and scientific way, so let's feed that back into the design. There is a lot to be done there, and very little has been done yet.
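On the point above about verifying a claim like "my system is fair": as one illustrative possibility, not a method endorsed in the talk, a reviewer can at least measure a simple statistical criterion such as demographic parity, the gap in favourable-decision rates between groups. A minimal sketch, with invented decisions and group labels:

```python
# Hypothetical check of one simple fairness criterion: demographic parity,
# the gap between groups in the rate of favourable decisions.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # 1 = favourable outcome
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

def positive_rate(group: str) -> float:
    selected = [d for d, g in zip(decisions, groups) if g == group]
    return sum(selected) / len(selected)

gap = abs(positive_rate("a") - positive_rate("b"))
print(f"group a: {positive_rate('a'):.2f}, "
      f"group b: {positive_rate('b'):.2f}, gap: {gap:.2f}")
# A large gap does not prove unfairness, and a small one does not prove
# fairness; which criterion applies, and what gap is acceptable, is exactly
# the multidisciplinary question raised above.
```

The code is the easy part; deciding which fairness criterion matters in a given domain, and what threshold counts as acceptable, is where the nice words still have to become concrete requirements.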
I have a bit of a follow-up question. The really big tech companies represent a lot of political power, especially perhaps in America, through lobbying and so on. How can researchers actually get through to these big companies? Do they just have to work closely with governments, or how do you do that?

Actually, the very large companies are quite aware of these issues. They are not completely evil; there are evil people, but there are a lot of good people there as well, people who are aware of these discussions. There are platforms on which these companies really try to work together; the one I know best is called the Partnership on AI, which you can Google. There is a flavour of "we are aware of the problems, don't worry, we are going to solve it," which is when I start getting worried, but there are ways to enter into a dialogue with them. Many of them have sandboxing platforms on which they are happy to discuss things. Of course, you cannot go and check exactly how Google does its page-ranking algorithm, or its decisions about what YouTube is going to show you or not, but there are opportunities there, and the dialogue increasingly involves all the different parties.

I have a question on responsibility. You said that the responsibility, for example for self-driving cars, lies both with the user and with the engineer. My question is: if we were to allow self-driving cars on our roads, would there also be a responsibility of society? Because in my opinion, as a student of both computer science and philosophy, I can very firmly say that any AI system that runs on a computer will never be able to understand meaning, so there will always be a possibility of error which is impossible to predict from an engineering standpoint. Is there a need for a discussion of whether this is an acceptable risk which we, as a society, are willing to take in order to allow self-driving cars on our roads?

Definitely, society has a very large part in this discussion. One thing which is important to realize, following from what I just said, is that the moment we allow self-driving cars on our roads, our behavior will also change. I often talk with kids in high school about self-driving cars and other dilemmas, and one thing that almost always comes out, and I have done this in several parts of the world, is that when they realize that well-designed, properly operating self-driving cars will by default stop when someone is crossing the road, they will step in front of the car. I don't have to tell them; they all arrive at this idea themselves, and you can see the excitement rising: it's a nice toy, let's try it. The car will stop, yes, if I step out half a meter in front of it. What if it's 40 centimeters? What if it's one centimeter? And this is not only about the kids: if we trust that these systems will by default stop when we cross the road, then, again because we are lazy, we will take even less care about the way we cross roads. We will keep looking at our screens while walking, because we know the cars will stop. That is just a very simple example of how our societies change because of the technology we put into them. In Melbourne, I recently saw that the pedestrian stop signals are no longer on poles but set into the ground. Why? Because we are looking at the ground, not at the poles. Our society changes because of the ways we use technology, and this is a very important thing to realize when we design technology: not everything can be solved by the technology alone.

I'd like to return to the discussion of transparency a little, because there is this notion of explainability. I know that explainability, explainable AI, is taken by some to be a kind of silver bullet that addresses the need for transparency. I have also seen in the explainable-AI community that trust is taken to be one of the ultimate goals: the explanations are meant to engender trust in the user. My question is: how do we avoid over-optimizing for that purpose? People can be deceived into trusting, so how do we avoid accidentally building merely persuasive systems rather than actually explanatory ones?

I do agree that at the moment explainability is seen as the silver bullet, and I don't really know whether that should be the case. Of course we need to understand what the systems are doing.
Societally speaking, there should be enough understanding of what a system is doing, and enough knowledge for understanding the behaviors and the design of these systems. But whether each one of us individually needs that kind of explanation, I wonder. There are thousands of systems around whose workings I don't really understand, and I interact perfectly well with them without needing to. My car even less: what exactly happens between my turning the key and the motor starting, I don't really know, but I also don't really care, and if the car tried to explain it to me, I would probably just get irritated, every single time. So explanation in AI needs to be contextualized: it should be given at the moment it is necessary, to those who can do something with the explanation. If I get an explanation but have no capability to act on it, I can do nothing with it. There is a balance to be found, and again this is the type of discussion which needs to be much more multidisciplinary.

All right, time flies. Yes, I don't understand why; can you explain it to me? But you have been so succinct in your answers and so lively and engaging in conversation. So let's first thank Virginia so much for coming, and thank you all for being here.