Here we are, here we are. Thank you very much for coming, and for adding this extra time in the classroom to your already very busy weeks. This is not going to be a lecture: the purpose of this meeting is rather to have a conversation, so I'm acting here as a facilitator. I have a series of slides, but essentially they will just show you questions, and for today these will be genuinely open questions. That means we don't know the answers, and I certainly don't, to most of them. What matters more is to hear how you feel about these questions and what your insights are, so it's a matter of communicating your opinions and your feelings. Tomorrow there's an exam, and those will be real questions; but today this is really just a conversation.

We have two microphones here. As much as the flow of the conversation allows, you are invited to circulate them among you; if not, not everything will be recorded, of course, because questions just come up and it's better to follow the flow of the discussion rather than stopping it. But since we have them, just pass them along. If anyone wants to intervene (thank you, Luca, you're already at work), just move the microphone along in the audience, okay? And if it doesn't work, just shout.

So, to set the stage for the discussion: when you talk about artificial intelligence, of course many things come to mind. What's the first thing that comes to your mind when you think about artificial intelligence? You want to ask a question already? Okay, ask it; I don't know if I can answer. What's the question? Okay, this question is going a long way, so I'll ask you to raise it again later on. My question is: what do you think about when you think about artificial intelligence? What's the first thing that comes to your mind? Speak up; you don't need to be called on, just speak up.

Yes, that's a classical reference, it's not silly. Exactly: how robots interface with humans. I think that is one of the main concerns, and we will discuss it. Ethics. Ethics, of course, is part of that: before anything else, we have to establish some ethical framework in order to relate to machines, to other intelligences. What are the expectations, and do we want to trace a boundary? Is there a boundary, an actual intrinsic limit? That's a good question, and it's a big point, because it touches on the very notion of what intelligence itself is, besides being artificial. No, this comes at the end; wait a second. Yes, perhaps not; perhaps we will keep on adapting. We can discuss this later on; for now I was just asking what comes to your mind. Yes, the design of the goals of artificial intelligence is very important, and that would be one of them.

Anything from the back? Perhaps first the movie The Matrix, and the question of how long we can remain masters instead of slaves to artificial intelligence. And second, as Diego said, what is the difference between the human mind and AI? Perhaps we are just big bags of calculations, the same as robots are. Okay. And Google comes to your mind, sure. Exactly. So we will try, and that's the purpose of today's meeting,
to walk you through all these kinds of questions in a slightly organized way, in which, I reiterate, I have no particular standing. I'm not an ethicist, I'm not a lawyer, and not a lawmaker either, even though lately it seems everyone in Italy has become a lawmaker. None of that. So I will discuss from the viewpoint of my personal take, which is just as good as anyone else's.

One thing that I think is helpful, just to clear the field of many complicated things, is to remind you, most of you being students of this course, that our definition of intelligence is actually very low-level. It is meant to encompass as many of the available options as possible. It's enough to have some agent which perceives its environment and reacts to it in an adaptive manner; and in this loop of perception and action, sometimes, as in robots, behaviour is directed towards a particular goal. That's a very basic definition, but it's important because it encompasses many different forms: not only humans, but animals, plants, machines. So a way of framing the discussion about intelligence that I find particularly useful is to realize that we live in a sort of ecosystem in which different agents interact. We usually think of ecosystems as plants and animals and humans, and so on; perhaps we have to look at this as a larger ecosystem, and then we have to care about the relationships between the various agents that are present in it.

As already emerged from this discussion, one very important concern is how we feel about artificial intelligence, more than how to define it: what is our perception of it? Sometimes it's shaped by literature or by films, and these means of communication may play a very important role in steering public opinion, in ways that might sometimes be good or bad. So the question I'm asking now is not what you think about, but what feelings you have when you think about artificial intelligence. There's a whole spectrum, going from expectations to fears. Can you tell me some of those? First, let's make a very quick survey of the overall feeling. Who feels more excited, with great expectations? Those who feel like this, raise your hands. And who is more inclined to be, let's say, anywhere from suspicious or agnostic to outright fearful? Can we be both at the same time? Of course, and that's probably the thing that captures most of the feelings we have: we hold both expectations and fears.

So, concerning the expectations: what are specific instances in which you think artificial intelligence could improve the quality of life? I know that some image-recognition systems are used for a number of projects, and very often some tumors can be recognized by a machine. Yes, but that's absolutely something we'll discuss; is this an expectation? You want to rush me too quickly into that scenario. We will get to it a little later. Let me give an example of what appeals to me; I have a couple of things in mind, and maybe some of you share them. One of my dreams would be instant translation:
I go anywhere, I just listen to people and I understand what they say; I talk, and they understand what I say. Good. After all, I'm speaking English here, which is not my native language. So: something accessible to everyone, of course. That's my dream, a small dream. And yes, as was said, that's a delicate issue, because it touches the relationship with work. Sorry, but it still remains my dream. A second dream: assistance to the elderly. You disagree? Okay, like I said, it's my own wish. Rather than relying only on families or care workers for so many tasks, mindful automation is one way of alleviating many of them, which, in the West, with a growing population of elderly people, is becoming a big burden on a certain part of society. You might say this is not a problem in 75% of the world, but, as I said, it's my own expectation. You might have others. Yes, please.

I really worry about my privacy, because Big Brother is watching us, and the problem is that he has ever better systems, better machines, at his disposal. In the past we killed each other with stones; now we kill each other with brand-new high-tech. So yes, of course there are lots of useful things, instant translation and the like, but, you know, I now use a free application and it shows me advertisements that know exactly what I want. Yes, it's useful, but what about my privacy? I really worry about privacy.

It seems we want to move on to the fears. Advancing science, yes. Helping us with intellectual tasks, yes, sure. There you go: a world where all the repetitive tasks have gone and only creative jobs exist; the transformation of work. Yes.

Related to what he said, I'm a bit concerned, because if society is not ready to evolve on this, it just means that machines can do all the manual jobs and more people are likely to end up jobless.

That's exactly the outcome I wanted to reach, sorry to cut you off, because it shows how entangled the two things are. Every expectation comes with a downside. I'm happy about the universal translator, and then whoever studied many languages sees the risk of getting no job. All these seemingly distant things come together. And this is exactly what many surveys, more systematic than the one we're doing here, actually confirm. It's a big, complicated issue, and it's very important not to try to oversimplify it. Many of you are physicists or scientists, so we like to say, well, let's simplify things, cut them up and chop them; but that is one of the things you don't want to do here. It's an inherently complex problem which we have to address with care, paying particular attention to the fact that there are always two or many sides to the same thing. So people report a whole spectrum of feelings, going from mistrust and anxiety to excitement and optimism, and I think all of us have these in different proportions; that's the initial realization we have to deal with. So, as we said, both expectations and fears are present. Some of them have a more practical dimension.
For instance, lots of people are concerned: AI will take away our jobs, and we will end up jobless. Is this going to make us richer, in the sense that we have more time to do more creative work, or is it just going to leave us jobless, full stop? This is a practical and direct concern, and it's the thing we have to think about and face first as members of society, because it was famously said, I don't remember the name of the economist, that it's much more likely that humans will rise up against machines than that robots will take over humans. The risky scenario is a complete repulsion of technological innovation, a phenomenon that occurs often and repeats itself throughout history. If something arrives too quickly and there is no background to integrate the new technology into our systems of values and beliefs, then it can easily get rejected, with results that are not necessarily good for humanity as a whole in the long term. So it's important to address these fears and to find ways of making the impact of AI more friendly and less disruptive.

You see this when we have to weigh the pros and cons. Some people, for instance, are very happy to have a digital assistant for legal issues: it would be better to have affordable legal advice than to preserve the jobs of lawyers. That's a very aggressive statement towards lawyers, and for some other sectors of society it reads quite differently, as you can easily imagine. These surveys are essentially done in Western countries, so they don't represent society as a whole, but at the current stage they are the only ones available at this scale; geographically the picture can probably change a lot.

And it seems, so far, that what we've been looking at has not been so much the threat of artificial intelligence itself, but rather how and why it is used, in what way it is applied. Yes, but that's what causes the fear in itself; I mean, the fear comes from the relationship with artificial intelligence, not from it as an abstract entity, right? Is that what you're saying? Yes, but perhaps we should say which artificial intelligence, because we have different kinds of intelligence, be it specialized, generalized or superintelligence, and surely it's something different if it's no longer a question of human operators.

There's another class of concerns which I would call more existential. Is the fate of humanity going to change because of the arrival of artificial intelligence on a large scale, and in which ways? So there is one dimension, which is how it will impact society and our way of living, and another, which is how it will impact the existence of humanity and how it will transform it. Both are present, and we'll try to discuss both if we can. Thank you.

So, the first question that sits in the back of the mind of many people who were interviewed: are you worried about not finding a job, or, since you're a pretty young audience, for most of you, about not finding a job because of artificial intelligence? Maybe the things you're learning now will soon be replaceable. How many of you feel some sort of competition with artificial intelligence for the job market? That is quite understandable. Okay, so you don't feel worried about this.
And do you think your parents should be worried about the impact of artificial intelligence? Okay. And how do we conceive the notion of a job? What does "job" mean? Is it simply an activity? This is not linked to AI especially. I agree: it's a general question about innovation, and the innovation at hand here is AI. So we're discussing a general problem, the impact of innovation on society, through this particular case; it's just that, at this moment, the impact is claimed to be disruptive in many fields, and perhaps we are overstating it. That's why we want to discuss it. It's not specific to AI, but that is what we want to address here. Absolutely. But the political level has its constituency, and we are that constituency; so what we can do now is discuss. The job market changes a lot, changes continuously, and yes, we probably have to adapt to this. Most of you think you don't really need any adaptation; that's good, probably because you are already used to adapting and can handle new situations. Yes, please.

I think there are many ways to tackle this problem; it's really something between capitalism and social democracy. We can pay people who are losing jobs to robots for a time and help them build new abilities, if they are able to do so. In America this would sound funny, but in Europe I think this is something people can agree on.

Yes, sure, absolutely. There's absolutely this need to accompany the process and to protect the parts of society that are more fragile and susceptible to being strongly affected. These things have happened in the past: when a big innovation came into the job market, there was always a gap period in which the transformation was rapid and it took time to adjust; eventually there may even end up being far more jobs in that sector than there were in the beginning, but the transformation certainly hits some layers of society very hard, especially people who have been doing the same job for many years and are less able to move into an economy made much more of short-term work, with a completely different scheme of how work proceeds.

I think that, if you look deeper, the overall wealth of society will increase; the problem is only the distribution of that wealth, and if we agree to help those people, it's good for everyone.

That's very right, and it's a big concern, because another thing about artificial intelligence is that its impact will be highly unequal across the world. It will hit in very different ways according to the geographical and political situation of each country and its history, so it's highly prone to enhancing inequalities, and we have to keep an eye on that for sure.

And there is also the question of who owns the robots or the artificial intelligence: will it be something like open-source artificial intelligence, or will it be proprietary? And so, what about... I don't know, I know nothing about it, but for instance I'm sure that armies throughout the world are developing their own artificial intelligence and armed machines, and so is there any...

Yes, absolutely. Is artificial intelligence a common good? Assuming it is good. So this is the question, actually.
Will jobs disappear? Will some jobs disappear? Can you mention one? Retailing, yes, retailing. There are new shops that are completely automated. My father owns a shop, and it's a job that is going to disappear. Yes, retailing and all related work will be strongly impacted, if they are not already. Transportation, or traffic control: that's basically a job that can be totally automated. Security also. Security? I say security because we will probably reach a point where we no longer need actual people for security purposes; machines will probably be more reliable. That's interesting, because again it touches on the problem of interaction between humans and robots: there are robots patrolling some places, and they get vandalized a lot. Which again touches on how you would relate to such a thing: it's a cop, and on top of that it's a robot. I think that frames the problem.

So, again, let's see what people say. This is the result of a survey published as a scientific paper about artificial intelligence. It's a survey among participants of the big conferences in the field, like NIPS and so on; so it is a survey among experts, asking which jobs and tasks will be taken over by machines. You see that some of them have a pretty short horizon: transcribing speech, folding laundry, cleaning the house in general is something that will hopefully be taken over by machines very soon. That's not particularly alarming; in those cases people find it very beneficial. I'm happy to have a small robot which cleans up the mess that I make. And then we get to something else: driving a truck, on a ten-year horizon. Of course these are estimates; it's a guessing game, but a guessing game played by many people who are knowledgeable in the subject. And this is probably going to have a significant impact on the way that job is transformed.

The interesting thing, and let's take this as a particular example because I think it's particularly telling, is automated trucks. This would really be a boon; it would be a wonderful thing, because you would no longer have people driving trucks for hours on the highway, getting distracted, with lots of accidents. It could have a really positive impact on public health, because you remove some of the risk that is inherent to transport by truck. But is this job really going to disappear? When you look into the scenarios that people in the transportation sector outline for themselves, what they actually have in mind is the following. There will still be drivers taking trucks from the warehouses, where the goods are, to the entrance of the highway. From that point on, the truck will be automated, basically driverless, and there will be an operator sitting in an armchair with many screens connected to all the sensors and a joystick, keeping track of what is happening and remotely supervising the truck. That's the scenario they depict. So would that mean that the job of truck driver has disappeared? In this scenario it has just been radically changed. Perhaps the numbers will change, perhaps fewer people will be needed; that's possible, of course, and difficult to estimate. But it will certainly be a job that is largely modified, because there will be no more long-haul driving by a single person.
Most of the work could be done from nine to five, or in shifts. That is perhaps a very optimistic scenario. Perhaps we will never see self-driving trucks, because integration with ordinary, messy traffic is too difficult a problem in practice. Perhaps, on the contrary, in ten years this will be the reality on all our highways; we don't know. But it's important to notice that, at first sight, you would say this means no truck drivers around in ten years, and probably that's not the case. You, as a truck driver, will be required to have different skills. Just video gaming? Yes, that's a good investment, okay.

So this image is an old metaphor: the metaphor of continuous learning. It's always like this: innovation is a staircase moving downwards and you want to walk up it, and you have to keep the pace in order to keep your job. New jobs are up there, old jobs are down there. It can be painful for people to keep up; it takes a lot of effort to learn new things. But it's an important process, especially when things speed up, when a new innovation comes in. Yes, I know, that looks very sad. Okay, but before you blame me and call me heartless, let me give you a little context for this picture. This is at the opening of a new shopping mall, and all these people were standing in line because something was being handed out for free on the floor above, and these guys were trying to shortcut the queue by taking the stairs. So you don't feel quite so much compassion now. It's a harsh metaphor, but the point is exactly the one raised before: we need policies that help people climb the stairs, rather than leaving them to themselves to struggle and fight.

In the AI limit, they can do everything; what does that mean? Now you're going full throttle into that kind of scenario. It might well happen, I don't know; we will probably react to it. I wanted to discuss this later, but it's probably a good time now. When you think about this kind of scenario, it's always funny, because it's as if there are now superintelligent robots and everything else has stayed the same: we just project ourselves fifty years ahead, only with robots on top. But what has happened in the meanwhile? Okay, now you're pushing really too far; I will give you Elon Musk's number and you can discuss it with him later.

The important thing is that at some point, if there is going to be more and more AI, we have to establish some relationship with it, and one of the key points is trust or mistrust. It's really a relationship with algorithms, with the outcomes of algorithms, with the information that we provide. So that's the question, generally speaking: do you trust AI? I know it's a loaded question.

I would say that it depends on the task, because, for example, we were talking about the film I, Robot. I was thinking about the scene where the robot, finding the car which had fallen into the river, or the ocean, I don't remember, decides to save Will Smith's character instead of the child, because he had a greater chance of survival.

Exactly. So would you... In that situation, no, I would definitely not trust AI, but in other situations maybe I would. Okay, of course; I mean, everybody trusts a Roomba, it's a little thing. And in the same way,
we trust Google, remember that? Because we have checked it: if it failed us very frequently, we wouldn't trust it. Can I interrupt you, because we'll discuss this in a moment. There's something I want to add. Yes, please. We don't really know anything about AI, so we tend to be distrustful; the baseline level of trust is very low. In our society we are rather trustful, unless proved otherwise; in other societies people are very distrustful. Yes, that's true; and here it is a stranger who wrote the code. You see, it's a very delicate issue. So now I'm trying to give you some contrasting examples, because my task here is to confuse you: I will give you one example that is a bit creepy, and another where you'll say, okay, that's good.

Yes, but we have a question, an intervention. Well, as was said before, it depends on the task, because, for example, I was thinking about cancer detection by AI. They can find the cancer; it's not a treatment, it's a diagnosis. And they can probably do it better than any human, because they have much greater computational power, let's say. But there are probably other tasks for which they cannot be trusted; it depends on the case. I can give just one more example of a situation where I could or couldn't trust an AI: if war robots are built with AI, I would definitely not trust a robot that has been programmed to kill me or hurt me, while, on the other side, if an AI robot were built to rescue people after an earthquake or a flood, for example, that would be much more trustworthy.

I have a more compelling example on this, very much on the subject, in just a second. First of all, let me show you this. This comes from real scientific research. Do I need to explain? It is a real-time manipulation of a video. There's a camera capturing the actor's face; the system builds a representation of that face and plants it onto a video, in this case of George W. Bush. It will be Putin soon. How do we know? That's the problem of trust, right? You trust your eyes because they haven't failed you. You trust your ears because you hear the meaning directly; but if there is a translator in between, would you trust it, perhaps less? Now, machines are an interface between us and the outer world. Do we trust them? You see how it works: a totally neutral image, manipulated in real time. So anything that is transmitted in real time, you are no longer obliged to believe, even if it is happening right now, just because it passes through some interfaces before getting to you. Clearly it's a concern. It's a concern which has motivated people to think: perhaps we have to push on the brake, give ourselves time to think and proceed more gradually, or perhaps just stop it altogether. Who is of this opinion, that generally we should put some brakes on this process? Two, three, four. And who is more inclined to let it evolve, and then we'll see where it goes? People will just push ahead anyway. Will they? Do you really think we cannot stop it? Just like GMOs, right? People will push ahead anyway? Really? Not at all. Some things which are substantial innovations can hit society at the wrong angle and be bounced off at full speed. GMOs are one big example of this.
A big problem of communication, about which you may have your own opinion; I have mine. But it is beyond question that there was a big communication problem about what these things can do, and about how they differ from what people have been doing for thousands of years, simply happening at an accelerated rate. That message did not get through at all. And something similar could happen to artificial intelligence: something potentially good could be rejected, which is the kind of example GMOs provide. GMOs are definitely perceived by a large fraction of society, by ordinary people, as something evil. Not AI; it's just an example taken from science communication, to give an idea of the kind of problems that might emerge. And then you could have short-term effects that you don't foresee, which is even worse. Why would they just need to pass new rules? Yes, of course, you cannot stop it altogether just because each country decides on its own; it would be very difficult to implement a global policy on this.

So let's go to the point that was raised before, because I think it's particularly illuminating. Would you trust AI for a medical diagnosis? It's a terribly badly formulated question; you should know this in advance, it's a trap. But I'm asking: raise your hand, who would trust an AI for a medical diagnosis? With some caveats. Okay, good, lots of hands raised. Very good. Now I'm telling you that the diagnosis is of a mental disorder. With some caveats, do you still agree? Okay. The point I'm making here is that there is actually a huge variety of situations: in some, AI might be extremely helpful; in others it's very difficult to imagine how it can positively impact medicine at this stage. One situation where you expect great advantages is rather obvious, and it's dermatology. There was a dermatologist at one of these sessions I held a few nights ago, and he was horrified. But that's the thing; it's now on the cover of this journal. The task of a dermatologist is to classify: through experience, having seen many, many lesions, she or he is able to tell a malignant lesion from an innocuous mole. And now you can do this in an automated way, with performance already exceeding that of a panel of dermatologists. In this case I would go for it, fully, 100%. But then, if the matter is subtler, if you have some other medical problem which is more complex and relates to your lifestyle and so on, do we have sufficient knowledge of something so complicated? We should really be able to know ourselves as humans, and perhaps sometimes a human can understand us better, based on things that are not easily put down on paper as classifications and plots. Perhaps that will come up in your course at some point. You know what the point is: now you start searching for some strange things online, and tomorrow the police are knocking at your door. That's a diagnosis: you diagnose, and then you do nothing? Neither do I; I'm just asking questions, rest assured. Yes, I understand what you say; there is this power. But in this case the process is pretty transparent: once the diagnosis is made by the computer, there will be a doctor assessing it and then deciding, okay, let's have this removed, and the procedure is pretty simple.
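To make the mole-classification example a bit more concrete, here is a minimal sketch of the kind of model such systems are built on, assuming PyTorch and torchvision are available; the folder layout, class names and hyperparameters are invented for illustration, and this is not the published dermatology classifier.

```python
# A minimal, illustrative transfer-learning sketch for lesion classification.
# Everything specific here (paths, classes, epochs) is a hypothetical placeholder.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical dataset layout: moles/benign/*.jpg and moles/malignant/*.jpg
train_set = datasets.ImageFolder("moles/", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: benign / malignant

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                          # a real system trains far longer
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# At inference time the network only outputs class scores; the decision to
# excise a lesion still rests with the doctor reviewing the case.
```

The point of the sketch is only that the network produces a score; deciding what to do with that score remains a human step, which is exactly the situation described above.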
But if the disease or the health problem is more complex than that, it's difficult to transform the diagnosis into treatment without an additional, complicated interaction with humans. Yes, that's a very interesting scenario; I would be extremely excited by that. Coming back to medicine: on your question of whether an AI system will be able to diagnose a subtle condition, maybe something like a mental disorder, it is of course difficult to say, but for sure, the moment we are able to collect and integrate data in a broader sense, we may be able to gain much deeper knowledge. But there is a problem, and the problem is that, as humans, when it comes to very delicate issues we would like to understand why, and artificial intelligence devices based on deep learning don't tell us why. They are not transparent; they are black boxes.

So there are many layers and different issues coming up here, all very interesting. One of them is the tension over how much data you want to give. For instance, if I take a photograph of a mole, I'm happy to share it; I would add it to such a database, because it contributes to health, not only mine but yours and my children's. So I'm very happy to share data about this. For other things it might be more complicated, so there's a trade-off between how much you want to share and how much you don't. Yes, but when we share, who has access to it? True. But we already share a lot of data, even about our health. Absolutely, because we get something in exchange which is free, or almost free. Absolutely. And there's another side: sharing goes towards improving the performance. As Luca said, we cannot diagnose this now, but if we provide more data, perhaps it will get better. But at the same time, if you do this, there will be side effects which are difficult to predict, and you might be worried: you have all these Fitbits sending your data around, and tomorrow an insurance company buys them all and says, no, you cannot get your life insurance, because I've seen from your heart rate that you are 90% likely to develop heart disease in the next ten years. Not you, of course. Nor me.

Talking about this topic: there is literally a report on how Target figured out that a teenage girl was pregnant before her father did. The girl was shopping at Target, she has a customer ID, so she buys things and Target mines the data to make recommendations and so on. What Target decided to do was to send her coupons for baby items and offers aimed at pregnant women. So the father of the teenager called Target saying, hey, why are you sending this to my daughter? And Target said, well, you should talk to your daughter. Nobody knew she was pregnant, only Target, and it was just data mining.

If artificial intelligence reaches this kind of performance on our buying patterns, and we cannot resist these kinds of proposals, what should we learn from that? How should we react? Sorry, I have to cut this off; you can carry on the discussion, I think it's very interesting, but let me move forward a little. So, the fact that you cannot hand artificial intelligence the whole job, except in some particular cases, means that it's very important to work side by side with artificial intelligence.
And while this is perhaps familiar to you, or to a medical doctor who uses several diagnostic tools and computational methods to predict and diagnose diseases, interacting with an algorithm is something pretty neutral. Things change when you have to interact with a physical entity, when you have to work side by side with a robot, something that moves and occupies space. These things are already here, and they pose a challenge. On one side there is the point we were raising before about policing with robots, which of course is a very delicate issue. To understand how delicate it is: there are robots which deliver pizza. They are nice, roundish bots, very Star Wars-like objects; the pizza goes from the pizza shop to the house, they collect the money, they come back. Pretty innocuous, right? They get vandalized horribly. Why is that? Because people are stupid? No: because you don't relate to these things just moving around, and so on. Now think about something that is partly doing your job, or is helping you, but is also somewhat changing the way you relate to it. Exactly: it's perceived as something asserting power. That is why people vandalize them: I'm the boss of this machine, and so on.

But now these robots exist. They exist in hospitals, even in Italy, which is probably not the most technologically advanced country. These robots go around and deliver treatments, they help with cleaning, and so on. They are very efficient. They share space with nurses and doctors, and nurses and doctors have been trained in how to interact with them. If you encounter such a robot, it will stop; it will not crash into you, it will not roll over you. They stop and leave you room to move, but this slows them down a lot. So, as the robots have spent more and more time in the hospital, people have learned to make this movement more fluid, so that the robots now move around very smoothly, and when a nurse encounters a robot, she simply walks around it. It's like helping each other do their jobs in a very simple and basic way, which starts with not being an obstacle to the other. Or, if a robot gets stuck in a corner, which might happen, you just help it get out of the corner. These are issues about sharing space and contributing together, and you have to have some very simple, basic ethical behaviour towards that robot: you have to behave well, and expect the same in return. These are not the lofty issues of saving a human life and deciding what is best for humanity; it's really everyday sharing. And this is one of the small steps that is always overlooked in these scenarios: having confidence, sharing space with robots that physically occupy your space, interacting with them and seeing that things can get along, is something which will probably shape our feeling about what robots can do in the future.

So, again, this idea of confidence and trust: there are some tasks you are willing to hand over to robots, some others less so. It's a very wide spectrum: no problem with cleaning your house and folding your laundry; some other things are trickier. So this is one of the questions I usually ask: who is okay with this? You are a very educated audience, so that's absolutely the case: no problem for you there.
Again, this audience is also pretty skewed, so I would like to hear something from people who would never buy a self-driving car. Could you please elaborate? Just to hear the reasons, not to challenge you; it's a perfectly legitimate position. Okay, I won't say that you are a control freak; I'm just kidding. Yes, there's a pleasure in doing some things that you don't want to give up, and that's perfectly legitimate. Other motivations? The same, the same. So, as I understand it, it's not about not trusting what the self-driving car would do; it's about the pleasure of driving. So you set aside, in your mind, the idea that this is a thing which can probably get to the point where it saves human lives. Okay. Please.

Different countries and places: I've stayed in different countries, and the traffic conditions are really different. For example, the traffic in China, in Hong Kong, here, or in the US is completely different in each place. So if the car is developed by an American company, the developers may ignore, for example, some strange conditions in China. In that case I would definitely not buy a self-driving car made by an American company in China.

Okay. These things are indeed complicated, and that already unfolds some of the complexity of the matter in itself. But before discussing this, let me show you a survey: would Americans feel uncomfortable about an AI flying an airplane? Seventy percent said yes. Americans what? What? There's a problem of perception, a problem of communication; that's what I'm saying. It's about the same percentage you get if you ask whether the Earth is five thousand years old; I don't want to go down that road. But you clearly see that there are some things we are okay with: if a machine is cooking your meals, you don't expect to be poisoned by it. There are a lot of assumptions built into this, right? Yes, sure, and that's a problem for all surveys. Riding in a self-driving car: in another survey, of about two thousand people, most would feel uneasy with that. It doesn't necessarily reflect your opinion, which is very different, but it's another survey.

So now, to add a little complexity to the issue of the self-driving car: if you're riding in a self-driving car, there might be situations where the car has to make decisions, and some decisions might be difficult. The car, on average, will be faster than you to react; it will probably have a better view, a better sense of what is happening. So let's assume this is a good car; I'm not talking about crappy automated cars. You have a very good car, but nonetheless, sometimes something happens that you just couldn't forecast, and you, as a human, would probably have made a disaster of it: running someone over, or crashing into a wall. Now the car is self-driving, so it has to make a decision. The other option is not to make any decision, which is a decision in itself. So you have to choose. If we accept the idea that there will be self-driving cars and trucks, it means there has to be some software which makes decisions in our place, and this decision must somehow be ethical. And that, if you want, is the modern instantiation of the I, Robot scenario.
There's a car, and it has to decide whether to steer and risk running over one person, or hit the other three: that's something which goes under the name of the trolley problem in the philosophical literature. Yes, please. Yes, but the cars will not be connected to people, in the sense that you cannot move pedestrians out of your way; if someone is crossing the road, that's that. What I want to say is that this is just a limited part of the spectrum of things that can happen, especially when space is shared between different road users, as in real cities. Those are the challenging situations; on the highway it's much easier, which is why people are pushing so hard on trucks: the range of possible situations is much more controlled. It's a decision made in order to minimize damage, let's say. But if the car crashes into a wall anyway, if there is an accident anyway and someone dies, who takes the legal responsibility for the accident? Totally open issue. Totally open. That's one of the biggest concrete hurdles to developing in that direction; I totally agree.

But let's abstract from that for a moment. The first point is: how should the algorithm that makes the decision work? Some people came up with proposals. This is taken from the MIT website; it's called the Moral Machine. It's a very simple device: you just log into that page, and it proposes several scenarios, asking what you would do as the driver. These are harsh choices: should I go left or right, when there will be damage either way? What would I do? By collecting many, many data points, the idea is to build what physicists would call a mean-field morality, perhaps accounting for fluctuations too, and then use this, perhaps, as a template for how cars should behave. That's the suggestion; I'm not saying it's good or bad.

What is interesting is that what people decide while sitting on a chair in front of a screen, calmly evaluating the options, is very different from what they do in the moment. There are some drastic psychology experiments in which people are confronted with a sort of recreated virtual reality in which they feel they are controlling the real thing. They saw a video of a train, and they were put in the position of having to maneuver it, and they really believed it was happening. There was a track splitting into two, with two workers on one branch and three workers on the other. The train is heading towards the three workers, and if you act on the lever you switch it to the other branch; rationally, you would say it's the lesser harm to switch, killing two people rather than three. Can you imagine what people did in this experiment? Yes, that's what they do: they freeze. They freeze, because it's a decision: anything you do will be your responsibility; it will be you killing two or three people. Now, the conundrum is that you cannot do that with a machine. Or you can decide to randomize the choice; there are many options. But it's a totally open problem how to tackle these things in a proper way. We don't think; we just don't think. Yes, but with a machine it is a decision, because sometimes you have to write the instructions down in advance.
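As a way of seeing what "writing the instructions down in advance" would literally look like, here is a deliberately naive, purely hypothetical sketch in Python; every function, field and number in it is invented, and no real vehicle software is claimed to work this way.

```python
# A deliberately naive, hypothetical sketch of hard-coding the decision.
# Every name and number here is invented for illustration only.
import random

def expected_casualties(option):
    # In a real system this would come from perception and prediction models;
    # here it is just a placeholder field on the option.
    return option["expected_casualties"]

def choose_maneuver(options, policy="minimize_harm"):
    """Pick one maneuver among the physically feasible ones.

    Note that every branch, including 'do nothing' or picking at random,
    is still a choice someone had to program in advance.
    """
    if policy == "minimize_harm":
        return min(options, key=expected_casualties)
    if policy == "randomize":
        return random.choice(options)
    # "Freezing" is what people tend to do; for a machine it just means
    # keeping the current trajectory, which is itself one of the options.
    return options[0]

# Example: swerve (risking one person) vs. stay on course (risking three).
options = [
    {"name": "stay_course", "expected_casualties": 3},
    {"name": "swerve_left", "expected_casualties": 1},
]
print(choose_maneuver(options)["name"])   # -> swerve_left under this policy
```

Even the "do nothing" and "randomize" branches are choices someone had to encode, which is exactly the conundrum raised above.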
But we are facing it again from another angle, which I think is also instructive, especially for philosophers. Absolutely. The problem is the difference between intervening and not intervening, between saving people and not saving people. Coming back to the topic of ethics: how do we program the artificial intelligence? And if you say, okay, randomize, or choose the option with fewer deaths, is it your fault as the programmer of the artificial intelligence? This does not move us away from the problem of responsibility. Is some of it in the hands of the owner of the car? Is it with the car maker, with the people who wrote that particular piece of code, or with the lawmaker who decides what the morality of cars has to be?

And then let me add another layer, which is pretty creepy. Assume these cars are on the market. I go to buy my car, and the company says: you know, for 10,000 euros you can buy this car, which has our life-saving software; nobody gets killed with it, and so on. Of course, there will be situations in which this car decides to crash and kill you and your children. They will never tell you that, but that's what is going to happen, because that was the better outcome for the algorithm. But if you put another 2,000 euros on top, I can give you the plus version: it doesn't kill you. I have nothing to add.

Well, it looks like a very complicated scenario. There is something more, in fact, because if you think about the AI systems we have, it's very likely that they will be self-evolving: they will learn from experience. So at a certain point no programmer, no car manufacturer, will be able to tell you exactly what the AI is going to do. It's likely to be unpredictable, yes; again the issue we were seeing before: we talk about AI as if we can understand how it thinks, but we don't really.

Concerning the example you just gave of the luxury version of the car that doesn't kill you, my personal opinion is that this is only a new way of expressing inequalities that already exist in society. I have read many articles about transhumanism, the idea that in 20, 50, 80 years we will be able to enhance our own bodies with machine-like devices, and this poses great problems of inequality, because of course, if I am rich, I can afford a better body and basically live 150 years, while if I am poor I cannot. And unfortunately this is not only a problem of ethics; it's a reflection, through new means, of inequalities which are already there in society. Yes, absolutely; I think it just blows them up to scale, increasing the inequality and making it even more apparent. Totally agree.

I think that, as in the example of the car, these kinds of algorithms should be standardized, meaning that... Oh, yes, sure, that's exactly what, for instance, the government in the U.S. is doing across all fields: more regulations. Is that irony? No! Someone explain the joke to me. It's very difficult to go in the direction of enforcing more regulations now; it's extremely hard, can you imagine? Really, you would have to enforce, not only across one country but across the world, that all cars share the same ethical standard in order to avoid inequalities. Is that feasible?
It's a highly unstable system, you say. Yes, it's highly unstable, but, I mean, if there is a damage-minimizing algorithm it should be the same everywhere. Yes, of course it should. What's beautiful is that you're so young. That's good; I like it. It's the same as the treaties by which we try to avoid the use of atomic weapons. So yes, it's highly unstable, but think about it: it is perhaps still possible to enforce global rules when you are facing global problems which threaten all of us. I hope very much so. The deterrence principle applied to cars, then.

Okay, this is already taking a lot of time, and I know some of you have to leave, so feel free to leave at any time. Let's move on to the more existential aspects. Will AI ever be human-like? What is your answer? This connects with what we just said. Perhaps there's a future in which we merge more and more with machines and become superhuman, or post-human; or the worst-case scenario, in which we are swept away; or the scenario in which we have to choose whether we want to be pets or cattle. If you're going to be the inferior organism, you can be a pet, nicely treated, or you can be cattle. All of these are quite speculative, but the interesting thing is that some more practical questions emerge from them. One thing I always ask, and it's very interesting, is: can AI be creative? And I usually get all sorts of answers. Who thinks that artificial intelligence can be creative now? Of course, what is creativity, right? We haven't defined it; I'm just asking the question. Who thinks it could become creative? Okay. Who thinks it is not and never will be? Okay. So, I'm showing you this picture. Do you know who painted it? No, it's not a Van Gogh. Sorry. It was made by a fairly simple algorithm, actually, that reads images from a repertoire of Van Gogh paintings; this one here is a real Van Gogh. It reads them and extracts features from them, so it basically learns the style. Then it applies that style to another picture, which is just a photograph of the river Neckar. And by combining the two, that is, by seeing the photograph through the eyes of those features, it constructs something which looks as if Van Gogh had painted the river Neckar.
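Here is a rough sketch of the kind of pipeline just described, in the spirit of Gatys-style neural style transfer, assuming PyTorch and torchvision; the layer indices, weights and step counts are illustrative, and this is not the specific system behind the slide.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained VGG-19 as a fixed feature extractor; in-place ReLUs are disabled so
# that the activations we capture are not overwritten during the forward pass.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
for m in vgg:
    if isinstance(m, torch.nn.ReLU):
        m.inplace = False
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}   # early/mid conv outputs: brushstroke-like "style"
CONTENT_LAYERS = {21}               # a deeper conv output: scene layout, the "content"

def features(x):
    style, content = [], []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style.append(x)
        if i in CONTENT_LAYERS:
            content.append(x)
    return style, content

def gram(f):
    # Channel-to-channel correlations of a feature map: a rough summary of "style".
    _, c, h, w = f.shape
    f = f.reshape(c, h * w)
    return f @ f.t() / (c * h * w)

def style_transfer(content_img, style_img, steps=300, style_weight=1e6):
    # content_img, style_img: (1, 3, H, W) float tensors, ImageNet-normalized.
    target = content_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([target], lr=0.02)
    style_grams = [gram(f) for f in features(style_img)[0]]
    content_feats = features(content_img)[1]
    for _ in range(steps):
        opt.zero_grad()
        s, c = features(target)
        loss = sum(F.mse_loss(a, b) for a, b in zip(c, content_feats))
        loss = loss + style_weight * sum(
            F.mse_loss(gram(a), g) for a, g in zip(s, style_grams))
        loss.backward()
        opt.step()
    return target.detach()   # the photograph "seen through the eyes" of the style
```

Nothing in this sketch invents a style from scratch; the optimization only pushes one image's feature statistics towards another's, which is exactly what the following exchange about creativity turns on.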
You say it learns the style of Van Gogh, or rather that it does not develop a style of its own: it takes a particular corpus of 2,000 or 20,000 painters and starts combining them in various ways. That's exactly the question, the question about creativity. What is creativity? If you assume, and this is an operational definition, that creativity is the ability to combine things which seem unrelated, to connect them and mix them and create something which is new but is a combination of other things, one among many possible combinations, then this is creativity. If you refuse that, you have to say that creativity is something more spiritual, and that there is a gap between knowing many things and combining them into something which is new to your eyes.

I think it's similar to what you said, but you also need the ability to create new rules, or to mix the existing rules into new ones, in order to express something: to express something that could be an emotion, a message, or even just to impress and show off your skills, something that requires a will. Yes, I know: to create emotions. This is a very basic example, but now there are AIs writing texts and producing short films, and these things can cause emotions, can explore different states of our emotions. That doesn't mean they express anything. Again, what is it to express? It means that you have an emotion and you share it with other people.

But it seems that there is also a problem of information here. Let's say you have a space and you are connecting different elements of this space, but you are staying within this space; so where does anything new come from? That's exactly the question: does the creative human mind go outside of this space? Does it? That's the question. But in this case we have somehow already filled the space: did the information exist from the beginning, or have we created it? Personally, and again it's only my view, I have a very basic understanding of creativity, which is pretty much combinatorial. I tend to be on the side of the spectrum for which a new thing appears because some people have very good knowledge of many things and are able to say, okay, this connects to that, and we arrive at something which is new in the sense that this relation had not been drawn before. But again, that's my own viewpoint; it doesn't mean anything more than that.

I would like to give an example from art, because, okay, you said that the machine could learn two thousand different styles, but when I see the style of Van Gogh, for example, when I see that painting, it transmits to me a deep sadness that the painter is trying to convey. Can we go one step deeper, then? Sure. The ability to feel emotions is probably one of the things that defines the value of a human being; humanism is, politically and morally, the basis of our world. So this whole discussion, especially about AI, is perhaps the first crack in the moral order of human society, and if we try to project ourselves into the future: what ethics are we going to teach the AI? Probably the ethics we hold right now. Thank you.

So let's go a little bit deeper, then; sorry, at some point we have to finish, or I would keep thinking about this forever. Let's go a little deeper, with some examples which are probably less obvious than the painting. When we talk about creativity, do we understand the processes that go on inside artificial intelligence systems? As Luca was saying before, in most cases we don't. Many algorithms perform very complex tasks; we understand the principles, but in essence many of them are black boxes, to put it simply. And to give you an idea of how far this goes, here is an example we already discussed in the lectures: it's about Go.
And you've already seen this: the surprise shown by Lee Sedol when that move was made. So, if it's not emotion on the side of the machine, it is certainly something which produces emotions, surprise, in the person sitting across from it. Perhaps you might take this as an example of creativity: again, something that comes out of combinations. The machine is just finding another of the minima; it doesn't aspire to anything by doing it. Okay, if that's the way you want to read it.

Let me give you another example. This is an app which you can download on your phone, and it can translate about 100 languages into 100 other languages. How many dictionaries do you think are loaded into this Google Translate? None? None is too few; you have to start from somewhere. It works offline. It works offline, sure. There's one simple estimate: if you have n languages, it should be n times (n minus 1) divided by 2, or twice that if you count the reverse dictionaries separately. That's a lot; it would be on the order of 10,000 dictionaries. Now, the number of dictionaries actually loaded is of the order of the number of languages: it scales linearly with the number of languages. And no, it's not using one reference language to which you connect all the others and then go back; that's not the way it works. So what does that mean? Does it extract patterns from the languages? Yes. In practice, what it does is create its own language. Rather than going from the source language to the target language directly, it goes all the way up to a semantic level: it represents the meaning of the sentence. Then, from its own language, this interlingua which nobody speaks except Google Translate, it goes down to the other language. So it has developed its own language, which we don't understand; we only understand the outcomes. Do we understand what our machines are doing inside? No, we don't. Esperanto? No, no: this is not something that has been constructed like that; it emerged from a deep network trained on examples, extracting semantic features from them. Of course, this is ongoing research, but this is basically the mechanism to which people ascribe the great improvement Google Translate has undergone in recent months. You're shaking your head: are you surprised, or do you disagree? Both. Both; that's good. And yes, that's the way it works: if you have many languages, you don't need a fully connected graph, which is what you would have with roughly n-squared dictionaries. You just need a path connecting them; from that path you go up, extract the meaning of the sentences at this other level, and then go down again. That is enough to connect all these languages; it really creates a meta-level, which is not simply the level of direct translation.
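For the back-of-the-envelope counting mentioned above, here is a small sketch, assuming roughly 100 supported languages (the exact number is whatever the app actually ships with):

```python
# Rough counting of translation resources for n languages; n = 100 is an assumption.
n = 100
undirected_pairs = n * (n - 1) // 2   # one dictionary per unordered pair: 4,950
directed_pairs = n * (n - 1)          # counting each direction separately: 9,900
per_language_components = 2 * n       # shared interlingua: roughly one encoder + one decoder each

print(undirected_pairs, directed_pairs, per_language_components)
# 4950 9900 200  -> pairwise dictionaries grow quadratically, the interlingua only linearly
```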
So, as I said, in many of these scenarios you have a variety of options. Either there is the nasty superintelligence and we have to fight it and escape from it, which is a classic of science-fiction literature; or there is some more integrative scenario in which there will be a combination of humans and machines; or there is this other class of scenarios, like the one in this picture by Sentinel-Man, in which everything goes on as always: you get your cereal in the morning, except that now there are robots around. And honestly, this last one is the least credible, because things will happen in the meanwhile.

Of course, people have been asking lots of questions about superintelligence and whether it will lead us to an existential catastrophe. Is that really the issue? Should we really worry about this, or is there something else we should care more about? This is a very interesting panel from the Future of Life Institute. The point is that we often have rather mythical worries: whether superintelligence is possible or impossible, whether AI will become conscious. These are really mythical worries; there are much more concrete problems. One of them, which I think is the most important for us and has a clear, immediate interpretation, is this: machines will become more and more competent, and that's the genie problem. If you ask for something, you will get exactly what you asked for. So we have to ask machines for the right things; we have to teach machines common sense. The challenge is to implement goals which are defined well enough for the machines to learn and perform them well, while at the same time providing general boundaries, so that in the effort of getting the goal done they don't simply overlook everything else. Designing something like this is an open challenge, and this is one of the basic messages: that's why we are here, to ask about it, to think about it, to be aware; not necessarily to predict, but to be conscious of what is ahead of us.

And what's ahead of us might be, as we said, the post-human future in which machines take over, or the trans-human future, which I think is probably the more reasonable scenario. Some things are already happening. There are these exoskeletons: a complete integration between a machine and a person. It does a fairly trivial job, because it just measures the muscular force and helps the worker keep the drill up for many hours; this is very demanding work, because you have to hold the tool overhead and drive screws many, many times a day. The exoskeleton interacts with the body and helps the musculature stay relaxed while doing this job, and it does so in a genuinely interactive way: it's not a robot performing a fixed, repeated set of actions; it interacts with the person, and if the person bends, the robot adapts. So this is one way of integrating machines and humans. And here is an even more interesting example, from Johns Hopkins: a robotic hand which is attached to a stump and controlled by the person. All the movements are controlled by the person through sensors which measure different things, such as muscle tension and electrical signals. This is real bionics, for real, and it's around the corner.

And of course, in this trans-human scenario, in which machines and humans actually merge rather than competing until one takes over the other, there are very interesting issues. This is a Nature issue from a few weeks ago: four ethical priorities for neurotechnologies and AI. The first is privacy: if you have a machine which can read your brain, beyond the information you willingly share, how can you protect what you don't want to share? Then there are agency and identity.
Agency: if I have a prosthetic arm and it hits someone, is it me or is it the arm? Who did what? Identity: am I the same person as before if a large part of myself is replaced by a machine, or am I a different person? These are all totally open questions. And then, going back to the final point, equality. Because, once again, and this is the final message, the serious risk with artificial intelligence is the explosion of inequalities across the globe, and that would be by far the worst possible scenario. With this I conclude. Thank you for the conversation. I hope the outcome is that you are interested in knowing more; and if you want to know more, don't ask me. Okay? Thank you.