How would you explain it in a way that a kid, or my grandma for example, could understand?

Right, so the most commonly accepted definition of AI is that machines should be able to solve tasks that require what we would think of as human intelligence to solve. There are many topics within this field: things like natural language processing, which is understanding text, generating dialogue, or translating text between languages. Related to that is speech, speech recognition, so that we can have personal assistants that interact with humans in a human-like way, and people experience it as a natural dialogue. And computer vision: understanding images, recognizing objects.

So if someone were to ask you where AI is already used in our lives today, how would you explain it?

I think that AI is having a very strong impact on our daily lives, mainly in developed countries. There is this field of AI that you were mentioning, which is called machine learning.

What is machine learning?

Machine learning deals with large amounts of data, and from that data it can train models to predict possible outcomes. This field has evolved in an unprecedented way in the last decade.

So which types of applications use it?

Yes, so maybe we don't realize it, but AI is there in a subtle way. For example, when we get a recommendation of the next song we might want to hear, or of the news that we are consuming, or of who we are going to connect with on our social network. So recommender systems are one of the main applications. But we also see growth in the voice assistant domain: all these machine translation and speech processing applications are being deployed in our daily lives.

One of the things that happened with AI in the past is that it was over-promising and under-delivering. This is something that I like to say a lot; I took it from Moshe Vardi. But nowadays this is not happening.
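The idea just described, training a model on past data so that it can predict possible outcomes, can be sketched in a few lines. This is a minimal illustration with made-up numbers (a one-variable least-squares fit), not any specific system mentioned in the conversation:

```python
# A minimal sketch of "learning from data": fit a straight line
# (y = slope * x + intercept) to example points, then predict an
# outcome for an input never seen during training.
# The numbers below are invented purely for illustration.

def fit_line(xs, ys):
    """Ordinary least squares for one feature: returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Training data": past observations the model learns from.
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]

slope, intercept = fit_line(xs, ys)
prediction = slope * 6 + intercept  # predict for an unseen input
print(round(prediction, 2))  # → 12.0
```

Real machine learning systems fit far more complex functions to far more data, but the principle is the same: adjust a model's parameters so its predictions match past examples, then apply it to new cases.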
So now we see many applications of AI, mainly based on this technique called machine learning, because there are two main factors behind it. One is that our societies are generating a lot of data in all possible forms: digital traces on the internet, but also from the many sensors that we have in many places. The other is computational power: the existence of graphics processing units, which allow all these machine learning models to be trained at a very large scale, has enabled this type of application.

I think this is a well-founded risk, and it is something that is happening now. AI solutions are able to automate many tasks that were not automated before.

Can you give examples of this?

Well, many jobs whose tasks are based on repetitive actions that do not require a lot of reasoning or analysis. For example, voice calls: with all these call centers, it is easy to see that when we call some service, we are not talking to a person; the first thing we do is interact with a machine. And also text generation, for example. There are now all these text generators, such as GPT-3, that can generate text, and they are being used to produce at least a draft of an article that you can then edit. In that way, this can affect the job market a lot.

But at the same time there will be new opportunities.

Yeah, definitely. Actually, AI has already contributed to creating many jobs in technology. One example might be drug discovery. Drug discovery is an extremely expensive form of research where you have to try thousands of different compounds to see if they can help against a certain disease. AI is frequently used today to predict the effect of a certain compound on a certain disease, let's say, to help speed this up. And this has created a lot of job opportunities for people seeking education in data science and so on.
Yeah, so there are many new study programs at the undergraduate level, such as undergraduate degrees in data science, and there are new master's programs in AI as well. So there is a lot of demand for this type of profile in education. And I've read that the number of AI companies has multiplied by 14 in the last decade.

Definitely, I think there are a lot of opportunities out there.

I think there are many well-known limitations of AI, in particular of this machine learning field that we were talking about. We know how to design systems that are able to function and perform very well in a given domain or task. But once we put the system into a very different domain, one it did not learn from before, or one where it is not easy to acquire data, then the system fails miserably. One example of this is the promise of self-driving cars that was made, I would say, ten years ago. And you see that they can only work in the very controlled environments for which they have been trained.

There is also the idea of explainability. If I go to the doctor and the doctor makes a diagnosis, I want the doctor to explain to me, in a non-technical way if possible, how he or she arrived at that conclusion. But these machine learning systems often work as black boxes: you cannot interpret what they are doing. They just give a prediction that in many cases works very well. For many applications that is fine: if you want to reach a point on the map while you are driving, a small mistake is acceptable. But of course you don't want to make mistakes when you are diagnosing someone with cancer. So lack of explainability is one of the main limitations.

Today's AI systems are a kind of reactive form of intelligence, in a way. They are systems that receive input and react to it somehow, making predictions, generating text, or generating a response of some kind. But I think one thing that is missing in today's systems is perhaps more high-level intelligence.
Things that we would think humans are very good at: drawing conclusions from what you see, or reasoning about what will happen if I follow one course of action, that is, counterfactuals. Counterfactuals mean that you evaluate what would have happened if you had tried different options. It is similar, I guess, to opportunity cost in economics: what would have happened if I had tried something else?

These systems are very good at finding statistical patterns in data, and they learn them at a scale that is not accessible to us. But in the same way that they are learning these patterns, they are also learning biases. So if you want to hire someone for your company, and in your past you didn't hire any women, then your system is only going to recommend that you hire men. All these biases are a big issue in machine learning systems now. But not only biases; there are other forms of ethical problems, and trustworthiness and privacy issues as well. And in a certain way, I think that the solution should be driven by regulation. When we created the automobile, we didn't talk about ethics inside the cars. So how did we solve it? We didn't try to embed an ethical system in the cars themselves, you know what I mean. So I think that the solution is public regulation.

What about if something goes wrong, if an AI system performs an action that has disastrous effects? Who is to blame? Is it the person who designed the system? Or is there a way to...

Yeah, I don't know, I'm not a lawyer, but this is an open question. I guess this happens with self-driving cars, for example; there have been several fatal accidents.

There is also the risk that, when we are not used to interacting with machines that have human-level performance on some task, we tend to attribute human traits to these machines, no? And this is also a risk.
So there is this movie from 2013 called Her that portrays exactly this ethical risk: a society that finds in artificial intelligence a way to escape from human interaction. And I think this is also one of the risks.
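The hiring example mentioned earlier, a system that learns from a history in which no women were hired, can be sketched as a toy model. The records and the 50% threshold below are invented purely for illustration; real systems are more subtle, but they can reproduce past bias in exactly this way:

```python
# A toy sketch of how historical bias leaks into a learned model.
# The "model" simply memorizes, per group, how often past candidates
# were hired, and recommends hiring only groups with a high past rate.
# All records here are fabricated for illustration.

from collections import defaultdict

def train(records):
    """records: list of (gender, was_hired) pairs from past decisions."""
    hired = defaultdict(int)
    seen = defaultdict(int)
    for gender, was_hired in records:
        seen[gender] += 1
        hired[gender] += was_hired
    # Recommend candidates from a group if its past hire rate exceeds 50%.
    return {g: hired[g] / seen[g] > 0.5 for g in seen}

# Historical data in which only men were ever hired:
past = [("man", 1), ("man", 1), ("woman", 0), ("man", 1), ("woman", 0)]

model = train(past)
print(model)  # the learned rule reproduces the past bias
```

The point is that nothing in the code is explicitly discriminatory; the bias comes entirely from the data the model was trained on, which is why auditing training data and regulating deployment matter.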