And speaking of AI, our next speaker has been working in the AI area for the last three decades. Francesca Rossi is a professor at a university, as well as a researcher at IBM, and she has a unique perspective on AI. We would be doing ourselves a disservice if we didn't look critically at our systems to ensure that they are as helpful to us as possible. Please welcome to the stage Francesca.

I'm an AI researcher. What I do is build machines that help people make smarter and more grounded decisions. And yes, I'm very passionate about my work, because I really think that AI will help us solve some of the world's most difficult problems, from healthcare to education, from science to many other things like the environment. I've been working in AI for a very long time, and I'm very excited to see the renewed interest and progress in the AI field.

Thirty years ago, when I went to my first AI conference, the attendees were all AI researchers, and the focus was on making machines smarter and smarter, with little or no discussion of the impact of AI on our lives, our society, and our culture. And this continued for many years. Now I regularly work and discuss with philosophers, psychologists, sociologists, economists, lawyers, and policymakers. This multidisciplinary environment is crucial for understanding and making progress on very important issues related to this technology, such as bias, explainability, transparency, and value alignment.

Today I'm going to focus on bias in AI. What is bias, what does it have to do with AI, and why would it be a concern, something we might want to eliminate? Well, bias has to do with prejudice in favor of or against something, and it can lead somebody to treat certain groups unfairly compared to others. So if we want our AI systems to guide humans in making better decisions, or to make decisions themselves, of course we don't want them to be biased.
And why would an AI system be biased? Well, because most AI systems are trained on data provided by humans, and we know that humans are biased. The way AI works is that once a system learns something from a certain set of data, it tries to generalize its understanding to situations and scenarios it has never seen before.

For example, if we want to build an AI system that recognizes whether a picture contains a person, we train the system on a huge number of pictures, and for each picture we tell it whether or not it contains a person. However, if these examples are not diverse enough, not balanced enough, not representative and inclusive enough of the human population, the AI system will have problems generalizing to pictures it has never seen before. If all the pictures we give to this AI system contain images of white people, of course the system will have trouble recognizing people of a different skin color. So if you embed this AI system in a decision-making process, it's easy to imagine how this could lead to unfair treatment of certain groups.

To make the challenge even more complex, not all forms of bias are bad. Bias that has to do with expertise and domain knowledge can actually be good. A doctor who possesses exceptional skills and experience exhibits a form of bias that we don't want to eliminate. So it's very important to distinguish between good and bad bias, experience versus discrimination.

Most current AI systems are biased, but we believe and predict that in the next five years, bias in AI will be tamed and eliminated by people and companies that really care about the responsible use and development of AI. And only those AI systems that do not exhibit bias will actually be trusted and adopted in the long run. AI researchers are working hard to achieve this vision. For example, at IBM, we have published research that shows how to detect and mitigate bias in training data.
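The talk doesn't specify how bias in training data is measured, but one standard fairness metric is the disparate-impact ratio: the rate of favorable outcomes for an unprivileged group divided by the rate for a privileged one. The sketch below is a minimal illustration of that idea on a toy labeled dataset; the function name, the group labels "A"/"B", and the data are all hypothetical, not IBM's published method.

```python
def disparate_impact(labels, groups, favorable=1, privileged="A"):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    A value near 1.0 suggests parity; the common "80% rule" flags
    values below 0.8 as potentially discriminatory.
    """
    def rate(g):
        outcomes = [l for l, grp in zip(labels, groups) if grp == g]
        return sum(1 for l in outcomes if l == favorable) / len(outcomes)

    unprivileged = next(g for g in set(groups) if g != privileged)
    return rate(unprivileged) / rate(privileged)

# Toy training data: group "A" receives the favorable label (1)
# four times out of five, group "B" only once out of five.
labels = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A"] * 5 + ["B"] * 5
print(round(disparate_impact(labels, groups), 2))  # 0.2 / 0.8 -> 0.25
```

A model trained on such data is likely to reproduce the skew, which is why auditing the data before training is useful.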
And this is, of course, very helpful for developers, who will be able to create AI systems that do not replicate data bias. We have also shown how to recognize and rate bias in AI systems even when we don't have access to the training data. And this is very helpful for end users, who will need to trust that an AI system is not biased before adopting it in their everyday lives.

To achieve this vision, it's really important that we have a multidisciplinary environment. And this is all very good news, but there is even better news, I think. The more we learn and understand about AI bias, the more we recognize our own biases. The more we inject bias detection and mitigation mechanisms into AI, the more those AI systems can help us be less biased, by alerting us when they see that we are not behaving fairly.

So how do we do that? Well, it's very important that we take a multidisciplinary, multi-gender, multi-stakeholder, multi-cultural approach. This approach is essential, especially in a world where people are increasingly retreating into their own filter bubbles. That's why I've been very encouraged over the years to see the evolution of the AI scientific community, which now includes many more women and experts from many other disciplines. Only a very diverse and inclusive approach can help shape AI in a way that is both trustworthy and beneficial, besides being smart. And in this effort to build trust between humans and machines, we may actually learn how to improve much more than AI. We might just improve ourselves.
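Rating bias in a system without access to its training data, as described above, can be approached by treating the model as a black box: probe it with pairs of inputs that differ only in a protected attribute and count how often the decision flips. This is a minimal sketch of that auditing idea; the `model` here is a deliberately biased toy function, and all names and thresholds are hypothetical, not the specific technique from the IBM research mentioned in the talk.

```python
def audit_black_box(model, cases, protected_key, values):
    """Probe an opaque model with variants of each case that differ
    only in the protected attribute; return the fraction of cases
    where the decision changes (0.0 = no detected disparity)."""
    flips = 0
    for case in cases:
        outcomes = set()
        for v in values:
            probe = dict(case, **{protected_key: v})  # swap attribute only
            outcomes.add(model(probe))
        if len(outcomes) > 1:
            flips += 1
    return flips / len(cases)

# Hypothetical biased loan model: group "A" gets a lower income bar.
def model(x):
    if x["income"] > 50:
        return 1
    return 1 if x["income"] > 30 and x["group"] == "A" else 0

cases = [{"income": i, "group": "A"} for i in (20, 40, 60, 80)]
print(audit_black_box(model, cases, "group", ["A", "B"]))  # 0.25
```

Only the case with income 40 flips between groups, so the audit reports a disparity rate of 0.25 without ever inspecting the model's internals or training data.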