Welcome! Looking forward to this final keynote of the day. Remember, audience, that you can send your questions to Richard through the Q&A. Don't miss this opportunity, and watch out for that enigma sentence that we reveal during Richard's keynote. So, Richard, whenever you're ready, all yours. Thank you. Thank you very much. I'm very happy to be here and to share with you some experiences on good AI for good. Why good AI for good? Well, because usually, and I think you've seen that also in this conference and in many other events, we speak about AI for business. It's all about how we can use this technology to improve our business, to generate more revenue and to have better operations. But what I want to state here today is that this is actually much bigger. There are more things that are important. It's not only business. Actually, there are four things that matter: AI for business, AI for good, using a good AI, and, more recently, a green AI. So what does that mean? Let's look very briefly at each of them. First of all, we all know that AI for business is a huge business opportunity: almost 16 trillion dollars by 2030, according to an estimate by PwC. Many use cases, many different sectors; in Spain there is now an AI consortium in the industrial sector, and the government has an AI strategy under which it will invest 600 million euros in the coming three years to develop this technology for business. A huge opportunity, but I will not talk about that today; you've heard a lot about it already. Then there is AI for good: the non-commercial opportunities. We all know the challenges we have with the Sustainable Development Goals of the United Nations; we have to reach them by 2030, less than 10 years from now. There is the climate change challenge. You can use this technology very powerfully to work towards those objectives, to help achieve them and to monitor them.
That is also very important if you think about artificial intelligence. Then you have the other part, good AI. AI has huge opportunities, as we've seen, but it also often has unintended negative consequences: ethical risks for our society and for people. If you look at the slide, you will remember facial recognition: AI recognizes, let's say, white men better than black women. Is that fair? Do we want that? We have systems that help companies hire people, and they favor men over women. Something we don't want. It's not that those things happen on purpose, but they still happen, and you should avoid them. Something like Asimov's laws: you need rules that help you avoid those things. You want to avoid those negative aspects of AI. That's good AI. And then, more recently, we also have green AI, because there are studies showing that the big models, like the ones big tech uses to do machine translation or to understand natural language, can have a carbon footprint for one training session equal to that of five cars over their whole lifetime. Or another natural language model that generates the same footprint as three households for a whole year. And if you look at the energy consumption of those algorithms, there are studies estimating that one training session of GPT-3, the big natural language model from OpenAI, cost about 12 million US dollars in energy. So it's not the case that you can just run the big models as often as you like; you have to be very careful about that. And it's not only models. Think about all the photos we have on our phones and all the emails in our inboxes: they're all somewhere in the cloud, and they all have a footprint. So we have to become more aware. These are the four parts that, in my view, will become very important for the future of AI.
And if you want to do AI as an organization at large scale, you have to think of all four of them, not only the business part. But today I want to talk to you about only two parts: good AI, and AI for good. I start with good AI, which has to do with the ethical use of AI, and first with the ethical principles of artificial intelligence. We've already seen some of the challenges with this technology. I mentioned facial recognition, but there's another problem, which is explainability. If those algorithms help take decisions that impact people's lives, like access to a public university or a medical diagnosis, then of course you have to know how those decisions are taken and what they are based on. And many of those models, the biggest and most powerful ones, are actually black boxes, so it's very hard to understand how that happens. That's something you have to work on. And think about the gender bias in the professions and how that permeates machine translation. If you simply translate "nurse" into Spanish with Google, you usually get the female form, and if you translate "engineer", you get the male form. It's not that Google Translate actually wants to discriminate, but it still does, and that's because of how this technology works. So we have to work on preventing those risks from materializing. Think about privacy, think about the future of work. Those are all consequences of using this technology at large scale that we have to think about in advance. So what are organizations doing to avoid those negative consequences? And they are unintended; I'm not talking here about doing harm on purpose, but about unintended negative consequences. First, there is the example of the European Commission, which came up with guidelines for trustworthy AI containing seven requirements that any AI system posing a high risk for people should comply with. It should be transparent. It should not discriminate.
It should be safe. It should protect privacy. So there are seven requirements you have to fulfill if you want to use this technology. Telefónica, as a company, issued its ethical principles about three years ago, actually around this month: the use of artificial intelligence in all of our business should be fair, it should not discriminate, it should be transparent and explainable when needed, it should be human-centric, putting the human in the center, and it should come with privacy and security. And if we work with third parties, with providers, then they too should comply with those principles to some extent. And like Telefónica and the European Commission, there are currently hundreds of organizations in the world that have publicly declared: if we use AI, we want to avoid causing those kinds of problems. There are even several studies; this is one from Harvard University where they took the first 36 organizations in the world that issued their principles and analyzed how important those principles are and how they work. So now suppose you are a specific company and you have to define your ethical principles. What do you do? First of all, you can be overwhelmed by the number of principles that are available, because in the end the potential ethical impacts of this technology are huge. On the one hand you have things like transparency and explainability, as we just saw. You have the future of work. You have even superintelligence, superintelligent computers. You have the concentration of data and power. And then you also have privacy, security, and human agency. A lot of things. So if you are a company or an organization that wants to work with this technology, you want to choose some principles to make clear that you're not going to cause any problems. So which of them do you choose? There are various dimensions you can use to come up with your principles.
The first is to think about what is within your scope, what you can act on: what is related to your organization versus what is more the responsibility of government. If you talk about the future of work or the liability of artificial intelligence systems, that's more government-oriented. But if you speak about robustness, safety, or privacy, it's within your realm. So make a division between those principles that you can act on; that's the pool you should start with. Then you could also think about this: the things that I do and that I know I do are, of course, not a risk for me. So you have to distinguish between intended purpose and unintended consequences, and actually focus on the unintended consequences. A few years ago, all the challenges I mentioned before were unintended consequences, because nobody knew they were happening. Google Translate doesn't create gender bias because it wants to; it's an unintended consequence. But over time we've learned so much that at a certain point, if an algorithm discriminates, you can say: hey, it's not unintended anymore, because it's known, so they should have been able to avoid it. That's another important distinction to take into account when you select your principles. So this is the pool you can draw from, on the left-hand side: the things that are within the scope of the organization and the things that are unintended. Then there are two more dimensions you can look at when choosing. First, an end-to-end versus an AI-specific approach. If you speak about privacy, human values, or security, that doesn't only apply to AI; it applies to any digital system out there, and they all have to comply with those principles, of course. If you look at the very AI-specific parts, there are things like fairness, non-discrimination, bias, human agency or control, and autonomy.
How much autonomy do you give to the system? And transparency and explainability. Those are very specific to AI. If you already have the other things arranged in your company or organization, focus on the AI-specific ones. If you really want to make a statement that your end-to-end approach to AI is ethical, then maybe you want to include all of them. The last dimension to take into account is which sector you are in. Of course, it's not the same to be in the industrial sector, the medical sector, or public administration; different principles may have a different importance, so that's also something you should think about. Especially if you are in the services industry, you usually think a lot about fairness, transparency, and explainability, because you work with people. If you are in industry, you have a factory and you want to do predictive maintenance, then it's less a matter of people and more about safety, robustness, and security. Those are four dimensions you can use to come up with the principles for your organization. Once you have the principles, it's nice to have a statement, we will not do this or we will not do that, but then how do you put it into practice in your organization so that it actually implements your ethical principles? We came up with a methodology, like privacy by design or security by design, which we call Responsible AI by design. It has five ingredients. First, the principles. Second, a lot of training and awareness of what this means, like I'm doing right now. Third, a questionnaire: if you develop a system or want to buy a system, questions to think about for each of those principles to make sure you're taking the right decisions. Fourth, a lot of tools, because this is all related to data and you can't do it just by hand or with an Excel sheet; you need technical tools. And finally, a governance model.
That means the roles and responsibilities of each of the different actors in an organization. As an example, on the right-hand side you can see an online course that we made in Telefónica, where every employee can understand what this means. We also have a questionnaire: for every product launched into the market that uses AI, you have to go through it and have answers to some questions, and if you don't, you can ask some experts. There are several technical choices you have to make that affect the social impact of this technology. If you have a system that continuously learns without human intervention once it's in the market, you have to think about how much autonomy you give to the system versus how much you keep with people: human in the loop, human on the loop, or human out of the loop. You have to think about bias and discrimination. How explainable does your system need to be? What about errors: is a false positive equally important as a false negative? For example, in a COVID test, a false negative is much more important to avoid than a false positive: it means further propagation of the virus versus confining one specific person. There are many technical decisions that influence the societal impact of your AI system, and that's what you have to think about: take explicit decisions, not just do things, and record those decisions. You need this governance model where you work out what happens if you don't know the answer to a certain question in the ethics-of-AI questionnaire: how do I solve it, how do I escalate it? You have to put that in place so that if something difficult happens, you know how to deal with it. That is how you can implement those principles. Now, if you do this on a voluntary basis, as many companies have done, then you'll probably be well prepared for the upcoming European AI regulation that is coming out in a few years.
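The false-positive versus false-negative point can be made concrete: once you assign explicit costs to each error type, the decision threshold follows from minimizing expected cost rather than raw accuracy. A minimal sketch in Python; the scores and the 10-to-1 cost ratio are illustrative assumptions, not figures from the talk:

```python
def expected_cost(scores_pos, scores_neg, threshold, cost_fn, cost_fp):
    """Expected cost at a threshold: false negatives are positives scored
    below it, false positives are negatives scored at or above it."""
    fn = sum(1 for s in scores_pos if s < threshold)
    fp = sum(1 for s in scores_neg if s >= threshold)
    return cost_fn * fn + cost_fp * fp

# Illustrative classifier scores for truly infected / healthy people.
infected = [0.9, 0.8, 0.6, 0.4]
healthy = [0.7, 0.3, 0.2, 0.1]

# A missed infection (false negative) is weighted 10x a false alarm.
costs = {t: expected_cost(infected, healthy, t, cost_fn=10, cost_fp=1)
         for t in (0.3, 0.5, 0.7)}
best = min(costs, key=costs.get)
print(best, costs)  # the low threshold wins: missing infections is expensive
```

With a symmetric cost ratio the higher threshold would look fine; making the asymmetry explicit is what forces the system to err on the side of false alarms.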
The regulation distinguishes between forbidden uses of AI, the unacceptable risk; high-risk AI, where mistakes really impact people's lives, which covers hiring people with those systems, promoting people with those systems, but also access to essential services like health and education, eight identified areas in total; and then limited-risk AI. Think about chatbots, where the provider has the obligation to tell the people who interact with the chatbot that they're actually speaking to a machine, not to a person; or, if you make a deepfake, one of those fake videos of famous people that you can make say whatever you want, you have the obligation to label it as a fake video. If you don't do that, you are in breach of the law. The large majority of systems, of course, will be neither high-risk nor limited-risk, so you can just use them, and if you want, you can follow the voluntary scheme we've seen. That was good AI: how you can build an artificial intelligence system that doesn't have negative consequences. Of course, it's impossible to remove every negative consequence, but if you follow what I've said, using a methodology and thinking about the principles, then the likelihood that you end up in the press with some kind of scandal is much lower, and that's what it's all about. Now we go to the second part: using this technology for good, not for business in this case. Think about what data means. It's something like Plato's cave, where you actually see a reflection of reality. If you watch the video on the right-hand side, you see all kinds of bubbles lighting up on the map. This is mobile activity just before, during, and after an earthquake. When the earthquake strikes, you see a lot of lighting up. That means people start to communicate: something has happened here, I'm okay or I'm not okay.
So that tells us that big data is a proxy for human activity, and that's the aspect of data you can exploit to use it for good. Imagine you had this in real time: you could tell the government where the earthquake is happening, and tell the emergency services where the earthquake hit hardest and what people are doing. This is all anonymized and aggregated; it's about groups, not about individuals. In this sense, this is a kind of data that has a lot of external value as well. If you look at payment data, you can estimate the economic impact of natural disasters. Search queries were used in the past to detect flu outbreaks, because suddenly people all over the world start to search for what to do if you have the flu, and if you capture that, you can actually draw a map of where a flu outbreak is happening. Satellite imagery can be used to estimate the GDP of countries, and mobile phone data, as we will see, can be used for many good things. So those proxies help solve large societal problems and even environmental problems. Here is an example of those proxies in Spain. There are almost 150,000 mobile phone antennas in Spain, and they generate 1 billion events every single day. That is a huge amount of information, big data. If you anonymize and aggregate it, you can really create beautiful value from it for society, for people and for the planet. Here is another example where you see mobile activity, this time in Mexico during flooding, and you can see a very high correlation between when the flooding happens, which you see in the circles, and mobile phone activity. Again, a proxy that helps humanitarian organizations and governments direct support better to the people who are in trouble. Here is an example of what this kind of data can tell us about COVID.
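The "anonymize and aggregate" step can be sketched very simply: count events per antenna and time window, drop the user identifiers, and suppress any group too small to publish safely. This is an illustrative k-anonymity-style sketch, not Telefónica's actual pipeline:

```python
from collections import Counter

def aggregate_events(events, k_min=3):
    """Aggregate raw (antenna_id, hour, user_id) events into counts per
    (antenna, hour) group, discarding the user identifier and suppressing
    any group with fewer than k_min events, so that small groups cannot
    single out individuals (a k-anonymity-style threshold)."""
    counts = Counter((antenna, hour) for antenna, hour, _user in events)
    return {group: n for group, n in counts.items() if n >= k_min}

events = [
    ("A1", 9, "u1"), ("A1", 9, "u2"), ("A1", 9, "u3"),
    ("A2", 9, "u4"),  # only one event in this group: suppressed
]
print(aggregate_events(events))  # {('A1', 9): 3}
```

The published output contains only group-level counts, which is what makes maps like the earthquake and flooding examples possible without exposing individuals.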
Here we see the first phase of COVID, and a few lines that represent electricity consumption, payment with credit cards, and mobility, how people move around. It's very interesting to see what happened. Before the lockdown was in place, people suddenly started to buy much more in the supermarkets. When the non-essential economy shut down, you saw a very big valley in energy consumption, and inter-provincial mobility didn't recover, because everybody kept working at home and didn't travel at all. You can see how these big distortions of society are reflected in data, and that really can help a lot. Here's another example of how big data and artificial intelligence can help fight COVID. This is an initiative that's unique in the world, where 16 mobile phone operators share aggregated and anonymized data with the European Commission, so that the European Commission, the European Centre for Disease Prevention and Control, and the European Medicines Agency can understand, at the European level, how the virus has propagated through the different countries, and how the measures the countries take have an impact on COVID. Because we all know now that the movement of people is very much related to the spread of the virus. So this is an overlay of mobility data with COVID case data, and if you look in a bit more detail on the right-hand side, which focuses on Sardinia in Italy, you see a very high peak in inward travel into Sardinia, and two weeks later a very big peak in cases. That's what has been learned through this data: there is really a two-week separation, and it really matters how you restrict mobility during the crisis. Unfortunately, at the moment in Europe we are still suffering from this effect. This is the last example here: you see a lot of colors within Spain.
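The two-week lead seen in the Sardinia data is the kind of relationship you can find mechanically: correlate the mobility series against the case series at different lags and pick the strongest one. A sketch with synthetic data; the series are made up, and real analyses control for many confounders:

```python
def pearson(xs, ys):
    """Plain Pearson correlation, stdlib only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def best_lag(mobility, cases, max_lag):
    """Correlate mobility with cases shifted by each lag (in days)
    and return the lag with the strongest correlation."""
    scores = {}
    for lag in range(max_lag + 1):
        m, c = mobility[: len(mobility) - lag], cases[lag:]
        scores[lag] = pearson(m, c)
    return max(scores, key=scores.get), scores

# Synthetic daily series: cases echo the mobility bump 14 days later.
mobility = [1, 2, 5, 9, 5, 2, 1] + [1] * 14
cases = [0] * 14 + [1, 2, 5, 9, 5, 2, 1]
lag, scores = best_lag(mobility, cases, max_lag=16)
print(lag)  # 14
```

On the synthetic data the echo is perfect, so the 14-day lag dominates; on real data the peak correlation is lower and wider, which is why the two-week figure is reported as an approximate separation.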
This is the map of Spain, and you probably recognize the provinces, but what is interesting to see here is that the different colors that cluster together form a kind of mobility community. These are groups of people that move around in a certain area, and they move around more within this area than outside it. And that is a kind of proxy for economic activity. Now, if governments could confine populations based on these mobility communities, these economic activity communities, they would still reduce mobility a lot, but they would have much less impact on the economy of the region. So this is an insight generated from big data and artificial intelligence that could help tackle the crisis while having a less harsh impact on the economy. Unfortunately, it's not so easy to communicate those kinds of decisions, so as far as we know, no European government has been able to put this into practice, but it is an interesting concept. Now, the last example of AI for good is about air quality, about climate change. About 7 million people die every year from breathing poor-quality air. In Spain it's about 10,000 people, three times the number of traffic fatalities. So it's really a big problem, especially in big cities. What we wanted to show here is that we can offer a tool that helps governments, especially local governments, to better manage and monitor the air quality in a city, and we did that in Madrid. We combined a lot of open data that is already published by the local authorities and the National Statistics Office with privately held data: the mobility data we've seen, which is used in the fight against COVID, from a telecommunications operator, but also mobile air quality sensors that you put on top of a car and then just drive through the streets, taking a measurement every 10 seconds. And the result you can get with this is a map.
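The mobility communities described above can be approximated with a very simple graph procedure: keep only the strong flows between areas and take connected components. Real analyses use proper community-detection algorithms (e.g. Louvain); this sketch with made-up trip counts just shows the idea:

```python
def mobility_communities(flows, min_trips):
    """Group areas into communities: keep only links whose trip count is
    at least min_trips, then return the connected components."""
    adj = {}
    for (a, b), trips in flows.items():
        adj.setdefault(a, set())
        adj.setdefault(b, set())
        if trips >= min_trips:
            adj[a].add(b)
            adj[b].add(a)
    seen, communities = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        communities.append(comp)
    return communities

# Illustrative trip counts between provinces (not real figures).
flows = {("Madrid", "Toledo"): 900, ("Madrid", "Segovia"): 700,
         ("Sevilla", "Cadiz"): 800, ("Madrid", "Sevilla"): 20}
print(mobility_communities(flows, min_trips=100))
```

The weak Madrid-Sevilla link falls below the threshold, so two separate communities emerge, and each could in principle be confined as a unit.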
It's just like a map we are used to seeing, with red, orange and green, but now it doesn't show the intensity of the traffic: red means bad air quality and green means good air quality. If you overlay that with all kinds of data, you can build this air quality map at the street level. You can then also combine it with other public data, like where schools are located, and you can learn that if there is a school in a certain area, at 11 o'clock, when the break is, there is actually a very high peak of contamination. That would allow local authorities to intervene and take decisions: okay, we have to do something about the traffic at these times of day. How is this managed currently? Well, in big cities there are around 21 fixed air quality stations, roughly one per district, and each one serves as an indication for its whole district. But what we've seen is that if you drive through a neighborhood at street level, that one sensor can show green while three blocks away, or even three streets away, the air quality is completely bad. So with this technology you can actually take decisions in a much more agile and frequent way. And especially now, when it is obligatory for every city with more than 50,000 inhabitants to have a low-emission zone, this would be a perfect tool to help make that happen, because currently it's not done in such a data-driven way. Of course, you also have to look at which people are affected by the air quality, and you don't want to discriminate, like we've seen in the other cases, so that certain groups of people, by age, gender or even ethnic origin, suffer from worse air quality. You want to do it right for everybody. So ethics is important in AI for good too. All right, I've explained a lot of things to you about good AI for good, and I also mentioned something about AI for business.
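The street-level map from car-mounted sensors can be sketched as binning each 10-second reading into a small grid cell, averaging per cell, and coloring the cells. The grid size and thresholds below are illustrative assumptions, not the project's actual parameters or official limit values:

```python
def grid_cell(lat, lon, cell=0.001):
    """Snap a GPS reading to a roughly 100 m grid cell."""
    return (round(lat / cell) * cell, round(lon / cell) * cell)

def air_quality_map(readings, good=40.0, bad=100.0):
    """Average pollutant readings per grid cell and color each cell.
    Thresholds are illustrative, not official limit values."""
    sums, counts = {}, {}
    for lat, lon, value in readings:
        c = grid_cell(lat, lon)
        sums[c] = sums.get(c, 0.0) + value
        counts[c] = counts.get(c, 0) + 1

    def color(v):
        return "green" if v <= good else "orange" if v <= bad else "red"

    return {c: color(sums[c] / counts[c]) for c in sums}

# Illustrative readings around Madrid: two in one street, one elsewhere.
readings = [(40.4168, -3.7038, 30.0), (40.4168, -3.7038, 34.0),
            (40.4300, -3.7100, 120.0)]
print(air_quality_map(readings))
```

The point of the fine grid is exactly the one made in the talk: a single district station would average these cells together, while per-cell coloring shows one street green and another red.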
How do you make this happen in an organization? How can you put all those things into practice, and what do you have to think about? Well, I wrote a book about the data-driven company that speaks about three of those four things: it speaks about good AI, it speaks about AI for business, and it speaks about AI for good. The green AI I'll keep for later, because that's very recent. But there are clear examples available of how you go through this journey, because it's definitely not easy to make this happen. It's not only about technology and machine learning; it's a whole change management process that you have to go through. Thank you very much. I wasn't ready, Richard, I'm here taking notes. Hang on, let me just get my notes. Oh my God, here we are. Richard, my God, so much work to do. So much work to do. Well done. Well, Richard is the last speaker, so keep sending your questions in the meantime. Let me just double-check. Okay, talking about AI for good: you mentioned at the beginning the bias in the algorithms, so they ask you, and it's the eternal question, how to avoid undesired bias in algorithms? Is this going to be a problem forever, or will it eventually...? Well, I think the good thing is that we are now aware of bias, and we as people have biases too. So it's not simply that we can say: we need algorithms, AI, to avoid bias. Algorithms may have bias, but maybe they have less bias than we have. Even if they're not perfect, maybe they're better than we are. That's the first thing. If you look at where bias comes from, it starts with the data. For instance, if you train an algorithm on schools in a certain neighborhood, build the model, and then use that model to predict education scores for the whole city, then of course you have a problem with bias, because your training dataset is not representative of the target audience.
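The representativeness problem can be caught before training with a simple audit that compares each group's share of the training data with its share of the target population. A sketch with made-up demographics; the "less than half the population share" flag is an arbitrary illustrative rule, not an established fairness criterion:

```python
def representation_gaps(train_labels, population_shares):
    """Compare each group's share of the training data with its share of
    the target population; flag groups whose training share is below
    half their population share as underrepresented."""
    total = len(train_labels)
    gaps = {}
    for group, pop_share in population_shares.items():
        train_share = sum(1 for g in train_labels if g == group) / total
        gaps[group] = {"train": train_share, "population": pop_share,
                       "underrepresented": train_share < 0.5 * pop_share}
    return gaps

# Illustrative face-dataset demographics (made-up numbers).
train = ["white_male"] * 70 + ["white_female"] * 20 + ["black_female"] * 5
population = {"white_male": 0.30, "white_female": 0.30, "black_female": 0.20}
gaps = representation_gaps(train, population)
print(gaps["black_female"]["underrepresented"])  # True
```

An audit like this, run before training, is exactly the kind of check a "Responsible AI by design" questionnaire can make mandatory: the model never sees enough examples of a group to learn it well, and that is visible in the data before any bias shows up in predictions.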
That's what happened in the photo I showed of a camera where all the faces get a rectangle except for a black person: the data used for training did not have enough diversity. If they had taken care of diversity in the dataset, the algorithm would have learned, because an algorithm doesn't know what a face is; it doesn't know what a nose is. So that's one way to get away from bias: make sure that the data you train on is as free of bias as possible. This is like the story of the chicken and the egg. I've been at many events with Chema Alonso from Telefónica, and a lot of people keep asking the same question: behind the algorithms there's always a human, and we assume that human has biases, so this seems an impossible mission. And then he gives an explanation, which I've forgotten, to be honest. But can you also explain why not, or how we can avoid that? Of course we have biases, and having a diverse team build the algorithm helps, but the key is to be aware that there is a risk. And if you're aware, you can apply a methodology, and with this methodology you can remove maybe not everything, of course, but in the algorithm there are things you can do. So use the methodology. They're also asking you: what happens to the future of work when AI automates jobs previously done by people? Well, maybe we can have a weekend of three or four days. Wouldn't that be wonderful? What happens to the future of work? It's not only the future of work, it's also the work of the future, because work will completely change. Now, of course, this is a very big topic, and it's one of the topics that you cannot handle as an individual organization; this is really government territory. And it's a matter of opinion: nobody knows what will happen, so there are many different opinions. One is: okay, this is yet another technological revolution. Jobs will be destroyed, but new jobs will be created, and in a few years there will be no problem.
In the transition period, of course, there's a problem, because there are people who cannot be upskilled to what is needed, but in the end it will be solved. There are other people who say: no, 50% or 60% of the jobs will be affected. And remember, you don't automate jobs, you automate the tasks within jobs. If you automate 100% of the tasks, then of course you automate the job, but usually it's not 100%. Very good answers, Richard. But there is an even bigger question: it's not only about the future of work, it's about the purpose of life. Many of us get the purpose of our life from our work, because we generate income to support our families. Now, if we maintain productivity but we need less human labor, what are all those people going to do? Good question. God, so many questions, Marc. You're going to leave us with a lot of open questions. Regarding the regulation, you mentioned the European regulation and what is being done. In that sense, Telefónica obviously has many programs; it has Big Data for Social Good by LUCA. What are your relationships, at the European or international level, with other programs such as Destination Earth and other programs within European institutions and private companies? Well, there are two questions there. One is the European regulation, the ethical part, and the other is the AI for good part. In both, we work together a lot. We have very close connections with the European Commission; I personally participate in many of their initiatives, giving them advice on what they do, because there's a lot of talk and little concrete experience, and we happen to have some experience because we already started with this in 2018. Also with international organizations: the World Economic Forum, the Global Partnership on AI, IEEE, they all work on ethics, so of course we share examples. And I'm also the co-founder of the Observatory for the Social and Ethical Impact of AI, and through that we have even more relationships on the ethical side.
Also with governments and with Big Tech, to learn what a few companies know and bring that to all the other companies. Are we getting anywhere, are we agreeing? Because it's so difficult to get all these different governments and cultures to agree on something so immaterial, in a way. You started talking about Plato and philosophy, and ethical issues are so subjective. It's not easy to get... No, it's not easy, but we should try. And therefore there is sometimes less focus on ethics and more on fundamental rights, the international human rights that are agreed by all the countries in the world; that is a better vehicle to relate this to. But then, of course, in Asia the notion of community is more important than the individual, while in Europe it's the other way around: the individual is more important than the community. So we have a very different notion of privacy than, for instance, China. And that is what you see: China was the first country to control COVID, because they didn't care about privacy, had everybody's data, and could just confine people, and so on. And if you look at the example of an autonomous car that has to choose between killing, say, three burglars, three old people, or children, in the Western world we tend to save the children, while in the Eastern world they want to save the elders; they have respect for older people. So those things... That's what I mean. That makes it very hard. It's not just black or white. But I think that is the easier part. The difficult part is the geopolitical one. It's not really about AI; it's about power and who rules the world. You have Europe, you have the United States, you have China in AI, and it's kind of an arms race. In this sense, a final question, which has a lot to do with that. It seems that this should be a responsibility of governments or, let's say, of countries.
But on the other hand, private companies, multinational companies like Telefónica, have also been pioneers in many aspects: in entrepreneurship, in startups, in AI, and so many things. Why is this? Do you feel that governments are not doing enough, and that private companies should have a more important role, that companies like Telefónica can be the driver of this? Well, innovative companies cannot wait for regulation to happen and only then act; the private sector is always ahead. And usually governments come in with regulation to say: hey, don't go too fast, or you haven't thought about this. That is also happening with AI. I think there is a huge number of good opportunities and there are some negative impacts, and of course the negative things get so much attention that they trigger a lot of alarms. I think the best thing, Richard, will be to read your book, A Data-Driven Company: 21 lessons for large organizations to create value from AI. If you don't want to buy it, which you should, maybe you can win it by getting points on the R&D platform. So rate our talks, participate, et cetera. A final message, Richard, for closing this day: give us some homework, send us a call to action. Just tell the audience what to do. What do you think the next step should be, as individuals and as a community? I think as individuals we should be more aware and not take the stance: okay, this is not for me, it's all just happening, and I'm only here as a consumer. The audience, the people, have to be aware of what this technology can do for good and for bad. And if you want to make companies change, the best way to do it is through the people. But for that we need to be more aware; we need to understand this technology a little bit more. We don't have to be data scientists, but we have to understand a little bit more of this technology to have a good future.
More awareness. Well, we'll take note of that, read Richard's book, and continue. We have to say goodbye. Thank you so much, Richard Benjamins, from Telefónica.