Hello. My name is Miguel. Alberto came from Galicia; I come from Asturias. I'm really happy to share with you today my experience over the past decade working at the United Nations as a data scientist, trying to use AI and big data for good. But first I'd like to run a quick survey. How many children are vaccinated against measles — sarampión in Spanish — worldwide? Who thinks the answer is A? Please raise your hands. B? C? D? E? Okay, this is the right number; this is the status quo of the world. Just one more: how many girls go to primary school, worldwide? A? B? C — a majority here. D? E? Okay, this is the reality today. Let's try one more: how many people have electricity at home? A — nobody. B? C? D? E? We are at 85%. So numbers matter, and the reality of the world today is different from 2000, when the Millennium Development Goals started. Now we have the Sustainable Development Goals. All this information is in the book Factfulness by Hans Rosling; I really recommend taking a look, because it lets you understand, with numbers, where we are. Why am I asking you these questions? Because this could be Madrid, but I always think about this other picture, which was a picture of the year back in 2013: migrants on the coast of Mogadishu, in Somalia, trying to speak with their loved ones. The reality is that today technology is everywhere — mobile phones are everywhere — and they can be used for many things, not just for business. This is a video from Professor Song from Japan. This is Tokyo: information from mobile phones showing how the city works. This is a morning in 2011, and this was the day of the earthquake that led to the Fukushima disaster. So right now, in the video, we have an earthquake.
You can see how public transportation has essentially stopped, and how people are walking away from the coast because of tsunami alerts. That information was used afterwards by the government of Japan to plan for the next evacuation, for the next earthquake. What is interesting is that the same idea was used years later in Nepal, after the earthquake there. Why? Because when you have a disaster, the first thing you need to do when planning your response is to know how many people there are, and where. And the best way to do that, in a place where the last census was ten years ago, is to estimate the number of people using the number of phones connected to the different cell towers. That was the first time a map like this one was used in operations on the ground. By the way, this is the same sort of map that has been at the center of the controversy with INE, the Spanish statistics institute: these are aggregated statistics on mobility, never personal data. My team has used the same approach to plan an evacuation in Vanuatu, in the Pacific Ocean, on an island with volcanic eruptions. So, where do I come from? I work for the United Nations — I'm sure you all know it: we work on peace and security, on humanitarian action, on human rights, and on economic development. In particular, I work at a unit called UN Global Pulse, where we work on big data and AI, and our vision is how they can be harnessed responsibly for the public good. I joined the organization back in 2011, being technically the first data scientist working for any international organization, and since then I have had the chance to do many real projects all over the world. One of the projects we have done is what you are seeing on the screen.
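The core of that cell-tower estimate is simple: count distinct phones per tower and aggregate towers into areas. The talk gives no implementation, so this is a minimal sketch with invented records and an assumed tower-to-district mapping, purely to illustrate the idea:

```python
from collections import defaultdict

# Hypothetical, simplified connection records: (phone_id, tower_id)
# observed after the event. Real data would be far larger and aggregated
# by the operator before analysts ever see it.
connections = [
    ("a1", "T1"), ("a2", "T1"), ("a3", "T2"),
    ("a1", "T1"), ("a4", "T3"), ("a5", "T2"),
]

# Assumed mapping from each tower to the district it serves.
tower_district = {"T1": "north", "T2": "north", "T3": "coast"}

def population_by_district(connections, tower_district):
    """Relative population estimate: distinct phones seen per district."""
    phones = defaultdict(set)
    for phone, tower in connections:
        phones[tower_district[tower]].add(phone)
    return {d: len(p) for d, p in phones.items()}

print(population_by_district(connections, tower_district))
# → {'north': 4, 'coast': 1}
```

In practice these counts are then scaled by phone-ownership rates to approximate actual population, which is why they can substitute for a ten-year-old census.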
We support an agency called UNOSAT, which does satellite image analysis, both in the aftermath of natural disasters and in conflict scenarios. You might say it's obvious: use artificial intelligence to detect things in satellite images and support the analysts. But the reality is that the analysis in these situations is done by a team of human analysts, because you cannot afford to make a mistake. Why? Because that pixel on the left or on the right — the pixel that is not clear because we only have 95% accuracy — might mean people's lives; it might mean that your home is flooded or not. That is why we need very good analysis, and why we have developed what we call a human-in-the-loop system. Each time we have a new problem — say, counting the number of shelters in an improvised camp — we take a base deep learning model and adapt it, maybe even overfit it, to the analyst's behavior and to the images of that particular camp. That lets us minimize the time needed to get a perfect camp count, because again, when human lives are at stake, we cannot afford a mistake. Now let's go to one place. This is Zaatari, a refugee camp on the Jordanian border with Syria. It started in 2012 with the Syrian conflict and has been home to about 90,000 to 100,000 people. It is like a city the size of Santiago de Compostela or Cádiz appearing in a single month. With satellite imaging we can measure how this camp is growing, which allows us to plan how we are going to provide services. But I'm going to drill down into one particular project, one particular service, which went beyond remote sensing. What you see in the picture is a photo of Omar.
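The refinement loop described above — freeze a generic pretrained model and deliberately fit a small part of it to one camp's imagery and one analyst's labels — can be sketched as follows. The talk names no framework or architecture, so everything here is an assumption: a fixed random feature map stands in for the frozen backbone, and a logistic "head" is re-fit each time the analyst corrects the labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained backbone: a fixed random feature map.
# In a real system this would be a deep network pretrained on generic
# imagery; here it only illustrates the human-in-the-loop refinement step.
W_backbone = rng.normal(size=(64, 16))

def features(tiles):
    """Frozen 'backbone': map flattened image tiles to feature vectors."""
    return np.tanh(tiles @ W_backbone)

def refine_head(tiles, analyst_labels, epochs=200, lr=0.5):
    """Fit a small head (logistic regression) to the analyst's labels.

    Each time the analyst corrects the shelter annotations, this step is
    re-run, deliberately fitting tightly to this one camp's imagery.
    """
    X = features(tiles)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(shelter)
        grad = p - analyst_labels               # logistic-loss gradient
        w -= lr * X.T @ grad / len(X)
        b -= lr * grad.mean()
    return w, b

# Toy "tiles": two clusters standing in for shelter vs. background patches.
tiles = np.vstack([rng.normal(1.0, 0.3, (20, 64)),
                   rng.normal(-1.0, 0.3, (20, 64))])
labels = np.array([1] * 20 + [0] * 20)
w, b = refine_head(tiles, labels)
preds = (features(tiles) @ w + b > 0).astype(int)
print((preds == labels).mean())  # accuracy on the analyst's own labels
```

The point of the design is speed: only the tiny head is retrained per correction round, so the analyst gets an updated count in seconds rather than retraining a full network.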
Omar works for UNICEF and UNOPS, and he is in charge of managing water delivery in the camp. How does this work? We have a fleet of 20 trucks that every day go through the camp, put clean water in tanks, take the waste water, bring the waste water to the recycling plant, and so on. This was done manually. What you see is the map; the colored dots are the trucks. You say: truck number three, go to street number 17; truck number five, go to street number three. You come, you take a piece of paper — we have vouchers — perfect, that's how it worked. So I arrived with my team, we spent some time there, and we said: we think we can digitize this. We're going to put GPS on the trucks and use a digital system. Obviously, if Uber or Cabify can optimize how a network of taxi drivers works, we can use the same approach to optimize how our network of truck drivers works. We got all the data, built some examples, put apps in place and everything else. We even built models — for instance, predicting for a particular district when the waste water tanks are going to be full and start overflowing, which is the moment people call to ask what's going on. So we built the model, and then we tried to put it on the ground. And what happened? It didn't work. It didn't work. Why? Because in these scenarios, people are more than 50% of the problem. What was going on is that the truck drivers — in the words of Omar, who is from Iraq and works on the Syrian border — are "the worst people in the world". They were putting panels inside the trucks' tanks, so that when we measured the quantity of water they were taking — it's measured with a bar — it looked like they were taking more than they really were.
One day, one truck driver took the waste water and, instead of going to the recycling plant, went out of the camp and repainted the truck. The next day he came back and delivered the waste water as clean water, and a number of people ended up going to the hospital. So you need to take into account all the stakeholders; you always need to align everybody. That is key in this context, and it is key in any of your businesses. For me, problem definition and alignment of all the stakeholders is more than 50% of the problem; the technical side is easy. By the way, it didn't end like that — this is a picture of the trucks — and we managed to make it work in a different way. In the end, the solution was a sort of gamified system, where the truck drivers earned points for serving particular tanks that were more complicated, and we had to seat someone in the truck, next to the driver, during the whole period they were in the camp. That's one example. Let's move on. We have done projects with other kinds of data — I'm showing you these things because I want to make you think. A different kind of data is social media. If you have a product, obviously you check social media: you try to understand the behavior of your potential consumers, sentiment analysis and so on. In our case, what we did was a system, together with the High Commissioner for Refugees, to understand the perception that Europeans — the host communities — had of the refugees during the crisis, and in particular during the terrorist attacks. There were terrorist attacks in Munich, Berlin, Nice and elsewhere. We built a monitoring system that checked, in this case, Twitter and other sources, trying to extract the perception of the communities towards the refugees and whether they were linking them with terrorism.
Is there a connection between terrorists and refugees? There was not, but maybe there is one in the public opinion — and if that is the case, the High Commissioner for Refugees needs to make a statement and a campaign. This informs communications departments and can really shape public opinion. In that project we also started to think about how we can track and monitor hate speech. In particular, we collaborated with IBM Research to characterize online hate speech, so that we could filter it and try to understand what was going on. We asked: which are the four dimensions of hate speech that we can infer from a message, whether on Reddit, Twitter or any other social medium? One is the target: is this against Muslims, against gays and lesbians — against a particular group? Another is the essence of the message; another, its severity. Severity is very important, because it is not the same to offend as to incite violence — and incitement to violence sometimes ends up in physical violence. And finally, the framing. These were the first steps, but at this point in time, all of us — social media companies, governments, the public sector and citizens — need to find ways to monitor these messages and act accordingly. We did a different project with a similar concept. In this case, our "product" was vaccines, because there are many messages that link vaccines to autism. As the Ministry of Health or the World Health Organization, you need to know whether you have to reply with a campaign saying that the messages linking autism and vaccines are false, or whether it is better to keep quiet because the noise is really small. In this case, we worked on vaccines in Indonesia.
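Characterizing a message along several dimensions at once can be framed as one classifier per dimension. The talk describes no implementation, so this is only a toy sketch with invented labels for two of the dimensions mentioned (target and severity); real systems need large annotated corpora and much stronger models:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy corpus, purely for illustration.
texts = [
    "group X should be banned from our country",
    "group X people are ruining everything",
    "someone should attack group Y tomorrow",
    "group Y deserves violence",
    "I disagree with this policy",
    "this policy seems reasonable to me",
]
target_labels = ["group_x", "group_x", "group_y", "group_y", "none", "none"]
severity_labels = ["offensive", "offensive", "incites_violence",
                   "incites_violence", "none", "none"]

def train_dimension(labels):
    """Train one classifier per hate-speech dimension (target, severity, ...)."""
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(texts, labels)
    return clf

target_clf = train_dimension(target_labels)
severity_clf = train_dimension(severity_labels)

msg = ["group Y must face violence"]
print(target_clf.predict(msg)[0], severity_clf.predict(msg)[0])
```

Keeping the dimensions separate matters for the triage the talk describes: a message flagged as inciting violence warrants a very different response than one that is merely offensive.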
On health, we have also been working on other projects — again, with different data sources. In this case, the mosquito: the mosquito is the carrier of malaria, but also of Zika. What we did, together with Telefónica and researchers from Italy, was try to improve the epidemiological models of how Zika spreads, in order to plan epidemiological interventions. The idea is that mobility is key to understanding how the disease spreads, so the question was: can we improve the performance of epidemiological models using aggregated mobility information from mobile phones? This is another example of what we can do with data and complex analytics for good. Changing topics again — and this is probably unexpected — the oldest social network, or social medium, is radio. There are many countries where people don't use Facebook or Twitter, and radio is still the main communication medium. It also turns out that when you look at the languages in which you can speak to your phone and get speech-to-text, translation and so on, it is always languages like Spanish, English, French — fewer than 100 languages in total. But there are around 6,000 languages spoken in the world, and each of the first 1,000 languages has more than 100,000 native speakers. That means many people are being left out of the digital revolution — out of all the tools you are developing, all the ideas you are having — just because nothing exists in their language. So in northern Uganda we developed speech-to-text technology in Luganda and Acholi, two local Ugandan languages, which was then used to monitor public radio streams to understand what was going on in the region, in this case on the border with South Sudan.
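The mobility-plus-epidemiology idea can be sketched with a minimal metapopulation SIR model: each district runs standard SIR dynamics, but the infection pressure each district feels is mixed through a mobility matrix standing in for the aggregated phone data. All numbers here are invented for illustration; the project's actual models were far more detailed.

```python
import numpy as np

# Assumed mobility matrix: row i gives where district i's people spend
# their time (a stand-in for aggregated mobile-phone mobility data).
mobility = np.array([[0.9, 0.1],
                     [0.2, 0.8]])
beta, gamma = 0.3, 0.1  # illustrative transmission / recovery rates

def step(S, I, R, dt=1.0):
    """One SIR step where infection pressure mixes across districts."""
    N = S + I + R
    # Effective infectious density each district is exposed to, via mobility.
    exposure = mobility @ (I / N)
    new_inf = beta * S * exposure * dt
    new_rec = gamma * I * dt
    return S - new_inf, I + new_inf - new_rec, R + new_rec

# District 1 starts with 10 cases; district 2 starts clean.
S = np.array([9990.0, 10000.0])
I = np.array([10.0, 0.0])
R = np.zeros(2)
for _ in range(100):
    S, I, R = step(S, I, R)
print(R)  # the epidemic reaches district 2 only through mobility coupling
```

With the mobility matrix set to the identity (no movement), district 2 would never see a case, which is exactly why better mobility estimates improve these models.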
People call in to the talk shows and say: the food is rotten, this road is closed, or we are having a strange problem with fever in my village. We found that this monitoring system is really interesting and useful for small-scale natural disasters and for epidemic intelligence — public health surveillance. Now we are working on speech-to-speech technology in those languages, which are spoken by millions of people but have no commercial incentives behind them, so their speakers would otherwise be left out of this digital revolution. These are the places where all this has been done. We have three innovation labs — the Pulse Labs — in New York, in Jakarta in Indonesia, and in Kampala in Uganda. We consider the labs centers that give the space and the time to work on new ideas, together with the private sector, academia, international organizations, citizens, civil society — everybody. All the projects I showed you were done through consortiums; we have the space to try things. What's interesting about the labs — and maybe this is how you are working, or how you will work — is that we have very multidisciplinary teams. That is critical. The same way I was the first data scientist working for the UN back in 2011, we also had the first data visualization experts, data engineers, multiple technical profiles — but that is just half of the team. The other half are policy experts, development experts, privacy experts, partnership people — anything you could imagine in an international organization. We learn from each other, because our job is not just to run innovation projects; it is also to work on public policies, to show what is possible, to inspire academia and others to work on these subjects.
So I really encourage you, if you have a technical profile, to also learn from the other side — to understand the applications of the technologies we are developing. Just by knowing both sides, I think you are able to pass to the next level. It's very important — and this is what we have seen in our labs — to have diversity. These pictures: the first one is from Data Science Africa, which we co-organized a few years ago and which is one of the leading data science gatherings in Africa. You really see that the level is as high as anywhere else. I remember one day we were streaming some talks from our lab in Kampala and getting feedback from someone in Finland — this is incredible. Some members of our team are now working at big companies such as Google. We also have the picture from Jakarta. It's really important to have these local capacities — and again, from Spain you can do something as good and as big as anywhere else. I also think it's very important to have more women. This tweet is from my colleague Rebeca; she is from Mexico and she was the first data scientist working for the High Commissioner for Refugees. So, women in the room: you have to set the example, but also encourage others. Having people from different disciplines — multidisciplinarity, but also diversity — enriches your work and really makes your business better. Talking about business — and maybe this goes against many of the things you have heard here — I like this quote from Jeff Hammerbacher: "The best minds of my generation are thinking about how to make people click ads." And that's a reality. You have seen many examples. You can choose what you want to do.
If you can choose, I would choose something that really matters. And that's important — really important for the new generations. You need to go beyond the classical business, the internet business, the digital marketing. Go beyond that; find applications that matter. Now I'm going to talk a little bit about things I am worried about. Of these two pictures, one is a deep fake; the other is a real person. Who thinks the man with the glasses is the real person? Please raise your hand. And the woman? Okay. The real one is the man with the glasses. The woman is fake — she never existed. And that might be a problem. If you look at the ears, they are different — the earrings. That might be a clue. We need to start scrutinizing information more, as Alberto was saying; we really need to be critical with every piece of information. Let's try just one more. Who thinks the baby is real? And who thinks the lady is real? This one is harder than before: the baby is real; the lady never existed. So you know about the fakes — but why am I worried? This is one project I did with one of my team members three months ago. Essentially, we took all the speeches — more than 7,000 — given by political leaders and presidents of countries at the United Nations General Assembly since 1970, we put them all into the machine, and we created an AI that could write a few sentences based just on the first words. It was astonishing, because with a little editing these sentences looked real. But this also makes a lot of people — policymakers among them — ask: what's going on here? Because now, with these fakes, you can start to impersonate people, and that is obviously a real danger for democracy.
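The idea of continuing a speech from its first words can be illustrated with the simplest possible generator. The talk doesn't say which model was used (it was presumably a neural language model trained on the 7,000 speeches); this tiny word-level Markov chain, built on an invented snippet, only shows the core mechanic:

```python
import random
from collections import defaultdict

# Invented stand-in corpus; the real project used 7,000+ UN speeches.
corpus = (
    "we the peoples of the united nations are determined to maintain peace "
    "and security and to promote social progress for all the peoples"
).split()

# Markov chain: for each word, record which words have followed it.
chain = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    chain[prev].append(nxt)

def continue_speech(first_word, length=10, seed=0):
    """Generate a continuation starting from `first_word`."""
    rng = random.Random(seed)
    words = [first_word]
    for _ in range(length):
        options = chain.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(continue_speech("the"))
```

A modern language model replaces these per-word frequency tables with a neural network conditioned on the whole context, which is what makes the generated paragraphs fluent enough to need only "a little editing" — and what makes the impersonation risk real.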
Now machines can produce speeches that look human, and in this context things can get even more complex. This is the next level of disinformation, as we have seen — but it can also be the next level of hate speech. I'm worried that one day, in a country holding elections, we will have deep fake videos circulated over WhatsApp — especially if that country has a history of genocide. We really need to be aware: there is going to be an explosion of these tools, which are within reach of anyone here, because it takes about 10 hours of training and $10 in computing servers. It's also important to understand what else is happening with technology. You probably remember the genocide of the Rohingya in Myanmar. That genocide was fueled on social networks: messages of hate on Facebook were the beginning of a real genocide. It was the first time in the history of humanity that this has happened, and it is really something to think about. So how are we going to moderate this? We cannot wait for the next genocide. It is not enough for social media companies to put a few moderators on messages when those messages can have implications in reality, in people's lives. At this point we need a joint effort — international organizations, governments, companies — to make sure this doesn't happen again. Probably this will imply hybrid methods that combine machine learning with human moderators. Here is the key — and maybe this is one of those things that get lost in translation, because many times technical leaders and policy leaders don't speak the same language.
The key is how to translate the principles — the legal principles and the legal frameworks we have around human rights — into things you can code into the machine. And if that is not possible, and you need to have 10,000 moderators or to do something else, you will have to do it. This was really a turning point. Another risk of AI we have to be aware of is lethal autonomous weapons. I encourage you to look at the campaign stopkillerrobots.org, because at this point there is no consensus, no signed agreement between countries, on banning smart lethal autonomous weapons. The same way we have such agreements for chemical weapons or nuclear weapons, we should have one for these drones that can target individuals. There is even the question of who is responsible for an attack committed with an autonomous weapon: the programmer? The one who pressed the button? The manufacturer? What is the legality? But again, if we had some regulation around this, at least it would make these sorts of technologies harder to develop, sell and acquire. Another topic — and here I think it's really pertinent, since we are going to have the climate summit here in Madrid in a couple of weeks — is one that has not been on the table until literally a few months ago: what is the carbon footprint of our AI and machine learning models? If you are working in this space, you should be aware — we all have to be aware — that the carbon footprint is big.
Nobody had calculated it until this paper, and another very recent one, but essentially many of the methods we are using — some of the papers we see, where we are just doing ImageNet or distinguishing pictures of dogs and cats on the internet — have a bigger carbon footprint than you will have in your entire life. So it's time to also think about sustainability here, and before starting or deploying an AI project, to assess whether the carbon footprint you are going to generate is worth what you are trying to do. Interestingly, there is a group set up by the Secretary-General, Guterres — the High-level Panel on Digital Cooperation — which released a report on digital cooperation a few months ago, with the idea of asking: what is the world today, and where do we want the world to go? I really encourage you to take a look. He said he is concerned about security, equality and human rights — which is what we have seen. So the first thing is regulation. Beyond the extreme threats we have seen, think of nuclear science: the same way we have regulated nuclear energy, nuclear weapons, and nuclear medicine that can save lives, we need to start having regulations around AI and machine learning applied to the different verticals. What does regulation look like for AI in health? For AI in energy? For AI in marketing, the same way TV is regulated? That is something that needs to happen. Another one: I'm sure you have heard about the ethical principles of AI. Alberto mentioned that ethics matter, and they matter especially at this moment, because we are at a turning point where we are deciding what future we are building with these techniques.
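The footprint assessment suggested above is mostly bookkeeping: energy drawn by the hardware, multiplied by the data center's overhead and the grid's carbon intensity. This sketch uses the accounting style popularized by the recent papers on the topic; the default overhead and grid figures are illustrative placeholders, not numbers from the talk.

```python
def training_footprint_kg(gpu_watts, n_gpus, hours,
                          pue=1.58, kg_co2_per_kwh=0.475):
    """Rough CO2 estimate (kg) for a training run.

    pue: data-center power usage effectiveness (cooling/overhead multiplier).
    kg_co2_per_kwh: grid carbon intensity. Both defaults are illustrative
    round figures; real assessments use the actual facility and grid values.
    """
    kwh = gpu_watts * n_gpus * hours * pue / 1000.0
    return kwh * kg_co2_per_kwh

# Example: 8 GPUs at 250 W each, training for two weeks.
print(round(training_footprint_kg(250, 8, 24 * 14), 1))  # → 504.3
```

Running this estimate before launching a hyperparameter sweep — where the hours multiply by the number of configurations tried — is exactly the kind of pre-project assessment the talk argues for.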
These are the principles that were released by the European Union recently. In fact, there is now a checklist with seven high-level ideas that you have to check in your AI project to understand where you stand in this ethical space. However, I am a little bit concerned about how this discussion is happening and about the conclusions we are reaching, because among the 100-plus ethical frameworks for AI — some of your companies will have one; so do governments, think tanks and organizations — you see that the main focus is on trustworthy AI. That is, AI you can trust: technology that works as it should. Fine. But is that enough? It's like genetic engineering: of course we want genetic engineering that works, but we also don't want to mess things up — taking pieces of DNA, building chimeras, doing weird things with human beings. So why are we thinking so short-term, only about technology working as it should? Maybe it should be technology working as it should, but with something more. Here I want to take up what Yuval Harari, the writer of Sapiens and Homo Deus, says: artificial intelligence might create a new class of people. The same way the industrial revolution created the working class, the artificial intelligence revolution, if it is a real game changer, will create an irrelevant class — people who are not needed to create wealth. Why? Because for any task, if I can collect enough data — even a bartender's job — I can build a robot that pours the beers, and I don't need the human anymore. So what he is saying is that maybe, in the long term, we are making humans irrelevant, because we are automating everything they can do. And that is not coded into those principles.
I think that is really important: we need to start thinking long-term. For me, that is one problem. The other problem is that in a platform economy, where the winner takes all, we are all going as fast as we can, taking all the data we can, building the model — and if you arrive first, you take it all. That puts the augmentation of productivity in the hands of very few: just the first to arrive. For me, one of the keys for the future — and again, this is thinking long-term, not for me but for my daughters — is that we redistribute the augmentation of productivity, so that we all benefit from it, not just the first to arrive. The closest concept I can find for this is solidarity. Solidarity is in many constitutions; it is at the heart of the Spanish Constitution; it is an article in the European Union Constitution; it was in the discussions around the human rights declaration. It is the concept of sharing the benefits and the burdens. So I'm saying: we need to figure out ways to share the benefits of artificial intelligence. And maybe the question is — because in the end the data is ours — maybe we have to pay taxes for automation, or maybe it's a royalty system like Spotify's: each time you use an AI model, you pay back those who gave you the data. Why not? We need a way to give something back, because right now we are just running a race where the winner takes all. It is also about sharing burdens, because maybe tomorrow it turns out that there is an AI model deployed in many places — because it is publicly accessible and used, for instance, to analyze X-rays all over the world — and there was a bias, or a bug, or a problem, and we need to take it back. We also need to cooperate to share potential burdens.
And also the examples we discussed before. All in all, from my daily job and my perspective: in addition to finding applications like the ones I showed you, which really contribute to creating a world we would like, I think it is time to paint the digital red lines — to mark where human rights really stand. We need that framework. We need to understand that the digital world is a continuum of the physical one, and we need to make sure we are not breaking the rights of each of us, of citizens. So thank you.