So now we switch to a deep dive on data and economic inclusion. Of course, when we think about inclusion and AI, the role of data comes to mind quite quickly. This is a panel for us to discuss the role of data in creating economic inclusion, thinking about the privacy, security, governance, and regulatory challenges that come out of it. These sessions are going to be moderated by my good friend Malavika Jayaram from the Digital Asia Hub. So Malavika, if you want to come to the stage — and as Malavika sets up on the stage, let me just give you a quick piece of info for those of you who want to visit the museum and might be thinking: I'm in the museum during these events; can I skip the events to visit the museum? Which session would I skip? Don't worry, we've got it covered. There is a guided tour of the Museum of Tomorrow starting at 1:15 pm today and tomorrow. You can choose which lunchtime you want to use: just have lunch in 15 minutes and then start the guided tour at 1:15 pm from the registration desk. So make sure you are there; we are ready to take you on a very nice tour around the museum. So Malavika, you take the lead for the session.

Thank you, Carlos. It's great to be here. I was a few minutes late yesterday, so I didn't get to tell you about the Hub as I was meant to, but KS very ably stood in for me. I'm just going to invite our panelists. We have a really great set of people from four completely different stakeholder groups, which I think is very important given the theme. So we have Florin, we have Bruno here, Stephanie, and Felipe. So please join us here. Thank you.
Okay, so we spent a lot of time yesterday speaking about the social, emotional, affective issues, and they cleaved along sort of global North and South divides. For this session, we thought about how economic inclusion isn't something we only think of in the global South context. Even within developed countries, even within rich countries, you see these kinds of disparities playing out increasingly, and you see a lot of discussions about privacy becoming a luxury — whether it's only something that a few people will have and pay for, while the default model is surveillance, and that's the business model of the internet. So we thought that it's actually much broader than a global South issue, and that's why we framed this session the way we did. We have four different perspectives. Florin will represent more of an academic insight into this. Stephanie will talk about it from the development space. We have Bruno from Facebook, which is one of the platforms that harnesses a lot of data and is using AI in different ways. And then we have Felipe from Omidyar Network, who will talk both as a funder and as someone who's been watching this space and supporting it in different ways, about many of the trends they're seeing, and who will also throw out some provocations for the audience. We have Inesius here who's going to keep time, so I urge all of you to just stare at him, or he's going to gesture to you. We're going to start with Florin. Just a few framing remarks before that: the idea of inclusion isn't just a socially constructed one; it's also economically constructed, increasingly so. And when you think of the different senses of both data and inclusion, yesterday sort of unpacked just how rich and nuanced these issues are.
Arvind Narayanan from Princeton recently had this great thread on Twitter where he said that law and policy people are very used to the idea of definitions being multivariate and very complex. But you tell a computer scientist that there are 21 definitions of fairness, and they think: well, how on earth do you expect me to implement and compute for that? I can't actually encode 21 definitions if something is meant to work. So what we are used to dealing with as a world of complexity becomes, when you actually need to hardwire and encode it into systems: what's your baseline? Where do you start, and what data helps inform that? I think that's something we would like to unpack. We also tend to think of data as a resource and an input, but I want to unpack whether we can see data as an output too. If we keep thinking about how there are black boxes, and transparency might only go so far and no further, are there other proxies, other indicators, that will help us see whether a system is healthy, whether it's functioning in a fair and equitable manner? What other data and metrics can we use to assess systems to see if they're fair? So I'm going to hand over to Florin, who'll help talk through some of the privacy issues in particular. Thank you.

OK. Thank you very much, Malavika. Yesterday, as we all know, we mainly discussed inclusion with regard to gender, race, and the like, with a focus mainly on the global South. However, issues of inclusion are also important in developed countries that actually believe they have reached a rather high level of inclusion. In this respect, in the next few minutes I would like to draw your attention to two different issues.
First, there is a risk of excluding privacy-sensitive individuals from data sets, a situation which would lead to biased data sets and produce biased results, as discussed yesterday with regard to gender, race, and the like. Second, I believe that small businesses and startups are also at risk of being excluded from providing AI-based services, given that only a few big players in today's markets have access to the large amounts of data that are needed to provide such services. This may further increase the market power of these big players and hamper competition and innovation. With regard to the first problem, we're probably all well aware of the fact that online services are used on a different scale by different people. Some people are excluded because they have no, or no easy, access to the infrastructure; this is probably the case mainly for people in the global South. Others would actually have access, but they choose not to use these services because they deem the storing and use of their personal data a problem. These privacy-sensitive individuals are largely underrepresented in data sets already today. Some, probably only a small minority, will not use online services at all. What is probably more important, though, is that a potentially quite large group of people who do use such services use them as little as possible, or use all sorts of technical tools to make sure that they produce as little personal data as possible. Whichever way these individuals choose to protect their privacy, the decision will lead to them not being adequately represented in the data collected by all sorts of online service providers. This will lead, as I mentioned, to biased data sets, and maybe also to biased results. Think, for example, of research on political views.
If such research is based on privacy-biased data, issues such as whether and to what extent people actually value and care about their privacy, and whether they fear surveillance, will not be analyzed appropriately. What is worse, we have to assume that some people are privacy-sensitive exactly because of their political, religious, or other beliefs. These people will be underrepresented in data sets as well. While the problem seems to be quite clear, the answer is not. At least for a lawyer, it is quite hard to imagine how such biases can be compensated for by an adequate design of algorithms. Instead, in order to avoid privacy-biased data sets, we will probably have to provide more privacy-friendly online services. But if they are really privacy-friendly, we will have a hard time getting the quality and granularity of data that we need to come up with meaningful research in the first place. With regard to the second problem, we're all well aware of the fact that a small number of very big businesses own enormous amounts of data. In an economy in which data is a key asset, the market power of these players is likely to increase even further. Many businesses realized quite some time ago that data is very important, and they are collecting as much data about their customers, products, or production processes as they can. But they will still have a very hard time catching up with the very big players, at least with regard to personal data. This of course raises the question of whether and how we can ensure competition in markets which heavily rely on the processing of vast amounts of personal data, and AI is one of them. Since data is and will remain the key asset for many services, potential new players will only be able to enter these markets if they have access to sufficiently large amounts of data. We may think of various ways to solve this problem.
For example, through cooperation among a sufficiently large number of small companies that share the data they own; through open data initiatives; or maybe even through some sort of collective action by a very large number of individuals who make use of their access rights or their right to data portability, as far as such rights are granted by the applicable national or regional law. However, it seems quite unsure whether these approaches would actually work. Another potential way forward would be the introduction of access rights for third parties — not access rights of data subjects, but of third parties that want to use such data. These rights could give potential competitors a right to request access to the data needed to provide their envisaged services. From a legal point of view, such access rights could be introduced as compulsory licenses, that is, schemes that allow anyone fulfilling certain criteria to request a license, with the right to sue the potential licensor if the licensor refuses to grant one. At least in IP law, compulsory license schemes have proven to be quite successful. This is evidenced by the fact that actual lawsuits hardly ever take place, since licensors are ready to grant licenses once the conditions of the compulsory license are met, in order to avoid costly and time-consuming lawsuits in the first place. While such access rights would certainly not be welcomed — or I guess would not be welcomed — by today's big players in the data markets, they may be a way forward to overcome market power and foster the inclusion of small, medium-sized, and maybe most importantly, start-up companies. Looking forward to discussing these issues further.

Great. Thank you so much. A lot of really useful stuff there, and I think building on the compulsory licensing idea and the open data initiatives is really useful.
But I would love you to come back later, when we have the comments, on how you see the role of consent, given that a lot of the data being used in this field ends up being ambient data collection that's just happening all the time. Ansaf yesterday talked about the ubiquity of this, so if you could talk about how consent is broken or can be fixed, that would be great. Thank you. Stephanie, you're going to talk more about production and consumption, and about how you want to reframe users and producers in this space as economic actors, so go ahead.

Thank you. Actually, yeah, it fits very nicely with what you talked about: surveillance as a business model, and also the choice between whether your data is being used or not, and whether that's an actual choice. When I started preparing this talk about AI and economic inclusion, I was first interested in the question of users and producers of AI, and to what extent there is AI being produced and developed in the region in which I work, which is Egypt and the broader MENA region. Because my initial thinking around the power dynamics was that if you are producing and developing the AI, clearly you're in the position of power. So I started looking at the different applications, and generally there aren't that many AI applications developed in the region, for a large set of reasons. There isn't a lot of data. There isn't really any education on data science — there are a few courses here and there, but there aren't any complete data science degrees that you can take at universities. In addition, there is the language barrier. There are smaller, enthusiastic initiatives around openness and open data, and our center is involved in a few of them; I'm happy to speak about them a bit in the questions.
But I also wanted to look at the examples of AI use that I found, and a lot of them are very different, because you have examples of Israel using AI to scan social media to find out where threats come from and whom they can arrest. So it's a very different kind of use; it doesn't fit well with economic inclusion or the power dynamics I was talking about before. We have examples of drone attacks based on AI and machine learning, as was already mentioned yesterday. You have governments acquiring software to scan conversations on social media, in Egypt and in other contexts. We've had a recent example of Saudi Arabia giving citizenship to a humanoid robot based on AI — I'm sure that made the news. And this has sparked a lot of conversations around why that robot gets more rights than a lot of people who live in Saudi Arabia. A lot of questions are raised around that. So you have interesting questions in terms of users and producers. This came up at a big conference about financing and investing in AI, but it was mostly about investing abroad. So what seemed like a necessary thing to do was to bridge the gap between users and producers. But then I started to think about the way we use those terms, users and producers, because while the MENA region might not be as represented in developing AI in the way that we understand it, people's data is very much used. So I thought that if we start rethinking the way we use the terms users and producers, we might actually find a different way to think about economic inclusion as well. I'm going to use the Facebook DeepFace algorithm to explain what I mean. And I'm using Facebook as an example because it's one of the biggest platforms used in Egypt; it has far-reaching use that is very interesting.
For example, if you're stopped at a checkpoint, the police might ask you to show your Facebook. So it's a big thing. Talking about the facial recognition software: it's easy for the algorithm to get better because we basically do some of the work for it. When we upload pictures and tag them, the algorithm, first of all, only has to search a smaller range of pictures, because we know it's probably friends of ours; and by tagging, we actually help the algorithm learn what face, or which person, that is, from several angles of the picture. So there are a lot of ways in which our own use of Facebook, and the time and effort that we put into it, is doing some of the work that the algorithm would otherwise have to do. In a way, this has made me rethink the way that we produce and use our data, because we're actively involved. If we rethink the way our data is used in algorithms — moving from the passive view, in which we make this data and then it just goes into the algorithm, to seeing the production of data as an active role — then I feel like the question of economic inclusion can also be rethought. In this example again, the Facebook algorithm: a paper about the algorithm was published in 2014, and it was a very successful example. The way it was written about was always about how Facebook would choose to use the algorithm. And I thought that was very indicative of the power dynamics between the users and the producers.
And I'm trying to flip this power dynamic on its head by attaching some sort of rights, or a different value, to the production of data — to us using social media and basically spending time, effort, and labor on creating and clicking — and to rethink the value of metadata, as was mentioned before: instead of seeing it as just a byproduct of something that we do, to see it as something that has actual value. It also fits nicely with what Nishant said yesterday about rethinking AI and inclusion through the lived realities of the computed. In this case, that's people using Facebook, or creating the data on which algorithms are then built. Basically, it means thinking about data markets in a broader way: taking a step back and including the work that goes into producing the data, not just regarding data as a renewable resource that is simply there. It's produced by someone, and those people could technically be rewarded for that. I've used the Facebook algorithm because it was a very easy example. Things get a lot messier when you think of the AI that Israel uses to arrest people, because that is also based on data that those people have been creating. So it's a very different question of power dynamics, and things get a lot messier. But yeah, that was my point. Thank you.

Thank you so much. I'm so glad you brought the human back into this, and this idea of labor — the labor that we're all doing in producing, refining, and improving these systems, that people don't get credit for in any way, and the value that they bring. The robot that you mentioned, Sophia: I met her when the ITU had its AI for Good conference, and it was really hilarious, because she actually came onto the stage. Here is this robot that's been fed with the knowledge of millions of years of civilization and every single Wikipedia article ever.
And she comes onto the stage, looks at the audience, and she's like, "I don't know what to think about anything anymore." And the person sitting next to me said, "Well, then I just give up — if she has no idea what to think about anything, what hope do we have?" You know, with incomplete information and inadequate knowledge. But I think that was the most human response to the data deluge that we're all working under. So since you've mentioned Facebook, we actually have someone here who can respond to all the potshots that people will inevitably take. Facebook has been in the news for all the wrong reasons and the right reasons, but you've had to deal with a lot of issues about the power of the platform and the power of all the data that you harness. With all the big companies — Google, Apple, Facebook, Amazon, and Microsoft — how are they in a better position to harness the revolution that AI poses? And do you actually exclude the smaller entrants into the market because you hold so much data? So it would be really useful to have you talk about how you're using AI in different ways. And also, when we talked earlier, you said you had some solutions for how you could help make it more inclusive. So over to you.

Thank you, Malavika, and thank you so much for the opportunity to clarify some of these issues here. I actually wanted to start by trying to demystify how Facebook has been using artificial intelligence. There are basically two main fields where Facebook has been using artificial intelligence. The first is language: things like speech recognition, language translation, natural language processing, and question answering. You might have seen Facebook's digital assistant,
where people can ask questions, or ask the system to set up appointments, book hotels, or make reservations. The second is computer vision: things like object recognition in images, text recognition within images, action recognition, and so on and so forth. So these are the two main ways that Facebook has been using AI. Commenting a little on some of the applications of this AI is also a good way to demonstrate how Facebook has been trying to address inclusion with AI as well. One of the first tools that Facebook launched is called automatic alt text. It's a tool that recognizes objects within images, and the primary way Facebook has been using it is to allow people who are blind or have some sort of visual impairment to have whatever is on Facebook read to them. There is a mode on Facebook that you can turn on where this AI will read out everything that is going on on Facebook. Recently you might have seen that there was a big change: Facebook became more visual, and now it's becoming more about video. In the past it was easier for this tool to function, because posts were mostly text, and the technology to read text was a lot more advanced than the technology to identify objects and what is in an image. Unfortunately I don't have a demonstration here, but I can make it available for you afterwards, or you can use Facebook and check it out for yourself. So the tool will read out whatever is in the image for people. The other way that Facebook has been using AI is in a tool called disaster maps. It's a tool that helps gather data about how populations move whenever a disaster happens. What we do is try to get a sense of what the movements are. So, for instance, if there is a flood —
there was a recent flood in Peru, and we made this aggregated, anonymized data available so that organizations in Peru knew where to send resources and aid. In Brazil specifically, there were two initiatives where we used Facebook technology and data to try to help with the inclusion piece. For the first one, we saw an opportunity to address a major health issue in Brazil. At the end of 2015 there was an outbreak of the Zika virus in Brazil, and Facebook worked with many organizations, especially UNICEF, to try to get a sense of how to build effective awareness campaigns to inform the population about the right steps to take to prevent the disease. It was a very interesting exercise: again, we gathered aggregated, anonymized data on Facebook to get a sense of how people were talking about Zika. UNICEF's campaign used to be targeted at women, as if the thought was that women had to take the steps to prevent the disease. But what we found was that men were talking a lot about the disease as well, and we saw an opportunity to empower men to take responsibility for prevention too. So what UNICEF did was change many of the creatives of their campaign to include this audience, which would also be crucial in the prevention of the disease. And we saw a huge uptick in the survey that we ran asking whether people took action after seeing the materials. Now I just want to comment a little on some of the issues that were raised and try to point to some of the solutions. I was actually happy to hear some of the proposals, because I think a lot of what has been talked about, Facebook is already doing — especially around the sharing of data sets and the sharing of technology.
Facebook recently launched an initiative called the Partnership on AI, where it is working together with academics, with civil society, and with other members of the industry to come up with solutions and have a conversation on the different uses of AI. More than that, Facebook makes much of its technology available through open source. A lot of Facebook's hardware design is made available online; anyone can use it. A lot of the underlying technology — the platforms and frameworks used to develop AI — is shared online, and anyone can go and access it. A lot of the research that goes into Facebook's products is developed within the academic community, and many of these projects, done in conjunction with academics, are made available online. You might not find the specific algorithm for News Feed, but much of the underlying technology that allows, for instance, competitors or new entrants to access the key technology needed to advance the knowledge of AI is widely shared by Facebook online. And finally, not to take too much time, the last issue I wanted to point out is that we do believe the way to move forward on addressing some of the issues that Florin, for instance, raised is through partnership — voluntary partnership, rather than compelling companies to disclose everything. I think Malavika pointed out very correctly the issues around consent, for instance. If people consent to their data being used by Facebook according to what is listed there, and all of a sudden Facebook is forced to disclose its data to the public at large, what are the privacy issues that can come out of that? It's one thing for you to release — and Facebook has been doing this — some data sets that can be useful to advance AI while being mindful of the privacy concerns.
For instance, you have to use data that is made available to a public audience, rather than data shared with specific groups. And you also have to make sure that you de-identify whatever information is within the data set, so as not to compromise people's privacy. So this combination of privacy issues around broad disclosure of data is something we need to think about, and I believe that by working through these partnerships and specific arrangements, we can make sure that Facebook shares much of its knowledge about artificial intelligence while ensuring that people's privacy is not affected in a way that they do not expect. The other point, which is not only about privacy, is that if you make these databases or data sets available to the public at large, who knows how they're going to be used? Maybe they will be used against people based on some of the information that is extracted from them, or used for other purposes that are not made very clear to people. So, to point out another potential solution: in terms of these voluntary partnerships, maybe we can think about building up a group of companies and organizations, with a sort of multi-stakeholder board, that can make sure that all the information shared with this group is used afterwards in a way that is consistent with its terms and its privacy policies. And with that, I think I'm going to finish. Thank you so much.

Thank you, Bruno. I think you're going to get a lot of pushback on the privacy idea from people working in civil society, because we've seen a lot of situations where platforms and companies exploit data when it helps their business model, but when it suits them they use privacy as a shield to say: actually, we won't release these data sets because we don't want to violate your privacy.
So I think we see a lot of companies playing that game in a very convenient way, so I'm sure you're going to get some pushback on that idea. That also unpacks, for me, this interesting question of how we use Facebook, or the data that companies generate, as a proxy for knowledge, even though a lot of the data gathered for routine business purposes lacks the rigor a social scientist would apply to data collection. Yet we treat it as some kind of holy grail that tells us a lot of really rich information about people. That's something we'd love to talk about when we come back to the questions; I'd love to hear your thoughts. And given that Facebook has been pivoting from being about connecting people to being more about community: the second you make building community your main strategy, that involves a level of personalization and tailoring that necessarily requires even more data gathering and analysis than connection alone does. So I wonder whether your mission shift affects the way you use AI, or whether AI is going to help further it in different ways. We'll come back to that. I'm really happy now to turn to our last speaker before we open it up. Felipe, working at Omidyar Network, has been on the front lines of a lot of these issues, has watched how different players in this space have been doing research, has been doing campaigns and advocacy, and also has a lot of experience looking at the privacy side of things in this space. So we'd love to hear your take on this. Thank you.

Great. Good morning — are you still here? Good morning. My name is Felipe Stefan. I work at Omidyar Network. I already addressed you yesterday, but just to add briefly: Omidyar Network is a philanthropic investment firm that focuses on advancing social impact.
That means we are able to fund nonprofit organizations that are advancing social impact through their own models, as well as for-profit organizations, such as social entrepreneurship startups, which we also believe play a critical role in advancing social impact and key issues. We were created 10 years ago, and since then we've invested about one billion dollars. That has gone up quite significantly in the most recent years, to north of 200 million. And I know there are several people here in the room whom we support, and it's great to be able to work with you. Before I get into my remarks, I just want to tell you very briefly about a reflection we went through internally. In the governance and citizen engagement team, of which I am part, one of our thematic pillars used to be called open data. We mostly focused on making government data, data about citizens, and data on the interaction between government and citizens more easily available, and on ensuring that citizens could use that data. Over time we realized that the work we've done in that area is incredibly helpful and impactful, but that it isn't sufficient given what is largely happening around data. Now that pillar for us is called data governance, and it includes work we're doing on AI and on privacy as well, because we understand that the issue is far more complex than simply: is government publishing data, do you know how to use it, what can we do with it? And so I'm glad that you're bringing up some more nuanced reflections on AI and privacy that are critical, and I think we ought to think about how to tell that story so it makes sense to people outside of this room, who may have more superficial exposure to it.
Because if you were to ask people outside of the circles that work on data, and this is anecdotal, I haven't done a study, I shouldn't say this in front of academics, but if you were to ask people what they think about data, I wonder how many would say positive things. When data is in the news, we often hear about platforms leaking data. We often hear about invasions of privacy. We often hear about, you know, the Ashley Madison breach telling who's cheating on whom. So I think that story needs to be told differently. But to get into provocations: what I want to do with the time I have left is give you my top ten list of things I think we need to be talking about, and I'll do that very quickly, because I'm following Nishant's lead in saying that I don't really have answers to many of these questions. So I just thought it would be helpful to pose them to you as we kick off the conversation. So here's my top ten list. Number one: we should not equate ease of doing business with inclusive economic development. A lot of the story that has been told thus far is, well, AI and automation make things easier, quicker, faster, better. Well, that's not necessarily the same as making them more inclusive, or as figuring out where the inefficiencies are in the current economic system. So my question there is: how can we address existing inefficiencies in a way that goes beyond simply saying faster, quicker, easier? Number two: the economic impact of the data and artificial intelligence revolutions is not equally distributed. The way it will play out across different legal contexts, contexts with different labor laws, with different sets of commitments to human rights, will be different.
So my question on this one is: how can we collaboratively understand where the main pain points are, where the areas of greatest vulnerability are, which geographies will be most negatively affected, and begin addressing those pain points? Number three: the greatest economic opportunities of tomorrow require not just enhancing yesterday, but redefining it. There's machine learning, and then there's machine learning. What I mean by that is that as long as machine learning continues to operate solely on historical patterns, and on how it can make predictions from historical patterns, we will never address the problems of economic exclusion. We can't work simply by telling machines to learn from the past; we have to redefine how they learn from the past. So how can we redefine machine learning? Number four: if we pit the interests of those driving the AI revolution against those who most need to benefit from it, we're all going to lose. My question there is: how can we change the incentives so that those in powerful positions, the big technology companies, the large corporations, those who hold significant amounts of data, genuinely think and believe that the larger social impact that can emerge from the AI and data revolutions is not contrary to their interests? Number five on my top ten list: the current data governance structures benefit the most powerful. To me that's very clear. Users produce data, and I totally agree with your point there. There's an organization we fund called Tactical Tech; they have an exhibit that travels around the world, which I just visited in London last Friday. You spend time cruising through social media, and it tells you how much money you should receive for the data you've produced for those platforms. I thought that was quite interesting.
But my point with this is that the current governance systems that define how data is collected, how it is used, and who can derive economic benefit from it, benefit those who are most powerful. So the question for me becomes: how can we transform that in a manner that fosters competitiveness and recognizes data producers for the contributions they're making to the broader economy? Number six: there's a lot of conversation about the future of work, but the future of work needs to look brighter and more inclusive than it does today. The way that AI is starting to be used, for example for recruitment purposes, leaves a lot to be desired as to whether we're actually going to be more inclusive when we are recruiting for our companies and corporations, and even our own philanthropic or civil society organizations. And people are not necessarily being trained for the new jobs that are being created. So my question on that point is: how can the future of work be more inclusive? Number seven. Are you still with me? I should have done a top five list, I know, but as the conversation kept advancing I was like, no, I need to do more, and then I was at eight, and doing a top eight list is weird, so I decided to do a top ten list. Number seven: if AI exacerbates what is currently perceived, rightfully so, as the crisis of representation and the crisis of access to opportunity, the results could be catastrophic. In Latin America especially, and as a Latin American I can say this, I am very nervous about next year. Next year we have elections here in Brazil that are quite consequential. We have elections in my own country, Colombia, where the current debate is whether the peace agreement should be removed. Very consequential elections in Mexico. And the current political system is responding to what is a clear crisis of representation.
President Temer, President Peña Nieto, and President Santos in those three countries are some of the least popular leaders in the world. I think this is on the record so I should be careful with my numbers, but from recent discussions I believe President Temer is the most unpopular presidential leader in the world. And I guess Peña Nieto is up there too, eight percent according to some studies, or two percent now; well, you know, it's a race to the bottom in Latin America. Just like with AI, a race to the bottom. The reason I'm nervous about this is that it's making people believe that the systems that have been put in place, that democracy, that the democratic process, are wrong, because the outcome has been insufficient, has been unsatisfactory. And so there's a danger that we then remove all of those systems that have been put in place to protect people's rights and ensure citizen participation, because the current systems have problems. If AI makes that worse, if AI exacerbates that crisis of representation, it could be truly, truly detrimental. So my question there is: how do we avoid that catastrophic outcome? And I know I'm short on time, so I'm going to do this really quickly. We want your last two very quickly. Very quickly. It's three, but I'll just say them. Number eight is that the way economic value is created is changing completely. For the sake of time I won't get too much into this, but for example, as we're thinking about the attention economy, there's a growing dialogue around the fact that economic value now comes from who can hold your attention the longest and how they can gather data from that. There are questions there about how we can make that more inclusive.
Number nine is that philanthropy and academia need to take proactive, collaborative approaches, because, at least in philanthropy, I feel we're always playing catch-up. The world is changing, things are bad, we need to fund things, who can we partner with, right? So my question there is: how can we create proactive, collaborative strategies? And then my last one, number ten on my top ten list, is that the story of AI needs to be redefined. We need to demystify AI as part of the strategy to reduce the power asymmetry between citizens and those who hold the data: citizens who don't really understand AI, who don't understand what is happening with their data, who don't understand what they sign when they sign terms and conditions, who think about killer robots, and who actually need to be part of defining the solution. So how can we demystify the story of AI and data? Sorry I went longer, but thank you. Thank you so much. I was hoping for a song at the end of that list, but maybe in the coffee break. Yes, or maybe tonight. Thank you so much. That was super helpful, and I'm sure you're going to get a lot of funding proposals based on these ten things after the coffee break. But you unpacked so many super interesting issues, including the way so many of these things have been sold to us. Zeynep Tufekci's recent article asked, you know, did we sell the whole house for a few clickbaits? Did we give everything away? But I'm very glad you ended with the point about power asymmetries, because I think that's fundamental to everything we're talking about, both in terms of data and the economic aspects we're discussing here. I think fundamentally it all boils down to power and differentials. So I'm very glad you ended with that. And I think that also brings up one of the unsaid things so far, which is: what if you don't participate at all? What are the costs of exclusion, and what are the implications of saying, I don't want to play?
Do you just keep getting left out? Coming back to what Florin was saying: what if you don't participate at all, or you want to exercise your privacy choices? Do you just get left out of the benefits? It raises this weird paradox where you're damned if you do, damned if you don't. If you participate, you might be under surveillance or face certain privacy harms. If you don't participate, you're not helping the system be adequately representative. Either way, it raises different challenges. So I'm going to throw it open now. I did raise a few things I'd love to come back to, but I'd like to give the audience a chance to throw out questions. So, one lady here, someone at the back in blue, a third one here, anyone else, and a fourth here. Okay, the lady in white here, if you'd like to start; could someone get her a mic please? Oh okay, we'll go with the person who already has the mic. Hi, thank you. My question is for the gentleman from Facebook. There was something interesting mentioned about having the platform be able to read out content, which I assume is for people with disabilities who can't see. The experience I've had with technology in Africa, I'm a linguist, is that those kinds of tools are typically deployed in the big languages: English or French or German, etc. And I can guess that the one you've created at Facebook doesn't have an African language either; you can prove me wrong. It's the same thing I've seen with Google voice or Siri. Siri, for instance, as I always mention, has a voice service for Norwegian, for Swedish, for Danish. Those three languages amount to about 18 million people. Yoruba has about 30 million people, and Siri doesn't exist in Yoruba. So by creating technologies where we focus on the big languages that are easy to build for, and not on languages that have equally important numbers of people who are interested in participating in technology but don't have the means to do that,
we contribute to excluding a number of people from technology. So it's something I think you should pay attention to. Thank you. Can we just take all four of these questions in this first round, and then we can get responses? Yeah, go ahead. Thank you very much. I'm Roxana, an economist from the Department of Economics at Universidad Católica in Lima, Peru. We could open up the round of questions and spend a whole day, I think, on these issues. I just want to offer a couple of comments. The first one: I cannot help but think about the meaning of economic inclusion and how it is intermingled with social inclusion. We cannot think of the economic without the social part of it. And this is not my definition but Amartya Sen's: that economic inclusion means that people are able to participate in the market, and in that sense contribute value to the market and take value from the market. And I cannot help but think about data and how to get people, the poor, the marginalized, the vulnerable, into the market in less developed countries. As I listen to you, that comes and goes. Okay, that's the first comment. The second one: thinking as an economist and a professor, and thinking about the assumptions for the market to work, one of them is complete information. I was just wondering, and throwing this onto the floor, whether AI, through our mobile devices, can help us complete that information, and so make markets work better, and work better not just for the powerful and the most informed but for everybody. Thank you so much. There was someone here, yeah. Shall I go ahead? Sure. Thank you very much. My name is Nagla Rizk, from the Access to Knowledge for Development Center, American University in Cairo. Your ten points are fantastic. And if I may just comment on the fourth, I caught it and I hope I got it right: you spoke about incentives.
And to me this is very important and very interesting, because in my mind it brought up the importance of business models. What business models exist that can ensure wide sharing, widen the pool of beneficiaries, and deliver perhaps not just efficiency but more equity, and do more to address the asymmetry of power? And somehow lurking in my head is the issue of intellectual property. We talked a lot yesterday about AI models and the sharing and openness of knowledge and platforms. So there are keywords here: IP, business models, incentives, speaking as an economist as well. I wonder if you could speak to that a little bit more; that's a question also open to other members of the panel. Thank you very much. Great. We'll have you and then, yeah. Thank you. So we'll take those two, yeah. Hello, my name is Alek Tarkowski. I come from a Polish civic think tank called Centrum Cyfrowe. I really liked, Felipe, your comment about opposing interests and going beyond them. I think that is crucial, and I think it requires real dialogue. At the risk of sounding a bit too dramatic, it makes me think of roundtables, you know, the kind we had in Poland, the roundtable to end communism. I'm a bit afraid of going into metaphors that have to do with conflict, but I think in a way we might get there in terms of the significance of these debates. I was also really inspired when Bruce Sterling wrote a pamphlet suggesting that the big platforms are like empires, and that we might therefore need diplomacy. So those are my two solutions. My question, and it's a question for all of you, is: do you think such dialogue is realistic, real dialogue that goes beyond PR talk, you know, and so on? Hi, good morning. Susan Aaronson from GWU and CIGI, a think tank in Canada. So my question is, excuse me, thanks.
My question is: how are we as a society going to be able to influence social media platforms in particular, and their use of algorithms, and the threat that now seems to pose to democracy? I wonder if you all can address that, and I'm not just talking about disinformation. I'm talking about manipulation in many ways through grouping people. That in itself seems to me inherently wrong. Yes, we are picking who we choose, but I think we're proud to pick who we choose. I see this in particular, I'm deliberately not on Facebook, but I see it on Twitter, and I think Facebook in particular seems to focus on that. Thanks. From the Web Foundation, my question is to Bruno. I've been very happy to see that the big companies have been opening up their AI. I understand this means that any coder can use their platforms to run these processes. At the Web Foundation, we try to ensure that technology is developed throughout the world and not just in the centers of power. On the one hand, this enables people everywhere in the world to use the computing power Facebook has. At the same time, does Facebook use the metadata, the crumbs from this process, to train what would become general AI? Is the use of the platform by other people helping Facebook in that way? My question is: will a developer in Africa or Latin America or Southeast Asia eventually be able to compete with Facebook's AI, or are we behind in that race past a point of no return? Thank you. Bruno, do you want to respond to those two questions, and then Felipe if you'll respond, and then we'll just take a couple of closing comments and eat five minutes into the coffee break, if that's okay? Thank you. First, addressing the question of the gentleman in blue: I think you are absolutely right. We need to do a better job of developing technology for languages other than English.
I speak Portuguese, and in Brazil it is not always the case that we find technology available in Portuguese; most of it is developed in English. I think you are spot on. Part of Facebook's work towards that is to open source its technology. The technology I mentioned here, which identifies objects in images and reads them out for people, is actually available open source, so people can use it and translate it. But your point is absolutely right: companies need to do a much better job. And going back to Malavika's point about how Facebook is now focused more on community, I think that's key to Facebook's new mission. Facebook is trying to be much more open to the public, much more participative, not only in the US but across the world. Our AI research centers are not based only in the US, and if you go to Facebook's artificial intelligence research lab online, you can find more information on how developers can apply for programs to work with Facebook's folks, but you can also simply download the technology and tinker with it yourself. And the other point, sorry, the other question was about the users. That's right. Sorry. Yeah. So look, a lot of the research that Facebook does towards general AI, and the specific cases where Facebook uses AI in its products, is done together with academia. Many of Facebook's researchers are both academics, teaching and doing research, and working for Facebook as well. We believe this is important because we understand that the developments we make matter for the academic development of the field as a whole. So yes, some of this is used by Facebook, because it goes into the open source technology that is made available for everyone, and Facebook also uses that technology. Felipe. Oh, I'm sorry. Yeah. That's right.
So your question was more about the role of algorithms in the democratic process, right? Mm-hmm. Can I suggest we move on, because that's not really a data and economic inclusion point? Can you take that offline in the break, and just let Felipe respond to the other questions? Yeah. Very quickly. Sure. One of my responses to that question is actually connected to yours: less about filter bubbles and more about algorithmic accountability, which also connects to the question about monitoring platforms. My way of explaining why we need algorithmic accountability is the same way that we get nutritional facts on food. I once had a conversation with someone from a big technology company that is not Facebook, just to take the heat off of you a bit, who said to me: you're asking Coca-Cola to give away the recipe for Coca-Cola. And I said to him: no, I'm asking Coca-Cola to tell me what is going to happen to me when I drink Coca-Cola. That's what I think we ought to be asking. I think the question about diplomacy, from the person from Poland (is that right, you're from Poland?), is exactly right. I think we need to create channels in which we actually engage in conversations about how the actors that are shaping our lives, our democratic processes, our economic processes, impact us. I have comments, which we could leave for the break, as to whether those conversations always ought to be voluntary or not. I think it's very nice when the powerful say, well, I only want to come to the things that I want to come to. So that warrants a broader conversation. And then I just wanted to address the question on business models briefly; for the sake of time, I'm happy to talk to you more about this over the break. But there are several things that I think ought to happen at once.
First of all, for-profit business models can have social impact, and can actually be the ones preserving the sustainability of efforts that lead to social impact. I think it would be useful to move on from this binary conceptualization that the good people who do good things are always the nonprofit people, and the bad people who do corporate, private things are always the for-profit people. I do think we need to understand that profitability can't be the sole determining factor of whether a business model is considered sound. Because if it is, we end up with a concentration of power, with conglomerates that have more power than anyone else, and competition is reduced in a way that makes it really, really hard to create new and innovative business models, to create the conditions for those business models to flourish, and for people to try things out and be successful without having to be a giant corporation. If we don't address the issue of concentration of power, that becomes really hard. But there's more to unpack there, and I would love to talk to you more about that. Okay, great. I'm going to make all of you stand up, to keep you honest in your very quick, tweet-length solutions. So you're going to stand and give me your one-minute wrap-up. Each of you, very quickly, can choose to do one of two things. You can either give me the one solution you would love to see in this space that we haven't quite got to yet, or tell me what would be a good measure: how can we say we've achieved economic inclusion? What does that look like? What does success look like? So give me either one. I guess for the first question, I feel like one way of doing this is what I talked about in my talk.
I'm trying to give value to producing data, and to include people that way in the decision making about what happens with that data. Especially, I would say, companies that have user profiles would have a very easy way of asking people what to do with their data, because there's a direct line between the two. As for consenting to being on a platform when there's actually a necessity to participate in social life: I don't think it's fair to look at that as consent, because if you have to use a platform to participate in certain social or professional aspects of life, then you don't really have a choice. And terms of use, whether legally binding contracts or not, are subject to change by the companies. So I feel this would be one way of approaching it. You have raised the question of consent in the first place, and I think that is really key. When we talk about consent, we basically think of a situation where we share data up front and agree then to what will be done with that data later on. And that is problematic, of course, because we don't really know what's going to happen. Some people might share that data for good purposes, and we would agree to that sort of processing. Now, instead of focusing on up-front consent, we already have means in the legal order today: you can always withdraw your consent. So it might be far better and more interesting for individuals not to be stuck in that initial consent, but to get rights to withdraw consent at a later stage, that is, in a situation where you actually know what's happening with your data. And research has actually shown that people are more ready to agree to data processing when they decide not at the outset but at a later stage, when they actually know what's happening. So that might be a positive way forward. Thank you.
So empirical research is showing that standing doesn't actually change the length of people's responses. I did try. Quick wrap-up? Thank you. I would end with just two words: algorithmic accountability. I think a proactive approach, in which algorithmic accountability is mandated and regulated legally, is absolutely necessary. Great. You're not going to like that. I'm sorry. I'm glad you're here. No, look, I'll actually make my comment about that as well. I do think the voluntary approach is critical, because it's very hard to solve the issue of people's consent if you simply make these databases available to everyone: what purpose are those databases going to be used for? You have to have some sort of control to make sure that the purpose for which people gave their data in the first place is actually respected in a meaningful way. Otherwise, people will not share their data. And in terms of algorithmic accountability, I also agree that companies need to do a better job of being more transparent about how they process data overall. I can comment on what Facebook has been doing: Facebook has a blog that publishes any updates it makes to News Feed. Of course, there's always the issue that, for instance, Facebook's News Feed takes into account more than 100,000 elements to decide how to prioritize our feeds, so it's very hard for people to understand, and I think we should come up with better solutions to make sure that everyone understands what's going on. Thank you so much to our great panel. I think it's been demonstrated very clearly how you can't talk about the economic without talking about the social, the cultural, the political. They're all intertwined, so my last word is intersectionality. Thank you so much, and thank you all for your questions. Thank you. Please keep the conversation going.