So, for this panel, I'd like the three panelists to join me on stage. Please come, so I'm not alone. This panel, I think, is quite important and interesting. There was a journalist from Folha de São Paulo here yesterday, and he, please sit wherever you like, and the journalist said, I liked the event, but it's quite theoretical, I don't have anything to write about. And I said, yeah, come to my panel, because my panel is about case studies. So this panel is about things people are doing.

And I will say two things that are absolutely redundant, and because they are redundant, that's what makes this panel interesting. Amazon bought Whole Foods, and this is quite related to artificial intelligence. And when I was in London, Sainsbury's had created a new distribution system to deliver online purchases, and they said, no, it is stupid just to deliver stuff, because with artificial intelligence you know when people want to buy things. So Sainsbury's is not competing with Amazon or Whole Foods, because they don't have artificial intelligence. The reason I say this is redundant is that this is the kind of case study we hear all the time: corporate uses of artificial intelligence for the market that are really interesting. But how many case studies do we have based on inclusion? How many cases can you point to if someone asks you, tell me three things people have done to tackle inclusion and artificial intelligence? We have very few ideas.

So in this panel we have three people who will share what they are doing to tackle the issue of inclusion in artificial intelligence. And I think it's quite rare, I call it a strawberries-in-the-desert opportunity, to have these three examples of people and their cases on what has been done. Then we're going to open for questions. You can question them, and if you have an interesting case, please share, because these are rare moments where we have case studies about AI and inclusion.

So I have here with me Lucas, Lucas Santana from Desabafo Social, from Brazil. He's wearing a t-shirt about racial affirmative action, which says, if something is black, it is good. And he's going to talk about algorithmic discrimination and profiling in stock image banks. We also have here Arisa Ema from the University of Tokyo, and we have Mark Surman from the Mozilla Foundation, who will all share their stories, and I'll leave them to present themselves. Anyone who wants to start? Mark? In this order, or let's see, whoever's slides come up first is going to start. Let me look to the gods. The gods said okay. The message from the gods is coming down. All right, everyone. Good? All right, okay. I guess my slides are up first, so.

Hi, everybody. If you hadn't just had a break, I would get you to stretch, it's been a long day. So I'm going to offer four very quick provocations as case studies, and they all sit under one theme, which is that in most of this conversation the characters in the story have been big companies or, to some degree, governments and regulation.
But if we want to look at inclusion or agency or self-determination, or however we're going to talk about where we as people fit into this cyborg world, this world where computing is pervasive and wraps around us, then we have to look at ourselves as citizens, as people who aren't companies and aren't agents of the government, and ask how we shape what the reality becomes and put ourselves into the story, not just as the computed, not just as the users. That's one of the reasons I increasingly think about the need for a social movement, an environmental movement for this digital environment, where citizens are actually shaping what's going on. And obviously in that world there are the tactics that have worked in other movements. I grew up in the environmental movement and the peace movement in North America, but many movements have their own traditions of how they try to shape reality. The question for me, and where I want to look at the case studies, is: if we wanted a movement for a healthier digital environment, one that is more humane and more empathetic, what would be some of the tactics that we would try? That's something Mozilla is really investing in, trying to encourage a movement like that to grow and to support people who are doing probably the kind of work you two are going to talk about. So I'll give four quick case studies of types of tactics we might imagine to tackle some of the challenges we've talked about over the last day and a half on AI.

The first is this idea that, especially in the context of machine learning, there is a huge centralization of power, oligopolies, which has a lot of negative impacts. One of them is that if we wanted to create a counterbalance to most of the big AIs, like Alexa or Siri or these things, we actually don't have access to the big data sets that those companies have. So there's a barrier to competition. If we wanted something like a Firefox, some social alternative or some alternative from different parts of the world, we actually don't have the data to start with. So one opportunity as an intervention, the one I mentioned yesterday, is that we could imagine cooperatives or commons of training data, so it's not one entity that creates the kind of training data that would allow you to build quality systems, but collectives. We actually tried an experiment with that, something called Common Voice. What that is, is an attempt to create a training data set for voice recognition that has two benefits. One is that it's a commons, so anybody can use it. If you go to that site, you can speak a sentence and help train the machine, or help with quality control by listening to a sentence and saying whether it sounded right or not. And I think, just by going out to our community, we got 10,000 hours of voice training data over the course of a couple of months. The benefits, of course, are that somebody could then use that: it's a free, open source voice recognition training data set. But it also means we can start to have languages represented that won't be covered by the larger corpora of the mainstream platforms, languages other than English, much smaller languages, represented through that kind of open source approach. So that's one thing: yes, Mozilla is an organization, a company in a sense, we're a nonprofit, but this is really citizen action, people coming and contributing to a commons. So that's one thing we could imagine doing, more cooperatives and more commons like that.
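As a rough, hypothetical illustration of the mechanics behind a data commons like this, the sketch below filters a Common Voice-style export down to the clips that community listeners have validated. The file name and the column names (path, sentence, up_votes, down_votes) are assumptions made for the example, not a documented schema, so check the actual release before relying on them.

```python
# Hypothetical sketch: select community-validated clips from a commons-style
# voice dataset export. Column names are assumed for illustration only.
import csv

def load_validated_clips(tsv_path, min_margin=2):
    """Keep clips whose listener votes clearly favour 'sounds right'."""
    clips = []
    with open(tsv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            up = int(row.get("up_votes") or 0)
            down = int(row.get("down_votes") or 0)
            if up - down >= min_margin:
                clips.append({"audio": row["path"], "text": row["sentence"]})
    return clips

if __name__ == "__main__":
    validated = load_validated_clips("cv_export/validated.tsv")
    print(f"{len(validated)} community-validated clips available for training")
```

The point of a commons is exactly that a small company, a research group, or a language community could run something like this and train on the result, rather than needing a platform's proprietary corpus.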
The second is to think about the challenge of not trusting the products we have, the technology we live inside of. One of the places where this comes up a lot is in the talk about auditability or verifiability. Do we know what bias was in the algorithm, or how the algorithm made a decision about me? And how is my data being used? Is it being used for the purposes I wanted, or is it being used in other ways? An interesting idea is to take a page from the history of ethical products like organic food or fair trade, and recognize that you could tackle, or at least attempt to tackle, some of these questions about verifiability, auditability, and understanding how my data is used at a systemic level by building basically fair trade marks or trust marks. We actually just supported a paper with a group called ThingsCon on building a trust mark for IoT, which really extends a long way into AI; the two often go hand in hand. So that's another strategy we as citizens could use: look at nutrition labeling or trust labeling as something we either encourage companies to do or, over time, regulate and force companies to do on their products. That's a way to start thinking about this over a number of decades, at a large scale. And it's a real thing we're exploring to see whether we could make it happen in the industry, but driven as a citizen initiative, an initiative that involves researchers like some of the people here in the room.

The third piece, and these really are provocations about what other tactics we could think of, things we're considering as we imagine more of a citizen movement around these issues, is to be more demanding customers and to know that we actually have market power as people, or at least those of us who choose to opt into buying these technologies, by making choices and by being vocal about what is good and bad, what we want, and especially what's stupid. One thing we just released this week is a holiday shopping guide for IoT things, which helps you know which of the things you're buying for Christmas are spying on you. In there you can find out that there is an Adidas soccer ball you can buy that has a microphone and a camera, and you need to register an account to use the soccer ball, all of these kinds of things. It's funny, but it also starts to say, let's build a lexicon for knowing what we're buying, especially as we imagine embedding AI and sensors in basically every kind of product we use. And that's a very tried and true strategy from other movements: looking at consumer power as a way to influence how the market behaves.
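Tying the trust mark and shopping guide ideas together, here is a hedged sketch of what a machine-readable disclosure label for a connected product could look like. The field names and the example values are purely illustrative assumptions, not a published ThingsCon or Mozilla standard.

```python
# Hypothetical sketch: a machine-readable "trust label" for a connected product,
# in the spirit of nutrition labels. Fields and values are illustrative only.
from dataclasses import dataclass, asdict
import json

@dataclass
class TrustLabel:
    product: str
    has_microphone: bool
    has_camera: bool
    requires_account: bool
    shares_data_with_third_parties: bool
    user_can_delete_data: bool

    def disclosure(self) -> str:
        """Render the label as a JSON disclosure a retailer or guide could show."""
        return json.dumps(asdict(self), indent=2)

if __name__ == "__main__":
    label = TrustLabel(
        product="Connected soccer ball (illustrative)",
        has_microphone=True,
        has_camera=True,
        requires_account=True,
        shares_data_with_third_parties=True,
        user_can_delete_data=False,
    )
    print(label.disclosure())
```

A buyer's guide or, eventually, a regulator could then compare labels across products the same way consumers compare nutrition facts.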
And then the last piece, which relates back to the fact that Greenpeace was my opening example, is really to shift the narratives and look at citizens, especially artists, as people who can imagine a different future. I was just upstairs in the museum, and it says on the wall, you know, what different futures can we imagine? What different tomorrows can we imagine? So much of the time, I think, even here, as we're critical of where things are going with AI and want to look at inclusion, we treat the narrative being written in Silicon Valley as a narrative we're trying to negotiate inside of, instead of looking at other possible tomorrows. So one of the things I would like to see much more of, and that we're trying to support, is design fiction and other kinds of work where you're prototyping a different reality. There's a group of artists we've worked with, including Superflux, to develop IoT products and AI products that won't ever exist, but that do imagine a very different kind of future. This one, if you go and see the video, and I'll give you the link at the end, is about a voice assistant where the woman who owns it can dial up its different mood settings. When she calls customer service, she starts out nice, then gets more and more annoyed, and she keeps telling her voice assistant to get meaner and meaner. It's the sort of fantasy digital assistant she wishes she had. And of course it's talking to the AI of the company, and you see these battling AIs getting angrier and angrier at each other. The value of this, I think the value of art as part of a political strategy, is in imagining that the future being painted before us doesn't have to be the future. They're prototyping these products as real objects you can touch, with real electronics in them, they're just never going to go to production. So we can imagine a different future, I think, by blowing up the imagination.

And the last thing I'll say, because it says I have no time, is that I actually feel, pragmatically, there are ways for us to intervene and shape things. Oh, did I lose my other slide? That's so sad. I have a really beautiful slide here on this screen of the GNU mascot, the Tux Linux mascot and the Firefox mascot at FISL, the open source conference here in Brazil. We do have a history in the free software movement of being a citizen movement that shaped big parts of how the internet worked out. It also backfired on us a little bit. But to me the free software movement is like the prehistory of the environmental movement: environmentalists have been organizing for well over 100 years, but there are phases and phases and phases. I think we have to imagine a new phase. In the free software movement we had something that let us shape and build digital reality; I think we have to reimagine that now, with strategies like these and others we can come up with, so that we get a future that is different from the one being painted, the one that looks like it's unfolding around us. So that's my provocation to you: what are those strategies, and how do we reinvent that movement? Thank you very much.

Arisa, would you like to go next?

So hello, my name is Arisa Ema. I'm from Japan, the University of Tokyo. I came from the other side of the earth; it took about 30 hours to get here, but I have really enjoyed attending this symposium so far. I would like to talk not specifically about the AI technology itself, but to tell something that is more like a story.
It is about how I brought my colleagues, the engineers and the social scientists, into a group so that more people are included in AI ethics, in considering the ethics and policy of AI in Japan. So the title is breaking down silos. As you can easily imagine, silos have been created: the engineers do the engineers' job, the social scientists do their job, and the policymakers do theirs. But looking toward 2030 or 2040, when IT and AI will have to be taken into account everywhere, we thought it would be really important to include many people in Japan, many stakeholders, in discussing AI ethics. And this picture became one opportunity to build that kind of community.

This is the journal cover design of the Japanese Society for Artificial Intelligence, its official journal cover. Before the redesign it used to look like this, the original rather dull cover you might see in a library. But in 2014 they changed the cover design to this more attractive, animation-like magazine cover. And, as you can easily imagine, this was criticized from a gender perspective. A lot of things were discussed, but the main point is that this woman is plugged in: it's a kind of anthropomorphism of a vacuum cleaner. She's cleaning the room while plugged in with a cable, and some said she has hollow eyes, as if she's not willing to do this kind of job. However, this picture was chosen within the community; it won first prize there, and people thought it was really nice because it feels nostalgic, traditionally Japanese. You can see there's a library, and traditionally you would have had this kind of girl cleaning the room. However, the criticism grew bigger and bigger, and the worst thing for them was that the news went global. The BBC wrote an article about the journal's artificial "servant girl" cover and the sexism row it sparked. So this article, oops, sorry, went global.

I wasn't involved in this community, in this editorial board, at the time. But what I found is that there were two opinions. One said that this is a really good, really nostalgic and impressive cover. But when they released the issue, the public thought that it carried gender issues, that you need to think about political correctness and other things. So the AI researchers realized that what they represent has a social impact, and they started to discuss that social impact not only within their own community but also with philosophers, anthropologists, STS researchers like me, and people from law and sociology. So we are building this kind of community right now, and I think this was a really good opportunity to discuss it. We also wrote a paper together about this topic, about ethics and social responsibility, what we should do, and who should be included in this kind of communication. So this was 2014, and an ethics board was organized within the Japanese Society for Artificial Intelligence that year. I was invited to the society's ethics committee, and we started to discuss what we could do to show the public that we are not mad scientists.
So the conclusion was: why don't we create ethical guidelines, show them to the public, and start a discussion? You can see these nine articles, and most of them are the common things you find in a code of ethics for engineers: contribution to humanity, fairness, security, acting with integrity. But the unique thing is Article 9, abidance of the ethics guidelines by AI itself. It says that AI must abide by the policies described above, meaning articles one to eight, in the same manner as the members of JSAI, in order to become a member or a quasi-member of society. So in these guidelines we treated AI not just as a tool; the aim is to treat AI as our partner, not as a human being, but as a quasi-member of society. This article attracted a lot of interest and also criticism: how are we thinking about rights, about obligations and duties, that kind of thing. We had not thought it through in great depth, but I think it is a unique way of thinking about how we treat AI, how we build a relationship with AI, and how we collaborate with AI in the future.

There is also another route, not the same one: the government is also interested in creating AI R&D principles, and recently they released guidelines which you can read in English on the website. I will just read the core message from those R&D guidelines. They were created by the Ministry of Internal Affairs and Communications of Japan, with 33 participants, not only engineers and policymakers but also lawyers and social scientists; I am involved there as well. It also has nine principles. The guidelines aim to protect the interests of users and deter the spread of risks, thus realizing a human-centered wisdom network society, by increasing the benefits and mitigating the risks of AI systems through the sound progress of AI networks. As you can see, they list the principle of collaboration as the first principle, and transparency, safety, security, and privacy are listed as well. They also want to create usage guidelines for users, but before that they did some case studies in these five or six fields.

And in Japan, as you can see, we want to be involved not only among stakeholders like the engineers, scientists, the public, and policymakers, but also in global networks like this symposium. In October we held a symposium called AI and Society, where we invited people mostly from the West, and we discussed how we could build beneficial AI. To do so we have partners from all over the world, like the IEEE or the CFI. Also in Japan we have the Japan Deep Learning Association, which promotes the competitiveness of industry through deep learning certification systems, and we have research centers and organizations doing research on general AI, that kind of thing. So we have a good community building right now, and since I think I'm the only Japanese person here at this symposium, I would really love to build some kind of network with the people who are here. And this is my last slide. I think Japan has a really unique ecosystem regarding robotics and artificial intelligence.
As you can see, this is Paro, the pet robot, and also aibo, the newest pet robot Sony released. And here, can you tell which one is the robot? The researcher who created this android robot also created this Telenoid robot, the very opposite kind of robot: one imitates a human being, and the other keeps just the minimal elements of a human being. These may look like crazy things to do, but I think maybe we can learn from these cultural differences. For example, here is a box with an avatar, a kind of VR: there is a young lady inside the box, and it works something like a chat box. I think it contains a lot of gender problems and other issues. However, I think we first need to talk about why the need for this kind of thing exists, what the problem is, and what the ethical issues are. That will be a very interesting place to start, and it will raise many other questions, and maybe sometimes solutions, for mutual understanding among cultures. Thank you.

So hi everyone, my name is Lucas Santana from Desabafo Social. And despite the theme of the talk, this is not a good case of AI or machine learning but a bad one. According to the National Bureau of Economic Research, black people wait 35% longer for rides on Uber. And according to Harvard Business School, black guests face 16% more cancellations on Airbnb. These are cases of digital discrimination. These two examples are not exactly about AI, not exactly about machine learning, but they are examples of how people interact with each other through technology. Technology should be a good or a neutral space, but it can perpetuate the bad behavior of our society. Yesterday it was said that AI is a black box, and it really is, sadly. To talk about the black box and virtual, invisible racism, Desabafo Social made a media campaign about photos of black people in stock image banks. So let's see the first video.

Why is white the standard? Why do we need to write the word black to find black people in internet image searches? Can you stop, please? That's what happens on the photo stocks that feed the advertising and editorial market in Brazil. This is the second one. Can you play the first, please?

So we made these videos, we sent them to the image banks, and we received some answers, some responses. As you can see, black is not the standard: it is not the standard in white society, and it is not the standard in algorithms either. And algorithms, artificial intelligence and machine learning are all around us and will be even more so. The bias will continue to discriminate against people, just as happens every day in the offline world, but now it is invisible, and that is dangerous. Now we're going to see the second video, with the results of the campaign, and think about how to change the situation. The second video, please.

Why is white the standard? Why do we need to write the word black to find black people in internet image searches? That's how it happens on the photo stocks that feed the advertising and editorial market in Brazil. Desabafo Social, an NGO that fights racism and for black representation, has recorded videos of real searches and invited the photo stocks to talk about the problem. Some photo stocks replied, others ignored the cause, avoiding dialogue. They repositioned their communication, though. Public opinion helped spread it, drawing attention from even bigger institutions. We sat at the table to discuss the problem. All those who will look for and question why white is the standard.
Image banks, search engines: we take the first step. But I want to remind you that changing the algorithm is changing reality. So, algorithms are not neutral. Algorithms are not neutral. There are European standards coming into force which say that decisions made by algorithms must be understandable and explainable. IEEE, which is represented here, has a huge role in this. And big companies like Google, Uber and Facebook are the biggest agents; they are the agents that can change the situation, and I'm glad they are here. These machines should be trained with diverse data and should be built by diverse people, which they aren't right now. Nothing is going to change if the same black boxes are made by white men from Europe and North America. And nothing is going to change if the same people are here discussing how this box can change, in technical language. We need to pay attention to what black feminists are saying. We need to pay attention to what trans feminists are saying. We need to pay attention to what Africa is saying. We need to pay attention to what Latin America is saying, and to what and how they are complaining. So right now we need to think. We need to build new and more diverse black boxes and educate the machine together, because changing the algorithm changes reality. Thank you.

We're ready for some questions from the floor. Thank you very much to the three presenters. I think we now have a handful of ideas on what can be done to address the issues of inclusion. You can do stuff. You can create data, you can train data openly. You can attack the silos of communities, make stakeholders talk together and reach more consensual and more inclusive decisions. You can start a debate with those who create algorithms and point them in a direction they might not have spotted, and some of them might reply to that. Questions from the floor? Please raise your hand. We have mics on both sides. Comments. There's one there.

Victor Akiwande, IBM Research Africa. I just want to talk about one initiative that I really like, which is the AI for All initiative. I think more and more we need to see such initiatives take ground, initiatives that actually try to educate a next generation of AI technologists and push for that to happen. One of the challenges is that there are very, very few AI experts, and these AI experts are being taken up by big companies, Google, Facebook, Apple and the like. This is a real challenge as well: for example, researchers and professors leave research and academia and go to industry. How can we come to an understanding that it's actually possible for them not to... More and more we need to see researchers and academics working in industry without leaving academia completely, so that we have some way of ensuring that that knowledge is actually spread across and not just concentrated within silos.

More hands? Thank you. Only to congratulate your work, and also to share that this story from Desabafo Social reminded me of the famous Shirley cards, which were used by Kodak as the standard for photography since the 1950s, if I'm not wrong. Only around 2000 did they make a new card with women of other colors, because the whole standard for developing the film we used for half a century was based on a white woman.
So until today, when filmmakers try to film black people, they have to adjust for this, and this is an example of technology reproducing the bias of a racist society. I think algorithms are not free of that, so congratulations for raising this.

I hear someone there. Is this on? Yeah, these are such amazing examples, thank all of you so much for sharing them. I am really especially encouraged by the work that Desabafo Social has been able to do; I've been following the project for a while, and it's really great to see it getting more visibility and calling attention to that inequality. I just wanted to add to the table, and I know I've mentioned it before at this conference, the work of Joy Buolamwini, who is with the Algorithmic Justice League and who has been looking at the intersection of race and gender in algorithmic bias. One of the suggested readings for this conference was Kimberlé Crenshaw's classic paper on mapping the margins and on intersectionality, which matters very specifically as we all, in our different respective spheres, take forward the analysis and the critique, and especially the audit, of algorithmic systems, the auditing of algorithmic bias, which means taking systems in different domains of life and analyzing the way they distribute harms and benefits among different kinds of people. Joy's work and other people's work can really make us focus on the fact that we can't only look for single identity categories when we're looking for bias. In other words, Joy has demonstrated that the three most popular face recognition and gender classification systems do best at classifying white male faces, second best at white female faces, third at black male faces, and worst, at the bottom, black female faces. So if we don't pay attention to the way that intersecting inequalities of race, class, gender and other axes, disability and so on, work together, even when we're constructing our audits, we're not going to be effectively monitoring the inequalities that AIs are reproducing.
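As a minimal, hypothetical sketch of what that kind of intersectional audit involves, the snippet below reports accuracy by intersecting subgroups (race and gender together) rather than by a single attribute. The records and numbers are invented purely for illustration and do not come from any real benchmark or study.

```python
# Hypothetical sketch: disaggregate a classifier's accuracy by intersecting
# subgroups instead of a single identity axis. The data below is invented.
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of dicts with 'race', 'gender' and 'correct' keys."""
    tallies = defaultdict(lambda: [0, 0])  # (race, gender) -> [correct, total]
    for r in records:
        key = (r["race"], r["gender"])
        tallies[key][0] += int(r["correct"])
        tallies[key][1] += 1
    return {group: correct / total for group, (correct, total) in tallies.items()}

if __name__ == "__main__":
    sample = [
        {"race": "white", "gender": "male", "correct": True},
        {"race": "white", "gender": "female", "correct": True},
        {"race": "black", "gender": "male", "correct": True},
        {"race": "black", "gender": "male", "correct": False},
        {"race": "black", "gender": "female", "correct": False},
    ]
    for group, acc in sorted(subgroup_accuracy(sample).items()):
        print(group, f"{acc:.0%}")
```

An audit that grouped only by race, or only by gender, would average these cells together and could hide exactly the disparity described above.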
We have one question here. I was very intrigued by those nine principles, or whatever you want to call them. You said that AI is also bound by the first eight, and that you see that as the idea of regarding AI more as a partner than as a tool. I would just like to ask you to tell us a little more about that, to elaborate on it, because there is that huge discussion about attributing agency and free will to AI, which is very contested, and this seems to be a very different concept that you have, so I'd just like to know more about it.

Yeah, so I think this is really controversial; the ninth article is very controversial, and when we created these principles we actually had a debate about whether we should include it. However, if you go to the website for the ethical guidelines, there is a statement explaining why we included Article 9. It says that Japanese people have traditionally thought of robots as friends, like Doraemon and Astro Boy, rather than the Terminator, and many Japanese researchers actually grew up reading those cartoons and watching that animation, and their motivation for doing research is to create that kind of friendly AI. That's why most of the AI researchers wanted to include this article.

However, whether we grant robots rights, or whether robots have duties, those kinds of issues get very messy right now, and we were not actually discussing them in depth. But by writing down this article we wanted to start a conversation with the public: what is the relationship between humans and robots, how could we promote it, what is a friendly AI, what elements should we study, what is security, what is privacy, and so on. So this is just a start to the discussion; for now we have only written a code of ethics and what motivates us to create that kind of AI. The next step is to think about AI ethics more broadly. Right now the ethics board only has social scientists and AI researchers, but we need to include lawyers and policymakers and others to discuss these big issues. So we are still in the process of discussing it, but I think posing this kind of concept, AI as a partner, is really interesting to discuss, and it is something East Asia has to say.

So I think we have one question there, and here, and then I'll take two more, and then five more, and then I close.

Hi, Joe from Amnesty International. It's more just a comment, just to say thank you for the inspiring examples. I couldn't agree more that what is needed here is building a real movement of citizens who are, in various different ways, challenging the power structures that exist at the moment, and trying to empower people to understand how these systems are working and how we can hold them more accountable and make them more transparent. I think it has to be a combination of what we've already been talking about, some of the long-term research that builds the evidence base, combined with some of the great short-term campaigns on particular companies, like the example you gave with Shutterstock and others. And looking forward a little bit, I also think it's important that we demand transparency through freedom of information requests and test the legal limits of the existing legal frameworks, to reveal as much as possible and then find where there are gaps that require new policy solutions.

I will just speak; those who can hear can hear. I also just want to echo everyone and thank you for these really inspiring examples. Lucas, the work that you're doing is fantastic. It reminds me of 2013, when Google had a similar problem with search autocomplete: if you typed in "female scientists", it would suggest "did you mean male scientists", because the category of female scientists was too rare for people to be searching for, so it presumed you had made a typo. But I actually want to highlight the point that I also find it really reassuring that when you reach out to these companies, there is an equal willingness for them to have a conversation. Oftentimes this becomes a very stakeholder thing, the moral vanguards of whatever, and I think it's important to realise that these are questions that are equally hounding the people who are working with big databases and algorithms, and sometimes we just have to trigger them into having those conversations.
Sometimes it needs more pressure, more public policy or public advocacy work, but I find it really reassuring that there is an opening up of space for conversation like that, so thanks for highlighting that in this instance.

Thank you very much. I just love the examples that you were talking about, and I know that Mozilla is not a regular kind of company, but it is still a company. I'm very curious about how you imagine the kinds of examples you are giving as having democratising potential, just because of the cost that goes into these things. To give a brief example, two weeks ago I was at the University of Illinois Urbana-Champaign campus, which houses one of the five biggest supercomputers in the world, so I visited Blue Waters. They told me that this supercomputer, which is housed at a land-grant university in the US and supported by the university, is so expensive that their students can only afford to use 8% of its computational power, whereas everything else has to be given to the different kinds of people who can afford to pay for it. We talk a lot about empowering civil society to take back the tech, and we talk about how we should all be using AI applications, but the incredible cost that goes into computation makes it almost prohibitive for a lot of smaller organisations in the global south to work towards it. I'm curious how you would respond to that, but also, can you give us an idea of what next steps could be for smaller organisations, especially organisations that do not necessarily deal with AI as their core focus, to reappropriate this and get assimilated into this network and ecosystem?

I have time for two more questions. There is one person there who has been waiting a lot, but we start there and then we end there. Sorry for the short time.

Let me make it easier for you: it's not a question, just a quick comment, to the people talking about AI as a partner versus as a tool. This is a narrative that is not very well known to a lot of people, but it's actually very, very common in East Asia, and I think we have to keep an open mind when we think about this and approach it, because it's something that is quite new and unique. But, as Arisa was telling us, why is this happening? There are so many different considerations, and this is research that I do, on the emotional need for meaningful companionship through artificial intelligence. With the hologram at the bottom, people testified that it was the most meaningful relationship they've ever had, and the robot that looked like a weird kind of kid thing is a robot that is used to prevent dementia in the elderly. There is this really interesting narrative about AI becoming a partner, which I think is going to become much more prevalent as we progress. It's very common in a certain area right now, but I think that is really going to change, and I would like to encourage people to think about this kind of thing, and also to think about the implications of including AI, not just having an eye on inclusion, but including AI in the sense of AI becoming a quasi-member, even a member, of society. Thank you.

So let's take a last question over there, and then I'll go for a final round of closing remarks.

Thank you very much to all the speakers, your presentations were very interesting.
I guess this is my comment: my name is Juliane, I'm from Uganda, and my comment is in relation to case studies. The African problem would be, how do we make the few case studies more visible, so that the AI and inclusion discussion can happen there as well? Thank you.

So, should we do final remarks? Mark, do you want to start?

Sure. Thank you for all the comments, and it was inspiring to hear you two as well. On Nishant's question, and maybe to wrap it into some other things: Mozilla, as most people here probably already know, is a non-profit, and was founded as a non-profit with the idea that there was going to be no other way to take on Microsoft in the browser market than through open source and volunteerism. Firefox has been successful, with different economics than in the beginning, but those are still the roots: we imagine people getting together to challenge the centralization of power on the internet as something that is worth doing and can have an impact. There are two parts to how we do it, and the first part relates to your question. One is as a social enterprise, and the other is trying to encourage things like what Lucas is doing, the social movement part of the organization. So we have both of those aspects.

On the social enterprise and technology piece, I think computing power is one of the barriers to people getting into AI, whether it's civil society organizations, small entrepreneurs, or even medium-sized companies outside the big five tech companies in the US. The one we're poking at first is machine learning data sets, as something that maybe we can democratize more, and even that is a tiny experiment, so it's a theory of one thing to do. On the question of how you deal with computing power: are there large parallel distributed systems that citizens could build? Could you imagine SETI@home, but for citizen AI, or for small-company AI, or as cooperatives? I think those are exactly the kinds of questions about huge resource centralizations, whether that's NSF or DARPA funding paying for that supercomputer, or what Facebook is able to do in data centers, and we do want to find ways to push on that and counterbalance power. It doesn't have to happen, we may fail, but I think there are a lot of people now starting to ask how we reconceptualize the idea of free software as a distributed, disruptive force in the era of AI. We don't have all the answers; Common Voice and other things are starting to ask that question.

And to your other point, about how smaller civil society organizations can have an impact: I mean, that's amazing, you are building on the history of consumer activism in a way that is very fresh and relates incisively to the everyday experiences we have of biased algorithms. You're not talking about the abstraction of algorithms; these are the products we all use, everyone in this room did an image search of some kind in the last month, right? These are the everyday realities of the technology we live inside of. So that's the other side of it: you don't have to get into open source AI to have an impact on how this is going. You are having an impact. So I'll give the word to Arisa and Lucas. And just on the technology: the woman you saw at the end of the video is Monique, she's sitting over there, so you can also talk to her after the break.

Arisa, do you want to give closing remarks? Lucas?
About Sasha's question: yes, you are 100% right about it. We need to bring more conversations and bring up more problems. This campaign was one about race, about the social issues of gender and race; we talk a lot about race, and this is why we did it. And about talking with companies: after this video, after this campaign, we had a day with Google to talk about this campaign and about other campaigns on other problems. So we are trying to talk and start a conversation with companies too, which is really important. Thank you.

So, when I came from Japan, the discussion at this symposium, this conference, was somehow really new to me, because in Japan we have many people from the Philippines, from Brazil, and Koreans, but Japanese mostly talk to Japanese, and East Asian people have the same skin color, so we say there is algorithmic bias but treat it as not a real problem. However, hearing the many discussions here, I think it is really important, and I think Japanese people also have to consider it and contribute to this kind of communication. So I would really love to continue this kind of discussion, and maybe we could talk about AI as a partner or a tool and those kinds of things next time, like tomorrow or this afternoon. Thank you.

So thank you very much for coming. One last announcement: tomorrow from 9:30 to 10 there's breakfast here, part of the program called informal meetings and other things; there will be food, so you can talk and eat. So, 9:30 tomorrow, there will be breakfast here. Thank you very much to the panel, thank you very much to the audience, and see you later, bye bye.