So in front of you, most of you, you have a teal bag. We call it the Wonder Bag. It was assembled with a lot of love by the Berkman Klein team, and then with support from ITS, we stuffed it. But I promised we would make you work, so you're not here just to listen. What you see in this bag is basically four activities that we're going to introduce throughout the days, and the goal is that you participate in at least three out of four. So you can skip one, but I also promised personal recommendations: the first one is the easiest, so I would probably do that one if I were you. Okay. So, I think we have a slide for the keynotes with the title. But before I introduce the moderator for the keynote, I'm going to explain the first activity, which is relevant not only to keynote one but also to keynote two. And before I make a mistake here, let me look — green. I'm going to make a mess here because there are so many things in here. So in your bag you should have a green envelope, and just to let you know, on your clipboard there are the instructions. In case you find my instructions confusing, please read those. Ten people read them, and apparently they seem clear. So in the green envelope you have three cards, yes. And the instructions say what you're supposed to write on each one. On the yellow card, please write down one research question. Just one, not ten, just one. On the green card, please write down one takeaway from the two keynote sessions that might shape your own work moving forward. So one takeaway. And on the orange card, write down one issue related to this session or to both sessions that you hope to explore over the coming days, yes. You have the time of the two keynotes to do this. At the end you put them back in the envelope, and when the break comes, someone will collect them outside.
It's all anonymous; you don't need to put down your names or anything like that. The important thing is that we designed all four activities to gather as many voices and inputs as possible. This was really important to us. And some of these activities will actually serve as input material for other sessions — you will see there are more activities, and some will feed into the breakouts and so forth. Okay, so far so clear? You can skip one, but I would not encourage you to skip the first one. Okay. Let me put my things here in order — too much paper. So with this, the goal of those two keynotes: the first one is going to be more focused on AI and have a few respondents, and the second one is going to be more focused on inclusion and have a few respondents. And the very goal we are hoping to accomplish is that by the end of the two keynotes we have actually come much closer together in terms of our own knowledge, understanding, and background, because many of us, like me, come more from the inclusion side and others come more from the technical and AI side. So hopefully those two keynotes, which we tried really hard to shape this way, will help us form a much better shared understanding. The first one is AI and the building of a more inclusive society, and I'm happy to welcome Madeleine, Madeleine Elish, who is a researcher at Data & Society — here you are. See, I was all confused because my own order was mixed up. So welcome Madeleine, who is going to introduce the keynote speaker and then, once the keynote is done, welcome the respondents on stage. Well, hello everyone. It's a true honor to be here. I have to say I'm quite obviously not Swiss. I was not appropriately prepared — I did not know I was introducing the speaker, just moderating. So I'm just going to get straight to it and introduce Ansaf, Ansaf Salleb-Aouissi. I'm sorry, I'm just really ruining that.
She is coming from Columbia University. Thank you. Hi everyone. First of all, thank you very much for having me over. It is a great honor to be part of AI and Inclusion and to present a talk today about AI and inclusion. My talk will have two parts. The first part will introduce the concept and definition of AI and where the field is standing — a brief introduction to AI and also to machine learning, which is one of the main drivers of artificial intelligence. Then I will dive into AI and inclusion and define what I call the four dimensions of AI inclusion. Bear with me — there are two pieces to the talk, and I'll try to fit in the time. First of all, AI has inspired lots of movies, which is where most people get their definition of AI: basically, AI is the future. But as defined by experts, AI is the science and engineering of making intelligent agents. This definition comes from John McCarthy, the researcher who coined the term AI in the mid-50s. A more detailed definition comes from Russell and Norvig, the authors of the best-selling textbook Artificial Intelligence: A Modern Approach. They define it as the study and design of intelligent agents, where an agent is a system that perceives its environment and takes actions that maximize its chances of success. This is one of the most accomplished definitions of AI. Why AI? AI is a revolution; AI is our extension. Andrew Ng, who is a co-founder of the MOOC platform Coursera and also a professor at Stanford, said that just as the industrial revolution freed up a lot of humanity from physical drudgery, AI has the potential to free up humanity from a lot of mental drudgery. The take-home message here is that AI is our extension: computers can compute fast, and we can rely on computers and on AI today to perform tasks that are actually intellectual.
AI is interdisciplinary, at the crossroads of mathematics, economics, statistics, philosophy, computer engineering, et cetera — though I think I will need to revise my slide with more disciplines like law and social science. So this is the vision of how different disciplines contribute to AI. Now I wanted to talk about the history of AI and go back to the Turing test. This is a test designed by the famous British mathematician who was the code breaker of World War II. In his paper from 1950, he laid all the groundwork for what artificial intelligence is, covering language, reasoning, knowledge, learning, and understanding. The test he designed is called the Turing test, and it deems a computer intelligent if it passes the following test. Suppose you have a human interrogator behind a wall, and on the other side of the wall there is an AI system and another human. If the AI system can fool the human interrogator, then you could deem this AI system intelligent, because the interrogator couldn't tell which is the system and which is the human. All right? So Alan Turing was very visionary, and his paper is actually an excellent introduction to AI from the 50s. Some more history: AI started in the 40s and 50s, and Turing's paper was one of the first milestones. Researchers also started by trying to model the brain with simple Boolean circuits. In the mid-50s, John McCarthy and a few other researchers organized a symposium — this was the birth of AI. Check out the wonderful video about "the thinking machine" on YouTube, in which you can see how enthusiastic people were about AI. There were great expectations of AI, especially in terms of machine translation. From the 70s to the 90s came the AI winter, mainly because expert systems required lots of human intervention to write rules about different domains.
And also because machine translation systems had failed, since we tried to translate word-for-word. From the 90s to the present, we are in the middle of an AI spring. There is a lot of progress going on, especially because we have more data, we have smarter machine learning methods, and the ability to process large data sets has become a topic across several disciplines. All right? So that was the brief introduction to AI. What kinds of applications of AI can we think of? AI applications are among us; we use them every day. For example, when you write numbers on a check and deposit it at the ATM, there is an AI system that translates your handwriting into digits. Same thing for zip codes on envelopes — there is actually a neural network behind it. Machine translation was historically the motivation for AI to begin with, when the US government wanted to translate Russian to English. Machine translation has gone through ups and downs, but today it has become more scientific and statistical, and it's doing a great job. Before that, though, we had examples such as "out of sight, out of mind" being rendered by an early translation system as something like "invisible idiot," which has nothing to do with the meaning of the sentence. Today machine translation has become more prominent and operates with very high accuracy, thanks to statistical translation using parallel corpora of existing text available online from the United Nations, from the Canadian parliament, et cetera. So in machine translation today we have 100-plus languages: you can plug in a sentence and see what you get as a translation in Google Translate. Then of course robotics. Recommender systems, which we see in many applications from Amazon and others, are AI systems that know how to rank different items based on your choices, your previous purchases, and also on others' previous purchases.
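The recommender idea just described — ranking items by your purchases and other people's purchases — can be sketched in a few lines. This is a toy illustration, not any real system: the purchase histories and the co-purchase scoring rule are made-up assumptions.

```python
# A toy sketch of the ranking idea described above: score items for a user
# by how often other users bought them together with that user's previous
# purchases. The purchase histories here are invented for illustration.
from collections import Counter

purchases = {
    "ana":  {"book", "lamp"},
    "ben":  {"book", "pen"},
    "cara": {"book", "lamp", "mug"},
    "dan":  {"pen", "mug"},
}

def recommend(user):
    mine = purchases[user]
    scores = Counter()
    for other, theirs in purchases.items():
        if other == user:
            continue
        overlap = len(mine & theirs)          # shared purchases with the other user
        for item in theirs - mine:            # items this user hasn't bought yet
            scores[item] += overlap
    # rank unseen items by co-purchase score, best first
    return [item for item, _ in scores.most_common()]

print(recommend("ana"))  # -> ['mug', 'pen']
```

Real systems use far richer signals (ratings, matrix factorization, browsing behavior), but the core is the same: rank unseen items by evidence from similar users.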
Search engines: there is a ranking algorithm, a machine learning algorithm, behind your search engine that brings you an ordered list of answers to your request. Spam filtering: there is a classification system, an AI system, behind your email inbox today that can classify emails as either spam or non-spam. The text of the emails is leveraged to do this classification, and there is a whole learning process: the system is trained on your mailbox. Face detection: most of us have smartphones, and whenever you try to take a picture, the system identifies faces in the image very fast, right? There is an AI system behind that which uses machine learning to detect features of the face, like the eyes, the forehead, and the nose. So there is an AI system that can detect on the fly the presence of persons in the image. And finally — actually not finally, there is one more — speech recognition is an ongoing research area. Systems like Amazon Echo, Siri, and Cortana all try to understand language and respond to it. Natural language understanding is a hot topic; these systems use speech recognition and natural language understanding methods to let us communicate with machines. Another well-known application is games. You probably recall Garry Kasparov versus IBM Deep Blue, right? Or Jeopardy! in 2011, where humans competed against IBM Watson, which used natural language understanding and information retrieval to answer questions. And most recently, Lee Sedol, who was the champion in Go — an ancient game that is very hard because the number of possibilities in playing the game is very large — was defeated by AlphaGo, a Google DeepMind system based on deep learning, reinforcement learning, and search algorithms. Finally, autonomous driving is around the corner, right?
This all started with the DARPA Grand Challenges in 2005 and 2007, but now self-driving or autonomous cars have been driving hundreds of thousands of miles, and this is, as I said, just around the corner, all right? So that was it for the applications. We can classify AI as old-school AI and new-school AI, though old-school AI is actually still ongoing. Old-school AI aims to do search — for example, for games, or for designing algorithms that work toward a goal, where the AI agent is in charge of finding the sequence of actions to achieve the goal. For example, given a map of North America, an AI agent can be designed to find you the shortest path between any two cities, right? There is some search and exploration of all the possibilities and then execution of that path. Adversarial search, or games, is at the core of AI — this is AlphaGo, this is IBM Deep Blue, et cetera. It is a kind of search, but a search with strategy finding: the game has two opponents, and the reasoning goes, if I do A and the opponent does B, then I do C, et cetera. So there is embedded reasoning, which means that the search space of possible moves for the agent is really huge. Constraint satisfaction problems (CSPs) are prominent in artificial intelligence and are still part of AI today. For example, Sudoku is a CSP, in which you need to find assignments of values to variables, right? The hardest Sudokus can be solved by an AI agent in a couple of seconds, while they can take a human being hours. We have constraints about not having the same digit twice in the same row, column, or block, and as I said, my students developed this kind of algorithm and could solve the hardest Sudokus in less than two seconds.
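To make the CSP idea concrete, here is a minimal backtracking sketch of Sudoku as a constraint satisfaction problem: assign digits to cells subject to the row, column, and 3x3-block constraints. This is the bare search loop only — real solvers like the one described add heuristics (most-constrained variable first, constraint propagation) on top of it.

```python
# Sudoku as a CSP, solved by plain backtracking search.
# board: 9x9 list of lists, 0 marks an empty cell; solved in place.
def solve(board):
    for r in range(9):
        for c in range(9):
            if board[r][c] == 0:
                for d in range(1, 10):
                    if allowed(board, r, c, d):
                        board[r][c] = d
                        if solve(board):
                            return True
                        board[r][c] = 0   # undo the assignment and backtrack
                return False              # no digit fits this cell: dead end
    return True                           # no empty cells left: solved

def allowed(board, r, c, d):
    # the three Sudoku constraints: row, column, and 3x3 block
    if d in board[r]:
        return False
    if any(board[i][c] == d for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(board[br + i][bc + j] != d for i in range(3) for j in range(3))
```

Calling `solve` on a puzzle grid fills it with a valid assignment if one exists; this mirrors the "try a value, check constraints, backtrack on failure" pattern that general CSP solvers share.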
All right, so that can be seen, as I said, as old AI, but lots of active research is still happening in search, in games, and in CSPs. Today AI is driven by something we call machine learning. Machine learning, according to Tom Mitchell — who founded the first machine learning department in the US, at Carnegie Mellon — is about how we create computer programs that improve with experience. Experience is data. All right, so just to dive into what machine learning is: we have two categories, supervised versus unsupervised machine learning. In the general setting, we have training data, and this data is a set of data points. In machine learning jargon we call them examples or instances. Each example is described by a set of features and potentially has a label. Machine learning is supervised if we use or leverage the label, and it's called unsupervised if we don't care about the label or don't have it to begin with, right? So here is an example with, let's say, one million fruits. Each fruit is described by features, or variables, like the length, the width, and the weight, and we have a label telling us whether the fruit is a banana, an apple, or an orange, right? The idea of machine learning is to design algorithms that can learn models based on this kind of data. The learning would be unsupervised if we don't use this last column here. In the case of unsupervised learning, we want to find some structure, or clusters, in the data — for example, coherent groupings of the data points. Examples of methods used are K-means, Gaussian mixture models, et cetera. The Obama 2012 campaign used lots of unsupervised learning to run a targeted campaign: organizing the possible voters into groups to do a better job of convincing people to vote for him.
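The fruit table described above can be written out as a tiny data set: each example is a feature vector (length, width, weight) plus a label. Dropping the last column gives the unsupervised setting; using it, as in the 1-nearest-neighbour rule below, is supervised learning. The numbers are made up for illustration.

```python
# Toy supervised learning on the fruit example from the talk.
# Each training example: ((length, width, weight), label). Values invented.
training = [
    ((20.0, 4.0, 120.0), "banana"),
    ((19.0, 3.5, 110.0), "banana"),
    ((8.0,  8.0, 180.0), "apple"),
    ((7.5,  7.8, 170.0), "apple"),
    ((7.0,  7.2, 140.0), "orange"),
]

def predict(features):
    # predict the label of the closest training example
    # (squared Euclidean distance in feature space)
    def dist(x, y):
        return sum((a - b) ** 2 for a, b in zip(x, y))
    return min(training, key=lambda ex: dist(ex[0], features))[1]

print(predict((19.5, 3.8, 115.0)))  # -> banana
```

Nearest-neighbour is one of the simplest supervised methods; the point here is only the data layout — features plus an optional label — that the talk describes.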
More formally, if you want to formalize it, you want to create a function that goes from R^d — the feature vector, the variables — to a cluster, all right? In supervised learning, we have what we call positive examples and negative examples. These could be bananas versus apples, or disease versus non-disease. In general, we talk about binary classification when we have two classes, but we can have more than that, all right? So the goal of machine learning is to find the decision boundary. This is the model that separates the classes: whatever is at the top will be classified as a positive example, whatever is below that decision boundary will be classified as a negative example. Again, if you want to formalize it, you want a function f that goes from the space of features, or variables, to two classes, if it's a binary classifier. If the outcome is not binary — if the output is, for example, a real value — we talk about regression. For example, the amount of credit, or the weight of the fruit: here you want to learn a function from R^d into R, a real value, all right? Now, it's not always easy to find data that is well-behaved, where the pluses go on one side and the minuses go on the other side. Data is dirty; it's complex. So you could have what we call a linear classifier — this decision boundary here. It could be curved, like a polynomial. It could be a little more curved. It could be complicated. It could also be separating several classes. The goal of using machine learning methods such as support vector machines, neural networks, deep learning, random forests, and decision trees is to find this kind of decision boundary that you can use in the future to make predictions. So generally speaking, every machine learning algorithm aims to optimize something, and this something is how to make the most accurate model. Sorry.
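The decision-boundary learning just described can be sketched with the classic perceptron rule, one of the oldest ways to fit a linear boundary f: R^d -> {+1, -1}. The toy points below are made up and linearly separable; this is an illustration of the idea, not any method named in the talk.

```python
# Learning a linear decision boundary with the perceptron rule.
def perceptron(points, labels, epochs=20):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(points, labels):
            # a point is misclassified if the signed score disagrees with its label
            if y * (w[0] * x1 + w[1] * x2 + b) <= 0:
                w[0] += y * x1   # nudge the boundary toward the point
                w[1] += y * x2
                b += y
    return w, b

points = [(2.0, 3.0), (3.0, 3.5), (-2.0, -1.0), (-3.0, -2.5)]
labels = [1, 1, -1, -1]
w, b = perceptron(points, labels)

# every training point now falls on the correct side of the boundary
assert all(y * (w[0] * x + w[1] * z + b) > 0
           for (x, z), y in zip(points, labels))
```

SVMs, neural networks, and the other methods mentioned differ in which boundary they pick and how curved it can be, but all of them are searching for a separating function of this kind.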
So we have a term that we want to optimize — maximize or minimize. We want to increase the accuracy of the model by comparing the predictions we make to the true values of the label. And we have a regularization term that helps us not overfit the data. If we wanted only to maximize accuracy on the training data, then we would have a perfect model on the training data that would be unable to generalize to future data. So we balance things out: the main term optimizes accuracy, balanced against a term for regularization. There have been lots of methods in machine learning. Today, the trend is to use what we call neural networks and their new variant, deep learning. Neural networks actually saw the light of day back in the 90s; they were used by the US Postal Service to sort the mail by reading the digits of the zip code. But today they are back, because the circumstances are good: we have lots of data, we have lots of computational power, and we have lots of people working on them. That's why we are seeing breakthroughs in vision and speech today. What is a deep architecture? You plug in the data on the left-hand side — this is the input layer — and you get a class on the output layer. So, for example, you plug in the number eight, which is just a set of pixels, and the machine learning algorithm will train the weights of this network to produce the right outcome, the digit eight. Deep learning means using a neural network with a series of hidden layers. For example, suppose we have an image made of pixels. For the computer, it's just a vector of colors, right? But a deep learning algorithm will go through successive steps of learning different features at different levels. For example, the first level could identify the edges and the corners.
Level two would identify the eyes and the ears, and ideally the network comes up with a final representation which tells us it's a man and a woman dancing. The process going from the pixels to this description really goes through different layers of representation. Yesterday, we were hand-crafting the features, or variables; today, deep learning learns these features without human intervention. That is what's novel about deep learning, and that's what makes it really powerful: we don't need to engineer the features, we let the machine find them itself. Another part of machine learning is called reinforcement learning, where we have an agent that can interact with its environment. Just a quick note here: deep learning and reinforcement learning mixed together are what gave us the solution for the game of Go. All right, so that was my introduction to AI; I hope it was useful. Now I will dive more into AI and inclusion. AI has challenges and potentials. Most of the AI-and-society concerns today have been well discussed: should we, for example, be afraid of AI? Is AI a threat to humankind? Will AI impact our job market? How will AI impact our cities, our jobs? All of these are really well discussed. It is less so for how AI will change our regulations and laws, and how to develop inclusive AI systems that not only optimize accuracy in the objective function but also optimize safety, non-discrimination, and the right to explanation. The way I see it, AI and inclusion revolves around the four following dimensions: develop, decipher, de-identify, and de-bias. Develop pertains to how to empower individuals around the world with AI education and avoid the digital divide. Decipher is how to provide the right to explanation through understandable and intelligible models. De-identify is how to protect people's privacy and the right not to be categorized, which may lead to social exclusion.
And de-bias is how to ensure fairness in decisions and avoid digital discrimination. So I'll now discuss the four dimensions of AI and inclusion. First of all, developing AI knowledge. The problem in developing countries, for example, is that access to quality education in such a fast-moving field is a bottleneck: people don't have access to education in AI. A paper from July 2017 discussed how the digital divide will evolve from the digital divide of the internet to the digital divide of AI. And there would be different levels of divide: the first being access to infrastructure and machines, the second being having the skills to use AI systems and deploy them. So a crucial point that could help with this dimension is to develop self-learning tools and to further develop online learning through MOOCs, like those on edX and Coursera. I brought a case study here, of a course that we designed and are offering at Columbia University in collaboration with edX. In January 2017, we started a program called the AI MicroMasters, and I'll give you some statistics about how it has interested so many people around the globe. The program is a set of four courses — it's a MicroMasters, which means that people can use it toward a master's degree at Columbia. The four courses are Artificial Intelligence, which I teach, plus Machine Learning, Robotics, and Animation and CGI Motion, which my colleagues teach. The MicroMasters so far has attracted over 280,000 learners from about 200 regions and countries in the world. The AI course, the introduction alone, has attracted 150,000 students. Where are the learners coming from?
So basically we have 22% from the United States, 17% from India; Canada is represented by 4%, 3% from the United Kingdom, 3% from Germany, 2% from Brazil, China, and Indonesia, and then those who are not declared, or a mix of all the rest of the regions, make up 42%. So we see that it attracted a large number of people from around the world; it was really a great experience. How about gender? In the AI course, about one in six learners is a woman — so we see a gender gap among the learners. And finally, the percentage of learners by age: about 85% of our learners are 40 or younger. You could think here of kids growing up today with AI at their fingertips, but then there is this range of people between high school and 40 who are in need of understanding and using AI. So this is how we could contribute to developing AI knowledge across the world. I think more inclusive MOOCs that break the language barrier by adding translations in different languages could help make AI accessible to a larger range of people across the world. The second D is deciphering models. On deciphering models: we know that many of the best machine learning algorithms are not interpretable, including support vector machines and neural networks, which are considered black boxes. Being able to decipher models is crucial because we can detect bias and fix it, we can understand decisions, and we can communicate and explain predictions to the concerned parties. We can also bridge the gap between different AI players. Explainability presents a real research opportunity: you can find lots of papers on interpretability, and it's an emerging topic. However, it's still at an early stage. We don't yet have a framework that brings all the machine learning methodologies together under the umbrella of interpretability. And time is short.
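To make concrete what an intelligible, decipherable model can look like, here is a toy "decision stump": a single threshold rule on one feature that anyone can read off directly. The fruit-length data is invented for illustration; the point is the contrast with black-box models.

```python
# A one-rule model that is interpretable by construction: pick the
# threshold on a single feature that best separates two classes.
def best_stump(values, labels):
    # try each value as a threshold for the rule "x > t -> positive",
    # and keep the threshold with the fewest training errors
    best = None
    for t in sorted(values):
        errors = sum(1 for x, y in zip(values, labels)
                     if (x > t) != (y == 1))
        if best is None or errors < best[1]:
            best = (t, errors)
    return best

lengths = [4.0, 5.0, 14.0, 16.0]   # e.g. fruit length in cm (made up)
labels = [0, 0, 1, 1]              # 0 = apple, 1 = banana
t, errors = best_stump(lengths, labels)
print(f"rule: predict banana if length > {t} (training errors: {errors})")
```

The resulting rule can be stated in one sentence — exactly the kind of explanation a loan applicant or regulator could act on — whereas a vote among hundreds of trees or an RBF-kernel boundary admits no such sentence.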
For example, the European Union's regulation on algorithmic decisions includes the right of citizens to receive an explanation of algorithmic decisions about them, by May 2018. This is around the corner, and I don't think we are ready for it. There is lots of work going on, but there is no rigorous framework or science of interpretability yet. The major questions are: what should an explanation look like? For example, are we happy with decision trees? Decision trees are known to be intelligible — everybody can understand them — but they are simple and not the best methods out there. Are we happy with visuals comparing bananas versus oranges? Or are we happy with a set of rules? The problem is that these are, again, not the best-performing machine learning methods. And how are we going to interpret this? This is a support vector machine with an RBF kernel. It's hard — it's geometry. The model is complicated, and it's hard to translate it into an intelligible set of rules or anything like that. Or how do you interpret a vote among hundreds of trees in a random forest? This is completely non-linear, and there is no way at this point to translate that into intelligible explanations. Should interpretability be part of the problem, added as a term in the objective function? Lastly, there is a distinction between two kinds of interpretability: interpretability now, when we need, for example, to provide an explanation to someone — a reason for refusing a loan — versus interpretability in the long term, to advance science and research. There we want to understand, for example, the reasons for a disease, which advances science, rather than giving an explanation to someone who needs to know the reason for a decision. De-identify: do we have control of our data today? Of course not — it went out of control. The right to be forgotten is also mentioned in the GDPR. We want to avoid profiling, labeling, and social exclusion, and protect people's privacy.
This is challenging because of the web, of course. Data is out there, and data comes in different types. And the big problem is that deleting just the identifiers — key features like race, gender, and age — is not enough. Our privacy is in our images, in our writing, in our likes on Facebook. So de-identification becomes a really complex task. De-bias: automated decision-making is common in recommendation systems and credit scoring. Decisions rely on predictive models that are only as fair or as biased as the data they were trained on. Bias in, bias out. Data can be biased, it can be incomplete, and it can even include biased decisions: if your outcome variable, the label, encodes biased decisions, you retrieve that bias in your model. This leads to what has been called digital discrimination against members of underrepresented groups. On de-biasing models, there is some activity going on, mostly on the side of designing measures to assess bias in the data. It could also be as simple as looking at your data and making sure it's balanced: if you have 80% men and 20% women, you could either oversample or downsample. Or you can address the objective function itself — the accuracy term — by putting a higher cost on mistakes made on the minority class. Okay, so I'm finishing; I'm summarizing now, and I hope this was useful. AI is flourishing. It's an exciting and broad field with high impact on humanity and society. The trend today is toward machine learning, deep learning, and reinforcement learning, which are complex methods. The potential of AI is amazing, and it is challenging from an inclusion perspective. On AI and inclusion there is a lot more work to be done, in my opinion, because the methods are different — we have different levels of methods, from linear to non-linear, from probabilistic to non-probabilistic. The data is different — we deal with structured data, with images, with text, and sometimes a mix of all of these.
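The balancing step mentioned earlier — oversampling when a data set is, say, 80% men and 20% women — can be sketched in a few lines. The records, the `gender` key, and the helper name are illustrative assumptions, not any standard library API.

```python
# Oversampling the minority group so each group has equal representation
# in the training set, as described in the talk. Records are invented.
import random
from collections import Counter

def oversample(records, key, seed=0):
    # duplicate randomly chosen minority-group records until all groups
    # match the size of the largest group
    rng = random.Random(seed)
    groups = {}
    for rec in records:
        groups.setdefault(rec[key], []).append(rec)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

records = [{"gender": "m"}] * 8 + [{"gender": "f"}] * 2   # 80% / 20%
balanced = oversample(records, "gender")
print(Counter(r["gender"] for r in balanced))  # both groups now have 8 records
```

Downsampling the majority group, or weighting minority-class errors more heavily in the objective function, are the alternative remedies the talk mentions; all three aim at the same thing — not letting the dominant group dominate the learned model.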
And there is a lack of consensus about how to quantify the criteria for inclusion and how to optimize machine learning algorithms with these criteria in mind, besides accuracy. Thank you very much for your attention. Okay, Ansaf, thank you so much. Actually, I was going to invite everyone up to the stage. Madeleine Elish again here, from Data & Society, a little more prepared now. And I'm happy to introduce as respondents Mark Surman, coming from the Mozilla Foundation; Nagla — is Nagla here? Okay, great, great — coming from the Access to Knowledge for Development Center at the American University in Cairo; and then Lionel Brossi, coming from the University of Chile. Great, yes, so I will sit over here. And yeah, they are each going to go through a few moments of response. I believe you have microphones next to you. Mark, do you want to start? Okay, hello, all. It's a pleasure to be here. I took some notes, because it helps to put my neural network in order. So I would like to connect my response with two of the key dimensions of AI and inclusion that were mentioned: to develop knowledge and to de-bias data and algorithms. When we talk about artificial intelligence and inclusion, how do we position ourselves within the notion of inclusion? We have here an amazing range of human diversity, and I'm sure that if I asked you, what is your idea of inclusion, you would probably come up with a lot of different answers. So the question is, what is the common ground we can start from? Inclusion is a complex notion, but I think the idea of an expectations gap can give us a clue. It refers to the imbalance between the existence of material and symbolic resources and the expectations and possibilities of having access to them. So to understand what the expectation gaps are in different contexts, we first need to talk to people — to dedicate time and to hear people, especially those subjects and communities that come from underserved populations or are, or were, traditionally marginalized.
And we should also work with the organizations that represent these communities and subjects, because they have the knowledge of how to tackle the inequities within their communities. So, to the question of how we can start working to reduce or eliminate the potential discriminatory or unethical outcomes and the different forms of exclusion that may arise from AI systems — from my point of view, the answer is to put people and diversity at the center of all the discussions. I think we are at a moment where we still have some time to make positive changes, but if we don't pay attention to people and to diversity, we are probably on the wrong track. To me, this is the entry point for thinking of actionable steps toward advancing the development of more inclusive and more equitable AI systems. Just to give an example from my experience in Conectados al Sur, which Ezequiel briefly mentioned today: we have been developing workshops all around Latin America, and we realized, against our expectations, that youth have a lot to say about artificial intelligence. And actually, youth will be the community whose lives will be most impacted by AI systems. The problem is that the voices of youth are not being included in the processes of design and creation of artificial intelligence systems. So we can develop knowledge — the first dimension I mentioned — by co-designing with subjects and communities. And the second dimension: we can de-bias data and AI outcomes by paying attention to people and to situated contexts, with their own particularities. I think this is the way we can have a chance to develop AI systems that help us reduce inequalities and, I wish and I hope, achieve a better world. Thank you. The unfortunate part of not arriving earlier is you weren't in on the debating of the order. Yes. And then we drew your straw, and you get to go live. I made it, yes. Okay, so I think I was going to go and then you were going to go.
That's what we agreed, the four of us, at breakfast, but then you didn't make it. I had a hotel incident, that's why. So I think I'm gonna go, and then your stuff really builds well on that. So, building on both of these — I think what's exciting about you making the link between the technical and the social and the inclusion, and the work you guys are doing, and even this whole thing, is that we're having a debate we weren't having two years ago about what AI is gonna mean in terms of what it is to be human and how our societies work. And one dimension of it that gets talked about a lot, and is in these cases, is the sort of impact on us as individuals, or how the system and we interact — so in some ways, inclusion and human-computer interaction. The piece I wanna throw in is how we think about inclusion at a much broader level: the political economy of the digital society. So who has power, who doesn't have power, who extracts value, who has value extracted from them, and who benefits and who doesn't. I think we wanna add that dimension in as well. And I wanna quickly go to three points that build especially on where machine learning is in the story: one point very much about the current political economy and who has power — I think we know that it's mostly Silicon Valley — then what that's gonna mean for AI and how that will shape power going forward, and then what we might do about it. So just quickly on the first point, who has power now. There's a couple of slides I was gonna point to — can we get them up there? Let's see. In the meantime, I will do the alternate version of the slides, which I can do using all of you as the audience. So, how many people here — we already kind of saw up there — how many people here work for Google or Facebook or Amazon? Not very many, okay, good, welcome. How many people here live in Silicon Valley or Seattle or one of those other places? Okay, very few.
How many people here, on an everyday basis, use as part of their work or their social life a product by Google or Facebook or Amazon or Apple? And how many people are not from North America or Europe? So that — I was gonna show a map, but it was basically that map. The current political economy of information is that a very small number of companies, in a very small number of geographies, make all of the apps, make all of the digital world, control how they're designed and where they go, and the rest of the world ends up using them and having value extracted from them. Certainly, we've looked at money and trade flows, both in advertising and apps, and they look pretty much like a colonial map of the world, except Palo Alto is where London is — or, if you go back further in history, where Rome was. And so it's important to start from that spot, right? To understand that it's not that the people who started these companies — because Mozilla works with lots of them — started out necessarily to rule the world, although Peter Thiel maybe is the exception, and Elon Musk, but that's a funny question. But they have actually formed these large-scale monopolies that come with network effects and really mean they govern how the digital world works. And so we have a very small set of imperial centers governing how the digital world works. So why does that matter for AI and how AI is gonna work out? Well, machine learning is probably the biggest example. The network effects that have created these monopolies of the digital economy are also the thing that is creating the machine learning databases that will be, and are, most meaningful at this point. So the fact that search or shopping or any of the other examples that you put up there are the early mainstream everyday uses of AI is no accident, because those are the things that we create data about.
Our data is extracted from us every day, which then allows those companies to further extract monetary value through ad targeting, through how we do Uber rides, through how we do online shopping. And so really, who's gonna control the AIs, but also how do the AIs get shaped? I mean, one of the biggest shapers of the AIs right now is deepening the surveillance-based advertising economy. So what the AIs are for, and who controls them, is going to reflect this fairly colonial system of how we organize a digital society — unless we decide to do something different about it. And of course, there is an exception, which is China, which has its own thing and basically has its own internet. But the same thing is going on there, where the people who have the big data — whether that is government-supported researchers having access to commercial data, or the Chinese commercial players — are the ones who are getting ahead in AI. And interestingly, if you look at the number of peer-reviewed articles mentioning machine learning or deep learning, China has just surpassed the US in the number of academic publications. And so there is an arms race between the two internets, effectively, in terms of who will win in AI. So I think it's important to think about all of that — what the political economy is and who's included and excluded — because you'll notice that I haven't mentioned almost the rest of the world. And that becomes the next question, which I think you're gonna talk about, so I won't go deeply into that. But one of the things I think a group like this — and certainly Mozilla — thinks about a lot is: what can the people who aren't at the center of those decisions, who aren't the ones amassing that power, do to shape where AI goes, or where the digital society goes, and who's included? And I think that's a great conversation to dig into in fora like this.
And there are practical ideas. Maybe even the smaller players who aren't Google or Facebook or Amazon could build massive co-ops of machine learning training data — like, that's actually a possibility. Think about a peer-to-peer approach to actually having the people win on AI. Things like that are possible. Look at how we might have different kinds of regulations that could counterbalance the companies. And I think, increasingly, it's impossible for me to imagine us winning without also imagining citizen power and some massive social movement on the scale of the environmental movement, or the peace movement from the 80s, or the civil rights movement. Without that, I think we actually won't rebalance the power and make sure that we are all included. So, a few things to talk about. Thank you. Thank you very much. I apologize for not showing up earlier. I had some arrival issues, but I'm settled now and I made it. So thank you for including me. I'll pick up from where you left off. I think the narrative of the political economy of AI is not new. We are already aware of the narrative regarding digital technologies, the internet, the power of the big players as opposed to the small players. Only now, while the issue is similar, it's perhaps amplified because of the fascinating capabilities of those technologies. So, issues related to skill-biased technological change, the future of work, the inequality in pay. Just looking at the slide that you had on the learners by age is telling, very much telling, about who it is that are going to be the players in this field in the future. But really the issue here is that while the threat has always been there, the threat could always be turned into an opportunity. And this is why we're here. How can we flip the argument? How can we turn things around and use those technologies for the very purpose of inclusion — democratizing, as opposed to leading to inequality? The one new variable is that the global context is different.
I mean, we live in a world where there is always a threat of terrorism, radicalization of youth, displaced communities — AI in weapons now, I'm reading about that. So for me, this is a new variable that's perhaps even more amplified than the earlier discussion about the internet, digital technologies and their impact. So let's talk a little bit about AI for development. Context matters — where we all come from. Many of us come from parts of the developing world. We have weak institutions, informal economies. We have increasing populations. We have youth bulges in many parts of the world — mine for sure, I come from Egypt. We have pockets of poverty, refugee crises and displaced communities. We have authoritarian regimes. And I'm going to edit myself a bit since this is going on record, but I'm happy to say more in the discussions offline. We have, in some cases, failing states, and we have radicalization of youth. Against all this, there is always an opportunity. There is a context of expanding internet usage, mobile phones becoming smartphones and becoming cheaper, and at the same time, waves of entrepreneurship and innovation in our youth, who are trying to make a change, and the potential of leapfrogging to utilize the fantastic power of these technologies. So if I may talk a little bit about knowledge creation. Again, part of the debate on digital technologies, the internet and development was centered around ICT use versus ICT production, and I think of AI use versus AI production — and there is knowledge to be gained in both. Your course is wonderful. And when I think of my part of the world, I think of communities. I think of cafes. I think of schools. I think of working very strongly with civil society to engage in localized content — content emanating from the ground to solve local, context-specific issues, the way innovation happens in developing countries.
I also think of learning as far as using, adapting or applying AI — learning by doing, maybe hackathons, maybe sharing experiences. I'll give you just a few examples. In Egypt, for example, our system of school education is not the best, so we have a culture of private lessons, and families spend a lot of money on them. So think of alternatives offered through tablets, through smart technologies, that could actually provide an alternative for learning rather than the informal, expensive format that is already there. In transport, for example — AI for transport. Very interestingly, we have — maybe here in Brazil as well — jaywalking; a lot of accidents take place because of jaywalking on highways. So we need to think outside the box about context-specific issues that could be solved by AI applications. I just read about an AI application for refugees in Lebanon that helps with psychological stress — there is a stigma about talking to a therapist, but people can talk to a machine. So there are small pockets of knowledge, development of applications, and growing communities using and working with AI applications that could actually be of tremendous benefit. In our work at the Access to Knowledge for Development Center, we have a data for development program in the MENA region — the Middle East and North Africa — and we are actually working with researchers at Birzeit University in Palestine on developing courses on data science in localized contexts. Open source technologies, of course, come to mind as an enabler for access, sharing and developing knowledge. I'll talk a little bit also about de-biasing data, because again, the geographical bias would be very relevant in developing countries, and the gender bias — you know, gender-disaggregated data.
And again, the key here would be ground-up data collection, from the field, as a step towards AI applications. One more thing: investment in technology is part of the story, but it's very important to also invest in the enabling environment, the rest of the story. Think of the productivity paradox relating IT and productivity in the late 90s and the attempts to resolve it: a lot of the literature looked at the question of delay and the idea of investing in organizational change, in human capital and human skills, in the legal and legislative infrastructure, and all of these. For example, we do have an AI application, a very successful one by one of our graduates, that ended up as a startup in the US. These are all economy-wide challenges — what the World Bank calls analog challenges — because it's basically the environment surrounding entrepreneurship and SME development, and that ends up in a brain drain phenomenon. I'm conscious of time — I have a lot to say, but we can carry this further in the discussion. I guess what I'm trying to reach is that the narrative is not new, but we need a consciousness of what AI can do, and we need concerted efforts of academia, civil society, governments, industry, entrepreneurs and communities, with awareness of the SDGs, the digital divide and the tremendous impact, for this to be amplified — but also of how to harness the power of these technologies, so that we do not romanticize, we do not demonize, but perhaps use them to democratize, equalize and perhaps humanize. So I invite you to think of a discourse of AI for AI. Thank you. Thank you all. Thank you all so much. So now, I'm afraid we are a little tight on time, but be ready with questions. We're gonna take a few questions from the audience.
I'm gonna take a prerogative, though, and ask a question of all of you, which is: I actually wanna push a little bit on this idea of de-biasing data as a sufficient goal in and of itself, and frame that where you started your talk, with the idea of translation as a goal of early artificial intelligence. Translation — channeling things, optimizing things. I think that AI is all about optimizing certain variables, and so if you think about de-biasing data sets, it also needs to be about how those data sets will be used — that is, what values you are optimizing a system for — because that will still introduce different kinds of bias. And so, speaking to the digital colonialism aspect that is currently playing out, I feel like a lot of these systems are designed along sort of American Protestant values, particularly around the exploitation of capital. So I'm wondering if you all could say a bit about what you imagine other values, or other points of optimization, might be when you think about the potential of AI from your perspective and your context. I know it's a tough question. But I think this idea of optimization is central. I can try — is this still working? Okay, I can try. All right, so in terms of de-biasing, I think there could be de-biasing at two levels. The first level is sanitizing the data, right? So you've got data — we suppose in machine learning that the data is a snapshot that's correct, that's clean and that's unbiased; in reality it's not, right? So sanitizing the data is an effort that could be made, and some in the research community have actually started looking at how we can balance the data back — how we can de-bias when we have features or variables related to gender or things like that. But there could also be de-biasing of the models themselves rather than of the data: how to include a piece in the deep learning algorithm so that we correct that bias — and this is something that's not achieved yet.
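[Editor's note: to make the two levels described here concrete — rebalancing the data itself, and measuring a fairness criterion next to accuracy on the model side — here is a minimal illustrative sketch. The function names, the sensitive-attribute groups and the toy data are invented for illustration; they are not from the talk.]

```python
# Illustrative sketch of the two de-biasing levels mentioned above.
# All names and data here are hypothetical examples, not a real system.
from collections import Counter

def balancing_weights(sensitive):
    """Level 1 ("sanitizing the data"): reweight samples so that each
    group contributes equally overall, instead of letting the majority
    group dominate training."""
    counts = Counter(sensitive)
    n, n_groups = len(sensitive), len(counts)
    # Each group's weights sum to n / n_groups, so groups are balanced.
    return [n / (n_groups * counts[g]) for g in sensitive]

def demographic_parity_gap(predictions, sensitive):
    """Level 2 (model side): one simple criterion that could sit next to
    accuracy in the objective -- the gap in positive-prediction rates
    between groups (0.0 means equal rates)."""
    rates = {}
    for g in set(sensitive):
        preds_g = [p for p, s in zip(predictions, sensitive) if s == g]
        rates[g] = sum(preds_g) / len(preds_g)
    return max(rates.values()) - min(rates.values())

# Toy data: four samples from group "a", two from group "b".
groups = ["a", "a", "a", "a", "b", "b"]
w = balancing_weights(groups)        # group "a" down-weighted, "b" up-weighted
preds = [1, 1, 0, 0, 1, 0]           # hypothetical model outputs
gap = demographic_parity_gap(preds, groups)  # both groups at rate 0.5 -> 0.0
```

In practice the weights would be passed to a learner (most libraries accept per-sample weights), and the parity gap would be tracked, or penalized, alongside accuracy.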
So this is my take on that question. I look forward to discussing more. Does anyone else want to take it up? So, I guess I would say that implicit in your question is that de-biasing is not enough. I think that's the first conversation we've really started to see grow in the last couple of years, and it can be helpful in terms of taking bias out of systems — computers can be objective in certain ways or in certain settings. But I think ultimately the bigger questions are these questions about power, and then the values behind the people who design the systems or own the systems or whatever. Part of that, I think, is Western and Protestant — but the Chinese are not Western or Protestant, and they are certainly bringing a different set of values, as are other actors coming to the table; the Russians are certainly investing a lot in AI as well and trying to catch up — you heard Putin talking about it a couple of weeks ago. So I do think the dominant values emerging in what AI becomes are either Western capitalist, or about control economies, or actually a lot about how to grab power globally and about warfare. We can also de-bias at this level — there's lots of good intent behind most of these systems — but those are the values with growing dominance. And to me, yes, we can imagine different values, whether that's cooperativism, or values from the societies that aren't at the center of that power, or values from the communities that you talked about that are excluded. So we should spend some time here imagining them. But the big question is actually: how do you make them real? In the face of who designs these systems, we're not gonna get a different set of values just by calling for something nice.
We have to either use what worked in one area of the internet, which is to build systems that counterbalance and disrupt that set of values, or find places that themselves have a different enough set of interests that they can use legislation or citizen power or something. But we actually have to find a way to a different set of values. The trickier question, I think, is what those values might be. I think it's a chicken and egg. Well, we have three days to talk about the chicken and the egg. We do. Maybe something that I mentioned in a talk on AI at the Berkman Center at the Law School at Harvard — this is taken from cognitive psychology — is that most of these systems are generated by WEIRD people. And WEIRD means Western, educated, from industrialized, rich and democratic countries. So maybe the task is to learn how to get out of that box, or at least try to make an effort to get out of that box, and maybe we can gain some new values for developing AI systems. One of the things, actually, is openness. I think open data is very important, because again, in some parts of the world, data that comes top-down from statistical offices is very much politicized. So now that there are new technologies and sensors, the concept of collecting data ourselves, ground-up, using the new technologies, could actually serve to layer the data, so as to be as accurate as possible and present the reality for what it really is. And if this can then be offered as open data, it can be shared and actually used and benefited from by everyone. We have — I'm being told we have no time. So maybe let's take two questions. Two questions, okay, good. I was hoping so. Can you, all the way in the back? Yes, you. Thank you very much. Alex Gakuru, CODE-IP Trust in Kenya. I want to make a statement, and I want to hear responses from the panel. There is no denying that AI is an engineering marvel. It is an absolutely mind-blowing engineering invention.
But unfortunately, AI is inherently biased, architecturally exclusivist and a control instrument, and worst of all, it's private property. Is that statement true or false, yes or no? Let's take both questions at the same time, and then we'll have a few minutes to respond. Hello, I'm Daniel Schwabe. I'm a professor of computer science at the Catholic University in Rio. I'd like to reinforce that point and make a statement. I think there is no such thing as unbiased — I think it's always biased. So perhaps the goal should be — the best we can hope for is — to make it explicit in terms of values. So when you publish some data, or use some data, make clear what values have been put into producing and consuming that information. Perhaps that can give users a better understanding of what's going on. It's not going to solve it all, but it may make things more transparent, and my guess is that's probably the best we can hope for. Ansaf, maybe you could respond, and then we'll close it out. Okay, these are two great questions. My first reaction to the question in the back is that it's basically true that there was so much excitement about what AI can do without really thinking about these kinds of questions. But today we know — we know that AI is biased. We know that these kinds of systems can be exclusive. That's true. But I also think that we can't stop AI's progress. So how about taking this as an opportunity to remove bias, right? It's idealistic, but this is an opportunity for humanity to work on building an unbiased world. And research will continue on that. So there is obviously a big interest in AI; we discovered that it's not perfect, but people as a community are working toward fixing it. Yeah, no, that's great. So — okay, I'm getting the okay from Sandra. I think that was a great way to end the first keynote, the first panel. Thank you, Ansaf. Thank you to the panel, and thank you, Sandra.
Okay, so we are doing fairly well on time, I would say — for Swiss standards, two minutes behind. So maybe, just for the sake of stretching a little bit between the two keynotes, Hugo can play seven seconds of the music that we didn't have to play because we're running so well on time. In case you wanna hear the fun music we had prepared for you — and then maybe I'll try to stretch at least a little bit, so you're ready for the second one. Where's our theme music? Somewhere it is, no? Exactly. Okay, so — I think that was it, too short. Yes, yes. Okay, so please, music off again, and please sit down. Thank you, we continue. Thank you, please take a seat again. Ansaf, please take a seat. First and foremost, I promise that over the course of the next three days we will have many, many, many, many more opportunities to interact — particularly tomorrow; the day is designed so all of you can just talk, I promise, almost no listening. But please bear with us. So the first keynote was much more technical, trying to convey some of the important messages about AI to the people in the room. And now we're gonna go — my friends, I'm gonna call out all the names if you don't start sitting. Madeleine, please sit, yes. Okay, so the first keynote, again, was about AI and came a little bit more from a technical angle. The second keynote is gonna focus more on inclusion. And the moderator of the session will be Danit Gal — you heard from her in the opening session, but Danit is doing really important work on AI governance and ethics, based at Peking and Tsinghua University. So she's gonna be the moderator for the session. And I promise you will have a lot of fun in this keynote. Thank you so much. Wonderful, thank you. So in the interest of time, we're gonna keep this really organized. I'm really, really excited to invite Nishant Shah to give our second keynote.
He is the co-founder of the Centre for Internet and Society in India, a radical humanist and an ardent feminist, which makes you kind of a hero. All right — oh wow, okay, that was bad. I never get used to these contraptions, largely because they take me back to my days when I was 19 and working in call centers, spending eight hours a day saying, "Hi, my name's Nishant, how can I help you?" They taught me how to smile while speaking, because apparently Americans notice if you are not smiling, even if you're speaking on a phone line. So you have to always be like, "Hi, hi, my name's Nishant." So every time I wear this, I have this huge impulse to do that again. But I'm not going to try and do that. So good afternoon, or good morning, or evening, whatever time zone your mind is in right now. And thank you for giving me this really wonderful opportunity of addressing a room that's a little scary. It's scary not because it's a very large audience that I haven't spoken to before, but specifically because it's filled with friends and colleagues and inspiring people from whose work I continue to learn, and whose political conviction to equality, to equity and social justice has been my compass in my own work around digital cultures. So I really do feel incredibly humbled. I feel very thankful to, well, all of them — to ITS, to the Berkman Klein Center, to the Network of Centers — for inviting me to come and speak to you today. And Sandra has really scared me, because I was given 25 minutes; I have planned for 20, but I'm from India, so it will take 35. And we're trying to figure out how to fit all of that together. So let me begin with an image. And I apologize for the graphic and misogynist nature of this — though I'm presuming that all of you have been on the internet for a while, so that's actually quite mild. But this image is fascinating, because it made its way onto Amazon and was actually up for sale. I'm sure a lot of you remember it.
Apparently there is a next-day-delivery audience for people who want to keep calm and rape a lot. Back in 2013, when the image first went up, there were a lot of warning signs about the futures of automation and what it could mean when it comes to questions of inclusive politics. This t-shirt, just so that you are sure of it, does not really exist. This is not the image of an actual t-shirt which was already made and hence put up. The image is a simulation, almost as real as any meme that you see online. The image was not even made by an actual human being. An algorithm which was trained to take the famous meme of Keep Calm and Carry On and associate it with a series of verb clusters that were trending on social media generated this image. From the generation of the image to its publication on Amazon, there were no human beings involved in the entire process. And so it passed all the checks and balances to make its way into the public, where it stayed for some time — it even sold a few copies — before it was finally flagged as inappropriate. So for me, this image is kind of a classic example of the problems that we have to consider when thinking about AI and inclusion. And the problem's not a simple one about how to make better and more inclusive AIs. The problem is more about how we would ever get down to analyzing the relationship between the two. Because if I just stick to this image as a placeholder, and I think of the different iterations I had to make in order to get this talk successfully out, I realize that through those iterations I was falling into several tropes which actually exist in most social sciences, humanities and digital cultures discourse around artificial intelligence. And let me quickly chart out these three tropes for you, which I think are a problem in how we address the question of artificial intelligence in the domain of social and human inclusion and equality.
The first one was to think of this as a problem of inclusion politics, right? So this is a human frameworks problem — human frameworks which need to interact with machine interventions. That is the first problem at hand. The question to answer in that particular trope was: who was responsible, who needs to take the blame for it? And hence we create bridges between artificial intelligence on the one hand and inclusion politics on the other. And in the process, we reinforce the idea that our emergent AI practices, as well as our inclusion legacies, are innocent — that they are to be celebrated and unquestioned. The minute you fall into the trap of saying, how do I make better AI, and how do I make AI talk to better inclusion politics, you have given away your position of critique — either of artificial intelligence, or of the politics of inclusion, which have a long-standing, legitimate history of being continuously questioned. The second trope that immediately comes to mind when thinking about the image was to say that maybe this was the possibility of AI as a critique of inclusion politics, right? This is the point where we humanists can go and despair at how the human corpus of digital language that the AI was training from is so magnificently repugnant — recognizing that inclusion politics have a long way to go, and that almost all of our efforts at transforming public discourse around gender safety are being met with counteractions that are quite potent. In this case, the algorithm that constructed this is actually the hope of the future. Because if you can train this algorithm to not repeat this mistake, you know that it will never repeat this mistake again — unlike every single human being who is culpable of repeating gender stereotypes and gender violence over and over again, despite all the education and training that you could give.
So this specific algorithm which produced a problem is also the future hope of resolving this problem, right? Because AI is better than human beings, period. The third trope, which you could easily fall into — or at least I was falling into — was to think of artificial intelligence as essentially evil, right? It's something that, through autonomy, can circumvent human judgment and intention and produce new dystopias where human beings are not even included as actors. This would think of the human as under threat. Many different tropes come in of AI as essentially evil when the human is under threat — be it, let's say, human labor under threat of replacement by automation, human ethics overridden by self-learning machines, or human relationships made redundant by companion robots. The technological then becomes something to be controlled and contained. And in my own iterations of this talk, I realized that all of these tropes, while they articulate certain concerns, continue to perpetuate a separation of the human and the technological, where either the reconciliation between them is without critique, or we have to choose one over the other: the tainted human and the clean technology, or the fragile human and the violent technology. And that's a conceptual deadlock. And yet it cannot be the end of our inquiries into how we think through inclusion when working with emergent structures of artificial intelligence. And the first way out of this deadlock is for me to invite you today to stop thinking of yourself as human. Instead, I'm suggesting and inviting you, if you want to speak productively, precisely and pertinently of artificial intelligence and its connections with inclusion politics, to come together as cyborgs. And if Oprah says it, it must be true, right? And as cyborgs, we will have to be part human, part machine, part reality, part fantasy and part science fiction.
This invocation of Donna Haraway's fantastic cyborg, from her eponymous manifesto, is not just to lay out a context for our technologized existence, but also because it points to the tensions that lie at the heart of the theme of my talk today. So Haraway, writing from the intersections of biological sciences and technological interventions way back in the 1990s, was already warning us that the human has always been historically tainted, painted, contaminated and contained by technologies of different kinds — but that the digital technologies are teaching us to count, discount and miscount, to make the human accountable in a completely different way. Across her body of work, Haraway has shown how we cannot address the contemporary terrain of politics, or of a need for equality and frameworks of inclusion, by holding on to the idea of a discrete, romantic, prelapsarian, biological, mortal, guilty or faithless human construction. In her cyborg realities, she persuasively argues that our contexts of living are hybrid, and so we will also need to think of ourselves as hybrids: part human, part machine, part reality, part science fiction, part fantasy. But Haraway was particularly insistent on positioning ourselves as strategically distributed and resolutely non-unitary, with fragmented futures and bastardized histories, because this has clear ramifications for the possibilities of emancipation. The development feminist Martha Nussbaum also shows quite precisely how the human subject in this hybrid human-machine collaboration is precarious. In her work looking, for example, at how technologies have developed in the global south as tools of emancipation for women, Nussbaum shows how the first thing that happens when a new technology encounters women is that the paradigm of the woman herself changes.
So in India, for example, she looks at how the advancement of the reproductive sciences, which have worked so hard at saving women during pregnancy and reducing infant mortality, while they have been successful on one axis, have also affected the most poor and the historically underprivileged women in an adverse way. Nussbaum looks at how the secular Western medical practices, as they replaced older forms of localized reproductive engagements, were also blind to the politics of caste and religion that operate in everyday life. So what you did was you trained a whole bunch of new women to become medically trained midwives, but these midwives were essentially upper caste and they refused to touch the erstwhile lower caste woman's body, thus denying the services which would have been made accessible to them, right? So drawing both from Nussbaum and Haraway in a particular way, I'm actually making a very simple point: that the history of inclusion is not innocent, that it has a large critique from social and political justice advocates, activists, and academics, and that we cannot think of inclusion as this happy, benign solution to all of our problems of being human. In fact, as Wendy Chun points out in her body of work in software studies, the technologies that promise us freedom will eventually, recursively, become the technologies by which we will be controlled. And this recursive cycle of technology and freedom constructs structures that often naturalize the suppression, oppression, and omission of human subjects, but especially women, queer people, and people of color, by making them perform the task and labor of inclusion and then be excluded from the histories of computing. One of the best examples of this, going from Wendy Chun to Jennifer Light, is Jennifer Light's material history of the mainframe, which reminds us that from the 1940s to the 1960s, the computer was not just being operated by women; a computer was a woman.
Women with degrees in mathematics and physics worked as computers in a mainframe, doing the labor of counting and correlation, of engineering and repair, which we now relegate to the task of automation. It was only when the computer shrunk and the practices of these women were replaced by automation that the computer suddenly became a masculine object. So that by the 1990s, in three decades, we came up with the rhetoric that women are not good at computing and that we need to include more women in STEM and computing. In this inclusion drive, we not only negate the fact that the computer as we know it today is a product of women's labor, but we also naturalize the idea that women have nothing to do with computing, and so we now need to make extra efforts to get them there. The robotic process that replaced the labor of computing with algorithms not only replaced women, but also made them invisible in the histories of computing itself. The capacity of automation and artificial intelligence, then, is not just in obsolescence but also in erasure, where they provide a context of forgetting in which entire populations could be erased as having a stake in and being constitutive of technological paradigms. So when we talk about inclusion of women in STEM and computing these days, we think of it as a new thing, as something that has to be achieved for the first time. And we naturalize this separation instead of examining what made this inclusion necessary to begin with. So inclusion often appears as a benign mode of operation, but often it takes up the form of erasure of memory and of forcefully separating the subject from the very arena it needs to be included in. So I belabor this simple point merely to point out three things: that we need to stop saying "artificial intelligence and inclusion". And my entire critique is about the and-ness of the formulation.
And I want to make a case that we need to stop introducing separators between AI and inclusion politics, because the removal of the separator interrupts the three tropes that I began with. One, it stops us from thinking of how we will bring inclusion politics to AI, as if everything is fine with inclusion movements; it's not. Two, it stops us from naturalizing the separation between artificial intelligence and inclusion, pretending that the politics of inclusion have not already been brought forward and are in interaction with AI development. And third, it stops us from beginning with AI as such a huge disruption, something so new and startling that we must now change all of our inclusion agendas beginning with the potentials and promises of AI. So side-stepping these three human-technology challenges by embracing our cyborg presence instead opens up a new way of thinking about AI and inclusion: thinking of both of them as dealing with common, complex and intertwined problems, and examining how each offers a series of resolutions and reconciliations which might be attempted in the shaping of the artificially inclusive and naturally intelligent futures that we want to shape. And I'm going to try and think about these common tensions and give two specific examples, just as a way of thinking about AI and inclusion together, as opposed to thinking about what AI can do for inclusion and how we bring inclusion back into AI. The first tension I find at the core, and I'm very glad that Ansaf already brought up John McCarthy, because I actually want to go back to that history in the 1960s of the conversations that were happening at Stanford about the future of AI and what computing was going to do for us. So the first tension I find at the core of artificial intelligence computing systems and political movements of rights-bearing inclusion is the tension between representation and simulation.
In the 1960s, when they were imagining the future of computing at SAIL, the Stanford Artificial Intelligence Laboratory, there was a fundamental opposition that emerged between the political commitments of Douglas Engelbart and the visionary practices of computing proposed by John McCarthy. This is a fascinating conversation which has been very well documented, because Douglas Engelbart, who was both a romantic and a humanist, insisted that the computer was always only going to be a tool, something that augments human existence and allows us to reach our highest potentials. For Engelbart, the computer with its processing power was almost like a magic trick by which human vision can be converted into action. Thus the human was always at the center of whatever we do with the computer, and this was a representation model: the true measure of everything the computer does will eventually be in whether it serves a human good or an idea. For McCarthy, however, the computer was to mimic the human, but only as a point of departure. The human could be the entry-level training corpus, a data pod which can be harvested to make clean data sets out of, but once that is done, the computer was meant to replace and transcend the human so that the task could be completed with more efficiency. Once the computer was deemed intelligent enough to replace the human being, it had no relationship with the external and lived reality of the human subject, and so it created a world order that began with the computer, within which the human being's only job was to be replaced by automation. This is the model of simulation. So I draw a strong and perhaps caricatured opposition between these two, but it's only to make the point that this tension between representation and simulation is a significant one, because it continues to shape how we look at the future of artificial intelligence.
Will AI be only representative of human intention and thus always follow human commands, or will it be simulational and one day start giving commands that the human being will have to follow through on? Surprisingly, this is a choice that inclusion activists have also been struggling with since the post-war reconstruction of rights-based ideas of equality and equity, because our traditional interventions in inclusion are based entirely on representative politics. The human subject is considered organic, originary and ontologically innocent, and thus our ideas of law, justice, rights and safeguards are dependent upon the human being as a representative model which can be extended. The measure of any notion of inclusion is not in the instituting of a policy but in its material consequences as experienced by the beneficiaries of that policy. However, increasingly, inclusion advocates and theorists have discovered that inclusion itself works on principles of simulation. At the heart of inclusion in justice, for example, is the fiction of the reasonable man, the template that determines the efficacy of our politics and whether justice has been served. The reasonable man is an abstraction, an extraction, a consolidation and reconciliation of different data points that create a profile or a demography, to measure the robustness of inclusion by applying it to simulated models which do not necessarily interface with individual practices of uplift or emancipation. So it's quite possible that the reasonable man might be experiencing increased human rights while there might be increasingly more people who suffer a denial of the same rights but do not have enough data included in the making of the model of the reasonable man. So these tensions between representation and simulation are at the core both of artificial intelligence and of inclusion politics, and how we resolve and critique them might offer one instance of thinking of both of them together.
The second point I had, which I'm actually going to skip because I have one minute left, was to think about the tension between possibility and probability, where possibility belongs to the realm of mathematics and probability to that of logic. Maybe later in the Q&A or in the course of the two days I can come back to why this tension is necessary to deal with, because probability, as you know, deals with pattern recognition, whereas possibility deals with speculative fictions, and there has been a choice made, both in contemporary computational practices and in contemporary inclusion politics, to choose one over the other, as if both are not reconcilable at the same time. We can talk about it later. But I quickly want to move to why exactly it is that I'm suggesting that one way of dealing with this might be to stop talking about artificial intelligence and inclusion politics and maybe start thinking about what inclusive intelligence can actually mean. Inclusive intelligence at least allows me to achieve three things, which I'm hoping are both provocations but also reassurances for you to take away for the rest of the conversation. One, that we get to denaturalize the units of measurement both for artificial intelligence and for inclusive politics, showing how the humans and the processes involved are constructed on multiple scales, and that bringing these scales into interaction with each other is important. So we need to denaturalize the terms of the debates: AI good, human bad; AI evil, human fragile. These are not natural axioms that we need to follow through on. Two, that we get to think of both AI and inclusion as part of a larger ecosystem. Instead of thinking of them as tools, devices and objects, maybe we need to start thinking of them as conditions that we live within.
So not things that we work with but conditions that we live within. Thinking about, as Ansaf really beautifully pointed out, the ubiquity of AI and the way in which we actually live within it, not even aware of it. And similarly, the need for inclusion not merely as a referent or an addendum or a checklist which has to be fulfilled, but as something that is continuously negotiated in the conditions that we live with; these need to come together in specific ways. And three, that we think of AI and inclusion as mutually constitutive. Recognizing that the algorithms of deep learning and computational neural networks that inform AI are not biased because they did not know better. They did know better. They made the choice of not taking that up. So it's not as if the AI is innocent and hence making mistakes. It's programmed to make these mistakes, because it amplifies certain kinds of divisiveness and inequality that we live within, which are generally considered as human problems and not as AI problems. So we need to examine how AI is shaped, how the human condition is shaped by the technologies of connectivity and collectivity. And we need to understand that inclusion is not merely a celebratory moment, and AI is not merely something that we need to think through in abstraction. We need to recognize both of them as forces of governance and organization, and their deep-seated historical intertwining with conditions of control and domination both need to be brought forth. Inclusive intelligence, then, perhaps offers us a framework that allows us to look simultaneously at the cyborg futures of, first, the computer, which is the device in all its materiality; and second, the process of computing itself, which is coding, algorithms, protocols, vectors, neural networks, data cleaning, extraction, the correlations by which classifications are made.
And maybe adding a third thing to it: the lived realities of the computed, the people who are going to be counted and the people who cannot afford the risk of being counted within the systems that we are building. And that might be perhaps a new way of thinking about AI and inclusion together. Thank you very much. Thank you. Thank you. Thank you, Nishant, for a wonderful talk. Please take a seat. We are very lucky to welcome a really outstanding panel of experts. So let's start with Sasha Costanza-Chock, a scholar, activist and media maker from MIT. Joe Westby, an expert in business and human rights from Amnesty International. And Philipp Otto, the founder and executive director of the iRights.Lab. Wonderful. So let's quickly start with your responses to Nishant's fantastic talk. Please. Hello, hello. I don't think it's on. Is it on? It's on. Okay. So thank you so much, Nishant. This was such an amazing talk and there is so much to respond to. My name is Sasha Costanza-Chock. My pronouns are they, them or she, her. And I'm associate professor of civic media at MIT. There's so much in there that I'm just going to focus on one point, maybe providing a little bit of context from my own lived experience to talk about this question of the paradox of inclusion in AI systems, and I'll say a little bit more in a minute about what I mean by paradox, within an intersectional black feminist critique of AI. So my personal story, bringing some lived experience into this point that you ended on, Nishant, about the people who are going to be counted and not counted. When you do air travel from the United States, most airports now have a millimeter wave scanner system. This is the scanner that you put your arms up for. It's generating a millimeter-level resolution image of your body and then comparing your body to a statistical model.
But what a lot of people who are cisgender don't know about that system and its socio-technical implications is that when you approach the millimeter wave scanner, the TSA agent, the security agent on the other side, looks at you visually and decides whether they think you're male or female. They have a small touchscreen with a little male icon and a female icon, and they select one. And for myself, depending on how I'm presenting my gender, often they assume at a distance that I am female and select the female icon. And then when I pass through the millimeter wave scanner and come out the other side, on the screen that they've used to select my gender is an idealized image of a female human body with bright yellow highlights across the chest and groin areas, which are the areas of statistical anomaly for what they believe to be a female gendered and sexed body which didn't fit the statistical binary model that the system is using. And what that triggers is another agent who comes and does a more invasive physical search, touching you all over your body, which they call a pat down. It's a very nice way of saying that, right? So I guess the point of this is that I want us to think about how statistical systems are designed to normalize and designed for efficiency; they use that binary gender classifier because it gets a better model, or they think it does, for objects on the body that aren't supposed to be there. But the problem is that this is an instance of how AI is reproducing existing forms of structural inequality, what black feminist theorist Patricia Hill Collins would call the matrix of domination, which is white supremacy, heteropatriarchy, cisnormativity, capitalism and settler colonialism. We have to think about how those are intersecting structures, and you didn't have time to really go into intersectionality, which is why I wanted to bring it in here a little bit more.
So we have to think about how there's an unequal distribution of harms and benefits which is broader and structural but also happens interpersonally, and that's what we're usually talking about when we talk about bias. The framework it calls up is interpersonal bias, but interpersonal bias doesn't exist outside of history, of structures, of institutions, of laws, of governance and so on that are constantly reproducing an unequal allocation of harms and benefits according to your location within that field of white supremacy, heteropatriarchy, capitalism and settler colonialism, and I'm probably out of time. So I guess, in my last 60 seconds, how this works specifically, right, is that in this case the training data sets, first, are biased. In this case they don't include non-binary people in the data set. Joy Buolamwini at the Media Lab is working with the Algorithmic Justice League and is looking at face recognition algorithms and how they get better positives for white male faces; next up is white female faces, next black male, and black female at the bottom, with only something like a 64% true positive rate. So the training data sets are biased, and the classifiers are inadequate: in my case, a binary gender classification system at the database level is inadequate to represent my lived experience of the world, and it has material impacts. In this case a relatively small impact, a body search, but across other systems it could be much greater impacts. The benchmarks are built around hegemonic assumptions and, yeah, gosh, I'm out of time, there's just so much to say.
I think I'll end by bringing into this space that yesterday Coding Rights organized a sort of pre-conference on what it would mean to think about trans-feminist algorithms and artificial intelligence, and we came up with a lot of interesting ideas that I'd love to talk with folks about later. Joana Varon and Yasodara Córdova are here as well and were at that workshop; others were too. But I think I'll just end by saying that in my personal example, one question, the paradox of inclusion that you brought up so well, is: what am I asking for? For us to have a whole bunch of intersex and gender non-conforming people in a big database so that the TSA millimeter wave scanners can more appropriately deal with us? Is it that I'm asking for better training for the TSA agent so that the socio-technical system that constitutes that scanning process has fewer harms for people like me? I don't know, maybe what I'm asking for is that we shouldn't have the millimeter wave scanning system at all, and that we should replace the security theater which purports to be good at stopping terrorist activity but is actually probably mostly only good at lining the pockets of the millimeter wave scanning companies. So the point is: inclusion, exclusion, the right to not be included in any of these systems is an important question, and I'm glad that Nishant has raised it for us, and I hope that we keep it in mind throughout the whole conversation. It's not just: do we want to increase the efficacy of the model? We also need to think about the moments when the people who are going to be most impacted, or potentially harmed, or receive benefits from these models, get to decide whether they participate or not. So: moving from inclusion to self-determination. Thanks. Wonderful. Okay, go on. Thanks, Nishant, for this firework, it was great, I enjoyed it. I have some more questions for you, because I think it's very interesting how you want to describe these in the context of your presentation.
One of the main questions I have, and I'm very interested in what you want to say, is: in what way can we ensure that coincidence and also, for example, serendipity could be part of this AI system, and especially of an inclusive AI system; in what way can we secure it? I think this is a very important point. Another very important point is the question of which words or terms we're talking in: should we, in the context of inclusive AI, talk with the term "social AI"? It's a question. Is it a new term we should use in general for it? Another point which is very interesting for me is the question that if we don't only want to think about it and to talk and discuss about it, we have to prototype, and we have to make political deals and implement it in the political debates. And the question is, from your perspective, what do you think is the public interest in having an inclusive AI? Are there transnational, cross-national similarities, or is this a political bias? We can discuss and talk about it. And finally, I think it's very hard to find the way out of the discussions; one way could be to describe and to prototype very concrete models which we can fight for in the political arena, because we don't have any time. We have to jump, and we have to run, and to decide, and to implement. There's no time to talk about it for five years or so. So it's a problem; it can't be only science in this moment. Thanks. Hi everyone, thank you for the opportunity to talk, and thank you Nishant for a very interesting talk. I work at Amnesty International, so obviously I am still very supportive of the human rights framework and feel that it has a lot to offer in this emerging space of AI and inclusion and the ethics around AI and the challenges that we face. But I think Nishant's talk raises a lot of questions, which I already had, about how we in the human rights field stay relevant in this emerging new context when we're all becoming cyborgs, as you say.
What does it mean to be human and to have rights? And more importantly, how do we make sure that the accountability and oversight mechanisms which are core to the human rights system stay relevant and stay meaningful in the context of AI and all of the challenges it faces? You touched on the governance points a little bit already, but I think what I would go back to is the points that were made in the previous discussion around power and political economy in this space, and how we effectively challenge some of those people and companies and governments, the small handful of actors who control the innovation in the AI space, in order to make sure that we do address some of these challenges which have been articulated very well already around labor rights, working rights, discrimination, privacy. And I think that is really going to be the fundamental challenge here. I hope and I believe that there are ways to address some of the inherent technological challenges in terms of challenging data bias, and we've already heard of some of those efforts in the previous discussion. But how do we ensure that these oversight mechanisms and safeguards are actually put in place in ways that empower people around the world to be able to use AI in a way that is good for society and which doesn't lead to negative human rights impacts? And for me, this cannot be left to the goodwill of the handful of companies and actors who are in this space. There has to be some form of legal protection, but what does that look like in this space? What are the governance systems? That, for me, is a fundamental question. Wonderful, thank you for your responses. We have time for a couple of questions, but using my prerogative as moderator I'm going to give you a slight provocation. So I live in China, and I was wondering how we can include people who by nature may not look very inclusive to us.
So in a sense China is a very prominent force when it comes to AI and has a lot of implications for its surrounding region, but how do we reconcile the different concepts of individual rights or individual freedoms and the common good, or human rights versus harmony the way that it is seen there? How would you include people who may not be as prone to be inclusive, and still foster that kind of environment? Oh, am I supposed to answer? Okay, all right. Thank you so much for these very thoughtful and provocative questions. So one of the promises I made to myself a few years ago was that I'm not going to present any more things that I have completely thought through, because it's pointless. So if you think that I have all the miraculous answers to your lovely questions, you are in for a big surprise. But can I draw a couple of things together just to... Quickly? Yeah, very quickly. So I think Sasha's point of moving from inclusion to self-determination already hints at the new language and vocabulary that's required in order to start thinking about, within the social landscape, what exactly it is that we're looking for. Because at least drawing both from feminist, queer and post-colonial theories and activism, I'm significantly aware of the fact that language is such a tool of domination, control and contamination, so to speak, that taking any language as naturalized is already a problem. And so that's one of the things that, at least when I wear my academic hat, but also my activist hat, you really want to look at: how do I keep on denaturalizing the scripts of AI versus society, or AI and society, that are presented to me, and what touchstones might be able to emerge because of this. But I particularly love the question about what we do with people who are not like us. Because it's essentially that question. It goes back to the idea that there are statistical models of normalcy which are already set into place.
And this is precisely why I think the kind of strategies that we would require, to be tactical and to be moving with this, is to move away from the idea of universalism. The post-war reconstruction of the world was debunked by the 1990s as against universalism. That's something that we didn't want. We didn't want it in theory. We madly celebrated post-modernism. There was a reason why we did away with grand theories of the world. And with technology, but specifically with AI, because it seems to be such a non-discriminatory, benign, laboratory kind of a force, we are going back to developing grand theories of the world which are actually controlled by a very small number of actors who are defining what the world should look like. And it's time to fracture artificial intelligence into multiple repertoires, but also into questions of affect and empathy, because that's what's missing, right? Because if I sit here right now and I tell you that I've just gone through a massive surgery and I'm actually wearing a steel-reinforced corset which hurts to sit here in, I'm hoping that you will deal with me differently, whereas an artificial intelligence algorithm is not going to do that. This capacity of how I acknowledge you, not for the data set that you can produce, but for that thing of yours which refuses to be counted, right? The thing about you which I cannot understand needs to be introduced both into policy and into engineering practices of what AI could look like. So I would wrap it up there, then. Wonderful, let's open it up for questions. Please, please introduce yourself briefly.
Listening to you, I'm hearing a very static notion of human rights, and maybe part of the problem is we can't anticipate how we're going to interact with these technologies over time and how our vision of human rights will change over time. But I just want to ask all of you: since AI is built on data about human beings and from human beings, perhaps the answer to moving in that direction is to make the algorithms public. In other words, no proprietary algorithms. Is that an answer? Well, I mean, I would agree with you that we need to be thinking about human rights in a dynamic way, not in a static way, and seeing how human rights can evolve and stay relevant to meet these challenges. And touching on the point which was raised, this is going to be even more difficult in a context in which AI technology and investment in technology is racing ahead in China, where they obviously have a very different approach to some of these issues. That's not to say that there won't be ethical models and ways to address some of the concerns, but it may not be from a human rights perspective. In fact, it's obviously unlikely to be from a human rights perspective, and what does that mean for questions around universality, as you said. But sorry, what was the second part of your question? The concept that perhaps one way to help make algorithms more inclusive is to make sure that they are public and constantly debated publicly, which is underpinning this discussion, but... Yeah, well, just briefly on that point, I think transparency around AI and algorithms is going to be core to addressing these issues, and I would agree with you that that's definitely a good way forward. Practically speaking, we're going to come across obstacles in the face of companies with proprietary systems and issues around confidentiality, but these are hurdles which we have to try and tackle as part of this conversation if we're going to get anywhere.
I'd just add to the response: I mean, absolutely, but I think it takes more than just making the algorithms public, because the support of human decision making runs through the algorithm, the training data, the benchmarks that you're trying to reach, and so on and so forth. And since these are machine learning systems, they're constantly modifying over time as you feed them more data, so it's not a static thing, it's a dynamic system. So one of the interesting, very specific proposals is to do ongoing intersectional audits of the distribution of false positives, false negatives, and so on and so forth, and there are some models for what that looks like. Joy Buolamwini with the Algorithmic Justice League has developed an intersectional benchmark dataset for facial recognition, and she's proposing testing of the most widely used facial recognition systems against the distribution of false positives, false negatives and so on. The problem with that goes back to this inclusion question: it's like, okay, great, so we're going to make facial recognition systems which will do a much better job at identifying the faces of all different types of people, across all the diversity of faces across planet Earth. That solves one kind of problem, but it doesn't necessarily solve the other; it depends what these systems are being used for. Wonderful, next question, please. Thanks, Matthias from AlgorithmWatch in Berlin. Nishant, I have a question for you. When you said that we should stop thinking of AI as some kind of innocent entity, I think the flip side of that would be that we need to ascribe agency, autonomy, free will, what have you, to AI, and what is your response to that? Thanks. No, I really don't want to be anthropomorphizing AI and now kind of extending the fold of agency and free will to it.
And thank you so much for letting me elaborate on it. I'm drawing more from the kind of work that people like Wendy Chun do, who say that at the heart of AI right now is the principle of homophily: like things are clustered together, and that's how pattern recognition forms. That has been taken as a naturalized default and the only way by which artificial intelligence systems can work. Whereas Wendy in her work actually shows how, historically, and this was the distinction I wanted to make between probability and possibility, at the heart of computation was actually the promise of possibility, which is heterophily: that unlike things will continue to coexist in different kinds of diverse systems. But because of a lack of physical computation power at a specific time, this was actually erased from the history as a possibility we could even pursue; the mathematics of that possibility required the kind of computational power that was almost impossible to harness at that time. So when I suggest that we need to think of AI as not innocent, I'm suggesting that the scripts we have been given, about what an artificial intelligence system is, what its implications are, but also what its networking possibilities and its materialities are, it's time to start questioning them, because these were not just innocently made. They were conceived as what Sasha called socio-technical systems, and there is a very complex matrix of domination at work, which is why it's better to think about living within AI as opposed to living with it, and to try to understand the different layers it operates across. So I hope that helps explain it better. Moving away from logic and back into mathematics is perhaps one of the options there.
And it might also inform new forms of engineering, because within engineering curricula right now, logic is the only god, right? That's the only way by which our systems are being designed. So it might be interesting to see, even at the disciplinary level, what happens when I teach my students physical computation or engineering, for example, and refuse the idea that logic is the only system through which efficiency, efficacy, or final productivity is going to be judged.

Wonderful. You will have time to ask our panelists a lot of questions afterwards, but I would like to wrap up with a question, and I would like to ask you to articulate your vision of inclusion: what do you think it should look like, in two to three very short sentences, and then we'll wrap up the session. Sasha, do you want to start? Oh yeah, sure. Joe, please.

I suppose my vision of inclusion in relation to AI is having systems in place whereby everybody who is touched and affected by AI, in this evolving ecosystem in the global economy, has a way to challenge the way in which AI is being developed and produced, and that there are proper mechanisms in place to hold the power holders accountable for the ways in which AI may actually impact people's lives.

Wonderful, thank you. Phillip?

Yeah. My vision of inclusion, I think it's very easy: it's a vision, and therefore it's a concrete international standard on the different ethical, legal, political, and social topics, on which we have an international definition, and on which it is clear what is and is not possible to change.

Yeah, so to me it's a two-part answer. The first part is that we figure out how to design AI systems so that the way they distribute harms and benefits does not reproduce the existing forms of structural inequality that are constituted by the matrix of domination we've talked about: white supremacy, heteropatriarchy, capitalism, and settler colonialism.
And the second part, and this comes from feminist theory, is that in addition to looking at distributive outcomes that are just, we also want to talk about the procedural side, so that there is a just process in the design of AI systems, and that means that the people who are most impacted by any particular system get to be at the table in the design process.

I think I will draw from two of my favorite authors who write from Taiwan, Ding Naifei and Liu Jen-peng, who talk about how technological systems demand that the human be continuously in a condition of speech. If you cannot speak, you are silent, and silence is merely a pathology which needs to be cured. And Liu Jen-peng and Ding Naifei present the idea of a politics of reticence: having the power to speak, and then exercising the choice not to speak to that system in the language it demands. So if AI demands data, maybe the human can start producing something which is data-plus-plus. Let's look at other forms of human communication, such as storytelling and affective communication, things which cannot easily be quantified, and from there demand that the AI recognize them, as opposed to erasing them as noise, as not really good data within that system.

Wonderful. I would like to thank our respondents for a really fantastic session. Thank you very much. And I would also like to thank the organizers for having us, and for allowing us more time for discussion. Thank you.