Okay, so we are about to start our first deep dive, which is called Advancing Equality in the Global South. Before we start, I thought I would introduce you to activity number two, basically the next envelope in your wonder bag. And one addition: everything that is not in an envelope in your wonder bag you can play with, and you can eat, at any point in time. So if you have pipe cleaners, you're allowed to make hearts and shapes and robots, and the little things, I forget what they are called, you can use as decoration.

Okay, so for the deep dive, the activity looks as follows. It's a yellow envelope, and if you open it, it will look like this. The goal is that during the deep dive you help us come up with questions for tomorrow's breakout groups. Tomorrow there are two breakout sessions, each with five groups; I hope you all signed up for one each, because that will help us determine which groups we put in which room. As you can see on this sheet I'm holding in front of me, there are going to be ten breakout sessions in total, and the topics are here at the top. You can also read more in the program. You can write down questions that you think might spark a conversation in those breakouts; particularly if you are torn between two breakouts and not sure which one to go to, at least put your question there. Then overnight we're going to gather all these post-its and give each moderator of a breakout session all the questions you wrote down. So far so good? If I didn't explain it clearly, you still have the information and overview in front of you.
Okay, so with this, I'm delighted to introduce yet another very dear friend and colleague. Chinmayi, please join me on stage. Chinmayi runs the Centre for Communication Governance at National Law University in Delhi, and she is also on the executive committee of the Network of Centers. Welcome, Chinmayi.

Thank you, Sandra. I would like to welcome on stage the four firestarters for this deep dive session. That would be Rehema Baguma from Makerere University, another dear friend and an inaugural member of the global Network of Centers; you heard from her a little earlier today. That's K. S. Park, who is a professor at Korea University Law School. I'd also like to invite Jenny Bernstein from UNICEF to join us, and Kathleen, who has been kind enough to take Nina's place at short notice, but who is truly exceptional; I'm looking forward to hearing from her, as I'm sure are you.

Just to introduce you a little to the deep dive sessions: all that we are going to do here is offer provocations, and then the idea is really that we make it conversational. You're invited not just to ask questions or make comments, but to engage with what you heard here.
This is a little different from the sessions we had in the morning. Please feel free to make statements and to offer your perspectives, because the deep dive is really about all of you talking as well. So we're going to begin with Kathleen, who is a data scientist from Africa's Talking, and then move on through the session. But if anyone has anything pressing to say, I'm looking around: this is conversational and it's your space, so we can always stop, have a conversation, and then move on to the next firestarter. Kathleen, if I can invite you to begin. Thank you.

So, as she mentioned, my name is Kathleen, and besides being a data scientist at Africa's Talking, I co-organize a community for women in machine learning and data science, the Nairobi chapter of Women in Machine Learning and Data Science. I'm passionate about the democratization of machine learning and about helping individuals who would like to grow their skills in the field by informing their learning journeys, connecting them with peers they can collaborate with, and finally connecting them with opportunities. Something that has come out of my experience running this community is the fact that, despite being a community for women.
We still have more men than women attending. If you're a guy and you want to attend, we wouldn't lock you out, but I got curious, because it is Women in Machine Learning and Data Science, and yet repeatedly we would have men coming over and over again, while women would come once and then not come again. So I got curious about where the women are, and that conversation is what is going to inform my contribution today: the search for where the women are in the bigger conversation of inclusion.

From my experience of finding out where the women are, I think it starts, first of all, with social and cultural beliefs. I can speak for Kenya in particular. I went to a girls-only primary school, and I remember we were repeatedly told that, because we are girls, we are stronger in languages, so we should put our focus there, and that would make up for our grades in the sciences. Fast forward to high school, where there is a national examination at the end of the four years, and some subjects are optional while others are compulsory. Schools can also choose which subjects to offer their students. So you find that, because girls are said to perform better in certain subjects than others, schools, to keep their averages high, will offer some subjects and not others, for example physics. And so you have girls going through high schools that don't offer physics, and then, when it comes to choosing their courses at the university level, they cannot enroll in any engineering courses, because physics is a prerequisite. Then fast forward to university: I studied mathematics and computer science for my undergraduate degree, and out of a class of 55 we were four girls. And this is the point at which people start to ask themselves: where are the women? Why are they not enrolling in more engineering courses? And, you know, we left them back in primary school, when we were telling them that they perform better in languages, so concentrate on that.

Besides that, zooming out to the greater Global South, from working with the community I find that it's not a lack of people with innovative and creative ideas. I see a lot of AI innovations that remain pet projects, because we have individuals who have day jobs and data science is just something they do on the side. So it occasionally brings me to question: if some of these individuals were in a different context, would they have already gotten funding? Would their ideas now be a startup? Would they now be building teams and fleshing things out? Just to add to that, I was having a conversation earlier today with someone about the accuracy of Google Translate with local languages in Africa, for example Kiswahili, which is one of the national languages in Kenya. We were saying how some European languages that have fewer speakers than some of the local languages in Africa have better performance with machine translation on Google Translate. I know one particular individual who has tried to do work in NLP with Swahili, but at the end of the day it's a pet project on the side; he hasn't really gone far with it. So I guess my contribution is, first of all, that inclusion needs to start much, much earlier. And then, for the individuals who funnel down and do become practitioners in machine learning and data science, funding would play a big role in encouraging some of these innovations to grow further. Thank you.

Thank you, Kathleen.
I was telling her, when I heard a version of this a little earlier, that if the funding and so on had existed earlier, I would probably want to be Kathleen when I grew up. That was a really interesting perspective, because it takes into account time, the stage at which inclusion needs to start happening, but also an intersectional perspective: although this is a panel on the Global South, even within the Global South there are communities of people that end up being left behind much faster.

I want to move on from Kathleen's narrative to K. S., who has worked on technology from points of view that intersect in interesting ways, because he is not only a professor of law; he also runs Open Net Korea and has been very actively engaged in technology litigation. So if I can invite you to chime in.

Thank you. I have two pieces of good news for those who are concerned about the exclusionary effects, or exclusionary risks, of AI. Number one:
It is us, human beings, who control the output of AI. That's the first piece of news. The second is that we have built-in legal systems that rein in the dominance-seeking, or monopoly-seeking, aspects of AI. I'm going to explain each in turn.

The first one: what people often forget is that AI is still a program designed to assist what people want to do, by automation. Now let's think about Microsoft's chatbot Tay. Many people think it shows the discriminatory or exclusionary risks of AI, and many think it gives an advance view of what AI may do when it is deployed for, you know, insurance underwriting or recruitment, or other platforms or situations where human beings are evaluated. But if you think about Tay, the problem there was that the instruction given to Tay was too low-level. I would have to ask the programmers, but probably the instruction given to Tay was to become the most popular account on Twitter. Given that goal, it just took the shortcut of imitating the sensational online behavior of racist or sexist Twitter users. Now, we can address this problem by giving a higher-order instruction to the chatbot, for instance: become the most popular account on Twitter that does not discriminate against vulnerable groups, and, you know, define vulnerable groups. Or we can give a much higher-order instruction, like: become just. Or give multifaceted instructions that weigh other values together, rather than just the number of followers or the number of retweets. What I mean is that if there are moral or ethical constraints that you want to build into AI's behavior, you can simply hard-code those constraints into the AI. And we are not new to this.
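The hard-coding idea the speaker describes can be sketched in a few lines. This is a hypothetical illustration only, not any real chatbot's code: the `Candidate` fields, the scores, and the toxicity threshold are all invented for the example. The point is that a hard constraint rejects a candidate outright, no matter how well it scores on the original objective.

```python
# Minimal sketch: hard-coding an ethical constraint into an objective.
# All names and numbers here are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    engagement: float  # predicted likes/retweets (the "popularity" objective)
    toxicity: float    # predicted harm score in [0, 1]

TOXICITY_LIMIT = 0.2  # the hard-coded ethical constraint

def objective(c: Candidate) -> float:
    # A candidate that violates the constraint is rejected outright,
    # no matter how much engagement it would earn.
    if c.toxicity > TOXICITY_LIMIT:
        return float("-inf")
    return c.engagement

candidates = [
    Candidate("inflammatory hot take", engagement=900.0, toxicity=0.9),
    Candidate("helpful thread", engagement=300.0, toxicity=0.05),
]

best = max(candidates, key=objective)
print(best.text)  # the constrained optimum, not the raw-engagement optimum
```

Under the pure popularity objective the first candidate would win; with the constraint hard-coded, the second does. The open question the speaker raises next is who decides the threshold.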
I mean, Asimov's three laws are one such attempt; they come from an author's hard thinking about the minimum moral norms that robots must follow. The laws are probably simple because, you know, he was not a philosopher; he just wanted to write a novel. We can build a more intricate set of norms that can be hard-coded into AI. The problem is that we human beings ourselves don't know what ethical norms we all agree on. So in the end, AI is not going to replace human thinking, but it will enforce human thinking, as long as human instructions are clear and non-conflicting.

Look at Moral Machine, the experiment MIT is running to try to decide what a self-driving car should do when it sees pedestrians crossing the street. They're not asking the machine to make the decision; they're crowdsourcing human decisions to decide what the best driving behavior of a self-driving car is. Was it last year that Mercedes was found to have decided that they prioritize the safety of the driver over pedestrians, whenever stopping the car to avoid the pedestrian threatens the safety of the driver? That decision was not made by a machine.
It was made by the officers of Mercedes. So, you know, even if we use AI for insurance underwriting, loan underwriting, police deployment, or recruitment, yes, it will carry the risk of deepening the discrimination inherited from the past. But we can change that by hard-coding; we can change the result. And we have faced those situations even before AI was developed to the level it is at now. Strictly merit-based admissions, or strictly merit-based government contracting, resulted in exclusionary effects on African-American students and contractors in the US. So the US adopted affirmative action, because they didn't like that result, to make society more equal. And we can just build those into AI. So I don't think it is AI that is the problem; I think it is the humans, who have not agreed on what the fair result is, what fair result we expect from algorithmic decision-making.

The second piece of good news is that we already have a legal system that can rein in the wayward development of AI, and that is data protection law. Data protection law applies at the stage where the data is collected, and deep learning or machine learning is made possible only by big data. Is this true? I don't know; I'm assuming it is true. And what is big data?
Big data, you know, the three V's: velocity, volume, variety. The third V, variety, means that to do sufficient big data you have to collate, you have to merge, different databases, and usually these databases were built for different purposes. Under the GDPR, using a previously built database for a different purpose that the data subjects never agreed to is a violation of data protection law, and merging two databases built for two different purposes is already a violation of data protection law. Especially since the GDPR defines pseudonymized data as personal data, there is a humongous legal impediment to doing any big data under the GDPR. And I don't think it is a coincidence that data protection regulation sets up an impediment to big data, and therefore an impediment to AI that feeds on big data. If you watch the movie Ex Machina, the protagonist, the programming guru, later reveals how he built the AI. Does anyone remember how? Well, he got the data from the Internet, right? It is through collective intelligence, through data produced and made available by people in general, that the AI was made possible. And that means that people will continue to claim some sort of public-interest obligation on the data controllers: the greater the data processing activity, the greater the social good they will demand. So those are the two constraints, I think, that we will work with going forward in developing AI, and they will allay much of the concern about the exclusionary effects of AI. Thank you.

Thank you so much. I want to pause here, because you've said a bunch of provocative things that at least I would have the urge to argue with if I were not being a good moderator. Does anyone want to chime in at this stage?
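The purpose-limitation argument above, that merging two databases built for different purposes violates data protection law, can be sketched as a simple check. This is a hedged illustration in the spirit of the GDPR's purpose-limitation principle, not legal advice or real compliance code; the dataset names and purpose tags are invented for the example.

```python
# Sketch of a purpose-limitation check: refuse to merge two datasets
# unless the data subjects of BOTH consented to the new shared purpose.
# Names and purpose strings are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Dataset:
    name: str
    consented_purposes: set = field(default_factory=set)

def can_merge(a: Dataset, b: Dataset, new_purpose: str) -> bool:
    # Merging repurposes both datasets, so the new purpose must be
    # covered by the consent attached to each of them.
    return (new_purpose in a.consented_purposes
            and new_purpose in b.consented_purposes)

health = Dataset("gov-health", {"public-health-research"})
claims = Dataset("insurer-claims", {"claims-processing"})

# Neither set of data subjects consented to underwriting, so the
# merge is refused.
print(can_merge(health, claims, "underwriting"))
```

The speaker's point is exactly that this check fails for most big-data collation: the variety V usually means combining data collected under different, narrower purposes.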
I think you've been offered quite a lot of food for thought; just checking. Okay, so then we'll continue with the next two firestarters and come back to you, if that's okay. Okay, so Rehema, you had a lot of very interesting things to say about AI's potential for inclusion, specifically because I know that you are coordinator of the development informatics research group at the university. So if you want to either respond to what you've heard, or add to what both the previous speakers have said.

Thank you so much. The two keynotes talked a lot about a number of challenges of AI and inclusion, especially around the four dimensions of AI and inclusion, and they also talked about possible solutions. So I'm going to share my thoughts about possible solutions to the challenges of AI and inclusion, particularly in my context as an academic in a developing country; I come from Makerere University, in Uganda.

The first possible solution to the challenges of AI and inclusion is ensuring diversity in the development teams of AI applications. This is necessary in order to address the diverse differences in culture, religion, gender, ethnicity, etc. I came across a case on the internet about a vacuum robot in South Korea that ate the hair of a woman who was sleeping on the ground. Apparently sleeping on the ground is a common practice in South Korea, but the developers of the vacuum robot didn't take the differences in culture into consideration. The second possible solution is developing industry-wide standards to evaluate AI applications around the four dimensions of AI and inclusion mentioned earlier. Then the third possible solution to the challenges of AI and
inclusion is integrating participatory design, what some people call co-design, into the training of AI experts. This integration of participatory design will expose AI experts to the benefits of involving stakeholders in the development process of AI applications, as well as enable them to learn how they can effectively involve the different stakeholders in the development of AI applications. Then the last possible solution is to set up, as well as support, more research and development initiatives that are based in the South, in order to increase the amount of research coming out of developing countries, as well as applications developed in the South and by Southerners. Thank you so much.

Thank you, Rehema; that was very well within time. Jenny, Rehema has offered a lot of suggestions about ways in which we can move forward, and I know that you have also been thinking about concrete ways we can deal with this inclusion problem, if I recall right. So if you just want to introduce that to us.

Certainly. So, thank you so much. I think we've heard a lot about how important it is to do no harm, and the different principles and considerations to that end. Something I wanted to continue on, as far as the good news goes, is: beyond just doing no harm, how can we leverage the transformative power of AI tools to do good? Both at an individual level, by getting more people involved and democratizing it in that way, but also by bringing the real scaling ability of this technology to effect positive change at the country level or even the international level.
To that note, one way UNICEF has always thought about leveraging technology is to provide new ways of understanding where problems exist, so that you can then tailor programming and services to cater to those needs. Real-time information is obviously critical for us to know how to provide services and who needs them. Currently, so much of the information around education, nutrition, health, employment, and so on is available only in more traditional ways: a census, say, comes around every five or ten years, and even when the information is available, it may no longer be relevant, and it might not be truly representative. Some ways we're thinking about addressing that with AI tools include using satellite imagery and AI technology to map where schools are; Liberia is our first use-case context. Mapping out where a school is means that we can then understand where there are gaps in education offerings, and where those schools may or may not have the connectivity that is so essential to make sure they're offering quality education and that kids there have access to real information. So that's one way we're doing it. Another thing we're looking at is a partnership with a telco in Iraq, to see if there are patterns in people's mobile data, the way they text, the way they use mobile financing, the way they place calls, that correspond with demographic realities, so that we can have census-like information on where poverty is concentrated. That can feed into the hands of people with the ability to use that information for good, and to tailor interventions that correspond with real-time needs. And there are other projects we're looking at that are pretty controversial, so definitely thought-provoking for this audience: anything from using image-recognition systems that might be able to interpret someone's image and give feedback as to whether that person is suffering from malnutrition. There is really great potential for that to
feed into positive nutrition programming, but also huge concerns around data and privacy and other violations of ethical conduct. So in thinking through all of the ways we need to do no harm, we also want to think about doing good, while recognizing that these are tools and that at some point they will fail. Asking the question of who they will fail for, and how we can then redress that failure, continues to be really important.

Thank you, Jenny. I now want to invite you all to engage with these four people, or to offer your own comments and thoughts in this vein. We've heard a couple of very thought-provoking sessions in the morning, and now you have these four input statements, which were very interesting to me. So I was just wondering whether any of you has anything to add, or questions. As we wait for the first person to raise a hand: if you were walking in late, the activity for the session is the yellow envelope, so maybe take it out and do that as well. Please, if you can introduce yourself. Thank you.

I'd better not forget my question now. We've spoken a lot about categorization, and we've spoken about making explicit the ethical norms that are coded into data. I'm wondering, while this is an admirable goal, and I think this is the way forward, what is not explicit yet that is preventing it from happening? It's something that we want to do, but if it's not happening, why is it not happening, and what are the steps we need to take in order to get there? Because there's still a lot that happens without the assumptions being made explicit.
So, a question about assumptions and coding them into norms.

I think the profit-seeking motives of, for instance, insurance companies will leave them no alternative but to not adopt those ethical constraints. But I'm not really an expert in this area. I wish there were an industry expert here who could testify to how AI is actually used in policy underwriting, or somebody from the banks who could tell us whether AI is truly being used predominantly in loan underwriting. I think we need more first-hand testimony before we start calling them out: why are you not doing this? I'm not 100% sure how many companies out there are actually doing machine-based underwriting yet. You read about algorithmic police deployment, but which police department has actually begun doing it? Much of this talk, I think, precedes the actual deployment of AI in these areas. So, is there anyone in the audience with information on that?

Just to quickly set the record straight: predictive policing has dictated police locations and movements and resource-allocation priorities in a number of American cities. So in the predictive policing case, your point is well taken, but it is already a system that is being deployed. Thanks.
I just want to add on to something that you mentioned, which is how we need more regulation in this space; I've heard that quite a lot with all of the AI and ethics issues. I'm currently working on a research project where I'm talking to industry experts to understand what challenges they're facing in building more ethical technology, whether that's with AI or with other kinds of emerging technology products, and there seems to be a massive gap there. Even teams that really want to do good, and want to build systems that are good for everybody, are looking for guidelines. So how do we bridge this gap, or have better conversations between academia and industry? Because they're ready, and they're really seeking more knowledge. How can we have that conversation faster?

Hi, I'm Vidushi from ARTICLE 19. Thank you so much for your comments; they've made me really think through some of the work that I do. I just want to share two thoughts I had, and maybe put them out to the audience to think through. The first was that when we talk about narratives around AI, I wonder how the dominant narrative plays out across contexts, and in this context specifically, across the Global South. To give you an example, I've been doing a literature review of how AI is perceived in India, looking at academic papers, op-eds, news articles, and the like, and I found maybe four critical articles, but some 220 articles about the potential of AI: there's so much that can be done, it's going to solve a lot of problems. I think this is because AI is seductive, in the sense that it offers efficiency across large populations. And then you couple that with the idea of the datafication of people in the Global South.
I think there's an interesting problem that arises there. To contextualize: the Aadhaar system in India, for example, is as popular as it is because it offers people the opportunity to be identified, to say that you exist to the state, that you are visible to the state. So there's a race to be identified. But there are also people who are systematically hidden and conveniently forgotten within larger systems. When you take those two norms in context, it becomes a bit worrying to think about fairness in AI in the Indian context. And I just wonder: is it a technical solution, or is it a social problem? I think you could argue either way, but I would be interested to hear what you think.

It's also super interesting that people feel the need to ascribe good and bad attributes to technology. Sasha, I know that you had a question or a comment.

Yeah, it was a comment that actually follows on from what you just said, which is the difficulty of a binary reading of these technologies and their deployment as good or bad. The example I wanted to bring to the table, thinking about the Global South, was a presentation that some people from Palantir Technologies did together with people from Human Rights Watch at MIT last year. Palantir is an intelligence firm that developed sophisticated algorithms for risk assessment in financial markets, and then they worked closely with the US military on tools used in active conflict zones: risk systems for soldier deployment, for targeting, for nominating people to be added to the lists for the targeted-assassination program via drone. So Palantir is a company that was involved in that type of stuff. And they came to MIT and did a presentation with people from Human Rights Watch, and someone who was also involved with the UN, talking about a new partnership.
They were developing this partnership around refugee assessment, logistics management, and distribution of aid. So the question I asked them there was: wait, you're partnering with this company that is active in military zones, literally nominating people to be killed, and you think you can trust them when they say they're not going to use any of the data you're gathering in refugee camps, many of which are filled with people displaced by US-backed wars of empire and petroleum domination? You think they're just telling you, oh, we signed an agreement with you that we won't share this refugee-camp data with, you know, US intelligence, and you believe that? So the point is, to complicate the good/bad binary: you might do partnerships with firms that are selling AI services; UNICEF might sign a contract with somebody like Palantir. But I just think we have to be really, really careful not to look only at the project level. We have to keep bringing back in history and the larger geopolitical scale when we analyze how these systems are being deployed, how they are being connected with each other, whether the data is being shared on the back end, and to what end. Because something that might be good in the short term, in a project deployment in a refugee camp, might also be training an algorithm that is illegally nominating people for state-backed murder.

Yeah, thank you very much. Last month, a company called Mattel pulled a product that was meant to launch next year, out of privacy concerns.
It's called Aristotle, for those of you who read the articles in The New York Times. In the previous session, somebody was saying this AI just keeps getting data, so we keep giving it more. It looks like what this product did was start with a kid from when the kid was born and capture everything the kid does, learning along the way so that it has all the data; the child is learning, and that improves the AI. And then you can actually see the implications if it's your child and there's a camera following and recording everything the kid does. So is that the right way? If we want to be included everywhere, do we ask for cameras, and for everything we do to be recorded, so that it can be included in the AI? That leads me to my second question, which is that I want to get the professor's legal consideration: would it help if we obligated all AI to have something analogous to a panic button, such that there are certain no-go areas that the AI shall never go into? For example, it has to have an override on issues that are not up for dispute, for example child protection, saving lives, upholding human rights, gender parity, certain things. What would be the consequences if we obligated, or required by law, certain panic buttons so that no algorithm will go into those areas, as a way of trying to include equality not just in the Global South, but worldwide? Thank you.

So my question is about how we can empower talented engineers from the Global South to develop AI applications that actually amplify culturally specific values, instead of just being sucked into really big companies that develop platforms and merely adjust them to specific countries. How can we really cultivate a good culture of engineers in the Global South that could create a robust AI environment that could really benefit the countries they come from? That speaks to some of the things that Kathleen was saying.
Rehema, you wanted to respond? How can we empower good engineers in the South to develop context-fitting AI applications? I think it has to do with training them in the South itself, but also supporting them to carry out research within that context, and to develop those applications from that same context.

Adding to that, I'm just going to say again: funding. I meet a lot of people who are brilliant and have brilliant ideas, but at the end of the day they have to keep the lights on. So are you going to build a brilliant idea that could take off big in five years, or are you going to pay your bills? So I think funding. I'd also like to add to the comment made earlier about the narratives around AI in India. It's actually interesting that you say that, because I've thought something similar about Kenya: up until three or four months ago, conversations around AI together with ethics and regulation and inclusion were not something I came across often, and I think I attend quite a few AI and data science meetups around Nairobi. So it's interesting to see that we're still in the excitement phase, where we see all the possibilities that could come with it, and we aren't at the point where we are seeing the dark side of things. I don't know if we have to wait for something horrible to happen before that happens, but I wish we could introduce the two together, so that as people are getting excited about AI, they're also cognizant of the dark side of all that could come with it.

I think training engineers here is not as easy as training engineers in other areas, because it does require a society-wide discussion of what is ethical.
I mean, yes, an algorithm results in some unintended consequences, some unintended results that shock people's ethical thinking, but much of those results are unintended. Sometimes a strictly merit-based or strictly number-based algorithm can produce shocking results. What do you do when you run the algorithm and you get something bad? Is the result bad, or are we wrong? I'm not being explicit about which cases I mean, but some of you probably know what I'm talking about. There has to be a society-wide discussion about what is ethical. Only then can we train engineers on what to do, and it will not happen at the engineers' level; I think it has to happen at the executive level, where this society-wide discussion can be looped back into the building of these systems.

And since I have the mic, let me address some other comments, Sasha's and yours (I forgot your name), about designating some no-go areas for AI. I don't particularly agree with it; that's why I'm not speaking that passionately about it. But it's already happening in Korea. We just adopted a database-merging guideline: the government designated six or seven agencies, and you can ask those agencies to merge two different databases that were built for different purposes. For instance, if you want to merge refugee camp data with law enforcement data, you bring the data over to the agency and the agency provides the merging service. And who do you think the first set of customers was? Insurance companies. They came, they asked other companies that have data to come to the agencies, and they received merged data. Now, this is a violation of data protection law, because those databases were built for different purposes.
They are now being merged, and Korea has one of the strongest data protection laws in the whole world, so there's a big controversy. In this case it was the merging of government health data with insurance data, and civil society is demanding that government health data never be merged for any profit purpose, or merged with any data collected by private companies. I don't agree with that. A lot of the time, private companies do valuable service; a lot of the time, private companies advance science and technology for the greater good of society. We should not outright ban private merging of databases, or ban the merging of government data with privately crowdsourced data. But I think the public's claim to some ownership of big data will continue, and it will keep increasing through this window of data protection law. I've talked to people in Europe, who have the GDPR, and they don't seem to have a clear answer to how they're going to get around their protection regulation when merging different databases. If we keep avoiding that issue, we cannot have the kind of control you're talking about. We should confront that issue and discuss it openly and frankly: a mechanistic application of data protection law will simply outlaw big data, period. So we have to cut a compromise, and through that compromise we can talk about the outer boundary of AI, or some no-go areas. Thank you.

I was hoping the data protection angle would get fleshed out in the manner it just has. I know that Nagla has been waiting.

If I may, I just want to go back to the point about engineers. You're absolutely right, but it is not enough.
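The purpose-limitation rule Park invokes (databases built for different purposes may not be freely merged) can be pictured as a check that runs before any merge is allowed. This is a hedged sketch only: the field names and the compatibility rule are invented for illustration and do not model Korea's actual guideline or the GDPR's precise conditions.

```python
# Illustrative purpose-limitation gate before a database merge.
# The record structure and the rule itself are hypothetical.

def can_merge(db_a: dict, db_b: dict) -> bool:
    """Allow a merge only when both databases were collected for the
    same declared purpose, or when both carry consent for secondary use."""
    same_purpose = db_a["purpose"] == db_b["purpose"]
    both_consented = db_a["secondary_use_consent"] and db_b["secondary_use_consent"]
    return same_purpose or both_consented

# The contested case from the discussion: health data vs. insurance data,
# collected for different purposes and without mutual secondary-use consent.
health = {"purpose": "public_health", "secondary_use_consent": False}
insurance = {"purpose": "underwriting", "secondary_use_consent": True}

print(can_merge(health, insurance))
```

Under a mechanistic rule like this, the health-insurance merge is simply refused, which is exactly the "outlaw big data, period" outcome Park warns a compromise would need to avoid.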
I think the problem is more the ecosystem. That's precisely why being here in this community, and approaching the matter from a multidisciplinary, interdisciplinary perspective, is important. At the end of the day there are engineers and there is funding, not enough, so there is definitely a need for more, but they're not doing what they would like to do, because the culture and the understanding are missing: the need for somebody like me, an economist, to work on policy impact, and for many others in the room to work on delivering the message of the importance of this type of technology for good, for inclusion, and so on. What happens instead is that we end up with a scenario of brain drain, where those very bright minds end up working in the developed world and we lose the potential altogether. Thank you.

Hi. I think AI will create more challenges in non-democratic countries, where lots of data is in the hands of governments and they can use it any way they want. I'm from Iran, and, for example, there are cameras for enforcing traffic rules, which is very good; in recent years they have been used to control speeding. But recently they were used to identify women who had not covered their hair: they identified a car's number, and they could arrest the women. So how can we force non-democratic countries to abide by ethical rules regarding AI? I definitely think this will put more and more pressure on the citizens of such a country, even when it comes to knowing how accurate the data is and how those who hold it could use it. Thank you.

Thank you. We're going to take two more comments and then do a quick wrap-up by everyone on the panel here. Oh, sorry.

Sorry, hi, I'm Julio. I work in Chile, and I've got a lot going on in my head.
I need some AI to process it a little faster. Two things. One: since the beginning of today, and the readings prior to this experience, I've had it in my head that we're talking about artificial intelligence and inclusion, and for some reason it always felt like inclusion was secondary to artificial intelligence, that we're talking from within artificial intelligence. But perhaps if we flipped it a little, or at least if I flip it a little, and think about how artificial intelligence can really help inclusion: when I think of inclusion, I think of much bigger structures, much bigger conversations, which I think we touched upon earlier today. How do you listen to people in communities? How do you do the basic direct-action organizing of the '60s and '70s, which today is called human-centered design, and so on, to continue to learn and use this discourse we're having around artificial intelligence, but really to keep pushing and changing the ways in which governments and people have conversations about creating solutions with the people who are affected by the problem? That's one thing going on in my head.

The other thing going on in my head, reflecting especially in relation to the everyday people who relate to the work that many of us do: in Chile we have a project, an online platform, to address the needs of LGBT kids, kids affected by discrimination based on sexual orientation and gender identity or expression, who are trying to kill themselves.
Yeah, so a few years ago I created an organization, and that organization needed to help other kids, because the government of Chile doesn't have the infrastructure to respond to their needs, even though Chile has the worst levels of teenage suicide, or among the worst, in Latin America. So we created this simple online platform with a partner, and instead of responding to a hundred and fifty cases a year, we're now responding to fifteen hundred cases every semester: kids who need help in some way from counselors. Now we're talking to other countries in the region to try to help with this platform, and we asked ourselves the question about artificial intelligence: how can it help us really respond to these needs, given that the demand is very high? In the process, right now, two comments made earlier have been the most interesting for our work: affect and empathy. We talk about the machinery of all this (sorry, I don't have all the language; I'm coming from the inclusion side, I guess), but I think in the end, for some issues related to inclusion, you really do have to have a human on the other side. Perhaps artificial intelligence can help narrow things down: of the fifteen hundred cases, we can get to the hundred and fifty that need the most help and therefore need a therapist. But one of the questions in relation to inclusion, especially in this region, where in Chile the infrastructure of the Ministry of Health cannot respond to these kids' needs, is: do we need to go this far, or is there a simpler solution for including these kids?

Thank you so much. I'm going to invite another very short comment, and in the meantime, I'm sorry,
I borrowed this one from Urs because I love it so much. I'm going to invite the panel to do that wrap-up in tweet-size remarks, so you have time to think while we take the last comment.

I have a not fully formed question, but it has to do with the use of AI in education, and I feel it picks up precisely where your question left off. The use of AI in education was one of the things people were most excited about among the good uses. It makes me nervous. I'd be curious to hear the panelists' thoughts about what would work.

Okay, we're going to start. Just to give you a little more time, I'll begin with my tweet-size remark, which is that I agree with Nagla, or with the lady there; I don't know which country you're from. On the use of AI for policing, I think the better control is at the input phase again. If you have data protection, usually data protection law should have some regulation of CCTV used in public areas, and there can be some limits on the number and purpose of CCTVs. And to your comment: yes, in Korea we just changed our president through massive demonstrations in the metropolitan area, and the police kept downsizing the reported size of the crowd to de-emphasize how much public support there was. And it was technology, not particularly AI, but technology, that was used to calculate the size of the crowd more precisely and to help with people's organizing. There are tons of other cases where technology was used in mobilizing the public. So yes, we didn't get to talk about those big good uses of AI, but I'm sure there will be other panels.

That was quite the tweet, I must say. Very briefly then, to respond to the initial question: if it is possible to translate ethics into the code that runs these powerful AI systems, why aren't we doing it?
Well, I think it's important to remember that it's not going to be super easy or cheap in many cases, and focusing on how to build out incentives to do this work is going to be really critical.

My last remark is going to relate to your comment, or question, about whether AI can help the Global South deliver better-quality education. Yes, it can, in many respects: we have the challenge of a big student-to-lecturer ratio, and also a challenge of quality, so AI can support education systems in these respects, and also in the aspect you talked about, student support services outside the classroom, being able to follow up on what's going on with a student. But the big question is that if we do not address the challenges of AI and inclusion, we will not be able to get useful applications of AI in education, in health, in agriculture, and in so many other sectors.

Mine is going to be a bit of a Twitter thread. To respond to governments using AI to police: I'm wondering if a bottom-up approach wouldn't be better, because sometimes governments have an incentive not to have data protection laws. In my context, anyway, I find that people who do not work in technology are not very aware of how technology affects their everyday lives. You think that because you're in health or in education AI is not affecting you, but at the end of the day all our data is being churned into these big AI machines, let's call them.
So just getting people cognizant of the fact that everything collected about you could, at the end of the day, be used against you might push people to push the government to implement some data protection laws. And then, finally, the end of my Twitter thread: I think we need to make sure that we, as individuals from the Global South, are not just recipients of advances in AI but shapers and champions as well.

Thank you all for an excellent session, and for also showing Switzerland that the Global South will do what the Global South will do.