Hello. Hello. My name is Colin Maclay. I'm from the University of Southern California, where I run a thing called the Annenberg Innovation Lab, and I'm thrilled to be here with you today. If you're anything like me, you're kind of torn between all the different great conversations that were just happening and were only able to go to one, which was great, but it was also sad because you missed four other ones. So the bad news is we can't fix that. The good news is there are great notes from each group, so you'll be able to access those later. In this session, however, we want to have a conversation and not try to recount each of those five great conversations, because as great as they were, we only have about half an hour, 40 minutes. What we want to do is really force ourselves, each of these groups, to highlight just a couple, one, two, or perhaps three, key insights and/or questions. So not to recount the entire dialogue, and not even to recount the most perfect questions or the most perfect insights, but just to give us a flavor of the connections, the drivers and forces at play in this AI and inclusion discussion, from each of these categories. The idea being that if we can get each of those on the table from each group, then we can identify some interconnections, some common questions, some shared issues across these, and start a conversation from there. So that's what I want to do. Does that make any sense at all? So this is really distilling a few nuggets, part of that conversation, not the whole conversation, very tweet-ish, to be able to launch the conversations. We'll do that. We'll do a quick roundup of the five groups and then really just open it up for observations on those interconnections and on other points that weren't raised in that first round, to give us each a flavor, and then carry us through lunch into the conversation about where these points of interconnection are.
Okay. Do people feel like that's a yes? Yes. Come on. We have to make it through lunch. Come on. All right. So what I want to do is just go to the, I see we already have this great picture. This is all visualized there. We even have the key findings. Which group is this from right here? That's up. Mikayla? All right. Leyla is going to start. I think we need a mic so that everyone can hear you. There she is. Thank you.

I'm Leyla Keser, Istanbul Bilgi University. I'm director of the IT Law Institute there. As the law and governance breakout group, we determined three important key findings. One of them is the lack of a legal definition of AI. There are lots of AI definitions, as we heard yesterday, really, really many definitions. But when it comes to a legal one, there is none. Maybe it will be important for legal people to create a common one, a legal definition of AI. There are also open questions within the discipline of law. For example, liability is one of the important questions for lawyers regarding algorithms and AI as well. Our second key finding is that there is also no common methodology, or alternative methodology. It's also important to know, for a multi-stakeholder model, whether a local or international multi-stakeholder model would help to solve these kinds of problems or create a more definite roadmap for people. And the third one is that there is also no clear understanding as to the technology itself. We need to understand how algorithms work in order to eliminate bias, or in order to make them more understandable, more interpretable. We need to understand the details, the technical workings of the technology itself. That's the third one.

Great, thank you for modeling that so wonderfully. Just while we're here, any quick questions on that? Anything needed for clarification, or does that feel clear to folks? Okay, great. Now on to the next group. Is there an image up there? That's the second page. So maybe go back one. Thank you. Great.
So our session was, well, my name is, sorry, again, I'm from the ITU. Our session was on business models. We had a really lively discussion; it probably could have gone on another hour. So it's a little hard to choose just a few things, but I'll do my utmost best. As you can see, we started with talking about what we actually mean by a business model. You can't recount the whole trajectory. I promise I won't. We started by looking at what we meant by business models. We had some discussion around that, and you can see some of the key points up there. There was a lot of interest in how we can work inclusion into the business model in the first place, so it's baked right in, and we looked at different dimensions of what we mean by inclusion, which we felt was relevant for the business case. Another theme we talked about was how do we move from corporate social responsibility and philanthropy to thinking about creating shared value, and what can we learn from other areas, like corporate sustainability, where a lot of these kinds of challenges have been around for a while? How can we learn from that to have new business models? Maybe we can have page two. A couple of interesting points came up around this whole question of prices and profiling. Some interesting examples that were shared were around how, if you're looking on Google for a particular flight, you might be charged a lot more than if you're just looking for a flight generally to that country, and that raises ethical questions. On the other hand, would that just be a Robin Hood approach, which some people thought is not necessarily appropriate for a company? Or is it just like bargaining in the marketplace? So you can see we had some really interesting discussions around these themes. And then maybe just one other point, around the ideas of social entrepreneurship, where in business models it's not just about profit; there are other types of value which are factored in as well.
So there was interest in that, and in seeing how, for this area, we could really build on those. Does that sound okay? Where is it? Just very briefly: in the end we were essentially dancing around what is the role of government in terms of regulating data about human beings when they are the product and when they have no means of control over it. We built on an analogy that Ursula actually came up with, the analogy of the immortal life of Henrietta Lacks, and the idea that she couldn't predict that her cells would be used to help others, but she did not benefit from it, either financially or emotionally, in any way. No one can predict that. What does that mean for data?

Great. All right, next group. And so I hope, as you're hearing these, you're pulling out the themes that feel resonant, from other groups to your group, into the conversations that you had, right? That's the idea here: we're identifying common themes across all these different conversations. Maybe we don't have a screenshot of them all yet, so that one's not shown yet. We will find you. So there's the algorithm and design group? Great. Boom. That was magic.

So I think that's one of about 17 pages that we wrote, but to sort of summarize very quickly and give a few takeaways: we talked about algorithms and design at various levels, at input and output data, at black boxes, at algorithms, at the infrastructure, at the platform level, and how you bake it into the education space and into curricula. So we looked at various strategies and approaches to actually help the design of systems be more fair and equitable. And I think one of the really interesting takeaways that we had was that this is an issue the public actually really cares about. It's really hard to argue against fairness, so we should really be using that to help inform and engage with advocacy efforts.
And I think we also looked at how, if there are 21 definitions of fairness, is it okay to have 21 kinds of algorithms? We also talked about the need for more data, not just to look at whether it's biased or unbiased, clean or dirty, et cetera, but also, when you actually try to affect fairness, to see whether it's actually working. So we wanted to look at whether there are good proxies for fairness, even if you can't make systems more explainable as algorithms create more algorithms, which then generate new features. Is explainability or transparency even the end goal of our efforts? Should we actually be aspiring to more? And I think finally, one thing was: if all of these are socially and politically constructed and technology has agency, do we just look for something neutral, or do we actually aspire to something that's much higher than neutrality? So I think I'll just leave it there.

Great, thank you. There's the infrastructure group that's remaining. So are you? Great. Okay, that's cool blue neon stuff.

So I'm going to try to make a very short summary. But this process, so would they like to go first and I can go? Cool. I don't know which one is mine. I'm sorry. We'll get that up, I guess. Okay, but so I just want to continue. Should we wait until we get the text up? Because what I'm saying is not in the text; the text is just additional material that you can look up on Twitter. Okay, so there was no reasoned way, no transparent way, in which I came upon this summary, which leads me really to the first part: that there is nothing new under the sun, that a lot of the problems you're having with, or issues you're discussing around, data and infrastructure really are issues we've been discussing across the board on everything.
References were made all the way back to media studies, to 1920s debates in media studies and democracy theory, to the kinds of problems we've been having with regard to open data and IoT-collected data and data quality, et cetera. So there isn't really anything new. Infrastructure-level problems like access, accessibility, and procedural inclusion and fairness are still central, and we can't address many of the other issues fairly until and unless we address these. But there is a caveat. There are some kinds of problems which are exacerbated because of AI, things which might have existed earlier. For example, groupings not by identities which we consciously present, but by statistical identities which we now represent and may not realize. These kinds of groupings have been happening in areas like insurance, but now it's all-pervasive, and we aren't necessarily defining ourselves; other people are defining us by various identities. These kinds of problems may be new. And we may face new problems also when we have sovereign machines which are emotionally and rationally sovereign, but until then, it's a bunch of problems: data-based exclusion, data exploitation, and data colonialism. There's a large discussion to be had here on things such as surveillance capitalism versus excluding the poor, and which is preferable. A point was made that the privileged might determine the narrative, whether it is about surveillance capitalism or whether it is about things like net neutrality, and that some people, as a form of inclusion, might actually want the free choice to be a data subject itself. And while talking about data colonialism: free trade, fair trade, and all these things are so complicated when it comes to even coming up with conceptual solutions, leave alone practical, pragmatic ones, in a geopolitical system, to questions of data flow and data trade and data colonialism. So there's a problem.
Could I end with one last line, which is that there is no such thing as neutrality, and we have to move beyond neutrality to other questions of how we evaluate biases and counter biases. Thank you.

Last group. So thank you. I'm Arisa, and I would like to share the topics we discussed in the user behavior and expectations group. We discussed among fewer than 20 people, and we had a person from Kenya, from the university, and also a person from Microsoft, so it would be nice to share the information from both points of view. What we discussed, from Sarah's perspective, is that AI in Kenya is not so pervasive yet, so he discussed the importance of education. Could you give us some...?

Okay, just briefly. I think there was a consensus that there is a huge need for education, so that people actually develop an awareness of what expectations they need to have. And with that education also comes maybe a need for a new type of software engineering, where people actually take into consideration their users and that sort of thing. And this would lead to, when you have a lot of people with skills in AI, a diversity of people solving the problems and therefore a diversity of solutions. And, you know, if there are many people tackling problems, then I think the user ends up coming into the center. I think that was in the realm of education.
Thanks a lot. And also Alessandra from Microsoft gave us a very wonderful view of how engineers discuss these kinds of AI topics, and also emotional intelligence. What do you think about that?

So, yeah, that was basically some of the principles that the group as a whole debated, and we tried to link those principles to users' expectations: AI must be transparent; AI should maximize efficiency without affecting the dignity of people; AI should be designed for intelligent privacy; of course, algorithm accountability; and emotional intelligence embedded in AI solutions to calibrate Spock answers. So those were the themes that the group debated. It was a lively and healthy debate. Thanks a lot to the group.

Great, thank you so much. Okay, so as you were trying to make sense of that, I'm curious to hear what people either heard or didn't hear as the intersections among those different observations from those conversations. For me, I heard a lot about definitions, and the need to form clear definitions about what we're talking about; the need for understanding, not just initial understanding and learning, but ongoing strategies to continue to do that as we think about the implications for each of our different fields, which seems to be a common theme; also adapting, say in a legal environment or in computer science teaching, how do we think about the implications for our fields; issues of incentives, why people, orgs, companies do the things they do. I heard about sectoral roles, strategies for learning, which I mentioned, thinking about how to leverage the public interest, the caring about these things. What were the things that jumped out at any of you, or that felt like they went unstated? Please, in the back.

Thanks, I'm Shaz Jameson from TILT. As was commented by Raphael in our group, we've spoken a lot about moral compasses, about ethics, about norms, about fairness and
inclusion, social good. But what's missing, what we're trying to find a way to, is that we haven't heard social justice so much. We're kind of talking around it; we're all kind of trying to find what we want to be doing without pointing to it directly. So that's what's missing from the conversation for me.

And one of the places that I heard that was in whether we had affirmative expectations around what these technologies could do, right? So that's the connection of not just guiding it, trying to slow a train, but really thinking about where we're steering it. Other observations? I'm attentive to the sides; I realize people feel like they've been left out. Come on.

Hi. Seeing these debates, I wonder if, when we discuss the values and human goals that we want to address in AI, considering risks and opportunities, we have an international framework, which is the sustainable development goals. All governments of the world agree with this framework. So I wonder if, besides checking AI against human rights instruments and humanitarian instruments, we could also check them against the SDGs. Then we lose less time discussing what we want to achieve, and we have a framework to discuss how to achieve it directly, as it speaks to everything we're discussing here. Thank you.

Very helpful, thank you. So that's the sort of connection there. A couple in the back there. How about these? We're going to take these two over here, and then we'll go to Nagla.

I think some of the intersections that we picked up from the groups were related to the ongoing problems of structural inequality and related to the social justice issues, you know, structural inequality around the human rights framework as well. So the intersection of those. And I think, you know, until we try to look at strategies and solutions to these, we keep coming back to the problems of lack of access, lack of intensity of use, lack of participation and representation in algorithmic data or big
data, and therefore lack of representation from the global south and underdeveloped countries. And, you know, as long as we keep speaking in paradigms that are quite individualistic, that are quite market-focused, not to say that these haven't driven take-up and made enormous advances, but in terms of addressing the structural inequalities, they're simply not going to address that. We've got to find new strategies of doing things. So I think, alongside and in a hybrid way, we have to look at these inequalities and this lack of access from a more communal point of view: communal access, looking at infrastructure in particular, but I think it applies to various other things, data as well; from the demand-side value, not just the supply-side value, where you have, you know, reasonable extraction from commercial practices; looking at some of these demand-side values that offer public and social goods, in terms of opening up spectrum to allow people to participate, to experiment with AI, etc., to, you know, provide public access so that people can supplement the unaffordable services that they've got, etc. These have to be looked at to address these structural inequalities.

To follow on these lines: what I keep asking myself in relationship to artificial intelligence, and more in relationship to inclusion, is, as we are this diverse group of people sitting here talking about what the possible solutions, problems, etc. are, what are the top three, top five things that we more or less agree on, and how can we harness our power to focus on that? So, for example, someone mentioned this morning, Felipe, algorithm accountability. Okay, there are lots of things we can take on, but if we think about social justice, and we think about inclusion, and we think about a strategy and using the power that we have in this room, what are the top three issues that we're going to take on this year? Thank you.

I think, I mean, to some extent that is this conversation, right? It's trying to build on those. Yes.

I want to add
something that keeps coming to my mind as I listen to the other breakouts: we talk about AI as one thing. In our discussions, for example, we talked about business models, and a comment that was made by Felipe was that maybe we should think of business models differently in the different sectors: in the tech industry, AI for governments, etc. This also comes from my conviction that we think of AI users as one group, perhaps, and AI production as another activity. The reason I say this is, again, thinking of inclusion and framing all of this under an inclusion framework: it is about participating in the production of the technologies. So when we say AI business models, is it one and the same? It's not one and the same. So how can we think of distinguishing between the different types of AI and the different uses, in a way that can eventually include not just the users but also the production, in an equalizing manner, so as to go against the asymmetry of power and data technologies, etc.? Thank you.

Yeah, I feel like that connects to some of the comments about the importance of context, about being user-centric, and this whole cycle of production and consumption really changing. Yeah, thank you. Wolfgang.

Thank you. Maybe three things which I at least saw in many of the presentations or remarks. One is the struggle to find out what is actually new when we talk about AI. There are obviously some things, maybe this probabilistic turn, maybe the categorization, and that has of course a lot of potential impact on inclusion issues, but other things are simply about the growing autonomy of technology, about algorithms as such, and so on. So what I find really interesting, as we all struggle to find that out together, is: what are really the points where AI makes a difference that is important for our societal discussion? The second one, and I think that's actually a really good thing in this context, is that the importance of the technology really forces us
to operationalize and think about specific things like inclusion, like fairness, and the procedures behind them. What is the role of politics and law there? The normal procedure was that we agree in a political process on the things we hold as common values, then translate them into laws, and then it's a compliance issue. But of course it's much more complex; there are moral debates and different communities. And I think we now have the chance to make clearer what our values actually are, how we frame fairness, how we frame inclusion. It is really something we should be thankful for, this development, that we have the chance to do that, and then see how that translates into our discussions about technology and governance. And maybe the third thing that came up in many discussions was that there is the need to bridge, in a specific way, different communities: software engineering, governance, and so on. Maybe one role of the network, and we already touched that issue in the morning, one of the things we could do as a network, is to think about specific educational tools that we develop from one community for the other: for the governance people, for the software engineers, and the other way around. There are some attempts already, but there's still something to be done, and I think we can make a difference there.

Thank you, very helpful. Other thoughts?

Thank you. A lot of the conversations here actually remind me of the old days of code is law, and I think when we think of AI as an object of governance, we're missing the fact that it's also producing governance; it's actually enacting it in many ways. So in the same way that we started to think of code as something that was a lawmaking artifact, what I'm hearing from a lot of the conversations is that we're beginning to see AI in the same way, as something that implements values, that embeds normative practices and aspirations. And therefore, seeing it as only something that we're going to regulate or not, or leave alone, I
think, misses that element of AI actually enforcing and enacting certain values and norms, and actually being a medium of governance.

So that, for me, is really helpful. I think that brings me back to the definitional piece of what we are really talking about. There are moments where, as a technical matter, it's important to know what we're talking about, but also the way that we're talking about this suite of issues, right? As an object of governance, as a process, or otherwise. I feel like that's another place where the way that you've just framed it, Malavika and Wolfgang, helps orient how we might engage, or what we need to be able to engage productively with it. Nishant.

Hi, thank you. So I thought what's really striking in all these conversations is that there is a sense of temporality and a crisis of time that seems to divide groups in different ways. There are groups which are thinking of AI as disruption, and they are happier to be slow in engaging with AI: we are waiting for it to unfold, to figure out what really is going to happen, going more into speculation. And then there is a different organization of time, which is looking at AI as crisis: there is no time left, you have to act now and do something about it. And I think that reorganization of time, which computing has always done, might be good to bring back into the conversation, because we don't want to fall back into an either-or debate about whether we have enough time for research or whether we need to act now. They both need to happen simultaneously, but they need their own temporalities for the crossovers to happen. So that's in terms of developing tactics for how this group can go further, bringing in the time scale at which we are looking. Because for activists who are working at, let's say, Human Rights Watch, three months is too much, and people who want to do deep-dive research, which includes for example historical and definitional work, will need at least
five years to get there. How are you going to bridge the divide between the two? That might be interesting to follow through on.

I think that's wonderful. To me, "urgent and important" is another way to describe it. But I also feel, from the conversation we've had, that so much of this is about process, right? It's not going to happen and be done; it's an ongoing set of issues. And so I think that's very helpful, to say we need to both act now and engage with this, but we're going to need to keep engaging over time, and recognize that in the three-month time frame nothing's going to happen, but something's going to happen in the five years. So that's wonderfully helpful. Others?

Hi, I'm Vidushi from Article 19. I just wanted to respond to a suggestion that was made earlier, that we should look at sustainable development goals as opposed to human rights standards. I think there's a case to be made to keep human rights standards as the minimum requirement against which we then build legal and ethical frameworks, because, as Malavika pointed out in the morning, companies often use terms like privacy very conveniently, and that kind of thing happens when we focus on just ethical frameworks without an actual legally binding grounding. And I think there is a conversation to be had about how different frameworks and competing values can be better established in the AI context.

So a bunch of these comments have been about how these AI issues change existing institutions and how they interact with those institutions. To me, that feels like mapping those interactions: being aware of how we learn from the past and build on the things that exist, that are very powerful, but then also asking, per Wolfgang, what's really new here, what has changed, and what does it force us to reevaluate, to understand how those things may be changed.
I think it's important to think about the fact that the technology shapes the way that we work and the way that we govern and the way that we think, but we also shape the way that the technology is built. I think something that we established here, with a lot of the comments yesterday, is that these kinds of discussions are still very high-level, very abstract, and we really need to put an emphasis, and this is kind of what the conference is doing right now, which is really important, on making this into actionable points. For example, I was in the business models group: we need to create a business model that is actually actionable for companies, for governments, for startups to think about. Because the shaping goes both ways: the technology shapes us, but we also shape it. So the fact that a lot of us are here, and there is this sense of inclusion and collaboration, means that we are going to go out and shape what the technology is going to look like. In a sense, I think the discourse in itself is kind of disempowering, because we talk about how the technology is going to change everything, and what are we going to do, and how are we going to stay in control of it. But we are in control of it. We are making it. This is a human product. So we need to start thinking about actionable points that we can take from this, to make sure that all of the visions for inclusion that we come out with from this conference will actually become a reality.

Thank you. I think, as we think about big overarching issues, it's really important to ground them in things that we can act on, in the case study conversation that'll happen later, and to think about how we go further and deeper, and, through more micro explorations, become more precise in our understandings, more nuanced. Other comments, especially over at the back?
Okay, just an overall suggestion, Alex Kukuru here: perhaps one of the approaches we could adopt as an analytical framework is to look at AI in different components or clusters, such that we could deconstruct it into what may be process components of AI, what could be system components of AI, what could be instrument-slash-tool components of artificial intelligence, and then what could be outcome components. And then, within each of the realms of these clusters, we can try and fit in issues, so that we are able to start looking at them in a sort of compartmentalized way, knowing that they are interconnected at some point, but it gives us a good analytical framework.

Thank you. Melanie.

Thanks. I would like to continue on the code-as-law comparison, because it's completely true, and it's something we started to discuss in the governance session. And I just forgot what I wanted to, oh no, I remember now what I wanted to say. We mentioned that, if we compare to earlier, when we were meeting as a creative community in order to try to build an alternative to what by then seemed to be the most pervasive, dangerous thing for what inclusivity was by then, which was access to knowledge, what we tried in terms of resistance was to develop and build an alternative, which took the form of a legal hack: copyleft. So what would be an alternative way to conceptualize and develop this regulation through technology? What would be the copyleft of AI?
There's a fellow called Stephen Duncombe who talks about the tyranny of the possible: the challenge of seeing what's in front of you and all of the barriers, and then only being able to imagine a gradient away from where we are now, versus an imaginative approach, imagining the future that you want, the copyleft version, the thing that is the AI and inclusion atmosphere that we want, and then thinking about how we build back from that, what it takes to get there. Other thoughts? We have about five minutes left on this, is that right? Okay.

So thank you. It's two separate interventions; I'm cheating a little. One is building off of what Nishant was saying, which is that there are two tracks to this: there's the urgent, and then there is the longer term. I'm thinking a little bit about how internet governance worked, and how, for advocacy, it was useful to have a few powerful ideas, like intermediary liability and safe harbor, that were easy to pick up and build into systems around the world. So I think it would be useful to start doing that for AI, something like the campaign against killer robots, for example: self-explanatory, powerful. As a task, this is something that I might prioritize. The second is that what's been useful to me as an academic in a setting like this is that I'm hearing a lot from disciplines that I have no conception of how to navigate, but equally I'm recognizing a lot. So, for example, the whole code-is-law model, that's a Ryan Calo, Jack Balkin series of papers which I think people who are not lawyers and obsessed with liability might enjoy reading. So one of the outcomes I'd love to see is a basic list, with a flagging of "this is something that someone from another discipline can understand," because I feel like that would increase our shared knowledge much faster.

Only to clarify, I think, about bridging communities: there is no opposition between checking AI against human rights,
humanitarian, and development goals; we need to do all of them. And we spoke here about jobs, about the economic and political elements of social inclusion, and you will not find all of them in human rights instruments; you need to complement them with a development agenda, otherwise you lose the north-south divides and inequalities. And at this moment there is a group of diplomats discussing that in New York without all the knowledge and evidence that you have. So, about bridging communities: maybe they should have more evidence and discuss more with you. But here also, maybe we can use what we have, which is an agreed framework as a reference that is very progressive and addresses the social justice elements that are missing in some human rights approaches. Thank you.

Thank you. As we bring this in for a landing, any last final thoughts, quick comments?

Thank you. One thing which flows from what Melanie said as well is how environmental movements and other kinds of justice movements are something that we can learn from, because there's a lot of, you know, long-haul work to be done as well. And so movement building, around the various kinds of intersections and overlaps of the discussions that we had today, is also what we need to look to as a next step.

Thank you, that's very helpful. I mean, as we think about this and see this as a place of intersection of so many different interests around the world, across so many different communities, it demands diverse participation, right? And that's an opportunity, I think, to bring in those other conversations, because AI will have implications for all those different spaces and sectors, and then to add to that the recognition that these movements take decades to reach fruition. That's very helpful. Okay, so as we are about to transition to lunch, first a final word from Carlos. But I just wanted to thank all of the groups for reporting out so briefly and thoughtfully, and all of you for engaging in what was, for me,
at least, a really productive, interesting conversation. So thank you.