Welcome to this premiere; it is a premiere: this is the first entirely virtual Berkman Klein Tuesday luncheon event. I'm Urs Gasser, and I have the great pleasure to moderate this one-hour session, both in principle and in practice, I hope. As the announcement says, we will take a closer look today at the many AI ethics and governance principles that have emerged across the globe over the past few years. In the first part of the session we will hear from Jessica Fjeld, who's an assistant director of the Cyberlaw Clinic here at Harvard Law and at Berkman Klein, and who will share some insights from her recent Principled AI report, which provides a mapping and an interesting analysis of these norms. We will hear from her about some of the common themes and threads across the different principles, as well as about differences among them and maybe even gaps. Now, we all know it's one thing to write principles, but putting them into practice is a whole other story, and I'm therefore particularly grateful that Jess and all of us are joined by our colleague Ryan Budish, an assistant research director at the Berkman Klein Center, who will highlight a few of these implementation challenges based on his work as a member of the OECD's AI expert group, which was one of the bodies that came up with a set of principles. So I'm looking forward to both opening presentations, which will of course also set the stage for our discussion afterwards. After these two opening statements I'm really thrilled to invite three respondents to join us: Mutale Nkonde, Doaa Abu-Elyounes, and Vivek Krishnamurthy. I will briefly introduce them after the initial presentations. We will have a Q&A as well, although only virtual today. Please type your comments using the Q&A function, which I will monitor; I will then select, and hopefully also cluster, some of your questions and share them with our speakers. Please also note that this session is recorded; we'll share it later on. Of course, we know this is a little bit of an experiment. We will use the webinar-style mode for this luncheon, but we will also experiment with other technologies and other modes going forward. If everyone who joins could go on mute, to the extent you're not the speaker, that would be great, and without further ado I turn it over to you, Jess.

Thank you so much for joining us today. I'm delighted to be here with all of you and really excited to be able to talk about the Principled AI report that we put out in January. We had been looking forward to doing this as an in-person lunch for a couple of months now, so in this new world that we're all in it's good to have this opportunity to discuss it, and I'm really looking forward to all of your questions and to our discussants' reactions, as well as to the more practical perspective that Ryan brings. As we were planning this event, we thought it would be really interesting to come at AI principles, which have come out on such a hot and heavy schedule over the past few years, both from the macro view that the Principled AI report takes and from the micro view of Ryan's perspective, having been involved in the drafting of the OECD principles, which are particularly influential. Also, thanks to Reuben, Megan, and Liz at Berkman for helping put this together, and of course to my co-authors on the Principled AI report, including Adam Nagy, Nele Achten, Hannah Hilligoss, Madhulika Srikumar, and the rest of the research team who helped put this together.
Principled AI was the result of a year-long study of principles documents that set forth standards for socially responsible AI, documents which seek to ensure that AI will be ethical and rights-respecting and have a positive impact on the world. As I noted, we released a white paper in January, which is available on the Berkman website, along with this visualization. Now, if this visualization isn't immediately transparent to you, don't worry: I will be going through all of it in a minute. I'm planning to talk for just under ten minutes to give you an overview of the project's methodology and findings, and then of course I'm happy to take your questions after Ryan's presentation.

The top-level finding we have to share from Principled AI is that, in spite of all the chatter and concern over the fact that there isn't really a shared vision for socially responsible AI, we were able to isolate some strong themes in the 36 documents that we looked at, and we believe they are the signs of the earliest emerging consensus on societal norms around how AI can and should be used. Now, of course, principles are just one piece of governance, and they should exist within a broader scope of governance that includes everything from the everyday practices of the professionals involved all the way up to law and regulation at multiple levels of government.

Here's a timeline that shows all the documents in the data set in a slightly different way. You can see that the earliest one in the data set is from 2016, the Tenets of the Partnership on AI, and they go all the way up to late 2019. The data set is a curated set of 36 documents that we assembled using what's called an expert or purposive sampling method, so it is not an attempt to be comprehensive. We're aware of approximately a hundred documents that would loosely fit our definition, but what we wanted was a manageable set: we knew that we wanted to build the data visualization and that we couldn't have too many documents on it, or it would go from where it is now, which is challenging and intricate, to basically unreadable. We nonetheless wanted to include a variety of documents, in terms of stakeholders and in terms of the timing of publication. I will note that we were hoping for variety in geography, and we were able to achieve some, but, for example, we were not able to find any documents from the continent of Africa. We're aware of some that are in process, but there were none fitting our definition that had been published at the time of the report, and that obviously marks a significant shortcoming in our finding about the existence of a kind of global consensus.

Here you can see one view of the variety within the data set. We were particularly interested in having a variety of stakeholders represented, because it was our hypothesis that this would be a significant area of variation between the documents. It's also worth noting that we included documents that looked at AI technology generally as it is applied in specific sectors, for example the justice system or the workplace, but we excluded documents that looked specifically at a particular type of AI technology, such as autonomous vehicles or facial recognition, because as we looked at those documents they were just different in character from the broadly applicable AI documents.
It's also worth noting that not all the documents in our data set include the word "principles"; they don't all use that word to describe themselves. But our understanding of that word, our definition of it, was documents that make a normative statement, in the sense that word is used in the legal community, so a prescriptive statement about how AI ought to be used. We excluded empirical or observational documents, like, for example, the annual reports that come out from AI Now, which have a lot of interesting insight on how AI is being used and deployed but don't contain that normative statement.

So let's go back to the data visualization, now that you have a little bit of an understanding of what is represented here, and look at how to read it. Each spoke on this visualization is one document, with the exception of the OECD and G20 principles, which are represented on a single spoke because the G20 adopted the OECD principles more or less verbatim: the principles themselves verbatim, excluding some of the descriptive text. The stakeholders are color-coded the same as they were in the pie chart I showed you on a recent slide: green is government, orange is intergovernmental organizations, blue is multistakeholder, pink is private sector, and yellow is civil society. There are nine rings in the visualization. The eight inner ones are the themes that we isolated; the outermost one is international human rights, where we collected data on whether the document mentioned human rights or explicitly noted that it proceeded from the international human rights framework. The framework documents are indicated by a star; the documents that merely mention human rights or related international instruments get a diamond. For the themes, you'll note that there are circles and that they are different sizes. The size of the circle corresponds to the percentage of principles in that theme that the document contains: if there are ten principles in the theme and the document hits all ten, it gets the largest circle; if it hits just one, it gets the smallest. Because there are different numbers of principles within each theme, it's instructive to compare within each ring but not between the rings.

So what are these themes? Here we've zoomed in a little so you can see them better. The eight themes, in order of how frequently they appear in the documents, are: fairness and non-discrimination (some principle related to fairness and non-discrimination appeared in every single document in our data set); privacy and accountability (each of which appeared in all but one document); transparency and explainability; safety and security; professional responsibility; human control of technology; and the promotion of human values. We got to these eight themes by hand-coding every principle in the data set and then grouping like principles together. It was very interesting that, at the same time we were working on this project, there were a couple of other similar studies of principles, which all came up with themes that parallel ours in many ways, though everyone arrived at a slightly different number. For example, the report that came out from ETH Zurich has a theme called beneficence that loosely lumps together, I think, our promotion of human values and some of the accountability and safety and security principles. So the divisions are slightly different, but I think researchers around the world are making similar observations. Some people have come to us with frustrations about the themes; for example, we've heard from a few people that they wish sustainability or environmental responsibility were more of a top-level item. It is represented under both the promotion of human values and the accountability principles, but because there wasn't a large number of principles under that heading, it didn't rise to the level of these themes.
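To make that circle-size encoding concrete, here is a minimal sketch in Python of how a document's theme coverage could be turned into circle sizes. This is not the report's actual codebase: the theme contents, the example document coding, and the four-bucket size scale are all hypothetical, chosen only to illustrate why circles are comparable within a ring but not across rings.

```python
import math

# Hypothetical themes and the principles coded under them (illustrative subsets,
# not the report's real lists).
THEMES = {
    "Transparency and Explainability": [
        "transparency", "explainability", "open source data and algorithms",
        "notification when interacting with an AI",
        "notification when AI makes a decision about an individual",
        "regular reporting", "right to information", "open procurement",
    ],
    "Human Control of Technology": [
        "human review of automated decision", "ability to opt out",
        "human control of technology",
    ],
}

# Hypothetical coding of a single document: the principles it was found to contain.
document_coding = {
    "Transparency and Explainability": {"transparency", "explainability", "regular reporting"},
    "Human Control of Technology": {"human review of automated decision"},
}

def circle_sizes(coding, themes, n_buckets=4):
    """Map each theme's coverage fraction to a discrete circle size (0..n_buckets)."""
    sizes = {}
    for theme, principles in themes.items():
        covered = len(coding.get(theme, set()) & set(principles))
        fraction = covered / len(principles)
        sizes[theme] = math.ceil(fraction * n_buckets)  # full coverage -> largest circle
    return sizes

# Because each theme has a different number of principles (the denominator differs),
# sizes are comparable within a ring but not between rings.
print(circle_sizes(document_coding, THEMES))
# {'Transparency and Explainability': 2, 'Human Control of Technology': 2}
```

In this toy example the document gets the same circle size in both rings even though it covers three principles in one theme and only one in the other, which is the cross-ring comparison problem the transcript describes.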
So every theme has a different number of principles within it; this slide just lets you see a little better what those are. The accountability theme has the greatest number of principles within it; the promotion of human values and human control of technology themes have the fewest. You can also scan this to get a sense of the range within the themes: we see some that are really big, consistently capital-letter abstract concepts, like equality under fairness and non-discrimination, whereas we also see very particular policy recommendations, like, under transparency and explainability, the idea that there should be a notification when an AI makes a decision about an individual.

My last slide is about what's next for this project and these observations. I like to think about it in terms of what is wrong with this chart. This is a chart that shows the documents in our data set broken down by geography. You can see that, in spite of the fact that we built a multilingual research team with roots around the globe, the principles were nonetheless dominated by North America and Europe, with a substantial chunk, about a third, from East Asia, mostly China plus a Japanese document. We had one document each from India and from the MENA region, and then a handful of documents from Latin America. I think what that means is that, while those eight themes will be important in AI governance, we have a lot to do to expand the conversation to ensure that all of those who will be impacted by AI are weighing in on this governance piece. Because these principles, for example a word like equality, which we just picked up on the last slide, can mean very different things in different cultural contexts and to different people, and because the people who are likely to be most strongly impacted by AI technologies are marginalized and vulnerable populations, I think it's absolutely key to continue to make this AI governance conversation accessible to a broader number of people and to ensure that the voices of a really diverse set of individuals, organizations, and governments are represented in it. So with that I'm going to wrap up, and I'm really looking forward to your questions.

Thank you so much, Jess. I have a quick question coming in from Padmashree: in addition to the mapping of themes and the content of the principles, did you also map or analyze what the underlying accountability mechanisms are, not for those who design AI but for those who design the principles? That is, are some of these principles more robust than others in terms of the in-built mechanisms for enforcing their oversight, and did you map that?

It's a great question, and I think the initial answer that occurs to me is that it really goes stakeholder by stakeholder. For example, you get something like the Toronto Declaration, which was massively co-sponsored but organized by Amnesty International and Access Now, and which is largely a coalition of civil society, individual, and academic actors; there aren't a lot of accountability measures for an organization like that that's circulating principles.
On the other hand, some of the government principles are adopted in the context of national AI strategies, and even if there aren't explicit commitments immediately, they at least include recommendations for the study or adoption of new regulations; I'm thinking in particular about the German and British national AI strategies, which are looking closer to regulation. It is worth noting, though, that the first government to adopt regulation that actually parallels these themes in many ways was Canada, which did not produce a set of principles first; it really did go straight for regulation governing government bodies' acquisition of AI tools. So I hope that's helpful.

Excellent, thank you so much. We'll have more questions, but I think this is a good moment to turn over to Ryan to share your perspectives. As Jess already indicated, what lies ahead, of course, are all these hard implementation questions, and I was wondering whether you could share your perspectives based on your work with the OECD but also beyond. Thank you, Ryan.

Great. I am really excited to be a part of this experiment in BKC events, so thank you, and, as Jess said, let me repeat the thanks to the BKC team that helped make this possible. It's always hard to follow Jess, but I think what I aim to cover here should follow on nicely from some of what she was sharing. I had two main objectives for the next few minutes. The first is that when you look at that really amazing visualization that Jess and her team created, you see all of those principles as finished products, and I wanted to provide a little bit of personal insight, personal reactions, to the process that I experienced as part of the AI expert group that the OECD created to develop its set of principles. Secondly, I wanted to go a little deeper into at least one of the principles within that document to highlight some of both the challenges and the opportunities ahead as we think about moving from principle to practice.

As far as the process that the OECD went through, there were really four stages. The first was that they created a group they called AIGO, the AI group of experts at the OECD; I'll talk more in a second about who was part of that group. I was lucky enough to be a member, and we met four times between 2018 and 2019. The group then drafted a set of recommendations, but that was not the final step: the draft was passed up to the OECD's Committee on Digital Economy Policy, which had the opportunity to revise and reshape the draft principles and ultimately voted on them, and then sent them one level higher to the OECD's ministerial group. That group had a chance to continue to amend the principles, and they were finally approved in June 2019.

Now, in terms of the composition of the expert group, there were 14 of what they called invited outside experts, and those individuals came from academia and business. Then there were another nine representatives from other OECD committees, and those representatives came primarily from civil society organizations, like those that focus on labor issues or on privacy issues,
as well as additional representatives from business, like IBM. And then there were 33 or so representatives from OECD member states, many of whom came from specific regulatory bodies within those countries with jurisdiction over issues relating to emerging technologies or telecommunications.

Within the OECD principles there are really two operative sections. One is the principles for responsible stewardship of trustworthy AI, which relates to a set of values: things like transparency, human rights, issues like that. The other is national policies and international cooperation for trustworthy AI; the audience for that half of the document is really governments, and it relates to the future of labor, data sharing, and making investments in AI research. The document also contains a broad definition of an AI system: what it means to be an AI system.

Now, as Jess noted in her comments, the OECD document is fairly unique in the influence and adoption that it's had. Many sets of principles are important mainly for the organization that created them: a company puts out a set of principles that largely defines how it is going to implement AI, but those principles in many cases don't have widespread influence, because what one company says it's going to do doesn't necessarily shape the whole industry. What makes the OECD principles somewhat unique is that the OECD process operates on the basis of consensus, so in order for the document to be adopted, all 36 OECD member countries had to agree to it. In addition, six non-member countries adopted it, and shortly afterward the G20 adopted the OECD principles essentially verbatim.

So, as I said, I wanted to offer a bit of my own personal reaction to being on the inside and seeing how this one set of principles was created, and there were really three things I wanted to mention. First, the framing of the process really matters. One example I'll give is that the OECD framed the process as principles to advance the adoption of trustworthy AI, and what you'll notice about that phrasing is that it has a very positive, pro-adoption bent. That meant that certain things I think are actually important parts of the conversation when we're thinking about AI, things like whether there are spaces where AI should not be used, areas in which we think AI should not be adopted and advanced, really weren't in scope for what the OECD was considering. So the initial framing was important for determining what kinds of principles could be in this document and what kinds could not. Secondly, the question of who ultimately decides, who's the final decision maker, is really important. As you saw when I showed the sequence of events leading to the adoption of the OECD principles, the group of experts who created the initial draft was not the ultimate audience for the principles; it was the member states that were ultimately going to have to adopt them. So what I saw from the process was that it was designed, in many ways, to reach an endpoint where there was something the member states would be able to adopt and would be comfortable adopting.
You could certainly have imagined alternate processes that would have yielded a very different document, one that, because of the consensus-driven nature of the OECD process, never would have been approved by all of the final decision makers. So I think there was a real, conscious decision to design the process in a way that would get to something that could be approved, and that brings me to my final point: the conveners and the staff at the OECD, who were doing a lot of the drafting between meetings and responding to comments, really had a lot of power in how they structured the process in order to get to that endpoint. So I think it's certainly fair to have some criticisms of the OECD process, whether it's questions about who was invited to the table or the "should AI always be adopted and advanced" questions; it's totally fair to have criticisms of the process and of the document. But ultimately, measured by how I think the OECD viewed it, reaching a point where the document could be adopted and ultimately implemented by the countries that adopted it, I think it really was successful in that regard.

Next, I wanted to quickly give an example of some of the challenges and opportunities by looking at one specific part of the principles, the one relating to transparency and explainability. The first thing I wanted to highlight here is that there are four subparts to this principle, but the first three are really all about advancing understanding; you can see that the key words are "foster a general understanding of AI systems", "awareness", "understanding the outcome". So these first three are not really about changing AI systems but about helping people understand how they're interacting with AI systems. It's the last piece, the fourth subsection, that is actually the most challenging in some ways, because it's about creating a way for people to challenge outcomes that may be adverse. But when you start to read it, it in many ways raises more questions than it answers. For instance, it talks about people adversely affected by AI systems: well, what does it mean to be adversely affected? What if you don't even know that you've been adversely affected? Or what if the system actually performs better than its human counterparts, but, compared to certain other people, your outcome was less good than theirs? And it's a complicated concept to challenge an outcome: what does that mean? Is it in the moment? Is there some sort of appeals process? Is it a human review? What does it mean to challenge the outcome based on "plain and easy-to-understand information"? In different contexts, different kinds of information might be more or less relevant: is it the single factor that was most important in the decision? Is it the top ten factors that were most important? Is it the one factor that, if changed the least, would alter the outcome? Any of these could be, depending on the context, the most important kind of information. And finally, "the logic that served as the basis for the prediction": does that mean the source code? Does that mean the training data that helped create a machine learning system? So all of these terms raise a lot of questions.
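To illustrate how different those candidate kinds of "plain and easy-to-understand information" can be, here is a minimal sketch in Python using a made-up linear scoring model. The feature names, weights, and threshold are all hypothetical and are not drawn from the OECD text; the sketch simply contrasts the single most influential factor, the top-k factors, and a counterfactual-style answer.

```python
# Hypothetical linear credit-scoring model: weights, threshold, and applicant are
# invented for illustration only.
weights = {"income": 0.4, "debt": -0.6, "years_employed": 0.3, "late_payments": -0.8}
threshold = 0.0  # score >= threshold means the application is approved

applicant = {"income": 0.5, "debt": 0.9, "years_employed": 0.2, "late_payments": 0.4}

def score(x):
    return sum(weights[f] * x[f] for f in weights)

contributions = {f: weights[f] * applicant[f] for f in weights}

# 1) The single factor that pushed the score down the most.
biggest_factor = min(contributions, key=contributions.get)

# 2) The top-k factors, ranked by the magnitude of their contribution.
top_k = sorted(contributions, key=lambda f: abs(contributions[f]), reverse=True)[:3]

# 3) A counterfactual view: the smallest change to any single feature that flips the decision.
def minimal_flip(x):
    gap = threshold - score(x)  # how far below the approval threshold this applicant is
    changes = {f: gap / weights[f] for f in weights if weights[f] != 0}
    feature = min(changes, key=lambda f: abs(changes[f]))
    return feature, changes[feature]

print(score(applicant))         # about -0.6, so this applicant is denied under the toy model
print(biggest_factor)           # 'debt' (largest negative contribution, -0.54)
print(top_k)                    # ['debt', 'late_payments', 'income']
print(minimal_flip(applicant))  # ('late_payments', roughly -0.75): the smallest single-feature fix
```

Even in this toy case the three notions of explanation point at different features, which is part of why a principle stated at this level of generality does not by itself settle what information a challenger is owed.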
And they're all answerable, but this document on its own doesn't necessarily help someone figure out how to answer them. There are certainly other organizations out there thinking about these issues, but it creates more work for someone who's trying to figure out how to comply with these principles, because they then have to start trying to answer many of these complex questions. The OECD recommendations are non-binding, but the OECD does have monitoring capacity and collects data, some of which is being gathered at their recently launched AI Policy Observatory. The OECD has also created some new groups to think about how we move from these principles to practice, recognizing that there are a lot of unanswered questions right now, that these principles in many ways provide a helpful agenda to governments and organizations thinking about how to move forward, but also that they raise even more questions. So I think that's really where we are at this particular moment. I'll stop there so that we can get to the questions.

Thank you so much, Ryan. There are maybe two quick questions, and I'll ask for short answers. One is: who is the intended audience, or who are the intended users, of these principles? Jess, maybe you can take that one. And then there is a question from Nagla with regard to the OECD process in particular: whether non-AI experts were also involved in the consultations. Jess, do you want to take the first one?

Yes, I will. It's interesting: this was actually a piece of data that we collected on each of the documents in our data set, and then ultimately we didn't find a great way to represent it in the data visualization. But we were curious what we could glean from the text of the principles themselves about who the anticipated audience was: was it policymakers, individuals (users or affected people), companies and the private sector, or others, academics for example? There was significant variety. Perhaps the most notable case is the private sector documents, which often have two purposes. One audience is internal: organizations like Google and Microsoft and others that have adopted these types of principles have also built teams responsible for ensuring that the development and deployment of AI within the organization corresponds to the principles they've circulated, so the documents are internal-facing to some degree; they're persuasive and binding, more or less, on the teams within those organizations. But they're also external-facing: they have a PR function, and so they're aimed at us, and perhaps at policymakers and others. That's the ethics-washing argument that's often made: that when private sector organizations adopt these principles, it is in part to make the argument that regulation is perhaps less necessary because the organizations are doing it themselves. So certainly with those documents, it depends on who you are, what your perspective is, and how skeptical you are, whether you think the primary audience is inside the private sector organization or whether you think the primary audience is the kind of PR or staving-off-policy function.
Jess, I think that's good, yeah, thank you. Ryan, do you want to respond to Nagla's question, please?

Yes. I think the question was about whether non-AI experts were consulted in the process, and actually I think it's a little bit the reverse: the title of the group, the AI group of experts, is a bit misleading, because I would actually say that most of the participants came from different kinds of policy or legal backgrounds. There were certainly some people among the fifty-some members of the group who had computer science and AI-specific expertise, but the majority actually brought very different perspectives to it, and I can talk more about that later if there are other questions relating to it.

Thank you, that's great. Ryan and Jess, there are a number of other questions coming up in the Q&A box, and if you would be so kind as to respond to those you feel you can answer right away, we'll return to the rest in a minute. But I wanted to open the floor, the virtual floor, to our respondents. Mutale, you're the CEO of AI for the People and a fellow at the Berkman Klein Center. You have done a lot of work focusing essentially on the unequal impact that these next-generation technologies have on people from different backgrounds, in different circumstances, and in different geographies, and I was wondering, as Ryan said, there are all these open questions: where do we take it from here, and how do you see your work fitting into this question of principles and practice so far?

Yeah, my first question is that the report was very rights-based in terms of how you're thinking about AI going forward. Much of my work looks at equity, which is a different type of lens, and given that we have a shorter time, and I could probably speak about this for seven years, I was wondering whether there was any conversation around equity-based approaches, and I'm thinking specifically of negatively racialized communities that want rights but also have this equity deficit. That was very, very fast.

Do you want to add two sentences about your work? And Jess, we will collect a few statements and then open it up.

I can qualify why equity was so important for me when I was doing this work. I was a practitioner in Congress, the US House of Representatives, and I spent a lot of education time really letting lawmakers in the US know that human rights frameworks often are not looking at the reality: if we ignore phenomena like anti-black racism and how that impacts the deployment of AI, we miss something. I'm thinking specifically, in this moment of corona, of the fact that I live in a city where the police are now thinking about using biometric technologies to figure out who's social distancing. Finding somebody who is white and rich is very different from finding somebody who is poor and black, but it's one principle. That's just a very concrete example of why thinking about equity in terms of impact is something I very much dedicate my work to, and I would love to know whether this was a consideration in the report.

Thank you so much, Mutale, and we'll return to that after hearing from a few others, but also thank you for your important work in this field; it's really great to learn from you.
And I was wondering whether Vivek, who joined us in the meantime, might weigh in. Vivek is a law professor at the University of Ottawa, where he leads the Canadian Internet Policy and Public Interest Clinic, and a former Berkmanite. You've done a lot of work from a human rights perspective, working with companies and thinking hard about the intersection of technology and human rights. Building on some of the questions that were so nicely framed by Mutale, I was wondering where you see the rights frameworks coming into play, but also the possible limitations of such a framework. And it's good to see you, by the way.

Yes, it's great to see you, Urs, even under these interesting circumstances, so I'm delighted we could be here virtually. I've worked on technology and human rights for a long time, and I see the value of human rights approaches; I've seen them be quite transformative inside companies with regard to, let's call it, the web 2.0 set of human rights problems. I think companies are grappling with what to do with this very wide set of technologies: algorithmic systems and AI, if you think about it, are a cluster of technologies, and a lot of the human rights impacts are particular to the use of a technology in a given use case. That presents a challenge for companies that are trying to take, let's say, their Global Network Initiative-era tools, something we've been very involved in, for assessing human rights risk, and apply them to this open-ended set of systems. Just to step back, though, I think there's a lot of value in human rights approaches, and I'm really glad to see the report that Jess and colleagues put out; it's incredibly helpful. The visualization, just in showing how different human rights conceptions are found in different principles, is incredibly useful as a descriptive measure to show what the landscape is. But to me, from a normative perspective, the value of human rights, even in these challenging times, is that they provide a baseline set of understandings, in law for one thing, that states have generally accepted and feel an obligation to respect, and there's a normative framework there, a common way to talk about problems even if we don't agree on the solutions. Now, the hard challenge, and I alluded to this before, and I think it's reflected in the title of this event today, is how we take those various articulations of principles, the high-level human rights principles and the more granular principles that the OECD, companies, and the documents in your report have set out, and provide practical guidance to different actors in the AI stack as to what their human rights responsibilities are, based on differential impacts in different use cases. I think that's a nut we haven't cracked yet, and it's a really difficult nut to crack because of the diversity of the tools and of the use cases. So it's a really difficult human rights challenge, and we're at the early stages of thinking about it, but all the work that's happening makes me quite hopeful, because we are thinking about it at a still relatively early stage of the development and implementation of these technologies. I'll leave it there.

That's very helpful, and also a good reminder of how much context matters in these discussions. It may be a great segue to Doaa, who's an SJD candidate here at the law school and has done a lot of work not only at the OECD but also studying the use of algorithms in the criminal justice system.
And I was wondering, since that is an early use case, whether it's AI properly speaking or an algorithm, where of course many of these questions around fairness, transparency, and bias are at play, and with Vivek's reminder that this is still a relatively early stage: how helpful are principles like the ones we've been discussing today in the work that's front and center in your research?

Thank you, Urs, and hi everyone, I'm happy to be here. It's funny, they say if you have lemons, make lemonade: in the normal world I probably wouldn't have been able to join, because I am not in Cambridge, but it's good to have the opportunity to be here. I want to divide my observations into two parts. In the first I will wear the hat of someone who was working at the OECD during part of the time when the principles were being considered and the expert group was meeting, and in the second, the academic hat of someone who is thinking about the regulation of AI.

So for the first part: as Ryan said, the OECD operates on the basis of consensus, and reaching consensus among so many members and other actors in the field is a hard job, so the principles are indeed broad. One of my favorite exercises is to give this list of principles to someone from the field of computer science and ask what they make of it, and usually the answer is: not much. But I want to emphasize that although the principles raise more questions than answers, perhaps that's a good thing, because we don't want to be too limiting in the approach, especially if so many countries are adopting these principles; we want to give room for countries, companies, and everyone else to adapt them to their needs. The principles have a very important declaratory purpose: they put the important topics up front, and the implementation into practice is something to be worked out later on. As some of you probably recall, the OECD has been very powerful in shaping regulation around privacy: starting in 1980, in the same way that the AI principles were adopted in a Council Recommendation, there was a Council Recommendation on privacy, and that has massively shaped privacy regulation around the world. So it can have an impact.

Now, as an academic thinking about the regulation of AI, I think the hard part is to figure out how we balance between all the principles, not just each one on its own. I try to look at several case studies, from criminal justice and from welfare, and not just to unpack each of the principles but to ask how to balance among all of them. Is there any hierarchy between the principles? Is there any difference among them? Where is all the discussion that we have in the legal world about checks and balances? Let's say, for example, that I'm doing very well on transparency in a certain case: do I still need to comply with the others? Similarly, I think what is lacking at this point, and I'm hoping it will develop with more and more case studies, is this conversation: not only what fairness means in each context, but how to look at all the guidelines, all the requirements, simultaneously, and what to make of that. I'll stop here.

That's super helpful; thanks for sharing your perspectives both from the inside and as a researcher. It's great to have you here.
We have a number of comments and questions, and before giving it back to Jess and Ryan for some concluding remarks, since we have only eight minutes left, I wanted to ask Padmashree to share the observations that you also put in the Q&A; it's nice to have your voice live here, and then Amy Johnson will be next. I hope I can unmute you. Padmashree, are you here?

Yes, yes, do you hear me?

Yes, very well, thank you.

Thank you, Urs, and thank you everybody, especially Ryan and Jessica, for presenting; I'm really happy to be part of this. The question I asked, and I think it's a relevant point, is whether we should reflect on the fact that the human rights framework is not contextualized in terms of new inequalities, considering that a lot of the human rights principles we are relying on today were created some seventy years ago, and there are new kinds of inequities and inequalities in society. Of course, you yourself have written a paper about digital rights and the different kinds of digital rights that have actually been proposed, and I think there's a need for a discussion on what kind of new thinking we need about contextualizing a human rights framework to AI and to the inequalities and inequities of the kind we see now, and about the people who are normally not part of this conversation: how do we take them on board?

Thank you, Padmashree. And Dennis Redeker, who's the lead author of the paper kindly mentioned, is also on this call; I'm so sorry we have so little time, but we will hopefully weave it all together. So Amy, do you want to share your question or your thoughts as well, and then I'll turn it over to Jess and Ryan?

I was wondering about the question around the "adversely affected". The scales of these systems and their effects are so large that it seems odd to me that the only form of challenge would come from the person who is directly harmed, so I'm curious whether other forms were considered, whether a bystander-intervention style or some other method of challenge: was that under consideration, and if not, why not?

Excellent, thank you. So, Jess and Ryan, no small task in five minutes: rights-based versus equity-based approaches, the question of hierarchy among principles, and ultimately also the question of interventions by bystanders. Over to you.

Well, it is a tall order in five minutes. Coming out of this report, which of course is a deep dive into the principles themselves, I'll mention a couple of principles that came up for me in this conversation. One: under fairness and non-discrimination we have a principle called inclusiveness in design, and it was really interesting to see how different principles documents interpret it. Some of them interpret that idea as basically saying we should build more diverse design teams, include more women and minorities on design teams. But there are a few documents, of which the IEEE Ethically Aligned Design is perhaps the primary one, that actually think about inclusiveness in design in more of an equity framework than a rights framework: it's not about the design teams, it's about designing technologies such that they allow for broader participation than the present state of things.
And while Mutale brought a race-based frame to it, which I think is incredibly important, the case that the IEEE document highlights is actually disability rights: how could AI technologies be designed to build a world that's more inclusive for people who are hearing- or vision-impaired? So that's one principle I wanted to bring up. From the perspective of Amy's question, which I think is a wonderful one, it highlights a shortcoming in current US law, at least speaking as a US lawyer: if you have a very small harm as a member of a large group whose members are harmed in small ways, you sometimes have standing problems in actually bringing a lawsuit to enforce those rights. The closest equivalent of a bystander-type enforcement that we observed is that quite a number of documents, I think almost half the documents in the data set, recommend some sort of external audit or evaluative function, and it's interesting to think about how that might take shape in various jurisdictions around the world, whether administratively or otherwise. So that's, I think, a space to watch and a group that could function in that kind of bystander role, along with civil servants.

Thank you, Jess. Ryan, your thoughts on some of these questions?

Thanks. I actually wanted to come back to something Doaa said, because I think it's a really important point to emphasize: when you look at, for instance, the OECD principles, they're really a stake in the ground; they're not an endpoint in themselves, but they hopefully provide direction to other nations and to organizations thinking about these kinds of questions, and I think that's really what a lot of these principles are. So there are two interesting things. One relates to Amy's question: when you read these principles, there are the specifics, like why isn't there a bystander right, but viewed two steps back, looking at the document as this sort of marker, it's really more a question of how we build effective accountability mechanisms. It provides some ideas on how to do that, but there are obviously many ways to implement it in practice, and things like bystander rights may be part of a more comprehensive approach. Related to that, my other observation is that one thing I'm interested in seeing is to what extent, as we move from these principles to practice, they become differentiated in different contexts. There may be some cases where the right to challenge, as articulated in the OECD principles, is good enough, but there may be other domains, other areas of application, where a very different approach is going to be necessary. So it will be interesting to watch going forward whether there are forks in the road and different approaches depending on context, and how these sets of principles are used, or not, in thinking about those differentiations.

Thank you, Ryan. I also want to acknowledge that we have a number of additional questions and inputs in the Q&A window, and I'm sorry that we're running out of time. I want to briefly acknowledge what Julie wrote there, which may be one of these contexts Ryan is talking about, in real time: what's happening now in the COVID-19 crisis. How will AI be deployed to combat this particular public health crisis?
How robust are these safeguards, or at least the norm statements, that Jess was presenting and Ryan was discussing? So I do think this is a first dramatic real-world test of what we have been discussing, a little bit in the abstract today, but it gets concrete very quickly. I'm sorry we're running out of time, but it was a wonderful hour with all of you. Thanks so much, Jess and Ryan, and Mutale, Doaa, Vivek, and the entire BKC team who made this possible. Thanks to all the participants for listening in, and please be in touch, stay safe, and be well. Thank you.