Hi, thanks. We'll just wait another minute for a few more people to join. Welcome to everyone who's watching this live on Hasgeek TV and on YouTube, wherever in the world you are, and to everybody who joined the Zoom webinar as well. I'm Divij, and as you might have noted, I'm going to be presenting some of my research on automated decision making by governments and public agencies in India, and generally trying to build a more critical conversation around AI, automated decision making, big data, all of these terms that you increasingly hear today, and particularly the very concerning and consequential contexts in which they're being used. I think it's high time that this conversation really builds up in India, and that there's more participation in the kinds of policies, laws and technological systems that are being built around it. So just give us a minute, and I will start with a presentation of a project that I will be launching today, called the AI Observatory, which has been built by me and the wonderful team at Design Beku, who are also on this call. Okay, I'm going to start now. I'll share my screen, and I'll also share the website link with everybody who's here, so you can all log on to it; it will be redirected to a non-GitHub site soon. For now, I'll take you through how to navigate it, what it really contains, and why I've done this project in the first place. So, the project is called the AI Observatory. I'm always very hesitant to use the term AI, because it gets very confusing, it's used in conflicting ways, and it's not really a term of art. I prefer the term automated decision making systems, because it centers what AI does and the kinds of consequential uses that it's put to.
It also centers them as socio-technical systems, embedded in very particular social, institutional and normative contexts, which is why throughout this I don't use the term AI except where it's used in policy documents or in laws. I prefer the term automated decision making also because it conveys a range of sophistication of automated systems and technologies, right from simple software employing certain kinds of logic-based tools to more modern machine learning systems. They all pose similar concerns, within a particular spectrum of sophistication, of the kinds of affordances they provide and the kinds of harms they can potentially be used for, but they largely fall within the limits of algorithmic systems or automated decision making. What you're seeing here is the homepage for the AI Observatory, which very briefly explains the imperative: why I conducted this research, and why we built this particular toolkit. The biggest reason, when I started doing this in my role as a Mozilla fellow, was the disjunction between the grand, hyperbolic claims that we were seeing in policy documents from the Government of India, as well as the claims that technology companies and others embedded within these institutions were making, about how AI is going to solve these gigantic, long-standing social problems simply by being efficient, by being non-human.
The idea was to push back against this notion that artificial intelligence is a technical solution to very complex social problems, and against the notion that it's not a human system, that it's something that comes from nowhere and is able to solve problems by being completely neutral and efficient, in a manner which has no interaction with the social or other contexts within which it's actually embedded or working. Another reason was to serve as a counter-mapping exercise. While we're being pushed this narrative of AI hype and hyperbole, we're seeing that policy is also increasingly disjunct from participation. Particularly in India, the ability for people to just see how these systems are functioning, or to participate in them, has gradually shrunk. These decisions are made within very narrow spaces of tech policy in particular, which are almost overrun by the interests of large technology corporations, or the interests of governments who may want to use these systems for surveillance or for other means. A final reason is that the project is also meant to show that this is a very current and real problem in India. As you'll see if you go to the database later, a lot of these systems are developed by large technology companies in the US, the UK or the EU. And a lot of the conversation around AI ethics, or the accountability of algorithmic systems, happens in those contexts, whether in standard-setting institutions or in the conversation that reaches these large technology companies.
It always focuses on what you could call the global north, but the same conversations are limited in the global south, in countries like India, and also aren't really reaching the institutions and policies where these technologies are made. So this is an effort to indicate to these consequential spaces that the systems you're building have consequences for people in jurisdictions that you're not looking at. It's an effort to push against that narrative and to try to reclaim some of that space as well. Just one example: Microsoft and Amazon stopped selling facial recognition systems to police in the USA after the Black Lives Matter protests erupted a few months ago, but the same companies continue to sell these technologies in India. That disjunction is very apparent, and it's quite concerning, so this is an effort to bring this lens into the conversation at a global stage as well. The project is structured in four parts, very broadly, and you can look at that if you go right below. These little pointers over here are actually expandable modals: if you just want to see, for example, a very short note on the method that I've tried to follow and the definitions I've used, you can click there. The first big part of this is the mapping exercise. I've tried to understand, through desk research and through interviews with bureaucrats, journalists and people who are using these systems, what kinds of systems are really in place, and what public documentation exists about them, including, for example, responses to RTIs, procurement documents, or presentations that the government has been making at other levels, including, say, to the World Bank or the WTO, about these systems.
So the first part is a database, which is a mapping effort. The second part is an effort to understand the legal, institutional and technical architecture of these systems: who are the actors and institutions, and what is the context within which these systems are operating. What are the databases and data modeling technologies that they use, which I think is incredibly important, not just the kinds of data being used but also how that data and those information systems are being built to serve as backbones for decision making. There's also a note on another technical component, which is the algorithmic systems, the mathematical and statistical models that are used to power these systems and to basically turn particular data inputs into actionable outputs. A second part then goes on to talk about the various harms, real, perceived and potential, of using these systems. This draws from a variety of literature, particularly in the field of fairness, accountability, transparency and ethics in machine learning and algorithmic systems, and specifically from case studies relevant to India. So all of the databasing effort also helped me map out how these concerns of transparency and accountability, of surveillance and profiling, of dispossession of rights and entitlements, and of discrimination are playing out within the systems being used in India. I'll quickly get to how that is structured within the website itself. A final part, which is unfortunately yet to be uploaded, talks about principles for design and also tools for taking action against these systems.
The first part of that talks about how constitutional principles are very relevant, not just as judicial standards to review these systems once they might reach court, but also to aid in the design and to ensure the appropriate regulation of these systems once you start building laws and policies around them. Another part is to ensure that there's greater participation in how these systems are seen, because this can't simply be an effort of a few people embedded within particular tech policy institutions, or even public interest institutions. So I've tried to engage a little with everybody who's trying to use these systems, or trying to use this website, on how they can, for example, use the Right to Information Act to get relevant public documentation and information about these systems, or how they can use consumer protection law to perhaps file a suit against a particular consequential system being used against them. There are also some resources for strategic litigation, which I think we'll come to in the second panel today. I'll quickly take you through the database as well. There are multiple ways to navigate this. One is, if you're concerned about a particular area in which automated decision making is being used, you can simply go here; I've mapped out a few of the areas in which it's being used. So if you want to see how ADM is being used for welfare, you can see all of these various systems. If you go, for example, to Samagra Vedika, you can see the purpose for which it's being used, where it's being employed, what year it was deployed, the relevant public documentation, whatever is available, and some of the news reports of what else has been written about it. In this way, it's meant to be something that's useful to click through.
So if you're exploring this for the first time and you just want a general outlay of what's being done, or if you're interested in looking at more specific systems, you can filter it by purpose, by year, or by the jurisdiction in which it may be used. If, for example, you want to see systems that are being used across India (I mean, they may not be used everywhere across India, but they're developed to be used across states, or developed by the Government of India to be used at the central level), you can see all of those systems here as well. Some of the referencing is missing from the website currently; once we add those links, you can of course see the same text in these documents, but if you want to download the PDFs and see the references for yourself, you can do that on the homepage itself. So this is how it's essentially structured. The first part talks about the laws, policies, actors and institutions, and really the politics of AI policy in India. What's particularly concerning from the data that I've gathered and the research that I've done is how decision making is essentially being delegated to a set of private actors, or to a set of actors who don't really have responsibility towards the public. Most ADM use within government is seen as a procurement problem: a problem is identified by public agencies and then a tool is identified to solve that problem. There's very little negotiation between the two. The tool is developed mostly by private companies, or by private companies in collaboration with a government agency, but it's ultimately used by a third-party public agency with very little knowledge about what went into that tool, and very little in terms of ways to actually go about interrogating it or seeing what problems might arise.
If you look at the regulatory landscape for how these systems are used, even among the systems that I've documented, almost none of them really have any kind of regulatory or governance framework, from how they are procured, to how they should be audited, to any kind of impact assessment for public agencies to understand how the systems could potentially go wrong, or what kinds of data or algorithms should be used within them. It's simply seen as a very basic problem-solving approach through a technology, and I think that's very, very concerning. There are a few context-specific regulations, for example in the Aadhaar Act, but as I've also outlined here, those end up being completely insufficient for the kinds of problems that we see coming up through automated decision making. There's also an astonishing variety of data and databases being used by public agencies in India: everything from your electricity bill to your vehicle registration is feeding into different kinds of algorithmic systems and different kinds of database models. The idea of making relational databases, of making linkability between databases, is catching on very fast. Which is why I think it's relevant to understand both the kinds of data going into this, the legacy data or the categories of, say, criminal data that we've tried to outline here, and the database modeling technologies going into it, the technologies that allow for greater retrieval of information across contexts. This chapter is an effort to understand what it means when data is retrieved across contexts.
When you shift the context of particular information from one setting to another, and then try to merge it and get analytics or predictive power out of particular data points, why is that concerning, and how is that happening? The next chapter is about the other technical component, which is the algorithms used within these systems. One case study, for example, looks at the effort by the Ministry of IT a couple of years ago to build automated censorship within online platforms. The idea was that AI could remove unlawful speech, which was a really ridiculous idea, and it again indicates the kind of technical solutionism that always comes up in these conversations about AI: that AI can somehow magically understand what is unlawful and what is lawful, and that it doesn't really need to be relevant to our ideas about prior censorship or prior restraint, or even to how speech works within context. One of my favorite quotes, to again understand how public agencies have been thinking about this, is from a bureaucrat responsible for the Samagra Vedika system in Telangana. He was asked about difficulties and biases which might emerge in machine learning algorithms, and his response was simply to say that the more data you feed into machine learning, the better it gets, which is a fairly ridiculous response in my opinion. It really goes to show how little the people in charge of these systems are aware of the systems and their potential consequences. A second section, which I'll very quickly run through in two minutes, is about the problems in the transparency and accountability of these systems. Of course, algorithms have always been referred to as black box systems.
But what does it really mean to be a black box? Where does that opacity arise from? I've done a kind of review of where opacity arises, both in technical contexts, for example the difficulty of actually knowing the kind of data that goes in, or of parsing the algorithmic, statistical or mathematical methods that are used, but also in institutional contexts. The failure of the Right to Information Act, for example, has been a big revelation in this research: the more you try to get information out of these systems, the more you realize that this network of private development of the systems, along with the concerns of, say, national security or privacy that are thrown up when you try to get information, or trade secrets, which are another big blockage to transparency, is really leading us into an area where we can't question these systems because we don't know enough about them. Similarly, these systems also obscure areas of accountability and responsibility, something that we saw in the Aadhaar case, and I'd urge you to read some of these cases to really understand how accountability and transparency issues are thrown up in the Indian context. Among the major harms I've outlined here, the major blocks of harmful consequences, is surveillance and policing: policing as an institution has really taken up the use of automated decision making very strongly, which is a very consequential use case right there. There's also a growing use of these systems in welfare contexts, which started of course with Aadhaar, but is now moving far beyond Aadhaar into the use of machine learning, and into different kinds of state-level identification systems which are also being used for automated decision making, to identify who's a relevant beneficiary and who's potentially committing welfare fraud.
And we've seen, from our experience with Aadhaar, but also in other jurisdictions like the Netherlands or the UK, how these systems tend to fail, and how they put the people who are most marginalized, or most dependent on state welfare, at risk. I think that's really something concerning that we need to watch out for. So that's, very broadly, the content on the website and the project. I hope everybody who's attending and everybody who's watching has the opportunity to read through it and provide some critical feedback. This is meant to be, of course, a participatory effort; it's meant to be a starting point for conversations about this in India. So if you have experience interacting with these systems, or if there's something that you think is missing from the database, or missing from my analysis of these systems, then please feel free to reach out to me; I've given my contact information. The last thing I want to note is that this is, of course, the effort of an entire community: the wonderful people I have spoken to, the lawyers, activists and journalists I have interacted with, and particularly, of course, Design Beku and the team there, Padmini, Paul and Pratush in particular, who have been working on this with me quite closely for the last three or four months. They've done a wonderful job with all of the typesetting, all the design work that you see, and the database system that's been designed.
Amazingly, the illustrations that you'll see, the little characters all across the project and across the reports as well, have been designed by Paul, and there's a very specific idea behind both the way things have been designed and the way the narrative has been created through the illustrations, which you can also see on the home page, where they've briefly explained the idea behind the color scheme, the idea behind the illustrations, and why it's important. So I'll stop with that, and I think we'll move on to the first panel for the day, which builds on this theme to explore how automated decision making is really working in practice in India, and why that should concern us all. The first panelists for today are Arindrajit Basu from CIS (hi, Arindrajit), along with Vidushi Marda, who's at Article 19, and Padmini Ray Murray, who I just introduced and who leads the team at Design Beku. I hope you enjoyed clicking through that as much as you could; I know it was a bit of a short presentation. Building on some of the points that this research threw up, I want to start with Arindrajit, to talk about what the regulatory and policy landscape of automated decision making in India is. Why aren't we seeing a more concerted effort around how these systems are being used by public agencies, and what is the policy conversation that's been happening around these systems in different sectors, from, say, defense to agriculture? Just quick thoughts on that.

Yeah, thanks. First of all, congratulations on this observatory. I think it's a really useful tool for anyone working on AI in India, and I hope that people add to it and it really becomes a very useful repository. On the regulatory ecosystem in India, I think the first word that comes to mind is enthusiasm.
And I think you referred to that a little bit, in the sense that there is a lot of enthusiasm among the bureaucratic establishment to create new forms of regulation for AI, because AI, whatever the framing is, is an entirely new conception according to them, which offers new possibilities. And therefore we have seen a slew of policies come out from across government entities and departments; often they speak to each other in a very uncoordinated fashion. And the idea, you mentioned one of the key ideas very clearly, is that access to data, and data as an economic asset for the development of these systems across sectors, is something that's beneficial for society, and therefore we need to harness this data in a manner that asserts India's sovereignty. So when you look at any digital policy ecosystem in India, and I'm including artificial intelligence, the phrase that comes to mind is the embellishment of Indian sovereignty. And I think the issue there is that not enough attention is given to the core beneficiaries of this idea of sovereignty, which should be the user. So when we speak about the regulatory landscape, it's not that there is a lack of regulatory ambition. From the NITI Aayog itself, there have been a number of policies, and some of them do ask some of the right questions. But what we need, and some of that work is definitely happening by people on this panel, is to actually study the empirical realities in India, the kinds of challenges that are posed not by algorithms themselves but by the socio-economic context in which these algorithms arise, and then to see how we can create a regulatory ecosystem that works for everyone. Just to build on that, I think something that you are doing in your project which I haven't seen spoken about enough in any policy document is the Constitution.
The phrase 'ethics' has been thrown around in policy documents, as have 'data for public good', 'data for economic benefit' and 'data marketplace', but I have not seen specific provisions of the Constitution, or the constitutional ethos, used in any document thus far, at least not in a way that applies those principles. And I'd just like to end with this, which is why I think the observatory is such a powerful tool: the challenges that come with algorithms, or the ones that we associate with algorithms, actually existed way before, or at least some version of them existed before, AI became a thing. And because these challenges existed, there was some thinking that people before us had put in towards combating them. To that extent, the Constitution and its related developments, throughout the seven decades that it's been a living document, have actually solved some of these challenges to a certain extent. I think we are forgetting that when we are trying to create this ecosystem, not using it as the tool that actually knits regulation together, and instead coming up with new ideas such as data marketplaces or an 'AI garage' for emerging economies (there are so many regulatory ideas, I don't think I'll be able to cover all of them in five minutes). Rather than trying to come up with new, innovative regulatory techniques, it's far easier to actually cement them in the existing Constitution and then build a regulatory ecosystem around that. I think that is really what is missing from the Indian regulatory scheme, and I hope that in the next round I'll be able to discuss some more critical ideas on how we can regulate better. Thanks.
Yeah, and you mentioned in particular how different the vision articulated in policy documents is from the practice that's actually happening on the ground. With that, I wanted to move to Vidushi, because you've done so much ethnographic work on how these systems are being implemented, and on this stark difference between the hyperbole and the reality. If you could speak about your work, and what it really implies for regulation or policy going ahead on ADMS in India.

Yeah, thanks, Divij. I found myself nodding vigorously to everything you said. The first thing I want to say when we talk about automated decision making systems on the ground is that there's a huge presumption of usefulness before it's demonstrated. There's a huge presumption that having predictive policing in Delhi will reduce crimes against women. There's a huge presumption that having smart sanitation in Pune will make the smart city a better place for everyone to live in. And the way in which policy has been positioned, and the way in which these systems have been positioned, especially over the last four or five years, is that it's increasingly difficult to question the premise. It's almost like you're being annoying when you question the premise. So there's an assumption of legality, an assumption of usefulness, an assumption of efficiency, which I find really problematic but also really important to study. And it can feel almost anticlimactic for that to be the end finding of a huge project, but I think it's an important point of evidence to start building on.
The second thing is that I often think of the use of ADM systems, as you call them, which I like, as a PR management tactic. I admit that could be overly cynical of me, but I often find that these systems are used when public agencies want to seem like they're doing something. What we found in Delhi was that following the horrific rape in December of 2012, there was a need to show that work was being done, a need to show that the police was taking this seriously, and so they came up with CMAPS and all sorts of tracking and hotspot mapping initiatives after that. So I think the public management aspect of things is far bigger than we give credit for in our policy documents, in our legal analysis and things like that. One of the ways in which I saw this really manifest was when I did joint work with Malavika Prasad studying smart sanitation in Pune. As we know, Pune is a champion smart city; it's thought of as one of the poster children for why we should aspire to smart cities everywhere. When we started studying smart sanitation, the first thing we realized was that the reality was far more sobering than the documents that we were given: it wasn't AI or machine learning being used to help with something, it was mostly just sensors being put in public bathrooms. I think the biggest learning for me from that entire process was that the use of ADM systems by public agencies means that public service delivery, such as sanitation, quickly goes from being a function of public agencies to becoming a business use case.
When we tried to get information about where these smart sanitation systems get embedded, and what the processes are by which that gets figured out, we were hoping to find something like: each municipality, or each section of society, or whatever. What we found instead was that the driving force of having ADM systems was whether it made business sense or not. So you are looking at fundamental rights that are now suddenly being dictated, allowed or disallowed just by virtue of a business's bottom line, which I think is one of the really worrying aspects of having these kinds of systems introduced and pushed into public life without the legal and technical deliberation that they deserve. So I'll stop there, and I'm happy to talk more in the next round.

Thanks, Vidushi. That's really something that comes out in a lot of this research and a lot of studies about this: who do these norms of efficiency work for? When the outcome of all of these processes is cost cutting, or saving on welfare costs, or saving on government administration, who does it end up working for? I wanted to expand that question and come to you, Padmini, to talk about the normative assumptions of both the technical systems and the different contexts in which they're embedded, like smart cities, and the assumptions about efficiency and safety. Who do they end up working for, what kinds of biases and assumptions do they reproduce, and where can this turn harmful for the people for whom the city, for whom the government, is supposed to work? Yeah, thanks for that.
I think I'm going to echo some of what both Vidushi and Arindrajit have said. First, my interest in this comes not only from the website, which is wonderful and which we've had the privilege of working on, but also from a project I've been working on with a professor at University College London called Gendering the Smart City, which has been going for around two years now. The whole intention, and the reason why we felt we really needed to do this project, is to foreground the lack of attention that is paid to gender in any kind of policy around smart cities. Which is an astounding omission, because obviously cities are experienced by different demographics in different ways, and being marginal, or being of a different gender, fundamentally determines your experience of the city. So, as Vidushi very correctly said, a lot of these moves are visibly taken by the government, and many of these contracts are public-private relationships that allow these technologies to enter the public domain. It's very much a top-down way of thinking about what the citizen needs or what the city needs. There's a piece of work that I always come back to when I'm talking about this, written in 2014, a study in Brazil, South Africa and Peru, looking at how knowledge management works when it comes to informing policies around smart cities. One of the most important things that document revealed was that when we have conversations about mobility and fastness and these qualities that determine the smart city, the whole focus is on economic development, obviously, and on how to provide facilities specifically for a very middle-class lifestyle.
So these conversations, in both the public and the private sector, purport to be participatory — and we hear that a lot. It's interesting for those of us who work in the design space, because there's obviously been a wholesale embracing of design thinking, which purports to hold values such as empathy and participatory processes, and which is then used, in a sense, to whitewash the fact that these participatory processes are often just box-ticking exercises, because the only participants in those conversations are the middle class and people of privilege, who are educated enough to actually take part. So there's very little proactive engagement with the communities who are much more likely to be affected — say, marginalized communities for whom the city is a very different place. And because of this lack of engagement, it's no surprise that the systems that have been built valorize the particular values embedded at the heart of the smart city, which, as we all know, only work to the advantage of a certain segment of society and not the rest. I can talk a little more about this as we go on.

Yeah, and it's wonderful to link how the social and institutional contexts connect to a lot of the technical debates that have been occurring in this space as well, right? The idea that, say, even the fingerprint matching system underperformed on particular demographics, or, as we're seeing with machine learning's growth in biometric identification, how that doesn't work for particular ethnicities — how, by particular metrics or measurements of performance, it underperforms for particular ethnicities.
And yeah, it's really fascinating to see this convergence of conversations across all of these spaces. So we're going to move on to the second part of this panel, to think about ideas for where we can go from here. I'm always a little hesitant to say that we must have a solution as soon as we identify these problems, but I think it is necessary to start thinking about what lens we can apply once we've identified the problem, and what kinds of goals we want to move towards. So I thought for this next part we could talk about the principles we should think about in terms of regulation and design of these systems. And I'd like to start with you, Vidushi, again. You've done work on both constitutional law approaches and international human rights approaches to automated decision making and AI. Could you talk about how principles of human rights and constitutional values can inform these systems, and how that would play out in practice when they inform their regulation and design?

Yeah, thanks. I'm happy to take a stab at that. The first thing I want to say, right at the outset, is that there's often talk about what kind of new rights we need — say, a right against automated decision making, which the GDPR provides to some extent. But bringing that conversation back to the extent to which current laws and current regulations can apply is something that I find hard to do on the ground, especially when you're talking to policymakers, and especially when you're talking to businesses who are incentivized to roll this out under the presumption of a legal vacuum, right?
So the really important step is to think about how first principles of constitutional law and regulation — whether it's consumer protection or competition law — can actually apply. As you said, I've done work on human rights, thinking about how international human rights standards, but also human rights standards in the national context, can inform the design and deployment of automated decision making systems. And I often give this analogy of a room. I think that human rights should be the absolute minimum requirement below which no ADM system should go. That's not to say human rights are a checklist — that if you tick off the checklist you're good to go — because that would defeat the whole purpose of trying to ensure that the societal impact of these systems is accounted for. So it's not a checklist; it's just the minimum requirement, the floor, of this regulatory room that I'm trying to build in four minutes. If we think of that as the floor, then ethics would be the ceiling. And this kind of preempts some of those conversations around ethics versus rights and ethics versus regulation, because they both have their place, and they shouldn't be conflated with each other — the minute they are, you actually defeat the purpose of both. So if you have human rights as the floor and ethics as the ceiling, ethics tells us how far we should go, what we should aim towards, what a legitimate aim is and what isn't — things like that can be dictated by the ceiling. And then I think of the walls — the way in which you reach the ceiling — as being the technical ethical standards of fairness, accountability and transparency. Because if we work on, say, fairness, with an assumption that human rights standards would dictate A, B and C...
This is a little overly simplistic, but bear with me for a little while. If the ethical standards we're aspiring to are the ceiling, then the technical formulations of ethics — whether it's fairness, accountability or transparency — can actually give us ways in which to reach both. That's just in terms of frameworks and how we think of them working together; it's much more complicated than that — it's not a matter of choice, it's actually a matter of figuring out how they work together. After I finish talking I'll drop a link to a publication that talks specifically about this and about how we think of the limitations of each of these frameworks.

The other thing — and I'll stop after this — is that data protection gets a lot of airtime when we talk about ADM systems, as it should, because it is a fundamental piece of this regulatory puzzle. But what is often overlooked is other frameworks like consumer protection or competition law — even intellectual property might help, though I haven't really studied that. Consumer protection and competition law are very much also useful. When I was doing some work on China, on trying to get companies to behave in a certain way — to, you know, respect user privacy — I found that Alibaba, in January of 2018, had this massive leak. And of course the knee-jerk reaction is to say this is bad for privacy, this is horrible for individual privacy and autonomy, and we can go down the data protection route, which is fine. I just don't know if it would have worked with a Chinese company that doesn't necessarily think about rights and data in the same way that a human rights organization would.
So what consumer protection groups in China actually did was pursue a case against Alibaba — not to say anything about privacy, but to talk about consumer trust. To say: you violated consumer trust, and as a consumer I don't have reason to trust you anymore; so fix the problem, pay your fines, and then we can move on. And that actually worked. So I think some amount of creativity, and reading the room so to speak, is really important when we think about regulation, because it can't just be confined to the main frameworks we've already talked about.

Yeah, absolutely. And I think you raised a really important point about the mix of first principles and different regulatory approaches — and I guess we'll discuss this in the second panel more — the fact that people are increasingly waking up to the fact that you can start applying existing legal principles, existing constitutional principles, to challenge some of the supposed legal vacuums within which these systems are implemented. Then you realize that there are existing principles and standards to which they have to be held to account, even if there isn't a clear framework of regulatory governance, even if there isn't a specific law for AI. On that note, I wanted to turn to Arindrajit and ask him: you've done some research on how policy should incorporate a range of different regulatory responses. I wanted you to expand on that a little, and speak specifically from the standpoint of regulatory governance. What does good regulation look like? Do we need an AI law — will it achieve anything — or should we leave it to sectoral efforts? How do we go about bridging that gap?

Thanks, Divij. A lot of what I was going to say has already been covered by Vidushi, exceptionally well as always.
So thanks to her for that. But the first point I wanted to make — the point I was making earlier — is that when we regulate, it's important not to get caught up in a narrative, and right now the narrative seems to be enthusiasm about asserting the country's economic sovereignty. It is important, therefore, when we take a step back and think about regulatory governance, to poke holes in that narrative and try to improve the way it is being shaped. So before we even come to the brass tacks of regulation, it's important to understand that the narrative is going in one direction: that AI is good for asserting India's economic identity. I'm not saying the entire narrative is incorrect or inappropriate, but it needs to be challenged at multiple levels, and that is where the approaches to regulatory governance become important. First is the constitutional schema, which Vidushi already covered, but let me go into it a little bit. I think the Constitution itself should be the starting point for any form of regulation, across countries; of course, because I'm familiar with India, I'm going to talk about the Indian Constitution. There are basically two broad aspects to it. One is the protection of certain core fundamental rights that are intrinsic to human existence: the right to free speech, which includes a number of associated rights — one might say the right to privacy is included within this in some respects; the right to equality and non-discrimination; and of course the overall right to life, and a dignified life at that.
So when we think about regulation we need to think about this aspect, but also about another very important aspect of the Indian Constitution, which is that it is designed to enable the state to remove, or at least mitigate, structural inequality — historical structural inequalities. When we look at whether AI is being regulated in a manner that enables the state to augment these vulnerabilities or to mitigate them, we have to understand that the Constitution actually allows us to lay down the first principles. Whenever we undertake regulation, it's not that, say, NITI Aayog is coming up with the regulation, or that Google is setting the principle when it tries to self-regulate. When they say we believe in an AI that does no evil, or an AI that protects privacy, or an AI that does not discriminate, it's not Google or NITI Aayog setting the standard for what discrimination is — that standard has been set through decades of constitutional jurisprudence, and that is what regulation is missing. Particularly when we are talking about the public sector: the Indian Constitution clearly says that whenever an action is taken as part of a public function, it has a direct implication for fundamental rights, even if it is not being undertaken by the government directly. The government is working with a variety of private actors, but that shouldn't change the fact that the accountability, the redress, and the extent to which AI needs to be held responsible is the constitutional standard — it is held to those two broad facets of the Constitution that I mentioned earlier. That's the first point. Second, now we get to the regulatory brass tacks. When do you regulate AI? How do you regulate AI? First: can you have a one-size-fits-all law that regulates AI before the solutions come in, without considering the work that Vidushi has been doing, without considering how it actually works on the ground?
I don't think so. I think it's impossible for any regulator, given the knowledge they have right now — and of course, you mentioned that in India the knowledge is perhaps less than what it should be in certain cases. It's very difficult to say we are going to devise a law that regulates AI before the solutions come into play. The second possibility is that we adopt responsive regulation: we see how it goes, we see the risks posed by a certain endeavor, and based on that we take a graduated approach, where we basically escalate up a pyramid of regulatory options, starting with the most lenient and moving to the most severe. That might work for several emerging regimes — say, tax compliance, where you can judge how effectively individuals are complying; the risk is obviously there if individuals don't comply, but you can escalate up the pyramid. But with AI — and this is the argument that I make — the risk is too severe for us to wait for a system to be rolled out, to wait for particularly vulnerable communities to be impacted and for existing societal fissures to be augmented, and only then decide that we will regulate AI in a certain fashion. So we are stuck between a rock and a hard place: can't regulate before, can't regulate after.
So the solution that my colleagues Amber and Elonnai and I tried to argue for in a paper is what the author of this regulatory strategy — I think it was Gunningham, from ANU — termed 'smart regulation'. Perhaps that's a bit self-congratulatory in that sense, but it is a strategy that mixes a variety of regulatory tools and approaches to regulate AI, taking the socio-economic context as the starting point. You take the socio-economic context, you derive a certain set of principles based on the constitutional ethos, and then you decide how exactly AI should be regulated in that context. Just to end with this: there have been several surveys of the myriad AI ethics or AI governance instruments that exist, and all of them seem to articulate five broad principles, which are in line with the principles articulated by the Constitution if you read it holistically. First is the protection of human agency; second, equality, dignity and non-discrimination; third, safety, security and human impact; fourth, accountability, oversight and redress; and fifth, of course, privacy and data protection. If you take these five principles together, within each of them there are certain questions that a regulator needs to ask. I don't want to go into all the questions for all of them, but just to give an example.
Take human agency. One of the questions regulators should be asking before devising the appropriate regulatory strategy is: what is the power asymmetry between the entity that is devising and implementing the solution and the entity that is impacted by it? Now, you've probably covered a variety of these solutions in your observatory, but let's take two different cases: one is predictive policing, the second is agriculture. In the first case, it is a police person in the police force who is devising a solution, working with some private sector actor, and implementing it in a manner that impacts people and imposes standards of criminality on them in various parts of the country. There is clearly a huge power asymmetry between the end user of the product, which is the police person, and the parties impacted by it. Whereas if you look at AI in agriculture, one might argue that the regulatory intervention needs to be different, because in several cases farmers are directly getting the information and then using it themselves. Obviously there are power asymmetries there too, but not the kind that exist between a police person and a person being accused of criminality, vis-a-vis a farmer who is using it largely to make autonomous decisions — and that's where human agency becomes a central aspect. I'm not saying the second solution doesn't need to be regulated, but the regulatory strategy needs to be very different, and I think that needs to be at the centre of any form of regulation. So it starts with the principles; you look at the socio-economic context in which it is being rolled out; you study what's actually happening in both sectors; and then, based on that, you create the appropriate regulation for that sector.
That, I think, is the broad regulatory model. Of course it's not complete — it will only be complete once we have enough empirical and pilot studies — but that's basically the idea we argued for in that paper, and I think it's a useful way to take this forward.

And you brought up an interesting point about how there's often something inherent in the design of these systems that takes away from the ideals or principles of human agency or human dignity. The example you gave about criminality — the idea that you have something far removed from your actual, real interactions, something that's abstracting your data to make an assessment about whether you're a criminal or not, and feeding that into a very sensitive context — is really important, and absolutely should be kept in mind while regulating. With that, I wanted to turn finally to Padmini, to talk about how you've written about and worked on ideas of design justice: how systems can be designed to improve human agency and facilitate values of human dignity. How do you think these ideas can be roped in, or spoken about, in the context of ADM use, particularly in the contexts we're discussing — public agencies, or consequential uses in large-scale, pervasive information systems?

Yeah. So if I can pick up from what Arindrajit was saying, both earlier on and in his recent response: what is really important is making technology legible to citizens. This quality of non-discrimination is a value one should aspire to, but I think it immediately fails, because most of these technologies are not legible to citizens. For example, in some work that I've done on the Personal Data Protection Bill —
We realized that a lot of the provisions made in the bill could only be availed of by users who were fairly sophisticated in their understanding of, say, something like consent. So even though the bill claims that you can find out through a consent manager how to revoke consent to data you may have shared, that is not an easily accessible, well-designed experience. And nowhere does any document around the smart city, or smart delivery of services — in India at least — have much conversation about the design of the interfaces, about the experiences that would make them legible to everybody who's using them and not only a select few. So even at just the level of the interface, we're failing users. And I think that really needs to change in order for these experiences to be much more just in the way that they're delivered. Another area in which I feel there's a profound lacuna is the idea of embodiment itself — embodied experience as part of this conversation. When we talk about design justice, this is fairly prominent as a principle: speaking to communities about their own lived experience and lived realities, and allowing that to shape the design of technologies, is much more likely to guarantee a higher quality of life for the people who have to use those technologies. One way we tried to explore — almost guerrilla-style — how this could be done: as we all know, a lot of the formulation around the smart city is through maps, and maps are obviously the kind of panoptic vision that dominates a lot of our platform economy.
So we did a small exercise with a group of young women we were working with, using Wikipedia to map their own experiences and their neighborhood, just to see what it means to map a neighborhood from the bottom up rather than from above. And obviously there's a deep contrast in how one experiences one's neighborhood, one's interpersonal relationships with various communities, when it comes to embodied experience — and none of that is factored into the design of these services. So I think that should be the focus when it comes to design. As I said earlier, there's this championing of design thinking which has definite limits, because when you say you empathize with a community, I actually feel it's quite insulting to those communities — there is no way many of us can even begin to imagine the conditions in which many people in this country have to live and survive. So it's a question of taking into account that we need to be drawing on people's own experiences and allowing that to inform the way we design these things — and therefore ensuring legibility, rather than imagining what might be legible to the user.

Thanks, Padmini. Yeah, you're absolutely right, and I think it's essential that we get these ideas out to policy makers and technology designers.
To get them to start critically interrogating these things — we really shouldn't have senior bureaucrats or senior technologists who say things like 'you can recognize criminality by looking at a person's face', or the smart water system designed for Bangalore, which looks only at the formal water network within city infrastructure, ignores the large body of informal infrastructure, and so designs only for a particular group of people. Your points are absolutely well taken. I just wanted to ask the panelists if it's okay that we take about five minutes for a couple of questions from the attendees — you can give me a thumbs up if that's fine. Awesome. So if anyone has a question, raise your hand and I'll allow you to talk and ask it, directed at a particular panelist. We have about five minutes for questions, so maybe we'll take two or three at most. And if you'd prefer, you can also drop it in the chat. Yeah, Maya has a question. Maya, I'm going to allow your speaker access. You can speak now.

Thank you so much, Divij. And congratulations on putting this database together — it looks amazing — and to the Design Beku team as well, for making such a wonderful-looking website. I'm really excited to use this as a resource and share it with people, and that's really what my question is about: what is the afterlife of this project? Who is it open to, and how will people be able to continue to add content to it or refer to it? If you could talk a little about how this is imagined to go out into the world beyond this moment. Thanks.

Thanks. So the idea is very much for this to be the starting point for a larger engagement.
Particularly, I thought this would be a point for people to look at what systems are in use — and also for people who have been affected by these systems, or have seen them being used in different contexts, to contribute and do, like I said, almost a counter-mapping of these systems: to push back against the hyperbolic AI claims and to look at what these systems really mean on the ground and how people are being affected by them. That's one part — the interaction with the database and the case studies here. The other part, which is yet to go live, is about promoting participatory responses to these systems. One thing I've noticed is that there's an intense proliferation of these systems — they're basically everywhere: in each of the 100 smart cities you have their smart solutions; every state police force, every district police, has its own system going on. I think there needs to be a collective effort at extracting information out of these systems and also challenging them, which is why the idea is to look at what kinds of participatory tools exist. Three of these, I think, will be coming up very soon. The first is about using the Right to Information Act — looking at questions that can specifically be asked of these systems, keeping in mind the potential and real barriers you face within the RTI Act and within these legal constraints. So if they throw up the claim that there's a trade secret involved, how do you surpass that barrier and ask questions in a more tactical way? That's a tactical tool for understanding how to draft RTIs and get that information out. The second, as Vidushi said, is about consumer protection.
There's a fairly robust consumer protection infrastructure in India, in terms of quasi-judicial and judicial bodies, and also some very interesting law and principles which can be used, again, to identify sources of harm and challenge them, particularly through administrative and judicial processes. And the third is mostly for lawyers, and for people working within advocacy institutions, to understand how people around the world and in other jurisdictions have tried to fight these systems through tactical and strategic litigation challenges — which, as I've realized, is not simply a legal effort, but really takes a lot of hands and a lot of different voices brought together. So I want to put together some resources for people to understand how all of these different voices can come together in drafting, or putting together, a sustainable legal challenge, keeping in mind legal principles like the constitutional principles that we spoke about. That's the idea going forward. I also hope to build on this and investigate some of these systems in more depth through an investigative journalism approach, which can maybe frame some of the eventual responses that come out of this — to get the word out a lot more through various forms of media. I was hoping journalists can use this database to unpack and follow the leads it provides, through the public documentation and the news reports, and investigate some of this a lot more.

Hi, can you hear me? Shall I go ahead?

Yeah, we can hear you.

Okay, so I'm a technologist in the whole machine learning space. I was curious about this presentation, so I joined. One of the things that struck me is this: if you look at what I observe day to day, all of these systems are very buggy.
And it takes enormous effort to make even the simpler machine learning systems work. Now, even if you pass the law, even if you make it compulsory and things like that, the capacity in the engineering community and the data science community to understand all of these algorithms — many of which are baked into libraries — is actually very limited. Much of the conversation was around what government should do, what regulators should do, what kind of law we should have. What I think you did not adequately focus on is whether there is capacity in the system to understand these machine learning systems — and, assuming their intent is aligned with yours, how do you make it possible for them to deliver? Today there are very few data scientists who can actually explain what a machine learning model can do. So the capacity in the system is a big concern of mine. Of course, you can add your thoughts, but I don't see how this will work without growing the number of aware and capable people. Thanks.

Yeah, I definitely think that that's something that needs to be done in terms of capacity building, or general efforts at educating the people who are responsible for these systems. Does anyone have anything to add specifically to that question — how do we bridge this gap between, you know, clueless people using or being affected by these systems, and also clueless data scientists and engineers who don't know what the consequences of these systems are going to be?

I'll just very quickly say that I think it's a systemic issue — that we find ways to pass the buck onto different people involved in the supply chain, so to speak, of machine learning systems. So it matters to have some semblance of accountability, not just at the stage at which a system has come to exist and is being used on the ground, but also at the stage at which it is conceptualized.
The assumptions that are baked into it as it is conceptualized — all of that's really important, and that's why most people who work on the policy aspects will always say design, development, deployment: we're not just talking about a final product, we're talking about the whole life cycle of the product. That's an important way to reframe it, at least.

Sorry, just very quickly, to beat a dead horse perhaps: I think it is this multi-stakeholder approach, and, as has already been mentioned, the need for genuine consultations and capacity. Without that it's very difficult. I agree, and I don't really have solutions, but broadly speaking it needs to bring in as many communities as possible and ensure that capacity is built among as many people as possible. Thanks.

Any comments?

I think I'm pretty much in agreement with Vidushi and Arindrajit.

So, with that — we're a little over time already, so we're going to break and allow maybe ten minutes for people to get a coffee and stretch their legs before the second panel begins, which I'm also incredibly excited about. Some of the panelists will be staying on, hopefully. If you have any other questions you can, of course, post them in the chat, and maybe continue those conversations offline — we really want to keep these conversations going. And if you're interested in knowing more about the observatory, please feel free to reach out to me; my details are on the website itself, and you can drop me an email or use any other form of communication — I'm always happy to talk about it. Thank you, and see you in ten minutes.

Hello. Okay, hi, just wanted to check that you guys can hear me. Okay, fine, cool. So we'll just start in three minutes — we'll go live straight away, right? Yes. Okay, cool. Thank you. Yeah, we'll go live in just about two minutes; thanks everyone for joining again. Hi, Anton. Hi.
I thought you dropped out for a bit. Yeah, I went to make some coffee. Hi, I'm Matthias. Hi, I'm Vrinda. Hi, Vrinda, hi. Hi, lovely to see all of you. Yeah, it's so nice to see everyone again. Thanks for the invitation.

So, hi, welcome back everybody. I hope you were able to stretch your legs and grab a coffee or a drink, depending on what time it is where you are. This panel, which I'm again really excited about, brings together a wonderful and diverse community of practitioners who have, over the last few years, made the effort to go in depth into how the systems we spoke about in the last panel are operating and affecting people's lives, and who have been challenging and pushing back against those systems. The idea of this second panel is to talk about how algorithmic accountability — the conversations we've been having at the level of principles, of technical standards, and also of ethics — really plays out in practice: how do you start investigating these tools, as Matthias has done; how do you start advocating for more responsibility from the government, as Cori has done; or how do you pursue legal action, as Vrinda and Anton have done? So I'm really, really excited to be convening this. Thanks, everyone, for joining again. Thank you for moderating this discussion. So, over to you.

Thanks — really excited about this panel. As you mentioned, a lot of the things we were discussing at a slightly more abstract level, we'll now actually get to hear about from the experience and expertise of people who've been doing this work — being part of citizen-centric resistance to and mitigation of these algorithms — and it's really exciting to have this panel here today. Thanks a lot for joining. In the first segment I just wanted all the panelists to share broadly their experience in engaging with government ADMs — that is, automated decision-making systems.
So let me start with Matthias. I'm sure most people here know about AlgorithmWatch and the fantastic work that you have done, but broadly I wanted you to comment on the trends in algorithmic accountability, particularly with regard to government algorithms, that you have observed across the globe — and maybe also share the kind of work AlgorithmWatch has been doing, because even though many of us are familiar with it, it would be great for you to touch upon it a little. So, Matthias.

Yeah, thank you very much, and I'm really happy to be in this round, because I expect to encounter a lot more ideas and perspectives than I'm usually confronted with. What we are trying to do is be a watchdog and an advocacy organization at the same time. Now, the watchdog function of course entails that we do a lot of research: we try to find out where algorithms — automated decision-making systems — are being used in the real world, and we do this with quite a range of methods. What we just produced is the so-called Automating Society report; it's the second edition, the first one having come out at the beginning of 2019. Here we look at 16 European countries and where these systems are being used. And this is a mixture of, let's say, academic research and journalistic research. We have different kinds of people in these countries — some of them are academics, some are journalists, some are civil society members — and all of them need to have domain knowledge, and also knowledge of the context of their societies, to even identify where something like automated decision making is being used. What they cannot do within the scope of this project is, for example, investigative research, or experiments reverse-engineering an opaque algorithm or something like that.
That is completely out of the question. What they can do is basically monitor what is going on. And because they are all experts in this field, they know more about it than the regular person on the street, so we do get information that is not, in that sense, publicly available — of course, most of it is publicly available, and some of it comes out because of the research and wasn't available before — but it's a collection. And the value of that collection is to demonstrate how widely these systems are already being used. Right. I need to cut this short because it's such a huge issue.

At the same time, what we are trying to do — and I think we are also succeeding at this, at least in small instances — is to look at concrete algorithmic systems and find out whether they are doing their job and delivering what they promise, or not. One example: our journalistic colleague Nicolas Kayser-Bril does investigations like the one published under the name Undress or Fail, where he developed a model for testing the algorithm against the hypothesis that people who show more bare skin on Instagram get their posts promoted, whereas people who don't get their posts demoted, or at least not promoted — and that this is a problem from several perspectives. This is where a lot of technology comes in, because the way we do this is through what we call data donation projects: people get a technical opportunity — a tool, in that sense, a browser plug-in — to install on their computers, and this collects data that is sent back to us to analyze. So this is where we actually try to scrutinize the system itself.
And this can be successful to a certain extent, because you can usually only find indications of what is going on; then you confront the companies with that, and the companies say, oh, you have misunderstood everything about our model, and what you're claiming is not what is actually happening in reality — but no, we are not going to offer you any data to refute it; we're just going to say that what you're saying is wrong and what we're saying is right. That is usually the outcome, but it helps us produce evidence that there are at least questionable assumptions. And this is what we do.

Then, on the advocacy side, on the basis of this research we develop policy positions, recommendations and demands. I'll just give you one example: we are asking for a publicly accessible register of all automated decision-making systems being used by the public sector. I can probably explain later why the public sector only and not the private — I'll skip that for the moment. The register should provide information about the purpose of the system being used, how this purpose is being modeled into a decision model, and who participated in this — meaning, was it done by the public sector entity themselves, or was it outsourced to IBM or Microsoft or Amazon or whoever. And this information needs to be widely accessible, because then civil society — in the form of journalists, academic researchers, organizations like us, maybe even oversight institutions — has a better chance of looking into what is actually going on. I'll leave it there, because otherwise I'll be talking for 10 minutes.

Thanks, thanks, Matthias — I think that covered everything really succinctly. So now let me go to Divij. We heard from Divij in the previous panel about the Observatory project that he's starting, and which we're all going to build up.
But the question I wanted to ask you, Divij, is this: in terms of the future of algorithmic accountability in India, what are the broad trends that you see? Can we conceptualize a possible watchdog — what would that watchdog look like? What steps should the government be taking? Not just in terms of the trends so far, but also going forward: how do you see this landscape evolving, particularly since you've been engaging, through the AI Observatory, with the politics of this space and also with the law and policy developments?

So, what I've seen has been incredibly concerning — and I spoke about this in the last panel — namely, the increasing delegation of important and consequential public decisions to private actors, or to private systems whose logic is completely obscured, which don't follow principles of democratic accountability, or even the constitutional principles that are meant to guard against harms to particular communities or individuals.
So I'm not super optimistic about what the future of this can look like. What does make me a little optimistic is the efforts of people like Vrinda, people like Vidushi, or organizations like IFF, who are trying to engage people in understanding and pushing back against these systems — and by people I mean both those affected by them and those who can participate in these conversations more readily: people who have access to the internet, people who are more savvy about these technical concepts. That, I guess, is one good entry point. The bigger challenge is roping in people who are affected by these decisions but who have no real way of knowing how data is being collected about them and used within automated decision making — who only face the brunt of the consequences of the decision. Right?

Look at the numbers that emerge from this research. A few million people in Orissa stripped of their welfare benefits in a single day because IBM discovered that there was fraud — and none of them were made aware of how this happened. Or the electoral roll purification algorithm: 1.5 million voters in Telangana denied the right to vote in a single day. These outcomes are really ridiculous, and the question then is what these systems mean, and how we design for responsible regulation or accountability for people who aren't, you know, seeing discriminatory ads on Facebook — people who aren't able to interact with these systems and experience them, or build up an understanding through their interaction with information systems. And I guess the idea needs to be one of emerging standards and structures of systemic accountability from within the government.
As well as bringing together coalitions of journalists, activists and lawyers to take these causes to the right places — to court, to policy rooms, and to lawmakers. So that's where I see this going. I'm not sure what kind of regulatory structure will eventually emerge, but you've seen how constant advocacy against Aadhaar did, to some extent, lead both to the Supreme Court's acceptance of its failures and to the legislature's acceptance of the need for statutory regulation of something like this, whatever its other failings might be. And I guess we need to keep having these conversations and building on those kinds of successes.

Thanks, thanks, Divij. A lot of what you said is actually a great segue to my question for Vrinda, who is of course a hugely respected researcher and a hugely respected practicing lawyer — her writing is something everyone should definitely check out. What I would like to ask her about today is her involvement with a number of advocacy efforts, some of which Divij referred to: Aadhaar, and more recently the entire political and advocacy ecosystem around these systems. So, Vrinda, what I wanted to ask you was: in terms of your experience with these efforts, what hurdles did you face in the political ecosystem, but also, what do you feel enabled you to go forward, and what learnings did you take from the process? I think that would be really useful to share with the audience here. So, Vrinda.

Yeah, thanks a lot. I should really preface this by saying I am easily the most underqualified person on this panel, so I think there's a lot more for me to learn from everyone.
But given my experience with strategic litigation in India — whether on Aadhaar, on the contact tracing app Aarogya Setu, or in thinking through potential challenges to the use of facial recognition technology in India — I think there are different levels of hurdles, right? The first is always information collection. We have a system where the government is not necessarily proactive in providing information, so you have to get it either through parliamentary questions or through right to information requests. And unfortunately, some of these right to information requests are often blocked, citing security reasons or exemptions — claiming, we don't actually need to provide you this information. So that is the first level at which we often face hurdles: the collection of information. Which is why what Divij was saying about working together as a community — with practitioners, with civil society, even with the people who are affected — is so important, in order to bring all of that information to the court.

Apart from information collection, there is also explaining that information to the court. We really saw this in both Aadhaar and Aarogya Setu. You have one version — the predominant version the government puts forward — which is: oh, contact tracing apps are amazing, they work really well, we should have 100% uptake, they should be made mandatory. And the litigation we did over Aarogya Setu was saying that you cannot have a 100%, mandatory contact tracing app — especially in a country like India, where we don't even have a data protection legislation, nor any legislative basis for these programs. So you have the government's version, and to explain and counter that in court you have to explain the idea of false positives and false negatives, and of bias in algorithms.
Remember, in India the conversation around AI is still very much at a policy level, amongst experts. It is broadening, but not fast enough. In the public space you really do see much more about the benefits of algorithms and big data than about some of the issues of bias, or even how effective these things actually are — the idea of false positives and negatives. So that understanding also has to build up.

And the institutional constraint we really faced, especially in Aadhaar, where we actually had a full-blown litigation, was information asymmetry in presenting our case — the burden of proof. As a petitioner challenging any of these systems, the proportionality test requires us to prove: is this the least restrictive alternative? Was there a better alternative the government could have used? Some of this is information that may not necessarily be in the petitioner's domain, which is why I think it's important to put in a lot of evidence in the form of expert evidence. So one of the things we have started doing — this was done in Aadhaar, and in Aarogya Setu as well — is getting expert evidence. We actually get affidavits from experts, which is not something you've seen that often in Indian courts, to explain what some of the problems are. What are the surveillance concerns that come with a digital identity? What are the exclusion concerns, and how are people affected when the digital identity is not working — how is it affecting people on the street? In the case of contact tracing, we had an expert affidavit speaking to the issue of false positives and false negatives with the contact tracing app the government was employing.
So you try to get around that by having expert affidavits. Finally, though, because all of this is happening in constitutional courts — a high court or the Supreme Court — there is actually no cross-examination. The evidence of the two parties is not speaking to each other: the government says this is great, we say it is not, and those two positions never really engage. In Aadhaar, for instance, we kept saying: look, the government is making all these claims — they gave a PowerPoint presentation which said that Aadhaar is great, all the security concerns are met, there are no concerns — so allow us to cross-examine the people involved so that we can counter these claims; you cannot just take those claims as the basis. The court did not allow us to cross-examine them. So you had a government spokesperson who came and explained the benefits of Aadhaar, whereas we as the petitioners were not allowed to have any expert come and explain the issues in a simple manner to the court. So you also just have this inequality of arms, where the state, which already has more power, actually gets more power: it can bring in an authority, bring in the machine and say, see, this is how the machine works, it's so easy, it's so perfect — and you are not allowed to get your own expert to explain it in a simple manner, to give a PowerPoint presentation setting out some of the problems. So I think these are the different areas of concern. There is some change — judges are becoming more aware of these issues — but it is, I think, a long-haul fight.
And I think, as Divij pointed out, just having conversations, just having litigation, does help bring the issue into the public domain — it gets people speaking for and against an issue, explaining why they agree or disagree with the court. And the one advantage in India is court proceedings themselves. A friend of mine who had come from New Zealand and lived in India for a couple of months said: your newspapers really cover what is happening in the Supreme Court on the front pages. Fancy that — it doesn't happen everywhere else, right? But in India, court proceedings get reported on the front pages; what happens in court is discussed on prime-time television and in newspaper editorials. So I think that is also one of the advantages of any litigation. Sorry for the long answer.

I think that was great. And the last point you made about litigation — of course, if it's successful, that's brilliant, but even otherwise the point is a very powerful one: that, particularly in India, litigation brings the discourse into the public domain and gets more people actively interested and participating in the discussion. So now let me go over to Anton, who — for those of you who don't know — was involved in the litigation against SyRI, which was, I believe, a Dutch welfare fraud risk-scoring system. He was involved in that litigation through to the end, and it was of course covered in several news outlets, and he has been involved in several litigations since.
So I want to pose the same question to you, Anton: what factors made it right for you to actually challenge that specific algorithmic decision-making system? And, since you continue to be involved in holding technology companies using algorithms to account, what prompted you to take on the litigation, and what experiences and expertise from the litigation itself can you share that might be useful for this audience? Over to you, Anton.

Thank you. Thank you so much, and thank you for inviting me. I've been listening in on the previous panel, and I think all the remarks and concerns are very similar to what we are struggling with here in the Netherlands. I wanted to briefly introduce myself: I'm an independent lawyer from Amsterdam, and I have been involved in assisting a number of privacy organizations from the Netherlands in bringing a case against the system called SyRI — Systeem Risico Indicatie, a Dutch risk assessment system. This was used by the Dutch government to detect various forms of fraud, including social benefits, allowances and tax fraud, and to do so there was, of course, a collection of data from all kinds of government databases. First we tried to gain access to information about the system by filing a freedom of information request, but this didn't produce any useful answers. The ministry simply refused to answer basic, simple questions like: how many citizens are involved? How many cases of fraud have you found? How much money have you been able to recover? No answers — partially because they didn't want to give them, but also, I think, because the government just didn't have oversight, which is very concerning in itself, because these projects are all decentralized, right? So then we decided to go to court.
And in February of this year, the court decided that the SyRI legislation, which allows for the use of SyRI, is in violation of the European Convention on Human Rights. The most important arguments: the system is insufficiently transparent, it's not verifiable, and it contributes to stereotyping of and bias against specific groups in society, such as people with a lower socio-economic status or an immigration background. After the judgment we were very happy, of course — and also to learn that the Dutch government did not appeal the case. We didn't expect that. Two weeks later we found out why they didn't appeal: they introduced new legislation — we call it "Super SyRI" — which is even much worse than what we had before. So this was just phase one of our struggle; we'll have to keep fighting.

Maybe a few more points. I think it was very important that there were no individual affected people in the case — we didn't find anybody, most probably because they simply weren't aware. So what we did instead was use public figures who were involved in the issue as applicants. We knew beforehand that they would most probably not be found admissible, but we included them anyway, and they would go on television and talk about the case. A few important parts of the judgment: first of all, the court said that the state has a special responsibility when applying an instrument like SyRI. It also said that a risk report by itself does have a significant effect within the meaning of the GDPR — there are a number of criteria for whether you are dealing with automated decision making in the sense of the GDPR, and one of those is a significant effect, and the court said yes, the risk alert is in fact considered to have a significant effect. Since then I have also started a number of other cases about automated decision making, against Uber and Ola Cabs.
These are on behalf of European drivers, together with — in fact on behalf of — the App Drivers and Couriers Union from the UK, which is doing very important work representing drivers from all these different countries. Last Wednesday we had a court hearing — very interesting. It was a hybrid hearing because of the corona crisis, which was actually good in this case, because many people were calling in from all parts of the world, including India. I think the court was a bit surprised, but it also emphasized how important this issue is for many people around the world — and if we can use the GDPR to get transparency here, that will also be good for people outside the EU, right?

A bit like Vrinda just said, Ola and Uber are both arguing that there are all kinds of exceptions. They're saying, well, we have to protect the privacy of the passenger. I told the court: listen, most of this data, and the algorithms we want transparency about, have to do with the service agreement between the driver and the passenger. Uber and Ola explicitly state that they are no party to this agreement — they are just doing the matching. So why, if I go to a restaurant or a hotel and they ask me to give a rating, and I give the rating and they put it in their system — is that an infringement of my freedoms? Of course not. It's just a normal part of the driver's service. That was my response to Ola and Uber on this point, and I think it might be a strong argument; we'll see what the court says. So we asked for a lot of data: matching data, how tariffs are being calculated, route information, GPS data, driver profiles — because Ola and Uber are denying it, but we have presented evidence that they do in fact keep profiles on drivers, which can have very significant effects for those people.
And also the fraud probability scores, and the fact that they're using facial recognition. I saw the court struggling; they told me, this is uncharted territory, we will need some time — not the usual six weeks, we will need more time. Anyway, it's going to be exciting to see how this turns out, and I'll keep you posted.

Thanks, thanks. I think some themes are already emerging from the comments we've heard: governments rolling out these solutions often don't have a lot of the answers, partly because the level of oversight that needs to be exercised simply isn't there — something all of you have been picking up on, and really important for thinking about algorithmic accountability going forward. So, on to Cori, who has extensive experience in public rights advocacy — with algorithms, of course, but even before that. She is presently the director of Foxglove Legal, which has had huge success this past year in ensuring, through various advocacy measures, that algorithmic decision making in the UK is brought to account. Some of those cases have been covered widely by the media, including here — for example, I was following closely the A-levels case, the use of an algorithm as a substitute for examinations. So, Cori, I wanted you to shed some light on your experiences and expertise in bringing algorithms to account, and any suggestions you can share with all of us here. Thanks.
Thanks, it's great to be here. The first thing I want to say — which is not the question you asked — is that what's so exciting about panels like this is the opportunity to compare notes and learn from people doing this work in different parts of the world, because, as Anton said, these problems are absolutely everywhere. Actually, I've been incredibly inspired by the lawyers and activists in India who've really been at the forefront of pushing back against some of these technologies — Aadhaar and others. I once had the privilege of listening in on a call where people who had litigated against Aadhaar, and people who had litigated against a similar system in Jamaica, were advising people who were litigating against the Huduma Namba in Kenya. And I think we need to do the same with some of the incredibly exciting work Anton is doing for gig workers, which may be transferable to other platform workers — like the content moderators we work with, including in India. I really do sincerely think that case is one of the most interesting uses of the GDPR going. And we're going to talk in the next segment about democratic accountability, aren't we — some of what Matthias said about AI registers; we're thinking about that too. But anyway, you asked about the cases.

I'll just give a really top-line summary of three of them. A couple of lessons I would draw from all of them: one is that each involves some kind of algorithmic decision system, or a situation where the state starts to fuse with a private company in one way or another. And all of them have been what we would call permissionless.
In other words, nobody has gone to the public, explained what they're doing and what they have sought to prioritize, and asked whether the system is in fact democratically acceptable to people or not — which is why we need something like the register Matthias was talking about. So I'll talk about the cases, but the other theme that emerges from them, I would say, is that the GDPR is a piece of each of these challenges — it's a string to the bow — but it's definitely not the only tool, and there's a lot of value for public or constitutional lawyers, or whatever you call us, in seeking to apply the wider, really basic range of public law principles to try to bring these systems within the rule of law and under greater democratic control. The GDPR, I don't think, was the most important part of any of the cases that we won. And the third thing common to all of them is that we won them all before the judge ever said that we had won — the government caved in each of these legal challenges. Which speaks not to the amazing argumentation we put forward, but to the fact that these systems had gone almost totally unchallenged for a long time, and that there was very little reflection within government before the systems were built, bought or rolled out.

So, three cases. The first is actually a transparency case. During the first peak of the coronavirus crisis, the NHS — the health service — decided to whip through what was called a data store: a massive amalgamation of various kinds of NHS health data, very, very valuable troves of health data, by the way — the NHS holds some of the largest longitudinal health data sets in the world — put together into what they were calling a giant COVID-19 data store. They said the situation was an emergency, so they didn't have to go through the usual procurement and transparency processes.
They said it was anonymized, and they said it would be unwound at the end of the pandemic. The reason they said all those things and sought to reassure the public that way is that they were working with some quite controversial big tech firms — not simply the Amazons and Microsofts of the world, but Faculty AI, a kind of shady firm here in the UK, and a security-linked firm called Palantir, which essentially cut its teeth working with the CIA and special forces in Iraq and Afghanistan and then moved on to offer various kinds of policing services. I think as people in the UK come to know more about that company, they are going to have real democratic concerns about whether it is a fit and proper partner for the National Health Service.

So we, with the news website openDemocracy, basically said: look, it's all very well to say this is an emergency, but this is the thin end of the wedge for what may become a permanent system — maybe creeping data privatization of the NHS, maybe something used to affect decision making we don't know about, people's non-compliance with lockdown or whatever; we don't know, because so little was said about the system. So at least publish the contracts and explain more about how it works. It wasn't just an FOI case — we also argued that we have a right to the information at common law. And just as we were about to file the papers, they came and disclosed the contracts. And we learned something from that disclosure that I think is quite important when we think in future about the relationship between public authorities and private corporations, which concerns the first iteration of the contract.
They amended this after our transparency lawsuit was in prospect, but the first version of the contract provided that the intellectual property rights in anything that the companies learned from their privileged access to the data sat with the company. Right, so if they were able to use the access to refine or improve their product, that benefit stayed with them. Now, as I say, after we put in the transparency lawsuit, they voluntarily handed it back to the NHS and amended the contract, so that no project-specific intellectual property rights went to the corporations. But I think those are going to be really important concerns, and I think the other lesson I would draw from that case for advocates is that there was a groundswell of concern. Okay, it was partly about privacy, but I don't think that was actually the thing that was most successful. I think the main concern that people had was about creeping privatization: about what the relationship between corporations and the NHS was going to be, and whether we were getting public value for this incredible and important public asset. The second case, and I'll go really quickly, involved an algorithm that the Home Office had used to process every visa application to the United Kingdom since 2015. So it determined, without people really knowing about it, the rights of millions and millions of people to come to the UK to study or to work; it determined who got to go to the wedding and who didn't, who got to go to the birth and who didn't, who missed the funeral. And it just turned out that the way that they had designed it was explicitly discriminatory.
They had a list, and they wouldn't disclose the names on it to us, but it was a list of, let's say, undesirable nations whose applicants were automatically likely to be streamed as high risk. It was a traffic-light risk score that said how likely an applicant was to default on a visa, and applicants streamed red were much more likely to be denied a visa. Again, going back to this question about the GDPR: there was a GDPR line of argument, but actually the main argument in this case for why this was unlawful was that it violated the Equality Act, that you just couldn't engage in that kind of explicit discrimination. And there was also, and these are hard to win in public law in the UK, but actually the way they designed it was so crazy that we had a very serious argument that the system was irrational. We asked them how countries got on the list of dodgy nations, and they replied and said: well, negative events. And we said: well, what's a negative event? And they said: oh, well, you know, things like being denied a visa. So pause there for a minute and think. If you were streamed red because your country was on the list, you were more likely to be denied a visa. And that denial was a data point that went back into the system to justify keeping your nation on the list of dodgy nations. Lather, rinse, repeat. So there was a kind of algorithmic doom loop set up by the way that the system was designed, and you only have to say it in English to see that it is not just a racist system but an irrational system. And the fact that they hadn't seen it until we tested it and sued with the Joint Council for the Welfare of Immigrants shows, I think, just how little reflection there had been around the design and rollout of the system. Okay, the third one, which you mentioned, and then I'll let somebody else talk: the A-levels. That, I think, really shows you just how permissionless these systems are. Imagine if you had just asked the students of the UK: hey, listen.
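The feedback loop described above can be sketched in a few lines of Python. Everything here, the threshold, the tally, the country name, is invented for illustration and is not the Home Office's actual model, but the structure is the same: denials are counted as "negative events", and "negative events" justify the red stream that produces more denials.

```python
RED_THRESHOLD = 100  # invented cut-off for streaming a country "red"

# Hypothetical tally of "negative events" per country; country "X" sits
# exactly at the threshold to begin with.
negative_events = {"X": 100}

def stream(country):
    # Countries with many "negative events" are streamed red (high risk).
    return "red" if negative_events[country] >= RED_THRESHOLD else "green"

# Process ten applications from country X. Red-streamed applications are
# far more likely to be denied, and each denial is itself recorded as a
# new "negative event" against the country.
for _ in range(10):
    if stream("X") == "red":
        negative_events["X"] += 1  # the denial is fed back into the tally

# The loop closes: the tally can now only grow, so the country can never
# leave the red stream that caused the denials in the first place.
print(negative_events["X"])  # 110, and still streamed red
```

Said in code rather than English, the irrationality is the same: the system's output is its own justifying input.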
Look, with COVID there's going to be some grade inflation, and to avoid this one-time spike we're just going to roll out this algorithm. It'll do fine in the aggregate, it will maintain the curve, but it's going to systematically privilege people in small subjects at small schools, so that's rich people and rich schools. And it's going to systematically downgrade, say, the bright student in a large subject at the historically underperforming school. Our argument, in my view, was a very old-school public law point that we call in the UK ultra vires. We just said that Ofqual, that's the government department that designed the testing, had exceeded its statutory powers, its legal powers, by designing a system that didn't actually measure students on the basis of their individual performance, because that's what the statute empowers them to do: you've got to grade the kid, not the school. And we said that by just substituting in the school's historic performance to set the grade, they had exceeded their statutory powers. Again they caved. You know, I wish I could say it was only our case, but when you have everybody from the top schools on down writing angry letters to Boris Johnson, I don't think I can claim the credit for it. But I do think that we had the strongest and most critical set of legal arguments, and that they wouldn't have caved without them. So I'll let other people talk, but I think there are a lot of different opportunities there to apply old, traditional standards of public law to these systems as well, and I hope we can talk more widely about how to make them democratically accountable to people in the first place as well.
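The standardisation rule being criticised can be sketched minimally. The cohort cut-off, the grades and the median rule below are invented, not Ofqual's actual model, but they show the shape of the ultra vires point: in large cohorts the pupil's own assessment is discarded and the school's history sets the grade.

```python
SMALL_COHORT = 5  # invented threshold; small classes kept teacher-assessed grades

def award_grade(teacher_grade, cohort_size, school_historic_grades):
    """Awarded grade under the sketched standardisation rule."""
    if cohort_size <= SMALL_COHORT:
        # Small subject, small school: individual assessment survives.
        return teacher_grade
    # Large cohort: the school's historic median sets the grade, whatever
    # the individual pupil was predicted. This is grading the school,
    # not the kid.
    ordered = sorted(school_historic_grades)  # letter grades sort A < B < C...
    return ordered[len(ordered) // 2]

# A bright pupil predicted an A, in a large cohort at a historically
# weak school, is pulled down to the school's median:
print(award_grade("A", cohort_size=30,
                  school_historic_grades=["C", "D", "C", "E", "B"]))  # C
```

The same pupil in a cohort of three would have kept the A, which is exactly the small-subject, small-school privilege described above.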
Yes. Let me stick with you for the second segment, which is more about the political side, and you mentioned this: that the government gave in, in several cases. When we look at the political ecosystem, of course there is the government itself, which possibly didn't put in enough thought before devising these systems; there is the private partner, whether it's Palantir or whoever, who plays a role in the political ecosystem; but the most powerful part of it, broadly speaking, is the resistance. As I mentioned, I was following the A-levels from here in India: the backlash against it, the protests that went on. So could you tell us more about the political ecosystem and how that worked? You mentioned that there were various actors in play, and getting someone like Boris Johnson, or his government, to say that they were wrong is of course a great achievement. So how exactly did these political factors work together to actually ensure democratic accountability?
I think what was unusual about the A-levels, and we're not going to see this replicated for a lot of these systems, is that they made the mistake of downgrading people with significantly more social capital and economic clout than is usually the case with these systems. Suddenly you had middle-class parents whose kids lost their university places. So in a way, and again I wish I could say that we organized it, in many ways the outrage was organic: suddenly hundreds of thousands of kids' lives were thrown into chaos by a decision that was made. And I think our challenge as advocates and activists is to try to convert that: to convert the fact that hundreds of thousands, if not millions, of people have now had the scales fall from their eyes, and see that power is being exercised in this partly automated way, into saying: listen, it's not just about grades; this happens everywhere, and it happens to people who have much less political power than, let's say, middle-class parents and people in schools. It happens to people at the front end of predictive policing systems; it happens to people in benefits systems. It's about organizing a kind of wider political debate, but I think that would take a lot more, if I can use a horrible Instagram word, influencer action; you wouldn't otherwise organically have had thousands of people going into the street in the same way that you did with the students.
Yes, I will respond to that. One of the effects of our case has been that some members of parliament have come forward and actually apologized for not striking down the SyRI Act in parliament. They said: well, we weren't paying attention; we were wrong. And I think that's a very important step forward, because they're acknowledging it was their duty to prevent this.
And now, with the new act, the so-called Super SyRI, we will have to see if parliament takes more responsibility.
Can I just add one comment? I know we're running up against the clock, but just to pick up the point that Matthias made earlier about these AI registers: you shouldn't have to be a data scientist or a computer scientist to unpack these systems. We were founded on the basic principle that a system has to be democratically accountable to everyone. The onus should be on the public authority to say what they wish to build or to buy, explain how it works, explain why they need it, and ask the public whether it is democratically acceptable. And, you know, there's a lot of talk about auditing the bias of new systems and so forth, and I would really like to see the democratic debate around these systems engage not just with how they work, but with which areas of life it is appropriate to have them in at all, and which it isn't. So we're going to be launching a project early next year, trying to work with local authorities to copy some of the interesting models that we've started to see out of Amsterdam and Helsinki, and to build these registers on the idea that transparency is the first step, so that you can then have the debate about whether the system should exist. I really do think that we should empower citizens to ask not just the how, but the whether: is this something we actually want?
Can I add something? I think in the Uber case we see that the role of unions is very important, because all those drivers by themselves cannot stand up to Uber; they're simply afraid to go to court and start suing Uber. So they need a union to do it, and in court last Wednesday both Ola and Uber were saying, well, the drivers cannot be assisted by their union, that it's an abuse of rights.
That doesn't make sense to me. And it's starting here also: the Dutch workers' union FNV is now also interested in joining the effort to represent people from the gig economy. And I think they had overlooked that group for a very long time.
Thank you, and I think that also speaks to something that Vinda was saying earlier: just having the litigation itself, and putting it into the public eye, has by itself raised awareness. So you have workers who maybe weren't as aware of their rights now being more aware. I think that's a theme that's emerging across panelists.
As the SyRI coalition actually said at the first meeting: we don't expect to win this case; we think we're going to lose it. That is not the point. We want to make sure people will read about it in the paper, that people will start a debate. I was called in the last four weeks by more than 40 drivers from the Netherlands who suddenly were reading about their own situation and suddenly understood: well, this is actually an issue, I'm not the only one who's been deactivated like this. And then they get angry and they start calling me, so now I have a problem, but well, we'll find a solution for that.
Yeah, definitely, and those are positive signs going forward. So I'm now coming back to Vinda. We've heard about the political ecosystem in the UK and how some of these challenges worked out. You spoke in great detail in your first intervention about how it worked out in court, how it was inequitable and in many ways entrenched certain already existing inequalities, but I wanted you to weigh in on the overall broader civil society ecosystem and how that played out.
I think there's a great question, asked in response to your previous intervention, that speaks to this: how much does policy research take into account bringing the law and civil society to a generalized state of common understanding about technology and AI, in a way that can broaden civic participation? This is a question from Vishal Kumaraswamy, and the broad question I'm trying to ask you is: what was the political ecosystem that you faced when you undertook the advocacy efforts you spoke about, and what were the challenges and opportunities there, not just at the court level but more broadly? And then the second question, about how the policy research ecosystem fits into technology more broadly. I know that's a bit of a convoluted question, since I've tried to merge my question with the one that Vishal asked. So over to you, Vinda.
Sorry, I'll try answering both, because I also only have a short time, but this actually goes to what both Cori and Anton said, in terms of who the people affected are: are they the privileged class or the, let's say, lower class? I think we saw this in the Aadhaar debate, which was basically on two prongs. One was: can Aadhaar be made mandatory for welfare payments? And to a large extent, most people didn't really care about Aadhaar when the debate was just about welfare payments, because, you know, whatever: of course, among people who take rations from the government there may be some cheating happening, and it's fine, we need a card. And so the debate was very much limited to privacy activists, who said: you are not understanding what the concerns are. Then Aadhaar started being made mandatory for mobile linking and bank linking, and for the payment of taxes.
And when it started affecting middle-class people, when they received 20 calls from telecom service providers saying link your Aadhaar, suddenly they were like: oh my god, what is this that's being made mandatory? So I think that is when you really saw Aadhaar develop into a national debate: it really happened when it started affecting the middle class, and I think it is unfortunately very sad that the issues related to exclusion and welfare were sort of left to one side. Then, going to what Anton said, which is that victory is just a starting point and then the government acts again and again: I think you're seeing this in Aadhaar again. The Supreme Court laid down various guidelines. Those guidelines haven't necessarily been followed. There are now states that are saying: oh, we may need Aadhaar linkage for your vaccine distribution. There is absolutely no reason why a COVID-19 vaccine needs to be linked to Aadhaar, absolutely no reason. You also have a government that is saying that Aadhaar is the one card that is going to solve all the identity problems, but the same government is releasing a new digital health ID card, a card that's going to digitize all your health information. So why do we need so many of these? Either you're saying that Aadhaar is not the answer to all the problems, which is completely contrary to your initial claim, or the new card is redundant. So I think that's one problem. A problem specific to India, and maybe this is happening in other countries too, is that the government is just doing so many things at the same time that it's hard to focus your attention on which policy issue to take up, or which policy issue deserves national attention. Take facial recognition technology.
I mean, Vidushi said this in another panel when we were together: in the US this summer, with Black Lives Matter and so on, you actually saw a lot of the companies themselves saying: we will not sell facial recognition technology to government law enforcement agencies. At the same time, in India, you are seeing an uptake in the use of FRT by law enforcement. It's being used now for election polling; it's slowly being used by law enforcement agencies; it's been used at protests. And again, it just hasn't had that similar kind of pushback, because, you know, among all these protesters there may be someone who's violent in there; and it's been used more against particular groups, in Telangana, for instance, it's used more against Muslims, so you just don't have that similar pushback. And what is interesting to me, and I guess this goes back a bit to the A-levels example, is the Central Board of Secondary Education, which conducts the equivalent of the A-levels in India. They said that facial, well, they say it's not facial recognition technology, it's face matching technology, and I'm not sure what the difference is, but face matching technology will now be necessary. Why is facial recognition technology necessary for marks, for the writing of exams? So I think it's going to be interesting to see whether that issue now gets spoken about. And coming to the second question, which is how you ensure these debates come out in public: like I said, so many different things are happening. We still have a government, you know, that was pushing, for instance, the citizenship amendments that went through; this year itself, you see there are protests happening with respect to the farm laws, going on in the national capital for the last month.
You have protests happening with respect to the so-called love jihad laws that are being passed by various states. You then have these technology issues with respect to the COVID vaccine, with respect to contact tracing, with respect to facial recognition technology. And somewhere all of this is getting bunched up, so the outrage is just one general outrage, and I think the nuances of each of these issues have to be brought out. Part of what policy work has to do is really open these conversations wide: have those debates in the national media, in the papers, in parliament; make parliamentarians feel that there is responsibility for the actions they take up. And I think also, if you get parents involved, it's great, because no one is more angry than parents who believe that their children have been cheated. So if you can get everyone involved, and not make it seem like something that only affects marginalized communities or religious minorities, but something that actually affects all of us: if the police are using facial recognition technology for law enforcement, it can be me tomorrow who is wrongly picked up. So I think what you need is really just a broad-based coalition that can keep these issues in the public domain.
Wonderful. Thank you for answering my rather convoluted question very effectively. So now, again, the politics. Oh, sorry, am I audible? Okay, sorry, my connection is possibly unstable. What I wanted to ask you: you already covered this a little bit when you spoke about your AI Observatory and the section you have on the politics of the AI ecosystem in India. I was wondering if you could quickly summarize, in terms of algorithmic accountability, how the politics of the AI ecosystem in India actually shape algorithmic accountability.
So of course, it's not just the courts or the government; there are various other actors in play, and Vinda has already spoken about a number of them, and Anton and Cori earlier spoke about them in the EU and UK contexts. So what's the broad ecosystem like in India?
That's a really relevant question, because absolutely, the way these systems are being developed, designed and implemented is enmeshed within much broader political and economic changes that we see occurring: these trends of privatization, these trends of prioritizing particular claims of efficiency, particular claims of, what I would call, technology, over the claims of the people who are affected by these systems, who are affected by policies and laws and developments. To illustrate, I'll give two examples. First, consider the way in which the right to information movement developed, almost a revolutionary movement in a country where public administration was often incredibly inaccessible to people. The RTI Act and the people's movement for the right to information really moved towards a more radically open government, at least in the law and on the books. And now we're seeing all of that regress, through trends that are linked to the kinds of systems we've been discussing, because of the trade secrecy around algorithms and the private involvement in them, as well as these concerns that keep being thrown up of national security, or the privacy of bureaucrats, and so on and so forth. We're gradually seeing this regress. And that's a privatization of a particular claim, right? It's prioritizing the claims of the private developers who hold business interests in these systems.
It prioritizes the claims of the mathematical models and the kinds of things being developed here, at the risk of greater opacity, at the risk of obscuring the democratic accountability of these systems. So that's one example of how the larger political economy is playing out within these systems. Another example is from Aadhaar. If you followed the case, there were two kinds of evidence presented to the court. There was empirical evidence, collected through ethnographic and quantitative work, showing that there were instances of discrimination, instances of exclusion, linked to particular demographic categories: very obvious, direct discrimination, which should have been unconstitutional. At the same time, you had the UIDAI presenting these very technocratic claims: this is how our model works, this is the efficiency rate, the accuracy rate, it's supposed to be 98% accurate, without releasing the metrics behind that accuracy, without telling people how that accuracy was measured, and, in fact, what it really means. Even if you're 98% accurate, what does it mean for the 2% that's left out? And if you then read the judgment, you see that the court also decided to prioritize those claims of technocratic efficiency, those claims of accuracy thrown up by the UIDAI, at the expense of the lived experiences documented by the petitioners: the experiences of the people who were denied rations, who were denied welfare, because their fingerprints did not match.
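The arithmetic behind "what does 98% accurate mean for the 2%" is worth doing explicitly. The enrolment figure below is a rough order of magnitude for a billion-plus-person ID system, not an official statistic:

```python
enrolled = 1_200_000_000   # rough order of magnitude of enrolment, illustrative
claimed_accuracy = 0.98    # the figure cited in court

# The people left on the wrong side of a "98% accurate" system:
excluded = enrolled * (1 - claimed_accuracy)
print(f"{excluded:,.0f}")  # 24,000,000
```

Even a small residual error rate, at national scale, is tens of millions of people, which is the force of the petitioners' point about the 2%.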
And this is, again, the larger pattern of prioritizing efficiency in the market, prioritizing cost savings in welfare, over the very legitimate and human experiences and claims that people have, which may not be mathematically legible, which may not fit perfectly within the particular data model that's been created, a model which is assumed to work in perfect scenarios. And this repeats over and over again. It repeats in demographic bias in facial recognition systems. It repeats in welfare fraud claims. You have another example of how this is playing out in Telangana: the Samagra Vedika system. It matches your electricity bill against your ration card, or your pension, or a housing scheme, to determine whether or not you should be eligible for, say, a pension scheme. What the system assumes is that if you have a large electricity bill, you likely have a larger income and therefore shouldn't be eligible for such and such a scheme. And this conception of the problem misses so much contextual and relevant information: how many people are within the particular household you're measuring? Is there a business being run out of that household that relies on heavy electricity use? So much human context is just missing when we start abstracting these questions into data models and measures of efficiency. So yes, I absolutely think that it's really critical to keep these larger political and economic trends in mind. There have been some fascinating studies of how this has been occurring in various parts of the world, of how, I guess, neoliberal governmentality is being prioritized in the way these technologies are emerging and being embedded.
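The proxy logic described for the Telangana system can be sketched as a simple rule whose inputs make the blind spot visible. The threshold, field names and records here are entirely hypothetical, not the actual system:

```python
BILL_THRESHOLD = 2000  # hypothetical rupees-per-month cut-off

def flag_ineligible(household):
    # The rule sees only the electricity bill, used as a proxy for income.
    return household["monthly_bill"] > BILL_THRESHOLD

household = {
    "monthly_bill": 3500,
    # Context the rule never sees, though it explains the high bill:
    "members": 9,                  # large joint family sharing one meter
    "home_business": "tailoring",  # power-hungry livelihood, not wealth
}
print(flag_ineligible(household))  # True: flagged despite genuine need
```

The fields below `monthly_bill` are exactly the contextual information the paragraph above says the abstraction discards: the rule cannot be wrong about facts it never receives.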
I think a particular area of study should be this link between neoliberal technologies, the prioritizing of the claims of these large technology companies, and governance and public administration. And, I guess, this latest effort, the AI Observatory, is also an effort towards making those linkages, through revealing the data and revealing the kind of discourses surrounding it.
Can I respond to David shortly? Yeah, yeah, please. What puzzles me is that if I look at the Uber case, it seems to be a private sector issue, a commercial company. But if you look closer, one of the reasons Uber is using algorithms is that they want to comply with very strict requirements from, for example, Transport for London, which is a government agency. TfL is telling Uber: you should take measures to protect passengers, and if you don't, we'll take your license away. So Uber, from a commercial point of view, of course they want to protect their license, so they're going to apply algorithms in a very strict way. They will always give the benefit of the doubt to a passenger who complains, and the individual drivers are just disregarded, because they're sort of puppets who are replaceable. But what I want to emphasize here is that it's an interaction between public and commercial parties. So you must fight both: you must go to Uber, but you must also explain to Transport for London, for example, that this is happening; they might not even be aware, which makes it even worse. First, Uber must make sure there is no fraud on the platform, and they use a very broad definition of fraud. If they deactivate a driver, they report it to Transport for London, and Transport for London revokes the license of the driver, so he cannot work for Uber, but he cannot work at all anymore as a taxi driver.
So this is again private and public entities interacting in a way that makes everything worse.
Yeah, this interaction of private and public. Thanks, Anton. So, Matthias, it would be great to wrap this segment up with you, and I suppose I have two related questions. One is, again, what are the broad trends that you're seeing in terms of political accountability and the political ecosystem around the world? And then, from the perspective of AlgorithmWatch, given the fantastic work that you do, how does AlgorithmWatch fit into, and make use of, the political ecosystem to do that work? Over to you.
Well, what's happening around the world is probably too much of a question for me to answer. But what is happening in the European Union is that there is a ton of proposed regulation. You have a hard time following what is being done, because this week a draft of the so-called Digital Services Act was published, at the same time as the Digital Markets Act; then there is the AI strategy that is in the making; there is a Data Governance Act that has been proposed. In addition, there is a Democracy Action Plan that is involved in all these discussions, when you're looking from the perspective of disinformation. And of course, there are all the discussions going on about questions of competition, where there's legislative action but also action on the litigation side, with the European Commission trying to be more aggressive towards large platforms. So, I mean, of course I enjoy being complimented on our work, but at the same time we need to remain realistic: our organization has basically been around for four years, and we are 16 people. We can't monitor everything that is going on at the same time.
So we also need to focus, and at the moment we are focusing on this transparency issue, on two sides. First of all, we are asking the legislator to think about these public registers, to make the use of these systems more transparent, but also, at the same time, to push for better access to data. And at the moment we are focusing, for organizational reasons, on information intermediaries, on platforms like Facebook and YouTube and Twitter and so on. But of course, there's tremendous importance in platforms like Uber and Amazon and Airbnb and other clickworking organizations or platforms, because there you have all the issues of, for example, labor law, which is also something that we are addressing. And I'll give you this as an example, because I could try to describe in broad strokes what is going on in the US, for example, but I think it doesn't really make sense unless you break it down to a concrete case. What the European Union suggested in the white paper on artificial intelligence is the so-called risk-based approach to addressing problematic implementations of artificial intelligence-based systems. So they are thinking about a structure, or criteria, to define what could be a risky system and what is not a risky system, and then trying to find out how to address this. Now, first of all, we think this is problematic. I mean, we don't have an answer in the sense of having a better approach to offer. But the risk-based approach is of course somehow risky itself, in the sense that it could fail to address some systems that are just not seen to pose a certain risk, but that still do once they're implemented; we have discussed today a lot of cases and examples of how people fail to see where the risk actually is. But leaving this aside, you also need to focus on what is there and what you can make use of.
The white paper then proposes, states rather, that all systems used in connection with workers and employment are deemed high risk anyway: recruiting, automated HR, and so on and so forth. Now, what we're trying to do is use this as an opportunity and say: well, if that's the case, and the paper already says that there need to be increased transparency requirements there, then let's take them at their word. If you've already proposed this, then what we need next, for example, is more transparency about algorithmic systems that deal with workers across the entire EU. Now, is that going to be successful? We have no clue. But the opportunity here is that right now, labor law in the European Union is not very well harmonized; everyone is doing their own thing. So we are looking for opportunities for horizontal legislation that could increase our knowledge, and therefore also our agency, when it comes to these systems. As I said, whether in the end we will be successful, I have no idea. But this is what we are trying to do, because at the moment we are not doing strategic litigation, which I'd love to do, but there just hasn't been an opportunity. We are not focused on this as an organization; we would need to collaborate with others on it, and there also needs to be funding. But we are looking at these legislative processes, and there we are trying to identify where there's an opportunity to increase the level of transparency, and then, hopefully, in later steps, to identify the, let's say, substantive problems and address them too. Of course, transparency is never an end in itself; it can only ever be a basis for further action. I hope this at least partly answers the question.
Perfect. Thank you. And I think everyone here should join me in thanking David for getting together this really excellent panel.
I mean, because I think every panelist has had similar experiences, but also ones that are different based on context and the role that you all play, and it's been really fascinating for me, and I'm sure for everyone else here, to observe and learn from this. It's definitely been one of the most interesting Zoom webinars I have been a part of. So I know that we are coming to the end, and Vinda, great that you're back; I know that you have to leave soon, and we're also a little bit over time, so can I ask everyone to just stay for maybe five minutes more, if that's fine, to answer some quick audience questions. So Vinda, there's actually one question from the YouTube live stream that I think you're perfectly suited to answer. The question is: I conducted auto-ethnographic research working as a food delivery worker to closely understand data use and algorithmic control within gig work platforms. What rights and protections do workers and researchers like me have in revealing specificities of algorithms and in seeking accountability from these systems they work under? So it appears that there was some ethnographic research conducted, and it's very interesting research, to actually work, I suppose, as a food delivery worker. The question is what rights and protections workers and researchers have in this context. Vinda, if you can answer this, that would be great, and anyone else can chip in as well. Yeah, I actually think right now, at least, our legal framework is not really designed to deal with any of these things. I think the problem is there's no real, you know, governing law. You obviously have the Information Technology Act, but we don't have a national data protection legislation.
We obviously only have the Information Technology Act, which doesn't apply to governments or to independent research, you know, NGOs or research organizations; it only applies to corporates. So I think that's one issue in terms of where you would fall under. I think some of it would also be governed by the terms of service or the privacy policies of those platforms and what they talk about. I just think this is, I mean, a great question, because I think no one is even thinking along those lines right now in India; it just shows that we have this absolutely bare-minimum legal framework. So actually, I think at this stage it would really depend on the particular algorithms, on what those terms of service are, and on how, you know, those companies may react. Yeah, unfortunately I don't think there's a clear answer to the question. Thanks, thanks so much Vinda. I know that, I mean, I think Matthias just said that he needs to leave as well. So there is actually one more question. Unfortunately, I think because we've been taking questions in the chat, we might not get time to actually interact with the audience, but there is one more question for everyone, before people leave, if they can answer it quickly. I think this is, I mean, Setu is asking about SyRI, and asking about the example of how this process propagated inequalities. The question he's asking is: can someone explain a little bit more about how automated decision making directly leads to greater economic inequality, whether we use the example of SyRI or any other algorithmic process? Does anyone want to take this question? Well, one of the issues is the echo chamber effect. So the data that's being used to feed these systems might contain biases in itself, and then the system reinforces them. I think Corey has also mentioned it, right, in her A-levels case: the algorithm reinforces the already existing biases.
I think that sums it up, and then of course, for Setu as well, there's a lot of excellent research that's been done on this. In the Indian context, please look at Vidushi's work; she was on the previous panel. And then of course, in other contexts, there's the work by Virginia Eubanks, who discusses this in her book Automating Inequality, which I think is useful in that regard. I think we have time for maybe one more quick question from the audience. Matthias wants to say something. Oh, sorry, I missed that, sorry. Sorry, if there's another question, you know, I can... No, there isn't actually, so we can just end on this. Thanks. Okay, so first of all, I'm really sorry, I need to leave on time, so I only have two more minutes. But one attempt at an answer here as well: if we take into view the broader implications of this, you know, these things that we're talking about are always socio-technical systems, not technologies just in themselves, disconnected from the wider context. And when trying to answer that question about economics and the economic situation, there is also the question of why they are used for these purposes. Why are they always used, for example, to detect welfare fraud, or to, you know, monitor people who are receiving transfer payments, and so on and so forth? Why are they not used for other purposes? And of course what always comes to mind is tax abuse, or taxation, and things like this. It doesn't necessarily mean that they would be used in the right way there either; there always needs to be oversight of that too. But in the broader picture, you know, there is a clear indication that they are always, first of all, experimented with on, let's say, vulnerable or marginalized communities. There is little evidence that they are used to address the other problems that we face in society that they could be used for as well.
In the SyRI court hearing, I asked the court: why is SyRI only used in neighborhoods with poorer people? Because if you go to the rich neighborhoods, you might find much more money. And the state really couldn't answer that question. And then I thought the court was, you know, quite convinced. Wonderful. So, any final thoughts on this question, or otherwise, from any of the panelists? Otherwise I'll hand it back for closing thoughts. Anyone? Sorry, I think the conversation was so exciting we didn't have time for the audience Q&A, but I really enjoyed myself. So thank you to all the panelists for sharing your thoughts. As I said, one of the most fascinating panels in quite a while. So that was great. If there are no further thoughts, then. Can I just say one super quick thing? We're a small organization, but I really couldn't be happier that you've done this, because I think that internationally comparing notes and actually collaborating on projects, like I already do with Anton, but I don't know all of you... And I would just encourage people who are working on anything like platform work justice or algorithmic accountability to drop us a line, because we're really interested in teaming up and building a proper kind of international network. So thank you. Thank you so much. Sorry, I hope I've not missed anyone else. Anyone else? I mean, I have a lot to say about that, but I think that we're running out of time, and, you know, I just wanted to close on that note and express my gratitude to all of you for taking time off and contributing to this so meaningfully. I definitely think, you know, building on what Corey said, that there is both a space for, and an absolutely critical need to build, international solidarity on these issues.
I mean, one of the reasons that I also did some of this research was to indicate that technology built in very different contexts is being imported into the systems that we see and use in public agencies in India. And, you know, we need a diversity of actors and a diversity of voices to make these meaningful changes. We need to get these perspectives from different places out to the standardization committees, to the technologists who are designing these systems, and to the lawmakers in other parts of the world who are designing for this. I mean, one example is just how influential the GDPR has been in, you know, the development of data protection law around the world. And I think that's one really interesting way in which, you know, cross-border collaboration and solidarity can also help people across the world. And on that note, you know, I'm so grateful that you could join, and also so thankful for the great work that you've been doing on this issue. I really hope that, you know, we can keep pushing for this, keep making this network stronger, keep educating people, lawmakers, bureaucrats, people on the ground, and other lawyers to engage with this more meaningfully. And yeah, with that, thank you, and have a good day. Bye. Thank you everyone for joining again, and if you have any questions about my project, please feel free to drop me a line. My information is on the project website; I'll drop the link here again. Do click through, and, you know, I'm happy to respond to any comments, critiques, or additions to this. And yeah, happy new year, and thanks. Thank you so much. Your moderation was fantastic. Yeah, thanks. Thanks, everyone. Thanks. I just want to stay back to say thanks. Yeah, thanks to Hasgeek as well, thanks for organizing this and providing all the support. And I mentioned this in the chat, but my colleague Pranav, who's been working with us.
So yeah, thanks. Thank you.