So I may make some generalizations here, so do stop me if I go too far. Since I come from a legal background, which is part of the humanities, I see this differently from somebody who comes from a STEM background. The generalization I would make is that in STEM you are far more likely to rely on sources that are treated as objective, and so you fall into certain problems, most famously the controversies around The Bell Curve, where generalizations about race and IQ were made by claiming that people of certain racial backgrounds are likely to have lower IQs and therefore worse economic outcomes. So it becomes very difficult for me to see how any real difference can be made, or how anything in STEM, or in technology specifically, can be used to ensure greater accessibility for people from disparate communities rather than to oppress them, because STEM, as I see it, relies far more on modernity as a philosophical approach and on neoliberalism and capitalism as an economic approach. And it becomes very difficult to defend fields like Dalit studies or Black studies, which get dismissed as "grievance studies", as unscientific and therefore unreliable for building anything commercial in nature.

No, I think you are right that there is a certain discourse that a kind of objectivity exists in STEM, uncoloured by political bias, which gives its judgments a certain legitimacy, so that they can't be challenged on those grounds.
Two things I'd say. One is that I'd recommend, and I'm sure you've read it, Thomas Kuhn's The Structure of Scientific Revolutions, where he goes into the whole history of science and shows how what we think of as scientific objectivity is actually not all that objective: at any given time in history there are certain paradigms which provide the horizons for how we can think, what questions can be asked, and how they can be answered, and it is these paradigms that then structure within themselves these ideas of objectivity and of knowledge divorced from political or moral bias. I think that book is really helpful in clarifying that objectivity might not be all that we think it to be.

Secondly, and more importantly, to bracket some of the issues you've raised and say that these are things that require discussion, one point to stress is the interface between these tech models and constitutional rights. If you can show, for example, that in the criminal justice system, among prisoners and undertrials, there is a disproportionate representation of Dalits and Muslims, and that you cannot trace this back to any idea that they are inherently more prone to crime, so that the only remaining answer is a systemic bias in the criminal justice system against these communities, then you are using the very tools of reasoning from evidence that are valued in STEM. So there can be a conversation between people building tech systems and constitutional thinkers that uses the same framework of thinking. Of course, I agree with you that sometimes we need to go beyond that, but we can start with the discussion within that framework and then, after a while, think of moving beyond it and debating the very idea of objectivity itself; you don't need to start with that foundational debate.

I'll start by saying that laws constrain organizations, especially private organizations, many of which employ technologists. To go back to your point on maternity benefits: for a corporate, the fact that there is a Maternity Benefit Act precludes the possibility of organizations not giving employees those benefits. But nothing equivalent exists in tech. So, in the absence of legal frameworks on removing bias, or on transparency and explainability in our systems, what can organizations and companies actually do, also taking into account that we are all human beings with unconscious biases, and that we know the narrative of tech bro culture, which itself has multiple blind spots? What can companies actually do to work on this and create a more equitable system, one that acknowledges that there can be biases in training data sets, and so on?

I think there are many answers to that question, but the most important one for me is participation on equal terms by all the people affected by those decisions. To come back to Project Cybersyn, which I began with: one of the truly revolutionary ideas in that project, which couldn't be implemented because there was a coup, was that in that control room you wouldn't have Allende sitting there on his own; you'd actually have workers from the factories coming in, sitting in that control room, looking at the data, and working out what it means. So if the question is why a particular organization has a lack of representation of, say, women or people from a certain caste, say Dalits,
then the first thing to do is to ask them: what in these practices discourages you from applying or from being here? That's the starting point of any attempt to arrive at practices that will be more equitable. So at the very least, the opening has to be a conversation between the people framing the policies and those affected by them, one that ensures those affected can articulate what they would want the policies to be. And then, of course, by now there is a large social-science literature on the ways in which hiring practices unconsciously discourage people, down to things as basic as the fact that in any seminar the people asking questions are predominantly men, which speaks to a certain unspoken norm, and on how to mitigate all this. So a large body of literature exists; but as I said, the most important thing is to start with the conversation.

You've been saying that people need to be given more voice, a chance to make their own decisions. But don't recent events, and history itself, prove that people are not ready to take those decisions on their own? Time and time again people have supported or made way for dictators; time and time again people have been actively brainwashed and made the wrong choice, even in a democracy, casting a wrong vote, forming a wrong opinion. If you give that much power to a general public that is not that aware of everything, don't you think that will actually create more anarchy, or a complete lack of democracy, over time?

I think it's a very difficult question to answer, and the historical record is quite patchy, so it's hard to generalize. One thing I have read is that if you look at countries that have had outright fascist regimes, and at the level of support for the fascist regime in the population divided along class and income lines, you actually find that among the working class, which you'd think is most amenable to that kind of rhetoric given the economic difficulties it faces, support is much lower than among the classes above them on the income spectrum. So which section of society is most amenable to going along with such leadership is itself a fraught question, and the answer often surprises you if you look a little more closely.

But more specifically, I don't think the argument really is that you have a pure rule of the majority without any constraints. The point of a constitution is that it creates a structure within which that conversation can happen on equal terms. Rights like the right to free speech and the right to non-discrimination, and in India, for example, the right against untouchability and the rights against discrimination in access to private spaces, are all meant to provide ground rules ensuring that when people come into being constitutional citizens, they do so within a framework of overall equality, a certain parity of status; it is within those constraints that the discussion happens. A very good source on this idea is Habermas: in his book Between Facts and Norms he argues that civil rights, the rights to free speech, to equality and so on, are what he calls co-equal with popular democracy. We often think of these rights as checks on democracy, because they constrain what the people can do, but Habermas says it is actually the other way around: any meaningful democracy needs to operate within frameworks that ensure nobody can be a tyrant, nobody can dominate. So they move in tandem; popular democracy and these rights together give you a model in which you try to insulate yourself from the worst tendencies of tyranny. It's an imperfect answer, but I think it's the best answer we have right now.

Hi, Simon again. We have understood that algorithms are biased, and that data monopolies like Facebook and Amazon base much of their business on these biased algorithms. But is there a legal remedy to make them disclose these algorithms to public scrutiny? Because they claim IP rights over them and won't disclose them.

There has been one strong argument for a right to the source code; there are movements in many countries now saying, disclose the source code so we can audit for ourselves how this algorithm has been designed, and again Srinivas has been somebody arguing for that. But there have also been objections: the general public obviously wouldn't understand it. If you showed me the source code, I would have no idea what it meant, so you are still excluding a large number of people from understanding how these algorithms affect their lives. One interesting remedy is something that in the EU they call the right to an explanation: if an algorithm has taken a decision that negatively affects you, you have a right to an explanation of how that happened, and by a human being, so that in that sense you can't be adversely affected without that happening.
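A minimal sketch of what such a right-to-an-explanation flow might look like in code. All names here (`AutomatedDecision`, `finalize`, the reason codes) are illustrative assumptions for this example only, not any real Aadhaar system or EU-law API: the point is simply that an adverse automated decision is routed to a human rather than treated as final.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical types for illustration; no real statute or system defines these.

@dataclass
class AutomatedDecision:
    subject: str
    adverse: bool    # did the algorithm rule against the person?
    reason: str      # machine-generated reason code, e.g. "fingerprint_mismatch"

def finalize(decision: AutomatedDecision,
             human_review: Callable[[AutomatedDecision], str]) -> str:
    """An adverse automated decision is never final: it is escalated to a
    human reviewer, who returns an explanation the subject can contest."""
    if not decision.adverse:
        return f"{decision.subject}: approved automatically"
    # The right-to-explanation step: a person, not the algorithm, has the last word.
    explanation = human_review(decision)
    return f"{decision.subject}: adverse finding reviewed by a human ({explanation})"

# Example: the machine flags a fingerprint mismatch; a human officer overrides it.
result = finalize(
    AutomatedDecision("ration claimant", adverse=True, reason="fingerprint_mismatch"),
    lambda d: "sensor error confirmed; entitlement restored",
)
```

The design choice is that `human_review` is a required parameter: the function cannot produce an adverse outcome without a human in the loop, which is exactly the guarantee the remedy is meant to encode.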
In the Aadhaar case we tried to advance a similar kind of argument. My whole issue was that if, in the ration shop, your fingerprints don't match, then you don't get your ration, because according to the machine you are not who you claim to be: you're a fraudster. So one thing we tried to argue was that if the PoS machine shows that you are not who you claim to be because your fingerprints don't match, then you must have a right to an audience before a human being, to explain to them that the mistake lies with the machine and not with you, so that the final call can't be the algorithm's. Unfortunately that didn't fly in court, and there is no discussion of it in the judgment, but there are strong movements to deal with the problem you've identified in legal terms. It's still very much an embryonic issue that's on its way.

In one of your earlier responses you mentioned that one of the ways to ensure participation is for technologists to talk to constitutional frameworks, so to speak. In concrete terms, what does that really mean? Who is it that technologists should be talking to? And a related question: the critique from the social sciences often comes from the fact that technology and technologists don't necessarily recognize multiplicities and complexities on the ground. From your own experience, what do you recommend in terms of awareness and education for technologists and practitioners? What is it that they need to read? Books, I don't know; what would you recommend?
To answer the first question, I'll take a tangible example. Yesterday at Azim Premji University students were protesting against Wipro, because Wipro was involved in designing the tech for the NRC in Assam. Now, the NRC in Assam has unleashed the kind of horrors we read about in books on Nazi Germany: detention camps; people killing themselves because they are off the list; people having to travel 500 kilometres, in floods, to show a document to one government official to prove they are who they say they are; camps that resemble concentration camps. It's all happening over there, and you have people at the frontline, people who have seen for themselves what it means to live in a detention camp, lawyers who have been at the front lines trying to persuade a Foreigners Tribunal that a spelling error in a person's name across two documents doesn't mean they are a foreigner, it just means there's a mistake. If these people are put into conversation with the designers of the tech, it becomes, at least for me, very difficult to ignore someone telling you, look, there is a detention camp in Assam, and this is what happens in it. I would find it difficult then to say, okay, we don't care; if that can be ignored, we have problems larger than reading lists, and we need to start from the basics. On the second issue of what to read: prescribing reading lists is easy enough, and we can talk about that, but the more important thing is to be generally aware of the criticisms coming in. My own experience is that, right from the beginning of the Aadhaar project, there were a number of critics who clearly were not out to disrupt it.
Take someone like Jean Drèze, for example: a professor who has given up a very comfortable life abroad to teach in Ranchi, not even at a Jindal or an Ashoka, in Ranchi. When someone like that says, in the politest of terms, rethink what you're doing, and writes to explain why, it's hard to miss, if you are looking out for the critics: they are there, they are speaking up, and there are platforms on which to engage with them. So of course reading lists are important, but I think it's more important to engage as things are happening, because then the conversation becomes about specifics; the readings give you a framework, but it's much more important to talk as things are happening.

My question concerns the justification the state gave for Aadhaar in terms of e-governance and the reduction of corruption. Does corruption really reduce when you introduce Aadhaar into a workflow? EPW published a research article by two individuals working on Aadhaar in Telangana, arguing that its implementation doesn't really remove middlemen but makes it easier for middlemen to be corrupt in various ways. So I started wondering: is it ever actually the case that e-governance or technology makes it easier for resources to be allocated? Has it happened anywhere, has it ever reduced corruption to that extent? Or does it make it even harder for us to see corruption happening? Because when we check whether the amount has gone into someone's bank account through Aadhaar, and whether they have withdrawn it, it looks like corruption is gone, because the money reaches them directly in the bank. But the ground reality is that the person still has to give up a working day to go to a bank that is far away, and the person at the bank can do whatever he wants when handing over the money. So have you seen any study, or do you know of any place, where e-governance or technology has led to better resource allocation or reduced corruption?

I think it's a very good question. I remember a story about a school in UP where they apparently found 130 "ghost teachers", and Aadhaar had supposedly eliminated these ghosts who didn't exist. In fact they all existed; it just happened that their fingerprints hadn't matched, so the system declared them ghosts, and the claim became that we had been overpaying all these non-existent teachers and had now saved so much money, and that got counted as a saving. Many such examples came out at the time. At least from my own work, and I should obviously clarify that as a lawyer who opposed Aadhaar I am biased, so take what I'm saying with a few tons of salt, it seemed to me that the evidence on the reduction of corruption was very sketchy, and at many points it didn't add up. People like Reetika Khera have written extensively on the different kinds of leakage that occur at different points of the PDS distribution chain, and on how Aadhaar can address at most one of them, and even that imperfectly. So those studies exist. On your question of whether tech can enable better allocation, the answer is obviously yes; I can't think of examples off the cuff, but I'm sure they exist. Even in this whole debate, though, the debate was never about not using tech; it was about using tech in a way that empowers people rather than disempowering them by forcing them to put their fingerprints into a PoS machine at the risk of not getting their ration. That is the problem; it's not that tech is being used at all. In fact, at many points alternative solutions were on the cards, for example smart cards on which the biometric details are stored on the card itself, so that it is not a centralized state database keeping them but you who have control over them. Those kinds of options, where tech is used to enable and not to disable, were often discussed. For more on that you'd have to ask somebody more familiar with this history than I am.

On this question of whether tech solves problems: as technologists we tend to forget one very fundamental thing about technology. Technology is a tool that empowers whoever you give it to; it amplifies your intent. If I have to go somewhere two kilometres away, I can walk on my own feet, but if you give me a car, I'll move faster. The tool has empowered me to do what I intended to do; it doesn't empower someone else, it empowers the person you give it to. So one thing commonly mistaken about implementations like Aadhaar is the question of whom you are empowering in the first place. Yes, it's an empowerment tool, but whom is it empowering? That is something often not considered in discussions about technology. If you empower the state, the machinery of the state, while also complaining that the machinery of the state is corrupt and is not benefiting the individual, then how does empowering it help the individual? It empowers it to do more of what it is already trying to do. And that has to be brought into any discussion of whether technology can solve a problem.

Just a second: passports. Fifteen years back, passport offices were run rampantly by corrupt people; you had to pay extra money, and then you could get your passport within even five or six days. Now, because the process is technologized, Tata has taken it up, TCS as far as I know, and it is centralized; there's a database, they cannot do anything to you, they cannot do any favours for you, they cannot take money from you. So yes, corruption has reduced due to technology, at least in my experience of that particular field.

Hello. I don't need to stand up. Mine is a bit of a personal question, I guess. I've always been of the opinion that people should be completely free to say whatever they want, and I think a lot of us in this room would be along those lines, but I've also seen what unfettered free speech usually leads to. I don't know how many of you are on Reddit, but when the whole Donald Trump election was happening in the US, there were allegations that The_Donald subreddit hosted hate speech and brigading and that kind of thing, and a lot of people moved away from Reddit to another platform called Voat, which turned into a hotbed of extremist, basically far-right, ideologies. So even though I believe everyone should be able to speak freely, as you said, technology enables people to do things, and when a lot of loud people come together and snowball into a bigger force, it can have very unpredictable consequences which I don't think, and maybe I'm wrong, the legal system has yet seen; mathematically, these are new models of how people interact. So my question is, personally, or maybe from a constitutional standpoint, do you think there is
a clear line, a clear guideline, by which, if you are creating a product, or defining your own personal models or boundaries or stands, you can demarcate censorship from, I don't know what you would call it, prevention?

The answer is no, I don't think there is a clear line. It's a very, very difficult call to make, and the reason is that even if you were able to define what constitutes hate speech, what constitutes speech we all agree should be beyond the pale, language is a slippery instrument. I'll give an example. There was a famous case in Europe, I think it was in Belgrade, where a person had put up a red star outside a house. The authorities wanted to prosecute him because, they said, the red star represents Stalinism, it was Stalin's symbol, and Stalin's crimes are so great, he massacred so many people, that displaying it is equivalent to hate speech, a bit like Holocaust denial in Germany, except even greater in scale, since more people were killed. So they said, this is hate speech, take it down. When it went to court, the man said, look, the red star is not only a Stalinist symbol, it is also a symbol of workers' unity; how can you ascribe to it this one meaning, that it is Stalinist, when it reasonably bears at least two meanings? And the court agreed. If it were unambiguous, like a burning cross, a Klan symbol, or the swastika in Germany, things that carry a unique meaning you can't get around, then yes, you could prohibit it. But there are so many things open to more than one meaning that it becomes very difficult to affix a single meaning to them and say, okay, this is the meaning that is beyond the pale. So I think the clear line fails simply by virtue of the limits of language.

Should there in principle be a limit? I think yes, and that's because words are not just words; words are always embedded in a larger social context. That is why, in some of the more evolved constitutional democracies, hate speech is treated not as a question of free speech but as a question of equality. In South Africa, for example, the hate speech law comes under the Equality Act, and that comes from the experience of apartheid: the idea is that in an apartheid state, a "whites only" sign on a restaurant is not just those words but an integral part of the entire system of apartheid that subordinates one group to another. The best exposition of this, and I hate to recommend a judgment, but it is a very, very clear one, is a Canadian Supreme Court case called Saskatchewan v. Whatcott, which really spells out why hate speech is a problem and how and why we are trying to deal with it.
The second issue is that people often assume that once we agree certain kinds of speech should be beyond the pale, criminalizing them is the next step, but that is not at all necessary. For example, in the US, Catharine MacKinnon and Andrea Dworkin, the two feminist scholars, devised a law meant to deal with violent pornography, pornography that specifically subordinated women to men, and their solution was not a crime but a civil wrong requiring the payment of compensation, because the idea was that the problem is harm caused to women by normalizing certain kinds of behaviour, and the response is to compensate for that harm. So you can have a range of remedies, short of censorship, that answer our intuition that certain kinds of speech are problematic. I haven't done much more research on this, but I've been hearing of late that models of community self-moderation may be the way forward, something like what Mastodon does; it seems to be working, though I don't know enough to say. There is something intuitively attractive about self-policing, self-moderating communities that come together and decide: these are our values, and this is what goes beyond them.

We saw a question from that side about helping people be represented, to which the answer was participation, and this question about snowballing when there's a majority; I want to ask your view on the combination of the two. Even when you give proper representation to minorities, in a room of, say, ten or twenty people there will be one person representing them, and if you go by a democratic system and the other eleven out of twenty agree, there is nothing that one individual can do; whatever issues that minority faces will persist. So, in your view, is there a good way forward to address this kind of problem?

Technically there are many design choices you can bring in to address that. One is veto points: you can have a system where, if an issue affects that group or that individual, they hold a veto over it. Or you can set a higher bar, for example a requirement of a two-thirds majority for any change that affects those kinds of issues. In the Indian Constitution there is a requirement that a constitutional amendment affecting the states' interests must be ratified by at least half the states. So depending on how much weight, and what kind of weight, you want to give these issues: at one end of the spectrum you have a simple requirement of participation and consultation, after which the majority wins; at the other end you have an outright veto; and in the middle you have a range of options, say a first-round veto that can only be overridden by a second vote. There are many models to look at, depending on what you think is the correct balance. Again, the most important thing is to ensure that when the conversation begins, it begins in a framework of overall equality: the minorities are not coming as applicants trying to persuade you of their view, but as people whose view must be given equal respect; then you can move on to designing the specifics of the model.

We have a question from Twitter: what is the law in India that governs free speech on social media? If you keep a post private, visible only to friends, is it really private? What is the law's standard?
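The decision-rule options described a moment ago (simple majority, supermajority, state ratification, a veto for the affected group) can be captured in a toy sketch. The function and its parameters are illustrative assumptions only, not any actual constitutional procedure:

```python
def passes(votes_for: int, total: int, *,
           threshold: float = 0.5,
           affected_group_vetoes: bool = False) -> bool:
    """Toy decision rule: a measure passes if it clears the chosen
    threshold (0.5 = bare majority, 2/3 = supermajority) and is not
    vetoed by the group it affects."""
    if affected_group_vetoes:
        return False  # veto point: the affected group can block outright
    return votes_for > total * threshold

# Eleven of twenty clears a bare majority...
assert passes(11, 20)
# ...but not a two-thirds supermajority requirement,
assert not passes(11, 20, threshold=2/3)
# and no margin of votes overrides the affected group's veto.
assert not passes(19, 20, affected_group_vetoes=True)
```

The point of the sketch is that the "weight" given to a minority interest is just a parameter of the rule, sliding from bare consultation (default threshold) through supermajorities to an absolute veto.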
this is very interesting because there was a there was a case that never went anywhere but there was a case where some members of a certain institution basically on a private facebook page made certain comments against the management and someone has always happened to a screenshot and send that to the management and then and then they got you know they were they were suspended and the proceedings began and so one big issue was that was this private if it was private then can you take this action or not and the answer is indian law has no answer to that yet so so so that is very much still the first principles case that you can you can you can argue on both sides broadly what are the rules on of free speech on social media it's the same the indian penal code applies in the same fashion so you can't have obscene speech you can't have speech that creates hatred between classes unfortunately sedition is still very much an offense so speech is seditious so all of that applies in the same way as as it does offline so I think yeah the question about private groups is a really interesting one and and I think so far there is no answer in law but I think you will soon I think have a case that deals with it but so far there's no there's no law that that will answer the question although incidentally on defamation the moment you make defamatory speech to another person then it's already then it doesn't matter private public then it's need to be published and then you can be you know subject to defamation proceedings I want to ask a question from the technology camp so I do research in computer vision or something that at least could come under the broad ages of computer vision so I mean technology is not a monolith right I mean it comes at various levels of abstraction and interaction so you know at one level there's the product extreme adha or face tagger is a case in point it's technology which is as deployed in a specific application in a specific context and you know 
that's almost the easy case to take down — I've sat on panels on Aadhaar arguing that the design is really flawed at several levels, along with lawyers and sociologists. But take a step back. Take FaceTagr as an example: at one level you can critique the product for the horrible questions it poses about surveillance and bias and so on. But backing it is a more abstract field, machine learning; and backing that is a still more abstract field, numerical optimization. So as people working in this area, how can we interrogate the work we do through this constitutional lens that you, and several other people, have talked about — these conversations that need to happen between scientists and technologists and constitutional scholars, or people invested on the ground in sociological work? What do you do if you're not directly dealing with the product, but with these more abstract levels, which do not necessarily lead directly to the obviously controversial product? One obvious answer I can think of is to look at the breadth of applications and try to figure out which of them are directly controversial and which are not. Are there any other ways you would recommend looking at this, at some remove from the product work?

I think that's a wonderful question, and I think the answer is the one you gave. First of all, it's important to say that this is a question that can only be answered by the people who are doing that work, because the constitutionalists can put forward what their concerns are, but at that level of abstraction there's no way that he or I can possibly gauge what the process is or what it will lead to. That is something we are completely locked out of, because it's a highly specialized process that has its own vocabulary, its own
protocols. I'm actually reminded — my dad's a mathematician, an abstract mathematician, and he recently found out that in Grenoble they were using his mathematics for something involving a brain-to-artificial-limb connection. He had no idea until they called him and showed it to him. Sometimes even the abstract scientists don't know what their work can be used for. So one answer is yours. The second answer is something I was reading recently, which is a more direct-action kind of answer: in Canada and at various universities there has been a movement that wanted a complete severing of links between the Pentagon and the defence establishment on one side and universities on the other. The idea was that if your funding comes from the defence industry, then, like it or not, the direction your work takes will in some way be influenced by the demands of that industry. Even though in the beginning you can't see that link, ultimately you will be making products that will have some use for them, and that will happen in various insidious ways. So if you cut off that source — make sure that your university, or the department you are in, does not take funding from the Pentagon — then you are ruling out one possible source of influence at the very beginning. And many departments did agree to divest from that. So that's one possible solution. It's a very hard one, of course, because money is hard to come by, but it's one thing.

Because I'm an artist, I don't understand the technical details of how all this works, but I was thinking that the way the law thinks about technology feels like it's somehow final from the beginning — as if there is an intent, and that intent can be achieved. But often technology is always in the process of becoming;
it's not really finished. And I was thinking of that defamation case against Herdsceneand — which also goes back to the Twitter question. I was wondering: what are the legal frameworks that interact with technology not as a finished product?

Yeah — I completely agree, which is why I tried to say in the beginning that awareness of the Constitution and of constitutional values has to pervade the whole process. It's not something you look at only at the end, or even only at the beginning; at each step there's the awareness that this could have implications that go down that route. In that incident where the High Court asked Instagram and Facebook to reveal the identity of the account — that's the interesting part, right? Because if you're a designer, you would be well aware that at some point the state will come and ask you for the details of the people on your platform. So again, I'm speaking as a novice: as a designer, is it possible for you to design a product where, even if the state were to ask you, you would be unable to answer the question? I remember, a few years ago, I was abroad and someone from Cloudflare — I think it was the CEO — had come and was speaking. One thing he said was that their system was so secure that both Hamas and the IDF used it — which shows how clueless we can be about the final uses of a thing. But he also described something called a canary. What he said was: if the state asks me, as the operator, for the keys to the system, something will activate automatically and a message will be sent, so everyone will know that the state has asked me for details, and then people can take the steps they want. I'm really forgetting the details, but he had that model.
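As an aside: the "warrant canary" idea gestured at here is usually implemented the other way around. The operator keeps republishing a dated "we have received no demands" statement, and it is the statement going stale or vanishing that acts as the alarm — a gag order can forbid speaking, but it is harder to compel a fresh statement that is no longer true. A minimal sketch of just the freshness logic (real canaries are cryptographically signed; the wording and the 30-day interval here are purely illustrative assumptions):

```python
# Toy warrant-canary check: a missing, garbled, or stale canary is
# itself treated as the warning. Signatures omitted for brevity.
from datetime import date, timedelta
from typing import Optional

CANARY = "As of {d}, we have received no government demands for user data."
MAX_AGE = timedelta(days=30)  # operator promises to re-issue at least monthly

def issue_canary(today: date) -> str:
    """Statement the operator republishes on a fixed schedule."""
    return CANARY.format(d=today.isoformat())

def canary_ok(canary: Optional[str], today: date) -> bool:
    """True only if the canary exists, parses, and is still fresh."""
    if canary is None:
        return False  # canary removed: assume compromise
    try:
        issued = date.fromisoformat(canary.split("As of ", 1)[1][:10])
    except (IndexError, ValueError):
        return False  # unparseable canary: treat as the alarm too
    return today - issued <= MAX_AGE
```

The design choice matches the point being made in the discussion: the system is arranged so that silence, not speech, carries the signal — which is exactly the kind of up-front design decision that anticipates the state coming calling.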
I think you could easily predict that the kind of product you're designing would be of use to the state to do X, Y, Z, and then, from the very beginning, have certain design frameworks that would mitigate the compromises you would otherwise have to make later on, when the state comes calling.

So, I'm a techie, and I see law as code — executed by people rather than by computers, but with the same structure: you have these conditions, and if this happens, then this is what is done, and so on. But now I'm worried about the recent trend of law being replaced by actual code and executed by machines — conditions made by machines, with no man in the middle. What does the constitution do to stop these kinds of things from happening, and what's the way forward?

That's really a huge question. I would say that the person who expresses this whole issue really well is Virginia Eubanks in her book Automating Inequality. That book speaks to the heart of your concern: she has examples showing the exact ways in which code is replacing human decision-makers in core fields — insurance, the housing market, child custody, and so on. I would recommend that book very strongly, just to get a grasp of how the law interacts with all of this. What does the constitution say? Again, nothing directly, because the constitution was framed at a time when these problems were not even foreseen. So it gives you certain principles and
values that you try to apply. What's more interesting is the concrete developments you see around you. As I said a little while ago, there is this whole idea of a right to an explanation — the right that you will not be subjected to a decision that impacts you adversely purely through algorithmic processes, but that you have the right to access a human being and make your case before them before the final decision is taken. These are some of the modern legal frameworks that recognize the ubiquity of code in our lives and then try to see how the law could work to mitigate some of its influences. Of course, nowadays even lawyers themselves are starting to be replaced by code: in corporate law firms, many of the tasks that lawyers used to perform are now being performed by algorithms. So even law itself — apart from being like a code — is being taken over by code. That's something to keep an eye on in the near future.

I want to make a couple of comments. You talked about military applications and cutting off funding, but if you look at the US or India, the seeds of many of the things the civilian population uses were actually sown in the military — including the internet. So I personally feel that is a slightly dangerous line to take, because the source of funding and the application may not be divorced. The other point, adding to what I think Kiran said, is that technology is often demonized, but what people should probably also look at is who is wielding that technology. A great example is nuclear power: nuclear energy can be used for really good work, but you can also use it to make bombs. Similarly, when the internet came in, a lot of us who were its early users thought: this is great — you can have access to all of the information.
But now we have the problem of filter bubbles. So sometimes the first- and second-order effects of technologies are not immediately apparent, and it's a little harder because law actually has to play catch-up while we in technology are pushing the frontier — so even to the technologists it's not obvious what will happen. There are great examples of this in history: a lot of the early internet's security problems exist because it was designed as a trusted network, which obviously didn't scale — hence we have spam, hence we have break-ins and all kinds of stuff. One thing I think is missing in this discussion, which I want to bring to the forefront, is that everyone has been talking about algorithms, but increasingly, with machine learning and artificial intelligence applications, there has to be a lot of emphasis on the data that drives the decision-making. Unfortunately, a lot of people are not talking about the input data — something you touched upon — and algorithms may not be the only place where the bias lives. Also, in many cases, including Aadhaar, because we cannot estimate the first- and second-order effects of a technology, a good framework — and this is where I want to comment — would be to have effective redressal mechanisms. I feel that as technologists we do not look at the redressal mechanism in case something goes wrong; we don't build the safety hatch. And technology can build for that. A great example: I'm reading a book called Endurance, by Scott Kelly, who was on the ISS recently — the number of primary and secondary fallback systems they have for going into space and for re-entry. A lot of these can actually be built, and
even those can be regulated: the law can actually mandate that technology have some redressal mechanisms, and that is a tool I feel law has not used effectively. So maybe some comments on that would be useful.

Yeah — a lot of points there, and I agree with most of it. The funding issue, definitely: it's always a complex decision, never a clear-cut one. It's always worth thinking about exactly where the funding is coming from, which military it is, what they have done of late, and in what ways your department's research might be used — a whole number of issues would be at play. On the point about unpredictable second- and third-order results: obviously there's no denying that technologists can't always see where a technology might go, and nor can other people, so there's definitely a case for taking into account that some consequences can't be seen yet and will only appear in the future. But I think it's equally important to say that here and now there are people who are being affected, and an equal amount of attention needs to be paid to that. In service of a possible future second- or third-order benefit, what are you doing to people right now who are being made the subjects of this technology? That's where the issues of constitutional values come in: to what extent are you willing to potentially sacrifice some of those values because you think there might be certain kinds of benefits in the future? That's a very complex decision to make, but it's important that both things are taken into account at all times. On redressal mechanisms — exactly right. That's exactly where things like the right to an explanation come in: the right not to be subjected to a
decision that disadvantages you purely through an algorithmic process. All of those are redressal mechanisms, and the attempt there is to ensure that when things go wrong, the result is not something deeply damaging. Of course, the whole issue with Aadhaar was that the redressal mechanisms in question themselves had the same kinds of flaws that were there in the original design. Your fingerprint fails, so you don't get your ration, so you have to talk to the guy — but the guy himself is the one who's denying you the ration. So what kind of redress is that? That requires a little more thought: the redressal mechanism shouldn't suffer from the same potential flaws that make things go wrong in the first place. That's what I would say.

The issue of input data is a very important one, and it's regularly discussed in the literature. There's a famous term, "automation bias": the idea that because the algorithm itself cannot be biased the way human beings can, there's no real issue of equality or discrimination when the algorithm comes in in place of a human being. That, of course, ignores the fact that the person putting the data into the machine has their own biases, and those will play into the output. That's a big issue; in fact, a lot of the literature on the interface between AI and algorithms and constitutional values — Vidushi Marda, whom some of you may know, has done a lot of work on this — discusses the problems with input data and how they can be dealt with.

My last point: as you said, one should of course not demonize technology — its value is immense — and the second-order point you made is also
important. On some occasions, though, you can see, at the time you are developing the tech, how it will magnify a fault line — and FaceTagr is the best example. When you know that in our country the criminal justice system is systematically biased against Dalits and Muslims, and you're putting into the hands of the very people responsible for that bias a tool that will enable them to act on it at a faster, more efficient scale, then you already know it's going to cause these problems. The value of bringing the constitution into the discussion is that it helps you understand, right at the beginning, that what you're doing might lead to X, Y, or Z consequences — and that might then make you think about a design that will mitigate those consequences while still allowing you to keep your second- and third-order benefits on the horizon, if possible. That's what I would say to that.

So currently they're using facial recognition systems in India, especially during the CAA protests. Two things. One: according to me, they're doing it without any parent law, so how do you think they're going to justify this if it does go to court? And secondly, even if we do make a post-facto law, has that been done before? Is it even permissible to make a law to accommodate scenarios that took place before it was passed?

The quickest answer is Aadhaar, because for five years there was no law for Aadhaar. Then they made a law, and they said that everything that happened before is now deemed to be under this law. We said in court: look, this is absolute nonsense — you didn't have a law for five years, you were collecting all this data without any law, this is clearly unconstitutional. The court hemmed and hawed and finally said: okay, fair enough, you're right, there was no law; our answer is that those people whose
data was taken before the law came into being can now opt out. Of course, once you have already upheld Aadhaar for subsidies and tax, that is meaningless, because nobody can really opt out now unless you're really bloody-minded about avoiding Aadhaar. So that's what the court basically legitimized. To answer the question more broadly: retrospective lawmaking is a pretty common feature of Indian lawmaking. In tax it happens all the time — you make a tax law and say it's applicable from 1960. So that could well happen here too. Right now there is no law; it's clearly without the authority of law. The data protection bill has been pending for a long time, but if you look at it carefully you'll find that it gives wide leeway to the state to do all this, so even that may not really help. And of course the problems with facial recognition are well known, and I don't have to go over them all over again.

We're kind of running out of time — it's 4:40. We were going to wrap up and have 15 minutes of coffee and discussion. It's up to you; you have another event to run. Do you want to take Q&A till five o'clock? I can go on. Okay, great. Also, a quick note: there are t-shirts with the Preamble of the Constitution — no, sorry, Article 14 — on them, and Gautam's book, outside, in case anybody wants to buy them after the event.

So, my question is with respect to your response to the question on censorship. You suggested self-regulation, but the problem with tech is that it is increasing the concentration of power manifold among people who are already privileged. So do you think self-regulation will even work in that sphere?

Yeah — that's really part of the problem, and I don't think I have an answer to it, because the people who have access to the tech are themselves
already people who have a certain kind of privilege. In free speech theory there is this whole idea of the infrastructure of speech, and the idea that everybody should have equal access to that infrastructure. Up to a point that made sense, because newspapers and TV were the primary infrastructure of speech, so you could talk about equal time for political parties during elections, or a right of reply in a newspaper — there was a certain way to argue for that. But now I think it's very difficult to have a model that can guarantee that kind of equal access. So this is a problem that doesn't yet have a meaningful answer, I'm sorry to say.

I had two short questions. Taking off from the bias in the criminal justice system: I wanted to understand how much weightage is given to the class factors of people who are disproportionately represented in the justice system. Does the technology currently being used perpetuate this bias, or is the claim even made that we are reducing this bias through the use of technology? And secondly, on the PDP bill: section 35, which deals with state surveillance powers, now has a provision resembling the restrictions under Article 19 on the freedom of speech and expression. What's the explanation — does the state view its own use of data as a restriction on freedom of speech and expression? Is there a link here that I'm completely missing?

I'll take the second question first, and the answer is no — the state uses these words wherever it goes. It's just become a template now, and basically it insulates against legal challenge: the moment you bring a legal challenge, the state will say, look, Article 19(2), same words, what's the problem? That's basically why they now put those words everywhere. It's a long history:
they began with these words in the Cinematograph Act, for censoring films, then in the IT Act, and each time you have the same words repeated. That's why that happens. On the first issue: I think the problem is not so much that tech itself perpetuates the systematic bias in the criminal justice system, but that tech empowers the implementers — the police — at scale and speed. That's what the issue is. And yes: religion, class, and caste are all axes along which you see disproportionate representation in the criminal justice system. In a project on the death penalty, it was found that if you're poor you are much more likely to be given the death sentence than if you are not, and that operates across the board. So, again, to come back to the point: tech is not responsible for the fault lines that exist; the problem is that tech allows them to be magnified at much greater intensity and scale.

I had a question around an interesting question that was asked to the founder of a company called Anduril Industries. He was the original founder of Oculus Rift, and then he started another company that builds surveillance technology for army bases — tech that allows the army to secure its bases. The question asked was: today you're building tech where there's a grey line, but you can argue, "my tech is there to help the army secure its bases" — so how do you go about questioning that? And what is stopping them from repurposing the very same tech they sold as "hey, this is not evil" into evil tech? What is the role of law in that? And the other question I had was regarding
tech-enabled propaganda and misinformation. We have seen it in Brexit — I'm not sure if you've seen the movie on Brexit, where they talk about how targeted advertising was used to sway the poll. If you give people that power, they will use tech to sway the vote: Cambridge Analytica, tech-enabled propaganda, misinformation, fake news. What is the law's role in this? How can we build laws around it?

So, on your first question: I think, as we've discussed in the last couple of Q&As, if you engage with the history of what the army does — with what normally happens when a tool begins as a defensive tool — you will have enough evidence to know that what you're doing can, and in all likelihood will, be repurposed for something else. And in that case you have to take responsibility for it; it can't be an evasion of responsibility. You could still say, "I'm okay with it," but you then have to own it — you have to say, "this consequence is on me." That's the first thing. The second thing: so far, law has clearly failed to deal with this, and the normal legal responses — prosecuting people for fake news and all of that — clearly haven't worked. Now, one argument I've seen of late — and all of you will know this better than I will, so I'm just putting it out there — comes from antitrust lawyers. What they say is that a lot of this is enabled by tech monopolies. The reason fake news on Facebook is so powerful is that for so many people Facebook is the only source of news they have. So what they say is that the answer is to break up the tech giants and avoid tech monopolies.
I don't know if that will work, so I'm just putting it out there as something I've read from a legal point of view, though I think it's still a little speculative. Elizabeth Warren in her campaign has now promised to look into that, so we'll see if it goes anywhere.

My question is a little more diffuse, because as a layperson, for me design is a very diffused term — anything could be designed. Think about what's happening right now, where we talk about the citizen's responsibility to come onto the streets and protest about how the constitution is being wronged, or how you as a citizen are being wronged. When you think about the entire system, it's so complicated and complex that you don't understand where to intervene, or what an individual's responsibility is in understanding these links. To make this concrete, the example I was thinking of: we're living in a country where the abortion law seems clear — it says 20 weeks, and after that it's illegal — but even then it's very fuzzy; you don't know whether something is actually legal, since it requires the consent and approval of the doctor. And here is where technology comes in: for various diagnostic tests, the state — because it's trying to ensure that the problems of female infanticide are dealt with — also says that a lot of information is not to be made available to the person directly concerned. And I'm not even getting into women's rights and how control over the body is taken away. So it is such a complex system that I can't just look at the state and say, "you're being evil." There is a certain role that design is playing, in how technology and diagnostic tests are run, in order to say you have a 70% or 80% chance of this going wrong. Do you then have the option to exercise a choice, yes or no? But then the
state comes in and says: no, you have gone past the deadline, so you can't do this. And here again there's class discrimination — you can clearly see that women from the upper classes possibly get better treatment, which has been documented in many of the court cases we've seen, and recently the government has come out and said there is no unilateral decision that a woman can take. So it's such a complex system: as a citizen, or as a layperson, where do I see my individual response to it, and how do I understand the design of it? I'm just curious what you would think.

I'm going to have to give a very unsatisfactory answer to that, which is that all of us have a certain domain of expertise, and that's where we can see most clearly the fault lines and what the system is doing. If you focus on that and get clarity on it at the start — and everybody has their own area of expertise — then a conversation between different people can give you a sense of the other points in the overall structure where there are issues, and that will give you a holistic view. That's a very bad answer, but it's the best I can do. It just requires a constant process of engaging with people who are not in your domain of expertise, because that's what will show you what the other issues are.

Hi, I have two different questions. The first one is along the lines of what we were just talking about — censorship — but it's more specific to lying, to telling a lie. We're beginning to see Facebook in this position where they're impacting lies in the public domain, and Facebook says, "we're not going to stop you from lying in a political ad." But you also mentioned defamation — in India you could be subject to defamation law if you lied in a political ad, or anywhere. So what is the test used to determine whether you have made a political
point and exaggerated a bit, versus whether you have defamed someone? That's one part of the question, and I have one more, which goes back to free speech. You made your stand on this very clear: we should have limits; there's no such thing as fully free speech. The question I have is this: why should the state have the right to regulate the bigotry in my head? The point I'm making is that what counts as bigotry is a shifting target — it's not one thing; what is bigotry today may not be bigotry tomorrow. So one law that targets one specific type of bigotry is never going to be enough; you're always going to be subject to changes. So what is the underlying principle? Why does the state get to decide what I think or don't think? That's the more fundamental question.

On your first question, there's a very interesting story. President Zuma of South Africa — no longer president — sanctioned some money to improve his presidential home. That money somehow snowballed into an insane amount and ended up building a swimming pool, a kraal for his domestic animals, and all kinds of things. There was a big scandal about this, and just before the election the Democratic Alliance, the principal opposition party in South Africa, sent an SMS to around two million voters saying, roughly: Zuma has stolen your money to build his R246 million home — vote DA, vote out corruption. Zuma is furious; he goes to court and says this is a false statement — I didn't steal money, there was no theft — so these people have to retract it and apologize and all of that. The High Court dismisses the claim and says that in an election you can use words in a somewhat looser sense, so it's fine. The Court of Appeal says no, it's
not fine, because it's a lie. Then the Constitutional Court splits — some say yes, some say no. So across three levels of the South African judiciary, nobody can agree on whether it was a lie, or whether the word "stole" was meant in a metaphorical sense — just doing something wrong with money — rather than stealing in its literal sense. So again I'll come back to the point that language has so many limits that any attempt to pin down what is a lie and what is merely political is always going to be a very difficult task. Experience shows that you will have some clear cases. There was this meme going around on Twitter — a photo from the Emergency days of a leader reading out a letter to Indira Gandhi saying, look, resign — and it's now being turned into the opposite: somebody is spreading it claiming he is asking her to forgive him for protesting. That's obviously a lie; there's no dispute about it. So you have the clear cases, and that's why the law exists: to regulate those clear cases. But it's also accepted that there will be a large domain of very unclear cases, and that's just the way law and language work. It's not a good or a bad thing; it's the reality, and you have to live with it.

Secondly, bigotry: I don't think it's about regulating the bigotry in your head — (a) they can't, and (b) obviously they shouldn't either. It's more about the fact that when that bigotry, whatever it is, takes the form of speech that is out in the world, it's important to understand that it's not just words. As I said earlier, speech is always part of a much broader pattern of conduct, of social relations. Going back to the apartheid example: it's not the "whites only" sign in the
abstract. That sign cannot be separated from the entire social structure of whites subordinating blacks. So that's the very limited point being made: if you agree that speech is embedded in a whole host of non-speech patterns of behaviour and conduct, and if you agree that equality is a value that we should all subscribe to, then there comes a point at which the two clash. Now, when that point comes is of course a very hard question that you can disagree over. And who decides when the point comes? Again, your point is fair: why should the state decide? But then who else will? If we agree that the courts in any state will decide when equality as a constitutional value has been breached, then we have to agree that they will take that call. Of course, we may have a society where somebody else takes that call, but in our society right now that's the body which takes that particular call, and it's imperfect, and it's often wrong. But that's the kind of partial answer I can give, while acknowledging that what you're saying is very valid, and it's a very hard question to answer in a true sense.

Yeah, so I'm a technologist; I interact with various complex systems at various places of work. Earlier, till the nineties, systems were designed the way cars are: they just amplified your intent. That sort of ceased to be true in the modern era, when individual interacting systems started gaining their own biases, which comes back to the earlier point about AI, where a system develops an intrinsic bias that can be reinforced through outside stimuli. And when you compose a system out of various interacting systems with their own intrinsic biases, which are also subject to outside influences that constantly change those biases, these systems become very opaque and
unobservable. This is how our current social systems, and the way we are modelling current systems, interact and behave. Earlier, we technologists used to apply the principle of reductionism, which is where that objectivity somebody brought up came from. How did we arrive at that determinism, the idea that everything is an input, a process, and an output? Because the systems were reductionist: we kept breaking systems down until each did only one thing, so that for a given input it gave only one single output. That has ceased to be the case. There was an interesting YouTube video of an experiment where a self-driving car, in an ambiguous situation, had to decide which of two different passersby to hit: one was a young person in their twenties, and the other was an old person in their eighties. A group of people were given the choice to democratically select this bias, to move a particular input towards whether you hit person A or person B, and that determined how the system behaved and whom the self-driving algorithm finally hit. Now think about the ambiguity of the moral choices a system can make in those scenarios, and then stack a plethora of such systems on top of each other: what is the final output? The final output is not even what was intended to begin with. So the question that comes to us is: how can we make such technologies determine, you know, legality? I am a technologist, I know these systems. How can we, in our hubris, make such a system determine, like Philip K. Dick's book about pre-crime, how can such an ambiguous system, which is so error-prone and biased, determine whether a person has done something wrong or right? How can we give that responsibility to a system which we know, as technologists, can be so wrong, so error-prone, and so biased? And why aren't we directly stopping them?

I mean, I think the answer is that you shouldn't, and predictive policing in particular, the very idea, has been shown
to fail because of exactly these kinds of issues. The same happened with the error margins of fingerprints and of facial recognition. So why are we incorporating technologies already known to be flawed into a system which relies on accuracy? Yeah, I mean, that's exactly the question that all of us have to deliberate on together, and think about a little more deeply. So yeah, you're right, of course.

So, we're nearly at the end of our time here. You're free to stick around and chat to each other, and chat to Gautam as long as he is here. People might be able to follow him to his next event as well, which is, I think, ten minutes away on Mission Road, is that right? Yeah, it's very close by. If it's open to the public you could follow Gautam there; otherwise I think we're going to wrap up, but please feel free to stay on and have tea, coffee, and conversation. Kiran, do you want to... oh okay, there's one more question from the social media stream. This is someone on YouTube asking you a question. They say: one of the key ways that technology makes contact with society is through markets and market mechanisms. As the political philosopher Michael Sandel says, many of the problems we face today are because we are no longer market economies but are becoming market societies, making social decisions using market mechanisms. So what are your thoughts on how markets mediate the relationship between tech and constitutional values?

Yeah, I mean, Sandel's point is a great one, and the point he makes is that the logic of the market, of competition and atomism and isolation, is creeping into non-market-based relationships, friendship and even beyond that. So I think the answer, briefly, is that I don't think there's a unique issue at the interface with tech; it's not a problem unique to tech. It's a broader problem with the increasing marketization of society, and that is important to understand and to push back
against. Tech, again, simply magnifies that, and so I don't think the answer is tech-specific; there's a broader issue about thinking through the extent to which we want market logic to penetrate into domains beyond the market. Right, thanks. Thank you, thank you so much.
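The point raised in the exchange above about the error margins of facial recognition can be made concrete with a base-rate calculation. This is a minimal sketch with entirely hypothetical numbers (the city size, watchlist size, 99% sensitivity, and 1% false-positive rate are assumptions for illustration, not figures from the discussion): even a system that sounds highly accurate produces mostly false matches when the people it is looking for are rare.

```python
# Base-rate sketch: why a "99%-accurate" face-recognition system can still be
# unreliable for policing. All numbers below are hypothetical.

def flagged_counts(population, true_targets, sensitivity, false_positive_rate):
    """Return (true hits, false alarms) if the system scans everyone once."""
    innocents = population - true_targets
    true_hits = true_targets * sensitivity          # targets correctly flagged
    false_alarms = innocents * false_positive_rate  # innocents wrongly flagged
    return true_hits, false_alarms

# A hypothetical city of 1,000,000 people, 100 of whom are actually on a watchlist.
hits, alarms = flagged_counts(1_000_000, 100,
                              sensitivity=0.99, false_positive_rate=0.01)
precision = hits / (hits + alarms)  # chance a flagged person is a real match
print(f"true hits: {hits:.0f}, false alarms: {alarms:.0f}, precision: {precision:.1%}")
```

With these assumed numbers, roughly ninety-nine out of every hundred people the system flags are false matches, which is the sense in which an impressive-sounding error margin can still undermine a process that relies on accuracy.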