So, welcome everyone. Today we are having the Internet Freedom Foundation discussion on COVID surveillance and privacy in India. We have with us Sidharth Deb of the Internet Freedom Foundation, Professor Subhashis Banerjee of IIT Delhi, and Uttar Prasad, who is a PhD scholar at the IIT Bombay Centre for Policy. And so, to set the context right: today is May Day, Labor Day, and we are facing this issue of a constant push of apps by the government, as well as by companies who are pushing the Aarogya Setu app onto every individual and employee in their offices, to the point that an employee cannot attend office without it. It could be doctors, government employees, or Swiggy delivery workers. So it's important that we discuss some of these issues. The app is becoming ubiquitous, even though it is being stated as voluntary. With that, we will start off with Sidharth, who is going to walk us through what IFF has been doing with their work on documenting the surveillance measures and the nature of the technologies being used now. Sidharth? Okay. Thanks, Srinivas. So, just to give you a sense of exactly the nature of IFF's work, and I hope I'm audible to the entire audience who joined. Initially, when the coronavirus started spreading within India, one of the first responses that we saw from states like Karnataka and Punjab was the publishing of quarantine lists. Those lists contained details such as people's names, their addresses, their travel history and even their PIN codes. And through that, we also got wind of reports that people were being ostracized within their communities, either them or their friends and family, or that they were being denied access to essential items, or in certain circumstances even getting eviction notices from their rented accommodation.
Given that this was a clear violation of the right to privacy, when we look at it from the perspective of the Supreme Court's judgment of August 2017, we wrote to the central government asking it to quickly issue an advisory that state governments should refrain from such practices, since these lists directly interfere with the privacy of individuals. Luckily, the government did respond and did issue an advisory that states should refrain from these practices. But what we've noticed is that there are still instances where quarantine lists or similar lists are being published in the public domain. Now, while these are less sophisticated instances of government action interacting with people's privacy, we noticed that there were more sophisticated attempts as well: through, let's say, the mapping of people's cell phones by matching signals pinging off cell phone towers, for instance, or the development of specific apps at the state level. Then we noticed that certain states were using either drones or facial recognition to complement their surveillance activities, to ensure people were adhering to either the lockdown or quarantine directives and so on. And then, of course, we got wind that the central government was developing a contact tracing app, which initially appeared in the avatar of the Corona Kavach app, which was put out for download for a couple of days and rolled back, and then of course the Aarogya Setu app was launched on April 2nd. In that time period, we noticed that all of this was happening in a rather fast manner, and there was an appreciation within IFF that this interacts with people's civil liberties and the right to informational privacy.
So it became apparent to us very quickly that we needed to study the subject and have a nuanced position on how to ensure that, should a government use technology to respond to the public health crisis, which is the coronavirus, it still does so in a manner which is consistent with the rule of law, respects people's informational privacy, and can essentially hold the central government or state governments accountable. Which is why we developed a comprehensive working paper on the subject, where we've studied both domestic and international deployments of technology, whether it's location surveillance, app-based surveillance, etc. And we studied international human rights related literature as well, to try and figure out a way forward for countries like India. The findings I will get to in a bit, but since the publication of this report on April 13th, there have been certain changes made to the privacy policy of India's Aarogya Setu app as well. And of course, since then we've gotten multiple reports of the fact that, as Srinivas earlier alluded to, technology platforms were mandating gig workers to download, install and use the Aarogya Setu app while administering deliveries. We've received similar reports that in government hospitals, doctors, nurses and healthcare professionals are also being required to download, install and use the app. Similarly, there was a notification yesterday that people working within the public sector have to use the app. And therefore, because it's Labor Day today, we have come out with a joint statement to be sent not only to the Prime Minister's office, but also to the Labor Ministry, the Health Ministry and the IT Ministry, to issue a clarification that the Aarogya Setu app is not mandatory and cannot be made mandatory, and that if done so, that is incompatible with people's right to privacy.
Now, coming to the actual deployment of these technologies, and integrating our analysis in the working paper on reasonable restrictions to people's rights: what we first must understand is that in 2017, when the Supreme Court recognized the right to privacy as part of India's fundamental rights, it contemplated restrictions to that right when responding to epidemics or public health crises and so on. The court very clearly says that a government can use health records and related information to respond to public health crises. But nevertheless, there is an onus on them to ensure the anonymity of people's personal health records and related information. And it must also be administered under a lawful legal regime and must satisfy thresholds of necessity and proportionality. So when you look at that three-part test, the first step is that anything the government does should be under the framework of a legal regime or a lawful basis. Now, the government of India, and even state governments, would argue that this exists in the form of, let's say, the Information Technology Act. But if you closely study the limited data protection and privacy provisions within the Information Technology Act, they are not applicable to the government of India. They are only applicable to the private sector. So there is an instance where there is no accountability as such when it comes to government usage of personal health records under the IT Act. Then if we get into a more specific domain, which is health-related frameworks: if we look at the Medical Council of India's binding code of ethics, it is again silent on the treatment of health data and related information once it is disclosed to the government.
Similarly, when it comes to the electronic health record standards, again, while there are certain specificities with respect to anonymization, for actual accountability it links the framework back to the IT Act, and that again is not capable of holding the government of India accountable. So in such a scenario, what we notice is a clear legal vacuum, where there is no lawful regime that can tell a government: these are the instances where you can use health records or other technological data points to respond to a public health crisis. And that in itself creates a unique disadvantage in how India is responding to this crisis. The second step of the three-part process is necessity. Now, most governments will just say that responding to a public health crisis is in itself enough to justify the use of technology to respond to the coronavirus, whether it's in the form of a contact tracing app like the Aarogya Setu app or any other mode. But the reality is that there is ample literature, which is coming out now or has even come out in the past, that says that when you look at necessity, you need a second layer of analysis. That second layer of analysis is that you need to justify that any technological system you're deploying is an effective system which responds to the public health crisis itself. Because when you're using these systems, they interact with people's civil liberties, so the demonstration of effectiveness is imperative. Now, to show exactly how this is translating in jurisdictions like Europe: to ensure that any deployment of a contact tracing solution is consistent with the right to privacy there, the European Parliament is already thinking about mandating upon European states the need to establish a proof of concept.
A proof of concept to show, through statistically relevant models, that if I deploy this technology in this manner and under these conditions, this is how it will be effective in responding to the public health crisis itself. Now, that is something that has been completely overlooked in the Indian context. The Indian context is also handicapped by the fact that the government of India has not been very transparent in the development of an app like Aarogya Setu. So all that we have to go by is its privacy policy, its terms of service, and then a periodic referral to the front end of the app, given that, firstly, there is no publication of the underlying source code of the Aarogya Setu app. Secondly, researchers are forbidden from reverse engineering the app. Thirdly, unlike other countries' efforts to build contact tracing solutions, there is no comprehensive portal which can tell you exactly what the app is doing, what information it's collecting, how it's using it, and whether it's stored locally or exported to an external server. You look at the terms of service and the privacy policy, and they don't even mention which government departments are actually accessing the app. So there is a suite of issues that we've captured in our working paper when it comes to its failure to address a justification under the necessity standard. And finally, when we look at proportionality, proportionality has to be viewed through a few different lenses. One is that there needs to be adherence to the purpose limitation principle. What is the purpose limitation principle? It says that you have to establish that these systems are being used for a singular, specifiable purpose. Now, when you look at both the terms of service and the privacy policy of the Aarogya Setu app, there is enough vagueness in the language for it to be repurposed for greater or more expansive capabilities.
That's also corroborated by our observation that, a day after the app was available for download, the government of India set up a committee which comprised NITI Aayog, the Principal Scientific Adviser, the Electronics and IT Ministry, the Department of Telecom and TRAI. So that's clearly alluding to the fact that the government is looking to expand the capabilities of the app or repurpose it. We've also seen, subsequent to our working paper being published, that the app now has the capability of being an e-pass system. And just a day or two ago, MediaNama came out with a story about how greater capabilities have been introduced into the front end of the app, and so on and so forth. That's one. Then you come to the actual data which is being collected by the app. There is a principle of data minimization that these apps, or at least technologies across the world, are trying to wrestle with when it comes to contact tracing. Some people are talking about solely going down the GPS route. Some people are talking about solely going down the Bluetooth route. But the Aarogya Setu app combines both, and the justification provided for combining both is not apparent if you scan either the terms of service or the privacy policy. Similarly, there is a self-identification test right at the beginning of the app which collects about eight or nine data points, such as your name, your age, your sex, I think your address, initially also a question about whether you're a smoker or not, and information related to your travel history, for instance. So it collects a lot of information and then exports it to a central server. Now, there's a big debate going on in Europe about how we can minimize the ability of a government app to collect people's information and transfer it to an external server, and ideally keep everything on people's devices.
The reason behind that is that once you start creating an external system, that's when the risk of permanent architectures of surveillance becomes a reality. So that in itself is something the government of India is already doing. That's not to say that that is their intent, but the design of the app, the way that it functions, and the institutions suggest that the incentives are aligned towards not keeping this a temporary response to the coronavirus itself, but something which could mutate into something more permanent in nature. And incentives are a big thing in terms of creating the right sort of checks and balances, where you can use technology to respond to a particular outbreak, yet if it's not working, you can roll it back, or once the purpose has been satisfied, you can roll it back. What we've also noticed in the institutional design itself is that while other governments are giving the primary responsibility for these technological systems to public health authorities, the role of public health authorities in India is minimal at best, if you look at just the constitution of the committees and who has been driving the development of the system. And the Aarogya Setu app, as I mentioned earlier, does not refer to the exact departments within the government of India who have access to these databases, which are of course centralized in nature. There are also reports that the Aarogya Setu database, which is maintained by the National Informatics Centre, is already being integrated with other government databases, like the Integrated Disease Surveillance Programme database, or other databases with respect to people who have recently traveled back into the country. So when you start enmeshing these databases together, there is a ton of literature which suggests that it becomes a lot harder to delete them at a later stage.
We have already seen that with India's experience with Aadhaar, where Aadhaar has been seeded into multiple other databases, and that makes it a lot harder to untangle those databases and then destroy them or roll them back, and so on and so forth. Another challenge with the way the government of India has been going about it is that there is no independent oversight mechanism. That is a core component of even what the Supreme Court talks about in terms of reasonable restrictions to the right to privacy. What I mean by that is that you need an independent oversight mechanism to be able to hold government activity with respect to these practices accountable. The reason is that when you have an oversight mechanism which comprises, let's say, people from the judiciary, other people from the legal community, civil society, etc., it diversifies the actors at play. And if you don't do that, then the natural instinct of any government, regardless of political leanings, will be towards trying to access as much information about its citizens as possible. What you need is the right kind of institutional mechanisms to have the right checks and balances. So here what we have suggested is that you first need to explicitly mention that the Aarogya Setu app has defined sunset periods of review. So, let's say, after every three months that oversight mechanism can review whether the Aarogya Setu app is being successful in terms of the public health response. If it is, then to what extent, and if it is not, then there should be an override function where you can just kill the entire program itself. And on top of that, what you need is an assurance that these systems will not be used for law enforcement purposes or for the enforcement of quarantine directives and so on. We've studied a bunch of different global models in our working paper.
In fact, we're updating our working paper, which we will eventually share with all the relevant government departments early next week. We've particularly done a case study of the Aarogya Setu app as it stood prior to April 13th; Singapore's TraceTogether app, which is better than what India has done on some level, but has its own privacy concerns as well, which we've highlighted in the working paper; and we've also studied the Massachusetts Institute of Technology's Private Kit: Safe Paths app. That is promising on some level because it looks at a decentralized structure of using GPS, which may be more inclusive, given that a significant part of India does not have Bluetooth-enabled smartphones. So that's an interesting model to look at from an effectiveness and inclusion perspective. But what we've noticed is that if you were to use that model, there are several issues with respect to holding an institution like MIT accountable when it shares or creates an underlying infrastructure for governments to use. For instance, how does MIT hold governments accountable? That's something we flagged as a concern. Also, the Private Kit: Safe Paths protocol does not have its own project-specific privacy policy, and it also creates certain concerns with respect to outlasting the purpose itself, because they've talked about how the system could perhaps be repurposed for general monitoring of human behavior during public health outbreaks, disease outbreaks, and so on and so forth. And we've also dedicated an entire chapter to the Google-Apple contact tracing, or exposure notification, announcement, and we've highlighted certain concerns with respect to how they hold governments accountable, with respect to surveillance capitalism, and with respect to issues of competition and conflicts of interest. I'll stop here for the time being and let other people contribute. Professor Banerjee? Hi, am I audible? Yes.
Thank goodness, I was facing problems with the connection. Okay, thanks for inviting me. So I will start with the caveat that I'm not a specialist in this business at all. What I'll talk about will be primarily based on my common sense as a computer scientist, a practicing computer scientist for a while, and I have never worked on Bluetooth and stuff like that. So, to look at the app: the objective of the app is contact tracing. There's been some talk about geofencing for enforcing quarantine and so on, and some of these apps have also tried epidemiological modeling to figure out how infection spreads. So what I'll do is first try to cover the basic engineering principles under which these apps may work, and then I'll spend some time on utility versus privacy. So I'll first talk about utility and then about privacy. The basic technology is based on two things. The most talked about thing is location, location with GPS. GPS works by triangulation from satellites, and the accuracy ranges from a few meters to a few tens of meters outdoors, and it is completely unreliable in the vertical dimension: you can probably have only 40-50 meter resolution in the vertical dimension. It does not work very well amid high-rise buildings. For example, if you have run in New York with a Garmin watch, you will find that your route is completely off because of the tall buildings out there; it is completely unreliable in those settings. So if you're in this building, it can easily say that you're in the next building. It can work with some kind of rubber banding over time, like it does in Google Maps, but rubber banding requires manual marking of certain things. So without any rubber banding, the geolocation is accurate only up to a few tens of meters, and if you're in one building, it can easily say that you're in the next one. That's how unreliable GPS will be for location. Cellular data has even lower resolution.
So the resolution can be 50 meters or worse. Those are the two basic location technologies that you can use. Many of these apps talk about proximity using Bluetooth. This is technically called Bluetooth Low Energy, or BLE. The way it works is that a device transmits a low-energy radio beacon intermittently. This transmission is isotropic, not directional, which means that it transmits equally in all directions across the entire solid angle. The other device, a listening device, picks it up in a certain time slot. And once the two devices establish Bluetooth communication, the distance is estimated based on the strength of the received signal relative to the sent signal. So there's a formula that translates the signal strength to a distance estimate. Now, the difficulty with this is that if you transmit too frequently, the battery will drain out. So the transmission has to be periodic, and the transmission frequency will determine your accuracy of contact tracing. If you transmit, say, once in 10 minutes, you will be able to detect proximity only once in 10 minutes. What the right transmission frequency for contact tracing is, is unclear; there is no theory that I have seen that determines, for the spread of infection, what the right frequency of transmission should be. So the time window threshold is sort of indeterminate, and they are arbitrarily set to 10 minutes, 15 minutes, five minutes, and so on. And your battery drain depends on the transmission frequency. So those are the two basic technologies one can use. And so the first question that comes to mind in the utility versus privacy debate is utility. The utility of contact tracing will essentially depend on the reliability of contact tracing. If I understand correctly, and I'm not a virologist, infections spread in two different ways. One is direct inhalation of droplets in aerosol form.
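The signal-strength-to-distance formula mentioned above is usually some variant of the log-distance path-loss model. Here is a minimal sketch, assuming a calibrated 1-meter RSSI of -59 dBm and a free-space path-loss exponent of 2; both values are illustrative assumptions, not parameters of any particular app.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Estimate distance in meters from a BLE received signal strength.

    Log-distance path-loss model: rssi = tx_power - 10*n*log10(d),
    hence d = 10 ** ((tx_power - rssi) / (10 * n)).
    tx_power_dbm is the calibrated RSSI at 1 m; the exponent n is
    roughly 2 in free space and 2.7-4 indoors. Defaults are illustrative.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))
```

Note how sensitive the estimate is to the environment: the same -79 dBm reading maps to 10 meters with exponent 2 but under 5 meters with exponent 3, which is part of why BLE distance estimates are unreliable indoors.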
And you're required to be within a distance of two meters of an infected person to get a sufficient viral load from the aerosol. This is direct spread of infection. The second mode of spread is indirect pickup from a contaminated surface. So somebody's sputum or sneeze droplets can land on a table or a hard surface, and you can pick it up with your fingers; that's the second mode of transmission. Now, as far as I can figure out, the second mode is completely inaccessible to either GPS or Bluetooth. You cannot do anything about the second mode. There's no reliable model by which the second mode can be determined, whether a person is picking it up from a surface or a table. So you have to associate a certain risk: if you're in close proximity, what is the probability that you picked up the infection within that window. Now, GPS resolution is definitely not enough for the first mode, the two meter resolution, especially in dense settings. If there are only two people in a 20 meter radius at all times, and one of them gets infected and the other picks it up, then you know the causal link of the infection spread. But if there are 10 people out there, for example, and they mingle around in a 40 meter radius in various ways, then the attribution of who picked up the infection from whom is simply not possible with GPS, because GPS does not work at a two meter resolution. So GPS will be quite unreliable for contact tracing, in my opinion. Now comes Bluetooth proximity. Bluetooth proximity can perhaps work for the first case, the direct aerosol pickup. It definitely doesn't work for picking up from a contaminated surface, because one person may leave the infection on a surface, and I'm told that it can stay there for 12 hours before a second person picks it up. So Bluetooth proximity does not help there at all.
Now, whether you use Bluetooth or GPS for proximity calculation, the path intersection will require some centralized aggregation from multiple cell phones. None of the cell phones is powerful enough to do the intersection computations all the time; the intersection computation cannot be done in a reliable manner in a distributed fashion on cell phones. This is something that is unlikely to scale, and nobody does that. So everybody uses a centralized server. And ideally, you should be computing path intersections with either GPS or Bluetooth, which most people do, but you should also be computing the intersection of space-time volumes, given that you can pick the infection up from surfaces. For example, if my path crosses yours just in space at a point in time, that's not enough, because I can pass a spot 12 hours after you have passed and still pick up the infection. So ideally one should compute a space-time volume intersection, and whether any of these apps contemplate doing that is not very clear. To be able to do a space-time volume intersection, one will require persistent IDs, because my ID will have to persist over a certain temporal duration if you are to do a space-time volume intersection at all. With rotating IDs or dynamic IDs, this becomes harder to do. So it appears to me that there's a leap of faith from geo-colocation or Bluetooth radio proximity to infection risk. What is the theory that says that if I am in Bluetooth contact with somebody else's phone, what is my probability of passing an infection to the other person? It is not at all clear to me. To me it appears that this has to be rooted in theories in biology, physics and probability theory, and I for one have not seen any white paper or any document that says how Bluetooth proximity translates to an infection risk model at all.
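The space-time volume intersection described above can be sketched as a naive pairwise check: flag a contact whenever one trace passes within a given radius of a point the other trace occupied up to some hours earlier. The 2-meter radius and 12-hour window below are illustrative assumptions taken from the talk, not epidemiologically validated thresholds.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def spacetime_contacts(trace_a, trace_b, radius_m=2.0, window_s=12 * 3600):
    """Naive O(n*m) space-time volume intersection of two GPS traces.

    Each trace is a list of (timestamp_s, lat, lon). A contact is
    flagged when B passes within radius_m of a point that A occupied
    up to window_s earlier, so that indirect pickup from a surface A
    touched is also covered. Both thresholds are illustrative.
    """
    contacts = []
    for ta, lat_a, lon_a in trace_a:
        for tb, lat_b, lon_b in trace_b:
            if 0 <= tb - ta <= window_s and haversine_m(lat_a, lon_a, lat_b, lon_b) <= radius_m:
                contacts.append((ta, tb))
    return contacts
```

Of course, with GPS errors of tens of meters, a 2-meter radius is below the instrument's resolution, which is exactly the reliability problem being described.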
So the assumption that every Bluetooth proximity will translate to an infection risk seems untenable, because my virologist friends tell me that if I am in a 10-minute conversational contact with an infected person from a distance of one meter, then the chances of infection spread, they would think, would be less than one percent. And if that is the case, then there will be just too many false positives with Bluetooth proximity for it to be useful at all. Any measurement instrument, and in this case this is a risk measurement instrument, must have an associated error model. This is an elementary engineering principle, taught in the first and second year: never declare a measurement unless you can say what the error in the measurement is. This is called the principle of least count. So if you are saying that there is a certain risk of infection spread, it is your responsibility to also say what the probability of a false positive is and what the probability of a false negative is. And in none of these apps, not only Aarogya Setu but the two MIT apps, the Singapore one, do I see anything that can be called even a semblance of an error model. So the possibility of too many false positives and too many false negatives exists, and given the situation, I would say that the utility is extremely doubtful, and it is not at all clear to me that the app will do anything that a simple local community cannot do much more effectively. I'm not saying that there can be no utility in it, but the utility has to be established, and I have not seen any study that has established the utility at all. So we have to keep our minds open, but the first way to establish the utility is to at least come up with a theory paper, a theory white paper, to say why it should work and under what principles, and then do a pilot to evaluate whether it works at all.
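The false-positive concern can be made concrete with a back-of-the-envelope Bayes calculation. All three numbers below are purely illustrative assumptions (the roughly one percent per-contact transmission probability suggested above, plus guessed detection rates); the point is the shape of the result, not the specific values.

```python
def positive_predictive_value(sensitivity, false_positive_rate, prevalence):
    """P(true transmission | the app flags a contact), by Bayes' rule."""
    true_alarms = sensitivity * prevalence
    false_alarms = false_positive_rate * (1 - prevalence)
    return true_alarms / (true_alarms + false_alarms)

# Illustrative assumptions: the app flags 80% of genuinely risky
# contacts, also flags 30% of harmless proximity events (Bluetooth
# through walls, brief passes), and only 1% of flagged-type contacts
# actually transmit infection.
ppv = positive_predictive_value(0.8, 0.3, 0.01)  # ~0.026
```

With these inputs the positive predictive value is about 0.026, i.e. over 97 percent of alerts would be false alarms, which is why an explicit error model matters.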
So all of that is missing, and without that, thrusting it on an unsuspecting population is, I think, a little uncalled for. Given the situation, if I have to come to the utility versus privacy debate at all, I have to assume that the utility is there, because if there is no utility, anything that violates privacy is unacceptable. So the utility versus privacy debate makes sense only if we assume the utility. For the time being, let's say that there is utility, and then let's ask what the privacy issues are. Just the other day, I heard a group of lawyers in a webinar like this, and I think one of them mentioned that privacy is a luxury right: it's a right, but a luxury right. And of course there were indignant protests from several others. This is not a computer science view. Computer science has never taken privacy to be a luxury right; in fact, privacy protection is considered paramount, and computer science has over 30 years of privacy research. We have at least 10 or a dozen frontline journals that deal just with privacy, there must have been at least 5,000 PhDs on privacy, and there have been privacy papers since the 1970s. So to say that an application need not take care of privacy is definitely not the mainstream computer science view. And if I take a computer science view, I think computer science will probably take a more stringent view of the utility versus privacy trade-off than law will. The question that has been asked in computer science most frequently is: why a trade-off at all? If there is utility, the onus is on the designer to show that there is a need for a trade-off. In fact, computer science would like to believe that you could have both utility and privacy, and complete privacy, for most situations.
So if you are saying that we are doing it in an emergency and there has to be a trade-off, the onus should be on the designer to show that the trade-off is necessary and that complete privacy protection with technology is not possible. And I think, if I hold these apps to scrutiny, most of them will turn out to be bad computer science, because they have come up with designs without conclusively establishing that a trade-off is necessary at all. When two devices exchange information, like two cell phones do, the privacy protection principles were established even before cell phones came into use. There's a seminal paper by David Chaum in 1984 that showed us how two devices can exchange information without compromising privacy at all, and that was based on principles of virtual identities, which most of these contact tracing apps seem to be using. But unfortunately they are not using it even to the fullest extent in which Chaum outlined it should be used in 1984, so that is somewhat surprising. So if I have to look at the privacy protection in these apps, I'll restrict myself to Bluetooth. I won't even talk about GPS, because it is well known in computer science that with location, no privacy preservation is possible: it requires only five measurements of my daily movements to exactly identify me.
There is apparently only one individual in the whole world who will exhibit a certain movement pattern which is exactly mine. For example, I live in a certain place within IIT, my office is in a certain location, and I visit a few places. So this is almost a biometric, and if you make a signature out of my movement pattern, you can identify me with extremely high probability; there have been several papers to show this. So with geolocation, privacy preservation will be extremely hard if the geolocation is made public, and the only way privacy preservation is possible with geolocation is strict access control over who can access the geotracking information. The situation is a little different with proximity sensing. Proximity sensing happens like this: everybody downloads the contact tracing app, and suppose two users are A and B. A and B both generate contact tokens, which rotate over time in most cases, and they cannot be used to reveal the identity directly or to track over time. Most of these apps use dynamic IDs that change very frequently, in a matter of 10 or 15 minutes; in fact, the original BLE protocol also changes identity, I don't recall over what time duration, but the identities keep changing. In Aarogya Setu, however, the identities don't change: the identity is static, they don't use dynamic IDs. And once the ID is static, I think privacy preservation is almost impossible. There can be a variety of privacy attacks using auxiliary information, all well known in the computer science literature, and any determined user should be able to identify people fairly easily with access to the information.
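A toy sketch of how rotating contact tokens and the decentralized publish-and-check pattern can fit together, in the spirit of the rotating-ID schemes just described. The HMAC-based derivation, the 16-byte token length, and the class structure are all illustrative assumptions, not any published specification.

```python
import hashlib
import hmac
import os

def rolling_id(daily_key: bytes, interval_index: int) -> bytes:
    """Derive a short-lived broadcast token from a device-local key.

    HMAC-SHA256 truncated to 16 bytes; the derivation is illustrative,
    in the spirit of rotating-ID schemes, not any published spec.
    """
    msg = interval_index.to_bytes(4, "big")
    return hmac.new(daily_key, msg, hashlib.sha256).digest()[:16]

class Device:
    """Toy decentralized contact-tracing client (a sketch, not a spec)."""

    def __init__(self):
        self.daily_key = os.urandom(32)  # never leaves the device
        self.sent = []                   # tokens we broadcast
        self.received = []               # tokens heard from nearby devices

    def broadcast(self, interval_index: int) -> bytes:
        token = rolling_id(self.daily_key, interval_index)
        self.sent.append(token)
        return token

    def hear(self, token: bytes) -> None:
        self.received.append(token)

    def exposure_check(self, published_sent_tokens) -> bool:
        """Local check against the published tokens of diagnosed users;
        no central party learns who met whom."""
        return bool(set(self.received) & set(published_sent_tokens))
```

Here, if B heard one of A's broadcasts and A is later diagnosed and publishes its sent tokens, B's local intersection check fires, while a bystander who never met A gets nothing; a static ID scheme would lose exactly this unlinkability.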
Now if the IDs rotate, then when two people A and B meet, they exchange these randomized contact tokens over Bluetooth, and both A and B keep a list of all the received and all the sent tokens on their cell phones. This is what happens in the MIT apps or in the proposal of Google and Apple: these exchanged randomized virtual IDs are maintained locally on the cell phones. Now if B is later diagnosed with the disease, then B submits the tokens to a server. For Google and Apple that server is untrusted; for the Singapore app the server has to be trusted; for Aarogya Setu the server is completely trusted. Whatever the case may be, the infected person submits the tokens to a centralized server; they can upload either the list of all transmitted tokens, or received tokens, or both. In the case of the Apple and Google proposal both are transmitted, but in some of the proposals that I've seen only the sent tokens are transmitted. The list of tokens from users diagnosed with the disease is either maintained in a private database to which users can submit queries, or published in a public list so that users can check for intersections on their own device. So there are two models: in the first, users voluntarily upload and other users check voluntarily, and there is no central authority that ever gets to know who is infected; in some models a central authority is also alerted about the infected person. Now among the various apps that one can see, what are the attacks on these things? There are primarily two, but there are two other kinds of secondary attacks that are also possible. The first is: who gets to know the infection status? If the sent tokens are exposed by the users completely, so whatever tokens you have sent are exposed, then anybody who has seen the
tokens, and has maintained the list with the timestamp at which they saw the tokens, can reconstruct an identity fairly easily. So the infection status becomes visible: B's infection status can easily become visible to A if A has also kept a timestamp with each of the tokens. A knows "I met this person at such and such a time", and so when the tokens become public she can match the time and figure out who the infected person is because of whom she is getting the alarm. This is a bit of a problem in a vigilante situation, especially in India, where the stigma of the disease is spreading faster than the disease itself; I am not sure that exposing the infection source is such a great idea, and there can be easy attacks to find this out if you expose the sent tokens. The second is infection status determination by the server itself: if the server sees all the sent tokens and the received tokens, then the server can determine the identity of the infection spreader. In the case of Aarogya Setu this is not a problem at all, because when you register you give them your phone number and your name, so the server can always determine who the infection spreader is. But in some of the other applications, especially at MIT, there has at least been an attempt to prevent the server and the central authority from determining who the infection spreader is: they can alert users, but they cannot figure out who has spread the infection. The server can also construct a social graph: if it sees the sent and received tokens of the infected person, then at least for the infected person it can reconstruct the social graph of the other people they came in contact with, and this is again a massive privacy risk. And finally there can be a false claim by a user. For example, if I want to put some of my colleagues whom I
don't like into quarantine, all I have to do is declare myself as infected, and everybody who has come into contact with me will be forced to go into quarantine. So I can play havoc out there, and there has to be protection against these false claims. I think Aarogya Setu doesn't have this problem, because this is verified: when I declare myself as infected, somebody verifies the claim. So for example, if I look at the Singapore app: the infection status is non-determinable by other users, while for the server it is obviously determinable, because it is a centralized, trusted server; there is no protection against social graph reconstruction, but there is protection against false claims by the user. Methods like Private Kit and PACT of MIT, or the Apple and Google proposal, don't give any protection against infection status discovery by either the user or the server; both are possible under ordinary due diligence or some determined attack. They do give protections against false positives. So the protection against the determination of user identity by other users or the server is very weakly implemented, at least in the initial proposals. There is a recent proposal called Epione, which has just come out on the 28th from a group in Berkeley and one of my friends in Montreal, and Epione seems to be a completely privacy-preserving contact tracing app. They have concentrated on privacy preservation and seem to have done a fairly good job; on a first reading I could not find any vulnerability in the paper. So in a computer science sense it appears that completely privacy-preserving contact tracing is entirely possible, provided you have strict access control at the server and you don't leak information at the server. In the case of Aarogya Setu it is not at all clear what kind of access control policy they have, and an access control policy without regulatory oversight has no meaning. So the server security is a big
concern out there. So just before finishing, I'd like to say that all this seems to suggest some kind of techno-determinism to me: you do something just because you have to do something, without adequate thoughtfulness. There are two things that worry me. One, the utility is not clearly established, and it is so ad hoc that it sort of looks like voodoo science. And second, if you say that you are implementing privacy, that you are taking care of privacy, how can you do it so poorly? How can you do it with a static ID? And this is not a criticism one could make against Google and Apple, but that design looks childish to me; it doesn't take care of thirty years of established computer science wisdom, and the lack of due diligence is completely shocking. And this techno-determinism has some risks. For example, it prevents looking for simpler solutions, like Kerala: they didn't do any of this, and simple community building could not only do contact tracing, but their doubling time has gone to some 70 or 80, and they have clearly shown that you don't require these techno gizmos to be able to combat the disease. This techno-determinism also distracts the attention onto something like Aarogya Setu and sends the administrators on a wild goose chase, so we have to be a little careful about these things. And it also prevents understanding of a complex problem, which I would say has dimensions in physics, biology, sociology, economics, epidemiology, and most of all in human compassion, and I think this techno-determinism comes in the way of developing a comprehensive understanding of the whole thing. I think I'll stop here, and if there are questions I'm happy to answer them. We have our next speaker going next; she is going to
share her presentation. You can see her presentation; I'm expanding it, and you can also control the presentation box size. Yes, I think you are audible; you're breaking up here and there, and if it gets worse I'll let you know. You can turn off your video and continue with just the audio. Sure, please do that. Okay, hi. Thanks to Siddharth and Professor Banerjee; that's a good jumping-off point for me. So I'm going to talk about surveillance in India, but I'm going to take the conversation a little further than technology and the tech solutions that we're seeing, and I'm going to try to contextualize this use of surveillance technology within a larger socio-political context and look at what surveillance technology could mean within it. I'm going to talk about the four aspects that you can see, and show how these aspects, along with surveillance technology, are a product of, and oftentimes aid in deepening, the authoritarian tendencies of democratic governments and leaders. I think it's also important to say that the COVID-19 pandemic has come at a time when India's democracy was already under threat, and the way in which this crisis is handled can further damage our democracy and leave impacts that will continue long after this crisis. A lot of theorists and writers have been writing about this, and therefore I think it's necessary that we talk about this wider context even while we are in the midst of this crisis. Okay, so to begin with: why do we need to contextualize the use of surveillance technologies? I argue that it's necessary so as not to address tech solutionism or tech determinism with more technology; we need to expand the way in which we look at these problems, and we also need to see how interrelated a lot of these issues are. So surveillance technology, I argue, is part of a larger pattern that is impacting our rights and our democracies, and contextualizing these technologies can then help us
deepen our conversation on rights; it can help us think more about holistic solutions when we see these various patterns emerge, and it can help us think more seriously about what a post-COVID world could or should look like. As Shoshana Zuboff says when she talks about surveillance capitalism, this is not a story of technology but a story of institutions, and therefore if we are to understand this technology and the sort of atmosphere in which it emerges, we also need to understand these institutions. So the first of the four aspects that I'm going to talk about, which also relates very directly to the COVID-19 crisis, is the rhetoric of war that is being used. This rhetoric is of course very evocative; it brings about feelings of patriotism, of duty, and in many ways it's a useful metaphor to explain the crisis. But on the other hand it also breeds secrecy and fear, and it requires us to pinpoint enemies, and I think this can lead to a lack of accountability and transparency. So in a situation where you should be sharing clear information, where that information should be reflected by the various levels of government, what we are seeing is surprise announcements that might lead to panic, which has had much more serious consequences. And I'm afraid that this lack of accountability will continue even after this crisis is over: citizens will be left in the dark when questions are raised about why the government was not more prepared, why resources were not consolidated between January and late March. And it leaves little space for us to talk about this crisis for what it is: for one, it is a public health emergency, and for another, it has shown serious failures in planning and governance. Another impact of using this kind of rhetoric is that governments are then keen on passing very dangerous laws; we are seeing this in a lot of other countries, from Turkey to Hungary to Israel.
India tried to muzzle the press; luckily the Supreme Court stopped that from happening. And finally, this kind of rhetoric, I think, also lends itself to techno-solutionism: the questions we are then asking are not "what are our rights" but "what are our weapons, where is our enemy, how do we keep track of our enemy". Related to this is the second aspect we have seen in the response to the COVID-19 pandemic, which is a police response. In a lecture on COVID-19 and the Indian economy, Professor Jayati Ghosh said that India has from the very beginning responded to this crisis as if it were a law and order problem rather than a public health problem, and such a response breeds a surveillance approach rather than a rights-based approach. What we have seen is huge amounts of police violence, of course, and we have also seen that this kind of violence percolates down to the individual level: healthcare workers and journalists have been attacked; airline workers and healthcare staff have been evicted or harassed in the places where they live. And even before the Aarogya Setu app was introduced, we were already seeing egregious breaches of privacy with the apps that were introduced for quarantining, and with the release of personal information at the local level about who had been self-isolated. So this kind of response, and what we then see in these apps, is about making the population more compliant, about finding ways to control the population, rather than focusing on the fact that we need to address a public health issue. And I think this is made even more evident, as Siddharth pointed out, by the fact that public health experts and epidemiologists do not really seem to feature in the development of apps like Aarogya Setu. So what, then, is really the purpose of the app is a question that I think emerges quite clearly. The third aspect that I want to touch upon is the failure of institutions. Now this is a pretty large aspect, of course, so I'm
going to talk about a lot of things. The first, of course, is that we still lack a data protection law, and the lack of this law is not because data protection has only emerged in the present crisis: data protection has been a subject of discussion and debate for a long time now. We have the grammar of what these laws might look like, we have established principles that have been drawn up, and countries across the world have legislated very seriously on what data protection and data privacy mean. So India lacking a law is not because such a law needn't exist; this law should ideally have been introduced a few years ago. There was already a draft, and yet it is not on the statute books. Next I want to talk a little bit about the behavior of the courts in the present situation. I think all of us should weigh very carefully what the Chief Justice of the country says, and I quote, that this is not a situation where declarations of rights have as much priority or as much importance as in other times. I think the opposite is true: given that this is a public health emergency that is exposing the serious gaps we have in our health systems, this has to be treated on the basis of a declaration of rights, and whatever ways in which we combat this emergency must rest on the basis of rights. The next institution that I want to talk about a little bit is the news media. For a long time now the news media has been problematic, and disappointingly, in this scenario we are seeing that rather than spreading verified information it is spreading stigma, it is encouraging Islamophobia, and of course spreading misinformation. And I think all these various aspects point really to a lack of institutional trust, to having trust in a single individual instead of in institutions. I think this centralization of power has played a role in the erosion of our institutions; it has left state governments and local governments flailing without adequate resources to address a problem that is a ground-level problem. So this lack of institutional
trust is of course something that we will rebuild only over a long period of time, and in such a situation, introducing these kinds of surveillance technologies does not necessarily demonstrate institutional trust or help to build it; I think quite the contrary is what we can expect to happen. The reason why we need strong institutions, especially at a time of crisis, is because a pandemic like this is not actually an equalizer: all of us are impacted by it, but the vulnerable are impacted the most. So if our institutions are not functioning and not living up to their constitutional duties, we have a situation in which the vulnerable will be disproportionately impacted, and we have already seen that in the migrant crisis, in the deaths that have occurred because of people trying to walk home hundreds of kilometers. So this really does point to a failure of institutions, and this failure allows for two things: on the one hand it allows overreach, and on the other hand it allows the government to absolve itself of its response to the disease. So this is a question that we need to talk about, and this is the situation in which we need to look at the use of surveillance. The last aspect that I want to talk about is that this induction of surveillance, this dependence on technology and techno-solutionism, has not emerged with COVID-19 or with Aarogya Setu; it has been present in the Indian polity for a long time. I studied the Smart Cities Mission from 2015 to now, and if you look at the documents that have emerged, we see that there is a much greater focus on data and technology, wherein "smart" necessarily means the introduction of information and communications technologies, even at the cost of other structural changes. We have also seen the building of the "National Social Registry", quote unquote, during this time, which Kumar Sambhav Shrivastava describes as a single searchable
Aadhaar-seeded database built on multiple Indian databases that use Aadhaar, throwing light on various aspects of the personal information of citizens in real time. Based on what we have read about the terms of service and the privacy policies that presently exist for the Aarogya Setu app, we could well be looking at a future where the data from this app also becomes integrated into these larger datasets. Telangana's Samagra system has managed to bring about this kind of citizen data, or make it accessible to the state, without even using Aadhaar. Increasingly we are seeing law enforcement use facial recognition technologies; we saw that during the anti-CAA protests. And again, all of this exists in a situation where we don't have a data protection law, where we don't have a basis on which we can demand our right to privacy when it has been taken away by these various applications and technological interventions. So, to come back a little to the COVID-19 pandemic, I want to say two things here. One is that it's true that this pandemic is unprecedented; it requires very drastic responses, some of them technological. And the second thing I want to say is that technology could very well be important; however, if we are to start sacrificing our rights in these situations, in what should again be a rights-based response, we are looking at a scenario from which we may not be able to come back. Rights that have been ceded, privacy that has been ceded, are not easily restored or rolled back. So even if technology is an important tool, and South Korea, for example, is often talked about, what is also necessarily mentioned in that context is that along with technology there was much more going on: there was a healthcare system that was much better equipped to deal with a situation like this than ours is.
Furthermore, South Korea is also dealing with the outcomes of not concentrating enough on privacy: at the moment they are looking very closely at why they needed better privacy protocols in their applications and technologies. Secondly, I think it's important to talk about who is impacted most. Virginia Eubanks has written about how technological tools, especially those aimed at surveillance and policing, always impact the vulnerable the most. As the lockdown winds down, we are looking at a situation where the Aarogya Setu app might be forced upon people, and it will largely be forced not upon the privileged, who can afford to continue practicing physical distancing, who can work from home, whose children can attend online private classes, but much more upon those who need to work outside the house, who are daily wage workers, who live in much greater proximity and much greater density, who do not have water to constantly wash their hands. So already we see that the surveillance that will take place has very clear class dimensions. Also, we don't necessarily need the law to come and say you must have this app: because of stigmatization, because of ostracization, because as a society we are well versed in division, we see that it's very easy for housing societies to come and say no, you cannot enter without it, and for a community to do a similar thing. And of course this raises the question of the efficacy of the app itself in a population where most people don't have smartphones: contact tracing of this kind is said to need 50% or more of the population using it, and here at the most we'll have about 30% of the population. So why then are we investing in a technology that may not really solve the problem at hand? And this is to say nothing of the further divisions and digital divides that exist across the country. So I think that when we talk about these tech interventions, we do have to worry about what lies behind these interventions, and whether or not, as Professor Banerjee said, they will be
effective for the purpose for which they're being created, and most importantly, whether they come at the cost of other vital interventions. Is the Aarogya Setu app, at the moment, coming at the cost of increasing hospital capacity? That is a very important question which, if we look at this solely as surveillance, we may not answer. So how we shift away from a surveillance lens, as I said in the beginning, is by contextualizing this discussion. In the first place, of course, by ensuring privacy in these applications, by saying that these applications have to be privacy-first: we have to ensure the fundamental right to privacy is protected, and every aspect of this app has to be justified. So, as the IFF report draws out: is there legality, is there proportionality, and are there safeguards? Is there transparency and auditability of these apps, so that people can help governments find the various lacunae that might exist? Is there a sunset clause that says that once we are past this pandemic, we are done with this app and with our data at that time? And I think that as long as we keep talking about the pandemic in the register of war, as some kind of conflict, the space for these questions shrinks; instead, this is the way in which we need to protect our population and their rights, so that not only during but also on the other side of this crisis our institutions are intact, our rights are still intact, our democracy is still intact. And beyond this conversation about privacy rights, are we also then talking about the right to health care? Are we creating a situation wherein, when we say "never again", we don't mean that we have the technology to surveil the population and ensure that the virus doesn't spread, but instead that we have a healthcare system that can actually meet the needs of its population, that we are guaranteeing economic rights both during and after the crisis? An important way in
which we can do this is to question what Ben Green calls "tech goggles", this need to approach every problem by looking for technological solutions. Maybe if we remove these tech goggles of ours, we can think of solutions that are contextualized, that are inclusive and systemic, and not just technology-based solutions. And finally, as we think more about this crisis and what a post-COVID-19 world would look like, it's necessary for us to start thinking about what collective action might look like in a post-COVID India. We entered the COVID-19 crisis in the midst of a number of public movements; we don't really see how those public movements can restart, and most likely they will not for a long time. So how do we engender, rather than inhibit, collective action, and demand what we could demand before increased surveillance? And to come back to Shoshana Zuboff: I think we really need to address the epistemic inequalities that exist, so that information and knowledge become accessible and the right to privacy is internalized as fundamental. I guess we will be taking questions now. If you have questions, please enter them in the public chat, and maybe we can allow people to unmute themselves one by one. I see there are a few people who already have questions for Professor Banerjee. Sorry, before you unmute yourself, please put your names in the public chat so that I can call on you, so that we can do it one after another and there are no audio complications. You can just put it in there, I'll start allowing you, and you can ask your questions. Hi Vijay, can you unmute yourself? Vijay Ramanathan, maybe you can ask a question. Yes: so given that more than 30 years have been spent in academia considering privacy to be one of the vital components, why has there been no push from academia to ensure that our laws are consistent with the technology we are living through? Okay, so, you know, I cannot answer for the community, but I've thought a great deal about this, and
I think that about five or ten years back, computer science was not so much in public life; the only public-life use of computer science was relatively restricted to databases, you know, banking databases and so forth. It started coming into public life with web pages, and suddenly, over the last five or ten years, there has been a plethora of applications bringing computer science into public life, starting with things like digital identity systems, payment systems, and now contact tracing, and I think much of it is related to smartphones and smartphone applications. Now, unfortunately, computer science applications, or IT applications, have come into public life, but the rigor of computer science has been left behind. There has been a rush to roll out systems, and in many of these systems it is amazing that there is an absolute disregard for the existing literature. And I think the response to this has come from the legal community the most; in India, for example, the legal community has been most active, with the anti-Aadhaar activism, with the privacy judgment and so on. But in this conversation the technology has somehow taken a back seat, and it has not been easy to have a dialogue, because the languages of privacy in computer science and in law, and I've been trying to read up on some of the language spoken in law, are quite different, and the concerns are also quite different. So I see there's a problem with, you know, proportionality, as Siddharth mentioned in his talk: when you talk about necessity, or balancing, or showing that something is the least intrusive measure, when you talk about those tests, I suspect that the operationalization of those tests is going to be difficult. In fact I think
that the operationalization in the Aadhaar judgment was pretty much arbitrary, both in the majority judgment and the dissenting judgment; I think the judges took some rather arbitrary views. So I think that where computer science can play a big role is in trying to operationalize some of these definitions, and I think that is very, very essential, but it will require a lot of dialogue back and forth. A quick follow-up question: so rather than putting the onus on the lawmakers, could there be regulation of the broader rollout by the government, so as to know what kind of products they are developing and what kind of impact those would have? I can understand that the languages of the legal community and academia don't particularly sync, but the companies do understand academia, per se. I am not sure, and, you know, I would say that some of those applications that I've seen, especially from the government, payments, Aadhaar, contact tracing, are quite poor. And I think that in India there is also something which I would like to call crony expertise: you select a handpicked set of experts, say three experts from three IITs, and they will certainly declare it safe. Now when you go to individuals versus when you go to institutions, the response is completely different. If you came to me as an individual and asked for my opinion as an expert, I am free to say pretty much what I think; but if you came to my head of department and asked for a departmental opinion, then that would be much more moderated and much closer to informed advice. So I think this picking of experts is what is happening in India, and there's a certain amount of danger in that. Thank you. Thank you. Riya Yadav, can you unmute yourself and ask a question? Okay, it seems like she may be having some issues, but her question is essentially: what's your view about the Kerala High Court's
latest judgment on the Sprinklr app: is it inclusive of all that needs to be done? Siddharth, do you want to answer that? Yeah, I'll try my best. Officially, IFF is yet to formulate a position on this; we will be coming out with a position on the High Court's order sometime early next week. But one of the big things that was circulating, as per my quick perusal of the news, was the fact that the Kerala High Court in its order mandated that all datasets, all data points, shared with Sprinklr are to be anonymized. Now that is an interesting point, because it also correlates back to the Aarogya Setu app: there is a need to figure out how to audit anonymization, or hold it accountable. So for instance, if there is a mandate that the state government in Kerala must ensure that any data shared with Sprinklr has been anonymized, there is a need to create means for public auditability of the fact that this has actually been administered by the Kerala government. Similarly, in the Aarogya Setu context, if you look at just the privacy policy of the app, it says that for any data which has been anonymized and aggregated, none of the privacy protections apply, and in fact there is no onus or requirement on the government to even delete those data points. Now I would contend that that's problematic, because, and probably Professor Banerjee can elaborate on this, when you're collecting as many data points as the Aarogya Setu app is, and when you combine that with the fact that these anonymized datasets are linked to people's GPS coordinates, the re-identifiability, according to the literature that we've gone through, is a lot higher, and there are several information security risks. So given that that is the case, there is a need, in the context of something like Aarogya Setu, to be transparent as to how it's
anonymizing the dataset, how it will ensure that it remains anonymized, and how the risk of re-identification is taken care of. Similarly, there's a need to figure out, when a court makes such an order, how you operationalize something like this. That is one of the things our official response sometime early next week will address; IFF will have a more concrete position on this, and we will submit some substantive comments to the Kerala government's committee which is looking into the matter, the two-person panel that is studying the issue. Can I comment on anonymization a little bit? Yes, yes, please. So, you know, in computer science there is a notion called informational privacy, and informational privacy is defined in the following way: you say that informational privacy is complete when whether you have access to a statistical database or not makes no difference to the amount of information you can figure out about me. So if, with or without access, you cannot figure anything out, then you say that you have complete informational privacy. Anonymization is a facet of informational privacy: anonymization means that whether I have access to the database with anonymized data or I don't, it makes no difference, it's exactly the same, and then you say that it's completely anonymized.
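A toy version of the auxiliary-information attack discussed next: the released table has no names, but a simple join on quasi-identifiers against publicly available data restores them. All records and names here are fabricated for illustration.

```python
# An "anonymized" health release: names removed, but quasi-identifiers
# (pin code, age, sex) retained. All records are invented.
anonymized = [
    {"pin": "110016", "age": 34, "sex": "F", "status": "positive"},
    {"pin": "110016", "age": 51, "sex": "M", "status": "negative"},
    {"pin": "600036", "age": 34, "sex": "F", "status": "negative"},
]

# Auxiliary information the attacker already holds (say, a voter roll).
voter_roll = [
    {"name": "Asha",  "pin": "110016", "age": 34, "sex": "F"},
    {"name": "Ravi",  "pin": "110016", "age": 51, "sex": "M"},
    {"name": "Meena", "pin": "600036", "age": 34, "sex": "F"},
]

def reidentify(release, aux):
    """Join on quasi-identifiers; a unique match restores the identity."""
    out = {}
    for rec in release:
        matches = [p["name"] for p in aux
                   if (p["pin"], p["age"], p["sex"]) ==
                      (rec["pin"], rec["age"], rec["sex"])]
        if len(matches) == 1:  # the quasi-identifier combination is unique
            out[matches[0]] = rec["status"]
    return out

print(reidentify(anonymized, voter_roll))
# every "anonymous" record in this toy release is re-identified by the join
```

The more columns a release retains (and GPS coordinates are an extreme case), the more likely each quasi-identifier combination is unique, which is exactly why the impossibility result under arbitrary auxiliary information matters in practice.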
There is a theoretical result by Cynthia Dwork, in a seminal paper, where she proved that informational privacy, and hence anonymization, is impossible under arbitrary auxiliary information. It is an impossibility result: if I have access to arbitrary auxiliary information, then any anonymization can be broken. So complete anonymization is a myth, and this is a proven theorem, in a paper that has probably three to five thousand citations. The cavalier nature with which anonymization is talked about and implemented in the IT sector is a bit surprising, given that this kind of theoretical result exists. I think that for anybody who claims anonymization, the onus is on the designer to prove that access to auxiliary information is bounded, so that de-anonymization is not possible. If you cannot establish that, the default assumption is that the anonymization can always be broken. And there has been, in the last five to ten years, a deluge of papers showing that all kinds of anonymization can be broken, none more tellingly than from Arvind Narayanan at Princeton, who has a series of papers showing how anonymization can be bypassed. The people making these claims do not seem to be aware of the standard literature on anonymization. That is the state of affairs, which is not good in my opinion.

Thank you, sir. Oftentimes the phrase "anonymization" is used as a get-out-of-jail card, to try to do whatever one wants without having to comply with informational privacy obligations, and to be able to use the data for different purposes.

Okay, next question: has IFF approached Google with these concerns? As the primary distribution channel for such apps, what responsibility do app stores have in curbing apps that have such serious issues as pointed out by the panel? Any Play Store responsibilities?

Yeah, so at this stage, given that Apple and Google will also have a fairly big role to play with respect to contact tracing and exposure notification, once we update our working paper based on public feedback early next week, we intend to share it with Apple and Google representatives as well, aside from government stakeholders, so hopefully they can also take cognizance of the various challenges with the issue. I hope that answers the question.

Then we have Ray. Ray, can you try to unmute yourself and ask the question? I think he's having trouble, so the question is: in case our health data falls into the hands of pharmaceutical companies or insurance companies, in what ways can data handling and data sharing cause harm? Do we know if the government is actually sharing this data with anyone else?

Not that we're aware of, with respect to whether the government is actively sharing it with those sorts of actors; of course, private actors have helped the government build Aarogya Setu and the underlying system. But given that we don't have a robust legal mechanism like a data protection law, which Uttara was also talking about during her presentation, it becomes a lot harder (could someone mute there? yeah, thank you), it becomes a lot harder to hold the government accountable with respect to who they share the data collected through the app with, and for what purpose. There are already references in the privacy policy to the fact that data can be shared with third-party actors towards responding to the ongoing medical health crisis. So theoretically the government may have some scope to share these underlying insights with such companies, but ideally, if one had a robust data protection law, these interactions could be regulated more robustly and transparently. And just one more point: secondary data markets are
an issue across domains. It is particularly problematic with this kind of data, but it is a problem in all kinds of information and communication technology markets, to be very frank, and the core thing you need is a data protection law as a first step.

Okay, we have a question from Sharda. Sharda, you can unmute yourself and ask your question now.

Hi, so my question was: considering we don't have any legislation right now for data protection, if such a law does come into effect, say because of the COVID surveillance problem, do you think it should be prospective or retrospective, and why? For example, if retrospective, what would happen to the data that has already been collected; if prospective, what should they do?

See, the thing with the data protection law, and this is the reality of the situation, is that given the way the law has been designed, and the fact that multiple regulations and institutions have to be set up for it to be operationalized, even if the law is passed sometime in the middle of 2020 or towards the end of 2020, it will take another 18 months for it to be operationalized. The reality is that what we need is a more specific legal framework to deal with contact tracing in the context of COVID-19. Should it be prospective or retroactive? I would argue it should be applicable to all data collected by a particular contact tracing app, including data collected before, so there should be retroactive applicability, because that way you can hold the entire system accountable: if anything is not in conformity with the new legal framework once it is passed, you amend the systems accordingly to respond to its requirements. That can be one way to look at it. Researchers in the UK, Lillian Edwards for instance, have come out with model legislation which can govern contact tracing apps whilst protecting people's privacy; that is one of the things we will be studying in the update to IFF's working paper. Other alternatives are tools such as privacy impact assessments: for instance, before Australia released its app in the last few days, it undertook a comprehensive privacy impact assessment, so that is at least something to work towards to create some checks and balances on the deployment of these systems. In India, unfortunately, it is very unfettered. The sequencing in India has been: on March 19th you start building the app, by April 2nd it is available for download, and after that, as you hear complaints, you tweak the privacy policy here and there and start adding new features to the app. That is emblematic of the fact that there is little consideration for checks and balances and safeguards built into the design of the app, and it also points towards the techno-solutionism that both Professor Banerjee and Uttara have been talking about in this discussion. So I would say a data protection law, even if it is passed, does not really help; maybe we need a COVID-19 contact-tracing-specific legal framework, with the right kind of institutional oversight, to be passed at the earliest, and we need to seriously start thinking about that.

Okay, I think one last, final question, from Vijay to Uttara; I think we are almost done.

Okay, my question to Ms.
Uttara is that, given that institutions crave power, and technology gives them an upper hand in becoming overtly authoritarian, do you think technological perspectives, how technologies could be utilized by various stakeholders, should be considered whenever any policy is framed, rather than having one particular bill which speaks entirely about data protection? Because, as Professor Banerjee was saying, it is no longer confined: ten years ago technology did not pervade the public domain the way it now does, in a deluge. So do you think one particular data protection act will solve all the problems that will come in the future, or should every policy address the technological impact it would have? Uttara?

Okay, I think she is having some connectivity issues. Maybe, Siddharth, your opinion on this?

Vijay, could you just quickly summarize the question one more time? I lost a bit of it.

Okay, basically: given that technology is going to be a vital part of policy-making in the public sphere, and government institutions tend to use technology to become overtly authoritarian, will one data protection law cover this entire aspect, or should every policy that comes henceforth have some restrictions on how the technology could be misused by stakeholders?

See, the thing is that ideally all government projects, when they use data or ICTs (information and communication technologies) to respond to any sort of policy challenge or objective, should adhere to the fair information practice principles, the FIPPs. They are fairly old: they have been developed since 1974 in the US, they were crystallized as privacy guidelines by the OECD, and all modern data protection laws, frameworks and government projects adhere to those principles. There are certain core aspects that all these government projects must adhere to, which are, let's say, purpose limitation, collection limitation, accountability for the government, and institutional oversight to ensure checks and balances. So any government project, even without a data protection law, should benchmark itself against the FIPPs. But what we see, for instance with what the government is doing with Aarogya Setu, is that if you look at the terms of service, it has a clear liability limitation clause, which says that for any error by the app, including but not limited to the specific conditions it lists, the government will not be held liable for any issues arising out of the app. That is problematic, because it undercuts any degree of accountability the citizen can have against the government: it does not create a framework for legal remedy, and neither the terms of service nor the privacy policy articulates that people have any legal remedy against the government at all. On top of that, with respect to adherence to the limitation principles, you can clearly see that the stated purposes are vague enough for the government to repurpose this project beyond responding to COVID-19 in its different, dynamic ways: there is scope for, and we must be alive to the risk that, this project could become more permanent in nature, and could be expanded or justified as essential for other diseases in the future. Which is why you also need to be able to hold the government accountable via sunset clauses and the like. So what you need, ideally, is your data protection law; if you don't have that, benchmark against the FIPPs, and while benchmarking, urge or pressure the government into coming out with either legislation or a standard operating procedure which formalizes a system of accountability, and that system of accountability should not reside within an institution within the executive, but in an independent institution. It is a really difficult process. Luckily, the government has
right now a committee which is studying different aspects of the app, so I think that is...

Yes, sorry to interrupt you; an addition to some of those statements, from me: what is stopping the government from actually bringing out its rules? We do have a right to privacy judgment, so technically it is your fundamental right. We have been waiting for this law, and discussing it, for at least three years now, but there is no rule in place; why aren't even the High Courts giving any orders based on the Supreme Court judgment? Why do we continuously discuss this on and on, and why are there no orders or rules in place? What is stopping the government? Is it political will? Is it lack of resources? The economy, though it's all dead anyway, so are we worried about the economy? What is it that is actually stopping the government?

I think it has just been a function of, from the government's perspective, trying to create as many exemptions for itself as possible, so that when the data protection law is passed it can continue its day-to-day operations without being held accountable. That fulfills certain objectives: "look, we've passed a data protection law to protect people's privacy", but the reality will be that it is stringent on companies while lax on government. It is also, somewhere, a negotiation, where businesses try to delay the passage of the law because there are certain provisions they do not want enacted in their current form. So it is a combination of economic and political considerations, I would say. And right now, when we are pushing for a quick passage of the law, we need to be saying: pass it quickly, but also amend the fact that the government has carte blanche exemptions under the current bill, which has been referred to a parliamentary committee. When you advocate for speed, you have to advocate also for the fact that there are clear issues with the bill, because otherwise the government can just turn around and say, "you asked for a bill, we've passed a bill, and it protects people's privacy", and that fulfills their optics of having passed something, better than nothing. So we need to be careful about those sorts of advocacy initiatives, I would say.

Thank you, Siddharth. Yes, Professor Banerjee, but let's make it quick, so that we can end without taking more time.

Yeah, so my guess is also that the guidelines, whether in the data protection law or in the GDPR, are not operational enough. For example, I am sure the Aarogya Setu developers have read the data protection bill, as I have, and if you give the data protection bill, or the GDPR, to a working engineer, he wouldn't know what to do. What are the database development principles he has to invoke to be on the right side of the law? I think the law is too general and too vague for an engineer, and this is not a criticism just of the data protection bill but of the entire discussion. When you say "fair and reasonable processing", as a machine learning engineer you don't know what that statement means, or even if you do, you don't know how to constrain yourself with it. Or when you say "purpose limitation", the question that comes to mind is: what should I do to ensure purpose limitation? So I think there has to be a lot more effort to operationalize many of these laws, with examples, and these operating principles must translate to technical guidelines; then it will be effective.

If I may just add one tiny point with respect to that, I apologize: one of the good things, I think, about the GDPR vis-a-vis the Indian bill is that it provides a lot of illustration within, let's say, different
parts of the law, to create at least some picture of what it means by, let's say, purpose limitation. The challenge with all of these laws, at least in a country like India, is that a lot of it is driven by lawyers, and the conversations with technologists don't really happen; that, I think, is a failing in the way consultations have been designed in India thus far. It is also a question of having conversations such as the one we are having right now, but at a larger scale, with more lawyers and technologists, and even sociologists and economists, sitting together. Unfortunately, that space is crowded out by lawyers, and that, I would argue, is a failing of India's consultation processes.

Which is why we have a computer science professor and a PhD scholar here: you are under-represented in this space, but thank you for making time for it. We are going to end this call. We will continue having more technical conversations, hopefully on encryption or anonymization, and not necessarily just this topic; there are too many technical problems that we are not discussing, and we hope to get some of that discussion going in future, with or without government consultations, through public talks. Thank you all for joining. Professor Banerjee unfortunately had to leave due to some network issues, but we will soon get this video processed and uploaded. I am going to end the session here. Thank you all, thank you.