Hi everybody, good evening. My name is Ashok Hariharan; I work in the legal tech space. But today I'm here to talk about something completely different, something that affects all of us, and I have a very special guest that I'll be having a conversation with. I'd like to introduce Mr. Sean McDonald. He's the CEO of FrontlineSMS, one of the oldest text-messaging-based systems still in use in the humanitarian space; it's been around for over 15 years. He's also a lawyer working in international law and alternative dispute resolution, and a visiting fellow and advisor at various institutions. It's a very long list, so to keep it short I'll mention Stanford's Digital Civil Society Lab and IEEE's ethics and AI committee. He's been published in outlets like Foreign Policy, the Stanford Social Innovation Review, and even the Cornell Legal Information Institute's journal.

The reason I'm talking to Sean today is something that concerns all of us, not just in India but everywhere: COVID-19, and specifically how technical solutions have been rolled out left, right, and center, apparently in a bid to save all of us. These choices have very deep implications beyond the immediacy of the pandemic, and what we will try to understand today is how the widespread use of tech in response to the pandemic will impact society, and you individually as a person. I think that aspect has been somewhat missed, how it affects you as a person, so I'd like to bring that out today. Sean can be considered an expert in this area: he has been working in the space and has written extensively about it. He wrote a very detailed study of the use of call detail records, an early kind of contact tracing, during the Ebola epidemic of 2014 in West Africa, which also spread to other parts of the world. A link to that paper is in the show notes. He has also written an article about the digital response to COVID-19. I urge you all to read both when you have the time.

So, to get the proceedings going: Sean, my first question is, how is the tech-solutions context different between the Ebola epidemic, which was six years ago, a completely different era in terms of technology, and now, in the age of COVID-19? What do you see as the common denominator? How have the scale and the scope changed since then, and what lessons were not learned?

Thank you for that extraordinarily generous introduction. I'm really excited to have this conversation, and I also want to thank Karana for convening it. It's really great to be speaking with people who are considering and engaging critically with so many of the technologies and issues that are happening everywhere around the world. In terms of what's different: you touched on it. Scale is a major difference here. One of the things we really saw with the Ebola epidemic, and that we see a lot when public policy focuses on technology in response to something like a public health crisis, is that it becomes a challenge of leadership.
And I think what we're seeing in the COVID response, in the ways different countries are engaging, is a really clear view that high-quality leadership makes a difference here, much more so in many instances than the specifics of the technology. In Ebola, the idea was call detail records, mobile phone back-end databases, as the technology we might use to help track people. What we're seeing with COVID is obviously much more app-based, much more Bluetooth proximity. Some of what has changed in the last six years is that we've gotten a much better sense of the limitations of call detail records, and of how much more work we need to do to make them a technical solution that works. We've also seen a lot of the apps fail, in really big and public ways, because they're consumer-facing, user-facing. Call detail records, by contrast, are analyzed and exchanged mostly behind closed doors, between public health institutions, mobile networks, international organizations, and governments, obviously. Whereas here, what we're talking about is needing the public to participate: to download, use, recognize, and integrate into their daily life some part of a mobile phone application. So what the public is getting to see, in a much more granular way, are the limitations of a technology. Apps crash, right? Whether it's delivering you dinner or telling you your medical results, apps crash. And so one of the things we're really finding here is which things we think are okay to do via apps, which things feel like an appropriate use of technology, versus throwing a random technology at a problem and hoping we figure out how to make it impactful after the fact. I think we've seen that happen in both responses. But yeah, scale is the main difference: this is happening globally.

Also, I guess there are other factors: the disease is different. It has different incubation times; people who don't have symptoms can develop them up to 14 days later. And there's the fact that the whole Ebola crisis took place in Africa, which for most people might as well not exist; it's outside their bandwidth of viewing. Now this has come home; it's everywhere. Is that also a big qualitative factor, I suspect?

Yeah, a couple of things. As someone who ran a business in Nairobi for a long time, I definitely understand the visibility of Africa, and it's very complicated in that regard. But you started with what is, to me, one of the most important points: we actually know comparatively a lot about Ebola, certainly compared to COVID. What we're seeing in this response is that the science isn't really all the way there yet. We've seen big public institutions shift the advice they give; the WHO and a number of national governments, for example, have changed their position on whether or not masks are important. I think it's pretty clearly proven at this point that they are. But you may remember that in the beginning, a lot of people were talking about transmission on surfaces.
That's why masks didn't seem important: the disease was being talked about as transmitting in a completely different way. If you're approaching that from a technology perspective, what you're doing is saying: here's the known science about this disease; here are ways we can model aspects of that known science using data we have access to, and proxy it, or provide some increased capability, some increased insight. And what we're seeing is that the science is underdeveloped around COVID, and you really need a huge amount of data when you're trying to figure out transmission, how it transmits alongside who it transmits to, which, as sort of always happens, is much, much more difficult here than it was with Ebola. So you're starting to look at how the app produces data that's useful. To your point, proximity is the thing we use as the indicator right now, and proximity we measure through Bluetooth; most of the apps are using some form of Bluetooth at this point. And it's a weak indicator, right? If you're wearing a mask and you're outside, that's potentially a very different risk threshold than being inside or not wearing a mask. So there are all these contextual factors that make the technology design quite experimental (a sketch in an editor's note below illustrates just how noisy that inference is). And I think that's one thing that's really similar: we're seeing the turn to experimental technology as a very hopeful, but not very scientific, approach to solving the problem.

Since you mentioned hopeful and experimental: one argument I keep hearing in favor of these solutions, specifically contact tracing in India, I hear it all the time, is that even if it does not always work, it can't make things worse. Why is this argument wrong? Could you dwell on that a little?

Yeah, absolutely. It's a really common instinct during times of disaster to think: it's so bad, we just have to do something. And as humanitarian responders found after trying that for a long time, it turns out you can do quite a bit of harm. What we're finding is that the problems of technology are the problems of technology. When we introduce a technology into a new setting, we know we're bringing its challenges with it: interoperability, access to energy, and certainly the very real economic considerations that go into device adoption, internet usage, and varying amounts of comfort and skill in digital spaces. So there are lots and lots of known differences that digital tools amplify. But we don't have a very good sense of how those problems map against what we know about the value those technologies will deliver. We know the problems; the benefits are much less known. Sometimes we know that communication is faster, and that when communication is faster, response can be faster. But what I'm really trying to say, and this is similar to what happened in Liberia during the Ebola outbreak, is that our rush to do something means we become pretty bad at distinguishing the good things to do from the bad things to do.
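[Editor's note: since the conversation keeps returning to why Bluetooth proximity is such a weak signal, here is a minimal, hypothetical sketch of the kind of inference involved. It assumes a textbook log-distance path-loss model; the constants, function names, and numbers are invented for illustration and are not taken from Aarogya Setu, any national app, or any deployed exposure-notification system.]

```python
# Toy sketch of Bluetooth-proximity inference. Illustrative only: the constants
# are invented, and real apps (which vary) do not work exactly this way.
import math
import random

TX_POWER_DBM = -59        # assumed signal strength at 1 m (varies by handset)
PATH_LOSS_EXPONENT = 2.0  # ~2 in free space; higher indoors, through bodies or bags

def simulate_rssi(distance_m, attenuation_db=0.0, fading_db=4.0):
    """Forward model: the RSSI a phone might observe at a given true distance,
    with extra attenuation (pocket, bag, body) and random fading."""
    rssi = TX_POWER_DBM - 10 * PATH_LOSS_EXPONENT * math.log10(distance_m)
    return rssi - attenuation_db + random.gauss(0, fading_db)

def estimated_distance(rssi_dbm):
    """What an app can do: invert the same model, pretending the world is clean."""
    return 10 ** ((TX_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))

random.seed(1)
# Two people 1 m apart, phone in a back pocket with a body in between (~20 dB loss),
# versus two people 5 m apart with phones in hand:
close_but_shielded = estimated_distance(simulate_rssi(1.0, attenuation_db=20))
far_but_clear = estimated_distance(simulate_rssi(5.0))
print(f"true 1 m, shielded: app sees ~{close_but_shielded:.1f} m")
print(f"true 5 m, clear:    app sees ~{far_but_clear:.1f} m")
# The genuinely close contact can score as *farther away* than the distant one,
# and nothing in the signal encodes masks, ventilation, or being outdoors.
```

[The point of the toy: even before you get to the contextual factors Sean mentions, the raw physics makes the "proximity" number an experiment, not a measurement.]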
And so what we do is waste a lot of time, a lot of resources, and a lot of good energy focusing on really edge-case, unlikely solutions, instead of doing the things we know are valuable. If every person who was downloading an app, or thinking about how to participate through technology, was instead focusing on PPE production, or supporting members of their community in need, or investing in resilience infrastructure, I think we'd see a very different response to COVID, both qualitatively and quantitatively. But because we're focusing on technology, and on surveillance, instead of engaging with people and meeting them where they are, we're seeing a policy architecture emerge that can really punish you for something as innocent as living a normal life, exactly as you were instructed to. So it's a really difficult context in which to hope for technology to solve what we know to be very entrenched, intractable challenges. That's what makes it experimental, and it's also why we need to protect our protections, essentially.

A quick example; sorry, this is longer than I meant to be. Look at vaccine trials. We don't go from "I invented a chemical" to "let's deploy it on millions of people," because there are a couple of steps we need in the middle to make sure it's good, that it works, and that it doesn't cause harms we don't know about. The human trial process gets compressed during emergencies: we see very large upticks in prioritization, reductions in administrative barriers, and lots and lots of energy going into conducting those experiments rigorously. We do that because we know the cost of getting a vaccine wrong is also huge. What's important is that when we do speak to something with the weight of science, it has that validation, that experimentation process behind it, and we're able to speak to it with that degree of clarity. Otherwise, what we get is a lot of garbage medicine that hurts people. You can very much hurt people by not engaging with the substance of, and the potential for harm in, an intervention.

It's good you mentioned vaccines and vaccine trials, because, just to bring it back down to earth, this app is like a vaccine being deployed on people as a live experiment. In India it's not mandatory by law yet, but it is effectively compulsory for many things. If you're a civil servant, in many government offices you now have to install this app by government order. There were gas station attendants who were forced to install it, otherwise they would probably be fired. People in those contexts don't ask; many of them just do what they are told. So it's forced down from the top. And it has implications, as in a vaccine trial. If the app flags you, wrongly or rightly, you could lose your job in many cases. And your wife could lose her job too: you got flagged as positive, you went home, and your wife was in proximity to you.
She got flagged red, even though you were completely okay. There was a case just a couple of days ago; if you search the Indian internet you'll find dozens of cases like this, and I think most of these people don't even come on Twitter. There was one case I'll just read out. The man is called German, and as reported, he was flagged as positive because somebody else got tested for COVID and, either by accident or on purpose, put down a fake number, and that number turned out to be this poor fellow German's number. So when he installed the app, it showed him as COVID positive, which was a shock to him, because he had never even been tested, and he couldn't understand it. He then flagged it to the app's team on Twitter, who responded; let me read out the reply, because you need to be a scientist to understand it. Just imagine, this is a common citizen: "It has been reported here that some test lab has wrongly entered the phone number due to which app status has turned red. Since Aarogya Setu gets data from ICMR through APIs, it is essential that necessary correction is done at the back end after verification." Imagine that. The guy got flagged as COVID positive. His office has probably told him not to come to work for 15 days, and if you're asked to stay at home and don't comply, or whatever, they might suspend you.

There are at least three things there that I think are really important as themes. One is that this information is not a notification in an app. Most of the way health information is regulated means you have to be in a position to receive care or get expert advice; in many instances, you have to be around a professional to receive health news. Obviously it's different in a lot of places, but the main point is that we recognize that being told you might have a life-altering illness is not ordinary information, and when we deliver it to someone in a way we know will impact their life, we can't take it as casually as we would take a notification about your dinner being ready or on its way. That fundamental humanity-and-care piece is a really important element of not getting too swept up in the role of technology.

The second, really important piece of what you're saying is that it is often the second-order effects that make things like apps extremely dangerous. It's not, as you're describing, whether the app is correct or not; it wasn't, but that's sort of immaterial, because what is happening now is that a number of employers are making decisions based on it. As you're saying, your access to revenue, to the economy, to well-being is, in many instances, going to be gatekept by this thing that we know mostly doesn't work. And this is really often how it happens; this is often the main set of issues. It's the second-order effects.
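[Editor's note: to make the failure mode in this story concrete, here is a deliberately simplified sketch of a phone-number-keyed status pipeline. All names and data structures are hypothetical; this is not Aarogya Setu's or ICMR's actual design.]

```python
# Hypothetical sketch of a phone-number-keyed status pipeline; not the real design.

# Upstream: a test lab records results against whatever number was typed on the form.
lab_reports = [
    {"phone": "99999-11111", "result": "positive"},  # the actual patient
    {"phone": "99999-22222", "result": "positive"},  # a typo, or a fake number
]

# Midstream: the app's backend ingests lab data over an API and keys status by phone.
status_by_phone = {}
for report in lab_reports:
    if report["result"] == "positive":
        status_by_phone[report["phone"]] = "red"

# Downstream: an employer or gas station checks the flag before letting you work.
def may_enter_workplace(phone):
    return status_by_phone.get(phone, "green") != "red"

print(may_enter_workplace("99999-22222"))  # False, for a person who was never tested
# Note what's missing: no step verifies the number belongs to the person tested,
# and no step lets the flagged person see, contest, or correct the record.
# The only "fix" lives at the back end, behind the API, out of the citizen's reach.
```

[The second-order effects Sean describes live in the last function: decisions made downstream of a record nobody verified and the subject cannot correct.]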
So it's really not easy to convince a government not to abuse the technology, but it is comparatively easier than convincing an insurer not to factor COVID mitigation into the way it engages with businesses. What you start to see is that there are these infrastructures that become very risk-avoidant. And it's a probabilistic risk, of a kind we generally don't tolerate from employers in how they engage with labor; but the fact that it's not certain doesn't necessarily matter in a lot of contexts. So the second-order effects of this are huge.

And then the third piece, and you did such a beautiful job of explaining it in the way you told that story, the way you brought the use case up, is our engagement with, our accountability through, these systems: the ability to track down an error, only to be told, oh, it's a back-end thing, it's an API problem; and, at the other end of an impenetrable terms of service, some guarantee you will never be able to enforce in any court. There's a real, scary indignity to being given the runaround when what you're trying to figure out is: why do you think I have a life-altering condition? In a pandemic, that's the kind of thing you should have to be pretty sure about before you start telling people they have it. And the complexities that go into accountability when what you're dealing with is a technology or data supply chain, as opposed to, say, a doctor or a public health service provider, are really different; it's a fundamentally different construction. And it's scary to me. It would be very unsettling not to know how to verify information you receive through official channels about your own health status.

In one sense, it's actually far more damaging than taking an actual COVID test that gives you a false positive. At least you can go and retake the test to verify, and you can see on the box what the false positive rate and the false negative rate are. The app has nothing like that. They don't have any data; they don't have anything. And it's not just the Indian app; it's the case with all these apps, all over the world.

A piece that people really haven't brought up enough is that the heads of almost every major program that has run a contact tracing app are saying they don't make much difference: the heads of the programs in Iceland, South Korea, Singapore, South Dakota, across a pretty wide range of contexts. All the early indicators are that it doesn't make much actual difference. But the one thing I think does make a difference, and it's the point you're really focusing on, is that we talk about these notifications as though you might get one. Right? Like, you're going to get a notification and then you're going to make a decision. What if you get a hundred?
What if you keep getting a hundred no matter what you do, no matter what you change about your lifestyle? What if there are things about your life you can't change that still expose you to COVID? Our focus on this as a probabilistic risk score means we don't engage with the humanity of the people who have to live with that risk; we're not solving a problem for a person. And when we start risk-scoring health quality and health status, as we already do in many instances, of course, but in this way, and as this large a gatekeeper, it has a bunch of very concerning implications.

And there's also the fact that not everybody in the country has a smartphone. Many people, I know some myself, just have feature phones, or don't have phones at all. Then there's the expectation that everybody's got to have the internet. I read in your paper on Ebola that some American billionaire flew in and gave thousands of mobile phones to everybody, but people had to recharge them, and being able to afford recharge cards for data access was very difficult for many. I don't know what you have to say on that.

For sure. I think it's a problem endemic to short-term thinking: a lot of times, what crises do is reveal fragility. It's optimistic in many ways, if somewhat naive in others, to think that the absence of a device is the problem. Technology solutions to international and domestic disasters very much have to draw on the capacity of the people and the services they're connecting. So there are these limitations that I think technology doesn't always do a great job of explaining or reflecting. Smartphone adoption has always been a barrier; it's why I work in SMS. For the most part, what we're trying to do is make things as accessible as possible. But what we're learning is that while mobile phones are variably accessible, COVID susceptibility is not. So the structural challenges that create the unevenness in mobile phone adoption are also creating real fragility and unevenness in the way people engage with the response. I don't know whether or not I've lost you, Ashok?

Hi, sorry. No, you missed nothing.

Okay. But yeah, essentially what we're seeing is that there is no billionaire act of one-time generosity that is going to fill, or level, or change the underlying circumstances in ways that really meaningfully change our ability to respond over a long period of time, because those are structural and systemic problems that need real attention.

Since you mention structural and systemic problems: one thing that is very different between India and, say, Korea or China is that they have functional public health systems. They have infant mortality rates like Europe's, maybe better than the US's. And they have public health care; you don't need private insurance to get safe healthcare.
In India, somehow, it's very bizarre to me that people think the app is an answer to not having a public health system. Isn't it the case that in every country that has handled COVID to some extent, the app was just an addendum to the presence of hospitals and people: masks, doctors, and staff who knew what they were doing?

It's hard to argue with. As an American, we don't really have a public health system, certainly not one that meets people's needs the way you're describing, and given the state of our response, that certainly bears out as a theory. I think it is true of public health systems specifically, and it's also true of collective support structures broadly. What we're seeing is that leaders who have acted in a limited but transparent way have had more effective engagement, and people who have shied away from politicizing it, who focused on the science and the public health nature of it, have been more successful. But to your specific point: public health systems, social infrastructures, and safety nets exist for exactly this. And both because of the privatization of a number of vital industries, and because of our focus on individual action, on individualism over collective action in many ways, you're seeing a lot of different social safety nets play out very differently. I would say that giving people health care is pretty obviously the right thing to do, certainly when the entire health of the world depends on it, as it seemingly always does. One thing I'm finding really interesting: I was doing some work recently with some folks in public health, and you're seeing a real culture change in the narrative of the international debate about public health. There has always been a strong pro-public-medicine case, but I think you're now seeing this become a really galvanizing moment, because it is so clear in this pandemic how interconnected everybody's health is.

I think I've got you on mute. Sorry. One thing I just wanted to touch on again is this use of apps, this belief that tech is going to solve everything. I don't get it. Why are people somehow blind to the fact that if the hospitals are not there, if the PPE supply chain is not there, if the masks are not there, an app isn't going to fix it? Yet I see that tendency a lot in people I know: "this app will help me." Is it connected to being able to order food on an app and have it arrive at the press of a button, so maybe the app is going to wish away COVID and build whatever infrastructure is required for it? I don't get it. Maybe you have a perspective on that.

I have a perspective on it; I don't know how right it is. I think at the individual level, it's really understandable to want to believe that something is happening. We all want that during times of great upheaval. We all want someone to have a plan. We all want there to be something that will fix this.
And I think what we're seeing right now is that in a lot of places, in a lot of instances, there just isn't. There's not a plan. In some places, speaking from my own country, there are elements of our leadership that seem to be actively working against our response. And where there's a void, irrational hope sometimes steps in. It's also not really fair to put this on the individual. There is a lot of money that goes into building surveillance technologies that then need new use cases, new growth arenas, and we're certainly seeing COVID apps become that. We're already seeing data breaches. We're already seeing commercial contact tracing apps that are selling data back to advertisers. This is the same churning tech-industry machine that existed before COVID, and whenever another major issue arises, I suspect we'll see a lot of surveillance practices and technologies pitched as the solution there too.

So, to your point: I really hear what you're saying about the individual level. I know a lot of people who are really struggling, and who think the app really might do it; and people don't really want to say no when there's so much hype and so much public enthusiasm. But I would also say that there's political value to this, and that brings it full circle. That person's irrational hope that someone is doing something that will solve this is also a government's cushion from accountability. We could all be going to government and saying: we know the PPE isn't there, we know we need more hospital beds, we know we need all kinds of healthcare system capacity that is very much not technology-driven. If we're doing both, then maybe great. But, all the way back to the beginning, I don't think it makes any sense for us to pin this amount of hope on something so unvalidated, so unproven, so experimental. We should expect more responsibility and more accountability from both the corporate actors and the public actors who are lining up to launch these technologies without a really clear impact statement, without a really clear view to actually making a difference in the response, and often without any real articulation of what success would look like, what failure would look like, or when and under what conditions they'll shut it down. To me, those are all things that more mature industries take as preconditions for even getting involved. A lot of what we're seeing is that this is an opportunity for technology to grow up a little bit: to build its research and development infrastructure, and to be able to speak to its value proposition in ways that are more certain and less experimental.

To me it's very interesting that the tech to battle COVID is really ancient tech. If you think about it, quarantines go back to Mosaic law in the Old Testament. The mask goes back to the Black Plague, people putting on masks to keep disease away. Even PPE: look at the plague doctors of Venice. The whole Venice festival is about wearing masks, masks with filters inside to keep the smells out. So isn't all of that together more effective than this contact tracing?
I mean, to me it is very bizarre: we have AI, robots, self-driving cars, but I can't buy a robot in the market that can make me 10,000 masks in an hour. A laughable example, but hasn't COVID really exposed the failure of tech to a large extent? They were focusing on the wrong things.

That's such an interesting question. In many ways it's less "is it a failure of tech" and more "is it a failure of our ability to remain self-possessed adults in the face of tech," at both the individual and the collective level. Because you're right that a lot of what we're talking about is an approach of developing something new, some moonshot, when in fact what we know, more often, is that the simple things that require work and discipline do the job. And this isn't new in public health. I don't think diet and exercise are new medical advice to anybody, right? And yet in a lot of contexts obesity is a major issue. That's not to say it's just a question of discipline, by any stretch; that's why I'm saying that personal responsibility, and even knowable treatments, aren't necessarily the solution. We have to create an environment, a more complicated environment, that involves things like social safety nets. So yes, I completely agree that this idea that we're going to invent our way out of every problem is a dangerous one for the way we define social safety nets.

When I speak to doctors, to medical practitioners, people in this space, they got it from the beginning. They understood; they even told me: what's an app going to do? I need a mask; I need PPE. Somehow, as you say, perhaps there's an element of making people happy with an app like this, making them feel a bit safer. I do see the point very well that it's just a tool.

There's an old presentation I used to give that had a two-slide progression that comes up a lot, so I'll bring it up here. The first slide is what science fiction promises us: a painting of a beautiful utopia, we're all in flying cars, the earth is green and bountiful. It's a vision, right? And the next slide is: what did tech actually give us? A room full of people sitting with virtual reality headsets on. I think this is a lot of what we see, and it's why critical thinking around tech is so important. Tech is very good at giving us the appearance of things, but it is not always as good at giving us complicated or difficult things to build, and we have to be really careful about what we assume tech can build based on what it shows us. So what we're seeing here, in a lot of instances, is a palliative. It's a nice thing. It makes you feel better, maybe, that there is some technology approach, some tracking approach, that counts as progress. But an emergency is a very difficult time to be rationally hopeful.

The other thing they've done, at least with the Indian contact tracing app, because they got a lot of pushback, is say: we'll do it open source.
And what they did was take just the front end of the application and make that open source. Personally, to me, the whole concept of the app itself is not functional, so how does making it open source solve the problem? You might have a different view: is it of any benefit here, in this centralized contact tracing context?

I think open source is necessary but not sufficient for something that requires this amount of public trust. It's fair for people to want to understand the code, and there are a number of really important security, standards-testing, and general understanding and capacity-building reasons to open-source code. So doing that is positive, but it's not sufficient. The license of the code, or even being able to understand the code, is not the same thing as the license of the data, or the operational security of the parties involved in managing the data throughout that supply chain. You mentioned the gentleman had several different services he would have to interact with in order to understand his COVID diagnosis, and presumably each one of them has terms-of-service agreements and a legal architecture. So open-sourcing the code does not solve the problem of whether this is a good app, and it doesn't demonstrate a positive public health impact. It may help some proactive folks go in and limit harm, or reduce some of the things that could go wrong, which of course is a positive thing.

So basically, open source cannot turn a bad product into a good one. That's what you're saying.

It can't turn an ineffective product into an effective one.

Into an effective product, right. There's one question I keep getting from people: it seemed to work in China and in South Korea, this whole business of using apps and technologies. They've done it there. So why isn't it good for us?

The shortest answer is: because they shut down when they get bad news. Even in the best of circumstances, a COVID app can only give leadership information about the rate of infection. Leadership in South Korea and in China and in a number of places have chosen, based on that information, to resume lockdowns and to engage in test-and-trace programming. So, to your point, there's a lot to be said for health system capacity, which is a big deal. But there's also a lot to be said for leadership that is willing to transparently and frequently shut down in case of breaches of containment; that's what allows this to work. Where you have governments that are unwilling to take action to shut down, or unable to really compel containment, there is no app that fixes that.

And I guess testing is a big part of that, because I saw that Beijing alone has a testing capacity of one million a day. I don't think they're doing a million tests a day in India; forget a city the size of Beijing, the whole of India isn't doing a million tests a day. So I do understand what you're saying. There's a question here from David Joshi: how do we assess the voluntariness of consent in a situation where you are being sold a bridge to health, particularly by the government?
If you wanted mass participation in some sort of widespread digital contact tracing trial, what's the way to ensure informed consent?

I love that question. There's a whole space of humanitarian research ethics, and one of the things that's really fascinating about apps is that they could solve one of the major challenges around consent: anything that acts as a bridge to a system theoretically stays up, so you can continuously get new consent. The way people's rights are typically protected in large-scale and emergency experiments is through a heightened degree of transparency. The people performing the experiment typically have to tell you quite a bit about the methodology: what it's doing, what they're testing, how it works in your body, what you need to look out for. There's a significant amount of increased monitoring, so if you participate in an experiment you typically get an elevated level of care, or at least an elevated level of monitoring and awareness from the public health provider. And importantly, research ethics also imposes requirements around necessity, proportionality, and accountability. It is essentially illegal in most places to run unnecessary or wildly disproportionate experiments on humans, and you sort of always have a due-process right to leave.

One of the things that is really important, and really different here, is that we're seeing governments launch an app, and to make it in any way compulsory is to exert a type of government power that is typically only available during emergencies. A government normally cannot force you to download an app without a very compelling public process. So what we're saying is: a government may force it, and even if it doesn't force it, it may create a set of conditions, like letting your employer require it, that functionally or constructively force it. Those are the kinds of concerns that, when one designs an experiment as an experiment, the structure of the process generally lets you work through; you work through those possibilities and you take on those responsibilities. So there's a lot of progress to be made just with what we already know; that's not to suggest that informed consent is perfect. I think the question is also really pulling at this idea that if it's your bridge to health in the middle of a pandemic, is it consent? In law it's called adhesion: essentially, you make an agreement against your will. And there's a really compelling argument that in humanitarian response, any consent you give is basically given against your will, because it's given under duress; it's given so that you can get access to something really basic that you need, something really fundamental and vital. So that's a broader question, really difficult and really contextual to answer, and it's why those principles, and the systems that regulate and oversee human experimentation, are so important, and why they play such a role when we do things like vital science during emergencies.

So in a sense, and correct me if I'm wrong:
inflicting an app like this on people, making it mandatory in the middle of an emergency, is kind of like doing a challenge trial with a vaccine. In a challenge trial, you first inject people with a vaccine that you don't know works or not, and then you give them the disease. Those people are a live experiment. It's a way to speed up vaccine trials, done probably only in very adverse situations, or in countries where they do it on prisoners on death row or something like that. Is that a fair comparison?

The military is doing this now; the Chinese military has started human vaccine trials. And so it's a really interesting set of questions: what is the consent there? Presumably, as a military officer, you have a very different due-process footprint from what you might expect as an average person in that same context.

There's another thing I wanted to touch upon. If I read Indian media, but even media in other countries, the general trend has been not to try and understand the tech, but to reflect the same biases, in the sense that the tech is great, it's going to help us. I see critical questioning of tech from the beginning only in a select few journalists, in select parts of the media. Now more people are waking up to it, because people are reporting issues with it and so on, but still not enough. There seems to be an implicit belief that it's okay.

Yeah, I definitely hear you that there's a very strong, what I usually call inevitabilism: this idea that technology is inevitable, and therefore we must find ways to accommodate it, as opposed to something we can say no to, which we can, of course. I think the overreliance on tech creates really big blind spots, and some of them are ones you've been touching on throughout: there's older tech that we know works, there are solutions here, we know what it means to invest in social safety nets. So there's a lot of misdirection that goes in the direction of technology. Productively, as you say, we're starting to see a lot more critical diversity in the way tech is getting covered. But we're also starting to see the failures of our protections. For example, whether you think of your main data rights as coming from data protection, as some places do, or from human rights, as other places do, or from property rights, there are a number of ways people are trying to engage with how we get control. And what we're seeing is that we don't have a set of protections that are future-proof, protections that will ultimately prevent the kinds of harms a COVID app creates. Privacy alone is important, and it is a set of protections that lots of cultures want to carry forward; I say this from the United States, where there are a lot of patchwork efforts to build a privacy law.
And I think the thing to say is that not only do we have this problem with the means of control, we also have the problem that our defenses aren't future-proof, and the tools we have to manage these things are limited: emergency powers are always going to be bigger than privacy, for example. So we need to be able to think about what digital emergency powers look like. Should a government be able to compel you to download an app? Why, under what conditions, and under what sort of accountability? Those are the ways we define protections in most other domains; that's the way we limit government power during times of emergency elsewhere. From my perspective, and I acknowledge this is a bit of a tangent, it's really important for those of us who are concerned about these issues to be thinking critically and creatively about how we might protect these interests and these rights in the future, not just about what exists now.

On that note, since you mentioned the future and emergency powers: there's a possibility that this pandemic is not going away soon. It could be here for the next year, the next two years; we don't know. Keeping that in mind, is there a context where a techno-solution could actually be useful? Should it have a clear sunset date, so that it's not forever? Do we need laws that mandate the precise use of every specific bit of technology, so that if there's an app, there's a law controlling it and defining its specific use? And what happens to people who are flagged by the app: do they lose their rights, or keep them? Is that something that would work?

That's a good question. You touched at the beginning on something that's really near and dear to my heart, which is that we need a diplomatic capacity around digital issues, and I haven't seen that infrastructure arise. Obviously there are international organizations, and many of them have important and relevant mandates. But each of these countries, everyone who is experiencing COVID in one way or another, is making decisions about how it will respond: what standards it will hold travelers to, what types of technology and monitoring we expect, what type of wind-down and under what conditions, whether it's only for citizens or for everyone who's resident, how it applies to different populations, and how we apply that same set of concerns to people who don't have the same technology, or the means for the same technology requirements. I think what we're looking at in a lot of these instances is not that we're going to get to a techno-solutionist response; in many ways, if we do, it's sort of a failure of all the other systems, which we don't want.
Technology, generally speaking, is a marginal solution. It improves the rate and speed of things, but it won't fix political disagreements, it can't fix intransigence, and ultimately we're not going to predictively model our way around the need to provide basic human care to people when they need it, especially during a pandemic when all of our individual health is so inextricably linked. So rather than think about what it looks like for this technology to work, I think it's more useful to think about how we are able to participate in deciding what we want this response to look like. Some of that is, of course, things we can create, but for a lot of it we have to find avenues of participation that aren't just "I'm going to invent something." We need vehicles of participation, vehicles of collective action. Some of those will be technological; most of them probably won't be. Lots of gathering has nothing to do with technology, thankfully, or so I'm told; I've attended at least one such gathering, I'm sure. But the point is that the most successful thing we can do in the narrative around techno-solutionism is to re-center the debate about solutions outside of technology, and to reorient: technology is in support of a broader, more holistic solution, rather than a solution in and of itself. From my perspective, the places where I see technology driving the most conflict, or the most potential for bad outcomes, are really in need of governance, because we don't have any space to have conversations about political speech and biosurveillance. We just have people who are great at analyzing it and talking about it and writing about it, but we don't have participatory decision-making spaces.

And I'll end with this piece. I got into technology at the very beginning because of access to justice. India was actually one of my first use cases, one of my first examples and concerns, because the backlog of civil cases is so large that in many instances it can be difficult just to administratively process the law. The same is true in the United States: there's a huge backlog and huge unevenness in the way people are able to protect their rights. To take the very human example we discussed: if that person wanted to demand accountability, they would presumably have to get access to a court, put together a case, fund that case, and wait however long it took to litigate it. By then, whatever the disposition is might not even still matter. It might be some small financial award, it might be a rule change, but it really depends, and it's pretty unlikely. I know you've got deep experience in legal technology; I came to Frontline by starting the Frontline legal project. And one of the things I think is critical to think about with any COVID app is that any app that launches should also have a dispute resolution system embedded in it.
Right: we should be building technologies not only expecting that people will take them to court if they get it wrong, but building that capacity into the system itself, so that participants can shape the rules as they go. I think you get a lot further that way. Of course, governance design gets more complicated, but it's that dispute resolution piece: how do we make sure that the things we're creating take responsibility for, and give people access to remediation of, the kinds of problems they cause, rather than just being switched off? As we recognize that complexity, we get deeper and deeper into understanding the need for digital governance, and that's really how most of my work ties together.

I think that's a great way to close the conversation, because what you mentioned about the app being one-way traffic at the moment makes no sense. If you fail an actual COVID test in the medical space, you can go and retake it. In real life, when you really get tested, you can retake the test. But in the app, that recourse isn't there. There's a fundamental lack of thinking; I don't know how it can be modeled like that. So thank you, thank you very much for this really enlightening discussion. I believe I've learned a lot myself just talking to you and hearing about your experience, and I'm sure the audience has learned a lot too. Incidentally, I forgot to mention Karana earlier, so I just want to thank Karana for putting this together. They are a group of active people; they were originally active in another space, and now it's gone beyond that, mostly into the digital rights space. It's a brilliant bunch of people; I learn something from them every day. A shout-out to everybody: Srinivas, Zena, all the people who put this together. Thank you.

Absolutely. And just to say that the work you're doing in the context of India, at the convergence of public service, infrastructure, and technology, is so incredibly important, both globally and domestically. I feel like I learn from the work of this group of people all the time. Just to say thank you from somewhere very far away.

Sean, if I can take two minutes for one more question. There's an important aspect, and I think we've had a discussion around this in the past, of what's happening in India: the government is using the crisis at every moment. There's a lot of talk around disaster capitalism around the world right now, especially with the push for 5G in the US, driverless cars, and this whole idea of contactless technology systems that we supposedly need to build, being pushed by everyone. We're witnessing a similar trend within India, where the government is now trying to promote health responses through technology, telemedicine and so on; the contact tracing app is probably going to stay around, and they're trying to build data trusts. This is something you have really worked on; your organization, Digital Public, works a lot on that. If we are building some of this based on the current crisis, how do you think these things should proceed?

Yeah, thank you for the question.
I'm worried, or wondering, how much we're going to see a wave of legal solutionism, really similar to techno-solutionism, where people confuse instrumentation for good or equitable governance. Data trusts are one important instrument; they build on fiduciary theory and law, and there are a lot of useful tools available through trusts, and through the accountability and transparency that trusts create. But they are an instrumentation, right? Trusts have been used to manage assets, both in front of government attention and away from it, globally, for a very long time. So it is in some ways very encouraging to hear a government recognize the complexity, the social complexity, of asking a population to engage in some of these services, or to buy into this kind of technical or digital infrastructure. But you have to look at the governance design of each of those trusts, and you have to think about what rights it really gives the people who are reflected in that data, who are using that data, who could be harmed by that data. So it's really the governance design piece, and then the extent to which that governance can be accessible through the technology stack. That's the other piece: you need the mechanisms of dispute resolution, and of changing the system, to be available at the point where the person experiences the problem; if that's through technology, then it has to be in the technology itself. So in a lot of ways I'm really hopeful. Things like telemedicine are very old; it obviously happened long before mobile phones, and it is in some ways underexploited and in some ways overexploited. But from my perspective, and you will have picked up on this as a bias of mine by now, I think it's most important to map the power, the relationships to power now mediated by a technology, rather than to focus on the technology itself. When you're focusing on what rights this changes, affords you, or doesn't afford someone else, and what issues that introduces: those are the places where I look to intervene, and those are the places where I think groups like yours have so much opportunity, because a lot of it is capacity; people just don't know about research ethics or experimentation design. So there's a lot of room to move the conversation forward positively. But any time a government or a company or a person says, look, this is it, this is the solution, we solved this political complexity by giving you X: that is almost never the case. Engaging with that political complexity, and focusing on the governance design that comes out of data trusts, might be a really interesting opportunity, but certainly using them by themselves won't solve much.

Thank you, Sean. I guess that's it. I just want to check if anyone else has a question. I haven't seen anything on YouTube or Zoom, so maybe we can just end it here. No questions. Thanks. Thank you.