Hello, everybody. Imagine the National Security Commission on Artificial Intelligence, the NYU Law Reiss Center on Law and Security, and the Berkman Klein Center for Internet and Society at Harvard University, as well as Just Security, walking into a bar. What would they possibly talk about? And if you guessed security, privacy, and innovation: reshaping law for the AI era, you would be correct. And here we are at that bar, and here is our chance to begin this three-session symposium on exactly that topic, looking at how the law must adapt to promote innovation while addressing important growing concerns around the development and use of AI in the United States and globally. Our first session today is titled Responding to AI-Enabled Surveillance and Digital Authoritarianism, itself a really capacious topic. And subsequent sessions will be, let me just see, my screen just kind of blew up. There we go. Subsequent sessions will be held on each of the next two Fridays at the same time as this, noon Eastern. On September 24th, there'll be a panel to explore constitutional values and the role of law in the era of artificial intelligence. And on October 1st, in two sessions, we'll look at patent eligibility reform as an imperative for national security and for innovation.

So we should jump right to it, but first I am impelled to present to you a statement of the CLE code. Every person has to have a code, and we together share the CLE code. This event, you may or may not know this, has been approved for one credit hour in the areas of professional practice for the New York State Continuing Legal Education Program. And at a certain point of the program, we will not tell you when, so that you're compelled to stick with us through all of it, we're going to give you the code or codes that you can then punch into some probably non-artificial-intelligence computer later on, along with your attorney affirmation form. I affirm all of you, but this will affirm as an attorney that you actually attended this session. You should in fact have already received, if you're in this zone, a link to the attorney affirmation form in your reminder email, and you're going to get it again after the event has concluded. And we have deemed this event, in the manner of the Motion Picture Association of America, as appropriate both for newly admitted and experienced attorneys. So we rate this E for everyone. So that is the CLE code.

And now I'm so pleased to turn to our four total superstars ready to weigh in on this topic. And what we agreed we would do is just have people start out with five minutes each, truly five minutes, of what is top of mind for them, drawing from the work they are already doing in their respective spheres around AI and around worrying about surveillance, dissent, and human rights violations, and the ways in which AI is influencing each of these areas. So I'll introduce each person as they do their five minutes, and they may even start to appear on the screen as we go. And I'm first so pleased to welcome my longtime colleague. I should say, by the way, I'm Jonathan Zittrain; I direct the Berkman Klein Center for Internet and Society. And I'm so pleased to introduce my colleague, Ron Deibert, Professor of Political Science and Director of the Citizen Lab at the Munk School of Global Affairs and Public Policy at the University of Toronto, and somebody whom I met now,
I think, Ron, fair to say, 22 years ago, when I had set up a computer in my office to just ask for every possible web address it could think of through a proxy going into the internet in China, and then recording what it could get to and what it couldn't. And Ron blessedly had a more methodical way of going about that. Together we started something called the OpenNet Initiative, documenting internet filtering around the world in various regions. We've co-edited three books together, I believe, at last count, and those were weirdly simpler times. So Ron, welcome, and I'd love to hear what is top of mind for you.

Okay, well, great to be here with you, JZ, and also with everyone who's together on this panel and all the audience who's joining in. So I'm actually gonna come at this topic a bit sideways, I think. Rather than focus directly on AI or even security, I think it's important that we start at a foundational level with the business model of social media that has exploded within the last decade and basically dominates our lives, described by Shoshana Zuboff as surveillance capitalism. Let's put aside any little reservations we may have about her theorizing and just acknowledge that there is this business model that dominates our lives today. However the big tech platforms describe what they're doing, they're predominantly about one thing, and that is to push as many sensors as possible as close as possible to us, their users. So one simple way to think about this is we are the livestock for their data farms. So that's where I want to begin, and what does that mean? Well, when you have more and more sensors being pushed out relentlessly, getting closer and closer to our lives, a couple of things happen. One is that you have this huge data exhaust. We've effectively turned our digital lives inside out. That includes individuals, but it also includes organizations, states, and big corporations. We all live in this new kind of global ether of data that is connected to but separate from us. And overall, if you step back and describe the political economy, the characteristics of this global digital ecosystem, it's invasive by design, of course. It is very insecure. You know, we know the phrase from Mark Zuckerberg: move fast and break things. It's poorly regulated and hence prone to abuse. Now it didn't start out this way. It certainly wasn't the intent of a lot of the engineers to service the security market, but naturally that ended up happening. With all of the companies out there routinely going around harvesting this data, trying to figure out ways to monetize it, naturally they would look to government security agencies as their clients. And we got the first glimpse of this, I would say, with the Snowden revelations, but now it's become routine to hear about, for example, location tracking companies, which I think of as a cesspool marketplace. They're like bottom feeders, parasites of surveillance capitalism. Most people have never encountered or heard of most of these firms, because they're one step removed from the tech platforms with whom they interact. They service many different clients, advertisers, other tech platforms, insurance companies and so forth, but government security agencies, police forces, law enforcement, intelligence agencies have become major clients. They have voracious appetites for this sort of data, and that's where we really have to grow concerned.
Because if you look at the trajectory over the time from when JZ and I started thinking about these issues 22 years ago to now, it's kind of like a one-way street, right? We have an ever more immersive environment, with wearable and networked technologies drawing ever closer to us. So there's a one-way street here in terms of the invasiveness and the proximity by which government security agencies can get close to people. The way I like to think about this is that we've seen a really profound transformation in technologies of, let's call it, remote control. And yet we are still governed by safeguards that come from a different era. We have these 21st century policing powers safeguarded by Victorian era, 18th and 19th century rules and laws. And if you think of the difference between those two, if you can see me gesturing, we have this huge dramatic leap forward in technologies of remote control, but the safeguards have remained more or less flat. I would say they've even dipped down because of the decline of democratic accountability. There we have a huge problem, and that's where the prospect of the abuse of power lies. So we have to be thinking about this technology, I think, in historical terms. Especially as the world is descending into authoritarianism, the risks around the abuse of power have never been greater than right now. And I see it's five minutes, so I'm gonna stop right there.

Thank you so much, Ron. And let me just ask a quick question to kind of further contextualize your wonderful start to this. I hear you saying that there is an ongoing, even longstanding now for at least a few decades, relationship between the world of commerce that's producing these sensors, which people are voluntarily acquiring, happily entering their personal information into them, linking them to Facebook, whatever it might be, and the world of government, because I still hear you saying that governments generally are a distinct threat actor with respect to civil liberties. And I guess you're kind of raising the question, then, of whether the world of commerce is a point of intervention to deal with the problem you're talking about. Just asking you to look ahead a little bit at what ideally you think should be done: is it some combination of constricting supply, trying to prevent the commercial intermediaries from accruing or analyzing or trading in data, and how doable is that? Or is it about constricting demand, having at least some governments that embrace the rule of law simply say, even though it's out there, I won't buy that geolocation data, I won't be allowed to get that, as a way of restraining the power?

Well, that's a good question, and I think many people will be talking in this panel, I suspect, about things like moratoriums on the use of artificial intelligence and facial recognition by government security agencies. So I hope we get into that conversation, but the way I would reframe it slightly is to again step back and think of this in terms of legal and regulatory restraints, and we need to apply restraints to both the private sector, the commercial sector as you describe it, and government security agencies. Looking at this in terms of historical technological pathways, it's very hard to turn the clock back. What we can do, though, is apply restraints around how those technologies are used, and that has to come principally, though not entirely, through laws and regulations that apply to both the private sector and to government security agencies.
Of course, the big problem here is that many governments around the world, the ones that are in fact most dynamic when it comes to innovation in the digital economy, happen to be ones that lack proper oversight over their security agencies, where the rule of law is weak, and where they may even be authoritarian or autocratic regimes. And that's the real fear that I have moving forward.

That's great. Thank you so much. Please leave your camera on, and I realize the irony of saying that given our topic, but there you have it. And next, I'm so pleased to welcome Funmi Arewa, who's the Shusterman Professor of Business and Transactional Law at Temple University's Beasley School of Law. Her research focuses on technology, music, film, business, and Africana studies. In July, she released her book Disrupting Africa: Technology, Law and Development through Cambridge University Press. And Funmi has received both a master's and a PhD in anthropology from the University of California, Berkeley, a master's in applied economics from the University of Michigan, and a JD from Harvard Law School, along with a bachelor's from Harvard College. This leads to an extremely busy wall of diplomas. Funmi, thank you so much for joining us today, and I'm so eager to hear what's on your mind.

So thank you. It's really a pleasure to be here, and I think my remarks will naturally follow from the remarks we just heard. A lot of what I'm gonna discuss is based on themes from my book, and I wanna focus my remarks today around the topic of AI, insecurity, and digital powers. I wanna talk a bit more about some of the digital powers that we just heard about, the companies that actually develop AIs. So I wanna talk about meanings of security, harms from AI, and what I refer to as human insecurity, particularly with respect to two areas: AI and authoritarianism, as well as AI exclusion and bias. So the first point I wanna make is that we have a lot of different meanings when we talk about security. Security can be related to infrastructure and systems. It could be security of a country, national security. We could be talking about security and privacy of users. Many of my remarks focus on security and privacy issues in relation to users, which can be pressing concerns in the context of innovative technologies. These technologies are more than just tools, and we have to think about them in actual context, and that includes historical context, and we have to assess them within those contexts. So my big question, coming from the perspective of my work, is what the technologies of remote control we just heard about mean for human wellbeing, and I also want to draw attention to harms that may come along with innovative technologies, which may also bring significant benefits, at least to some. The UN High Commissioner for Human Rights released an interesting report earlier this week entitled The Right to Privacy in the Digital Age that looks at how AI, including profiling, automated decision making, and other machine learning technologies, affects human rights. And it highlights issues related to human wellbeing and the harms that might accompany even beneficial technologies. I think the issues that they discuss relate to how technologies may lead to a high level of insecurity for some people who might use them. And I'm gonna talk more about that when I talk about what I refer to as digital powers.
AI can be a tremendous force for good, but, as the UN High Commissioner for Human Rights put it, it can also be negative, even catastrophic, if deployed without sufficient regard for the impact on human rights. Now, the technology companies today who create, test, deploy, and make decisions about AI have immense power and broad reach throughout the world. As we know, regulating private actors is difficult, and it's not just in the digital era that it's been difficult to regulate these types of actors. However, the incentives of such digital powers in cases of harms resulting from AI and other technologies may be problematic. And here I wanna draw attention to a recent Wall Street Journal series about Facebook called The Facebook Files. Many of you may have seen some of the articles. There have been five articles thus far, the fifth of which was published today. The article published yesterday is most relevant to the topics that I'm gonna discuss today and that I talk about in my book. It describes how Facebook treats postings by drug cartels and human traffickers, among others, as well as how it deals with authoritarian governments' suppression of dissidents and with hate speech. And I highlight some of the details from the article because I think it underscores how difficult regulating some of this behavior will be. A former Facebook VP who oversaw partnerships with internet providers in Africa and Asia before resigning states that Facebook treats harm in developing countries as simply a cost of doing business. For instance, in Vietnam, Facebook is aware that the Vietnamese government is using the platform to silence dissidents, but it tolerates the abuse because Vietnam is a fast-growing advertising market. And notably, as the Wall Street Journal notes, Facebook commits far fewer resources to stopping harm overseas than in the United States, even though at this point more than 90% of monthly users are outside of the United States and Canada. So the vast majority of users are outside of North America. Now, Facebook operates in 144 countries and has more than 2.5 billion monthly active users. If Facebook users just in India were a country, it would be the third largest country in the world. So we're talking about a company with immense reach to an immense number of people. So the question becomes, how do you actually regulate the types of behavior that are described in the Wall Street Journal article? And I'll just highlight a couple of different things that were raised by the article. As the Wall Street Journal reports, one document from earlier this year suggested that the company should use a light touch with Arabic-language warnings against human trafficking so as to not alienate buyers. And the buyers they're talking about are Facebook users who buy domestic laborers' contracts, often in situations akin to slavery. So these companies that I refer to as digital powers are implicated in issues related to both AI and authoritarianism, which is relevant to what I talked about earlier, but also AI exclusion and bias. This also implicates the internal structures of these companies, which in the case of many companies founded after Google are often controlled by a small number of people. Facebook is entirely controlled by Mark Zuckerberg, who holds close to 60% of voting control.
At Google, for instance, the two founders have voting control. So both of these companies with ad-based revenue models are controlled by a small number of people. So the question becomes, how do we think about regulation in this context? We have issues related to authoritarianism and support of authoritarian governments in Vietnam, throughout Africa, and in other countries. We also have issues related to AI exclusion and bias. Facebook, which hosts posts in hundreds, probably thousands, of languages, doesn't actually have moderators who can read many of those languages, because it hasn't hired people who can actually interpret the posts that go up on Facebook. So that makes it extremely difficult for them to monitor things like hate speech and other types of speech that may lead to violence. And we've seen this; this was an issue in Ethiopia last year, with violence that led to a lot of death, as well as the assassination of a very well-known singer in Ethiopia. So I wanna end by talking about how we wanna think about this from a regulatory perspective. I think regulating these types of actors is difficult, but we have some sense of how a regulatory picture might look, and I think transparency and liability would be two aspects. One thing that's notable, if we look at Apple's iOS 14.5, which was launched earlier this year and against which Facebook ran an extensive ad campaign: Apple introduced something called App Tracking Transparency, which highlights tracking on your phone and gives you the option to stop it. According to Flurry Analytics, the latest statistics I've seen as of late August, only 15% of iOS 14.5 users in the US had actually opted in to ad tracking four months after the software was released. That's lower than worldwide, where 21% of users opted in. It's understandable, given these stats, why Facebook was really opposed to the deployment of ATT. But it also may point us to ways we can think about regulating these types of companies, despite the fact that we have some enormous barriers and potential impediments to doing this. But what I focus on, transparency and liability, are two ways to think about regulating them. I'm gonna stop there.

Okay, thank you so much. And you're right, it builds on what Ron was saying, in the sense of even further sharpening the distinction between public and private actors here, and how much of the actual shaping of speech and the gathering of data is being done by private actors, but then the difficulties of regulating those actors. And it particularly jumps out when, as you pointed out, basically we should be skeptical of any one person, including Mark Zuckerberg, having such unilateral power to intervene and to shape things here. And at the same time, you were pointing out that worldwide there are areas in which Facebook really isn't monitoring very well for hate speech, if only because of language barriers, but for other reasons as well. And that seems a classic formulation of the observation that the food is bad and the portions are small: Facebook isn't doing enough, and how dare they be doing anything, to caricature a little bit.
And so is there anything more you wanna say about your solutions of transparency and liability? I gather that both transparency and liability would themselves have to be frameworks established by public authorities, but we're not exactly trusting them a whole lot either. And I guess my question is, then, is the idea that there'll be some public authorities that we do trust, maybe in jurisdictions that embrace the rule of law, that in turn would shape the behavior of the companies in jurisdictions that don't embrace the rule of law? Or is it also about offsetting power by other companies like Apple? At least if we get a lot of different companies that are competing somehow, there'll be less unipolarity.

Yeah, I agree that is the thorniest problem to address, because I think we have a lot of regulatory capture even in countries that have rule of law. These companies are crown jewels in many respects. They're the biggest companies in the world. They've grown very rapidly, and how you actually effectively regulate them, especially in international contexts, I think is a continuing challenge. I suspect we need some combination of domestic law frameworks, because some of the negative impact of Facebook is occurring domestically as well. There's a lot of discussion about democracy in the United States, and the article in the Wall Street Journal today is about COVID and misinformation. So I think transparency and liability will at least get us part of the way, and it's good if we have, in a sense, competing business models, at least as long as Apple has a business model that's based on privacy, which seems pretty core to Apple right now. But as we know, business models change. These data and information business models have really arisen in the post-Google era to a significant degree, and have really come of age in an era where we all carry mobile devices that make it very easy to track us. If we all left our computers at home, these technologies wouldn't be so robust; they wouldn't have so much power. Now, my research focuses on both US and African countries, and in places where we don't have rule of law, I really worry, because I think many of these companies are doing a calculation of the benefits to them versus the costs of colluding with an authoritarian government, and that's leading to some pretty adverse outcomes; Myanmar and Ethiopia are just the tip of the iceberg in that respect. So I think we have both a domestic problem and a transnational, global problem. I think transparency and liability are places to start. And with liability, I mean, if Mark Zuckerberg wants to control Facebook, maybe he should really own it, because I think he relies on some of the standard limitations of liability that we have embedded in corporate law, but I'm not sure those are appropriate in a context where he's at once the controlling shareholder, a manager, and on the board. So I think we need to think creatively about domestic legal frameworks that can get us to the right place from a liability perspective, and also really think through the impediments to international law, and perhaps give people outside of the United States a cause of action in the United States for harms caused by companies based here, which is very hard to get at right now in many respects. So I'll leave it at that. There's a lot to think through.

Thanks so much. All right, so stick around.
Next up is Chinmayi Arun, a fellow of the Yale Information Society Project and an affiliate of the Berkman Klein Center for Internet and Society, founding director of the Centre for Communication Governance at National Law University, Delhi. Her recent work has focused on social media governance, online hate speech, and the impact of AI and algorithms on human rights in the global south. Chinmayi, so great to see you as always, and I'm so curious what you've been thinking about.

Thank you. It's such a pleasure to be here and to be a part of this panel, more so as I'm listening to the conversation, which I feel ties in so closely with the ways in which I've been thinking. So I'm going to introduce my remarks by saying that you might think of them a little bit as a reframing of what you've heard from Ron and Funmi. I'm thinking of this in terms of the asymmetric relationships that are enabled by the introduction of AI as a technology, as well as the manner in which it is constructed within our political economy. That's the business of AI that both Ron and Funmi have described so beautifully. So this fundamentally reorders relationships within society, and that exacerbates old problems and creates new ones, including problems that we can't anticipate, which you, JZ, have called intellectual debt in some of your writing. I think that one of the key issues with this kind of impact is its asymmetry: its impact is at scale, but it's becoming clear that who is affected and who uses the technology to affect these people are separating into populations characterized by power, political and economic power, and that there are global and domestic narratives to how this works. So although I thought that Shoshana Zuboff's book, which Ron referred to, was powerful in many ways, one of the things it left out was a global account of the way in which tech and business relationships work. And so I think that what's clear by now, and what has been described already, is that the computational power and the ownership of both the technology and the legal rights over the data sets that power the technology are concentrated in a few hands, and we can think of them as sort of the powerful elite. They're concentrated in particular countries, but they're also certain kinds of people. And I think that means we've got two major concerns to think about in the context of AI. One is datafication itself: where are the data sets coming from? How are they framed? Who are they rendering legible? And how are they presenting people in their datafied form? And the second is the ways in which this datafication and algorithmic mediation by companies is now affecting the lives of these people, both through the delivery of public services and through the reordering of horizontal relationships: things like who gets loans, who gets a job, how people are seen by private companies. All of this is dramatically affected in the algorithmic society. And the trouble in part with datafication is that it uses certain imaginaries in a way that is opaque and that is protected by the law. A simple example is that if people are datafied as either male or female, people that don't identify as male or female are erased. And there are many, many ways in which the very framing of data sets, the access to data, defines populations in particular ways that then affect them powerfully. This isn't exactly a new story, as Ron pointed out.
This difference in power, the descriptions of people in particular ways that then end up affecting them, these are also old tools of bureaucracy that were deployed in the colonial period. But the way in which they're being used now, and their pervasiveness, has changed. And I think that means we have some lessons to learn from the past, but new effects that we need to think about more carefully. The trouble, I think, with international law in this context is that although it offers us powerful norm setting, and I say this as someone who worked on the High Commissioner's report that was referenced, I think it shows us ways to think about these problems and indicates ways in which we might deal with them. But when we're dealing with cross-border questions, and with holding powerful tech companies and countries accountable, the question we have to ask ourselves is: does international law create accountability? And I think at present it largely does not, although there are trade agreements that might possibly get us there. I flag this as a problem that we need to pay attention to, because if we're looking at international law purely as norm setting, I think that takes us into the realm of thinking of norm setting in terms of ethics, or the soft norms that were set by the business and human rights principles. And that doesn't quite hold the AI companies' feet to the fire. There's one more thread that I want to highlight before I stop, because I'm almost out of time, which is that the AI in the majority world narrative is usually seen in terms of relationships between companies and populations: the US company exploits Indian people, and that kind of thing. But there's more taking place, which is that the elite within countries also deploy technology in authoritarian ways against their own populations. They do this both with indigenous technology and with technology that they purchase from other countries. So there's that piece of it. And then there's also the part where there are populations within what we see as rule-of-law democracies that are also exploited by AI. This is racial bias, the way in which technology is used against refugees. And so I want to stop here and say that there is a serious problem of power created by the asymmetric use of this technology, and we haven't found a way to hold the elite that control this technology accountable. That's really the big challenge of our time.

Got it. Thanks so much. And it's been a really good progression here, because so far, I think quite appropriately, we've been talking about AI as basically just a further reason to worry about the development and deployment of technology in the absence of any boundaries. And you've gone a step forward to broach some of the specifics of what AI generally, and maybe machine learning operating on a good dataset or a not-so-good dataset, can do. And I just wanted to invite you to do one more beat on an AI example, with an eye towards what you and everyone else have identified as just the hard question: so what do we do about it? And I don't know, you mentioned the misgendering that might take place through an inferential AI-based system. Is the right kind of intervention there a right to know the inference it's making and then to fix it, which of course entails disclosing more about yourself to the surveilling party? Is it somehow trying to prevent the inferences from being made to begin with? And how generalizable are these interventions?
Would whatever we do about misgendering in one context be applicable to the machine learning that salts our news feeds on a Twitter or a Facebook or any of their counterparts around the world? Is it, no, you just shouldn't be using AI to salt feeds, or is there a quote-unquote right way to do it?

I can't pull my usual scam on you, which is that I usually say that as a law professor I ask more questions than I answer, but I'll take a shot at it. These systems are opaque both for reasons of how algorithms work, but also because of the way in which law protects companies and their review of their own algorithms, their use of data sets, and the compositions of their data sets. I find that the most useful literature I'm seeing is coming from the computer scientists. So Timnit Gebru, Rediet Abebe, they're now beginning to talk about ways in which this accountability can be hardwired into the building of these systems, and certain methods of audit that can be used to check AI's inclinations when it comes to potentially misgendering, or reading populations in particular ways. So that's one level of scrutiny. The other, which is very helpful, and which is in the High Commissioner's report, is to consider each instance of use of AI and monitor its actual effects on the population: first, by running tests and trying to anticipate the effects that it might have, but secondly, by monitoring to see the actual effects that it does have, and then creating mechanisms for walking it back. I understand that the EU is also beginning to think of AI in terms of the risks that it poses. But yes, my high-level answer is that there are two parts to this. One is within the construction of the systems themselves, and the second is within use, and our legal systems need to acknowledge that this is the only way to check and see how high-risk AI is before deploying it to populations.

All right. And so far, then, certainly in the US context, for the past 22 or so years, maybe a little more, the default, and this was alluded to earlier, I think, by Funmi, has been the framework of: let the markets do their thing, let them innovate, and then, unless something goes horribly awry, don't bother the goose laying golden eggs. Certainly the attitude I'm hearing so far is a much more, dare I mix national security metaphors, 1% doctrine that says the risks, both prospective and as they are materializing, are so high that it's really leaning hard on government, whether it's monitoring, assessing, or intervening, to be playing, what I'm hearing is, a much more active role. Why don't we introduce our fourth and last participant on the panel, Eileen Donahoe, Executive Director of the Stanford Global Digital Policy Incubator, a multi-stakeholder collaboration hub dedicated to the protection of human rights and democratic values in digital society. She served as US Ambassador to the UN Human Rights Council in Geneva during the Obama Administration and later was Director of Global Affairs at Human Rights Watch, where she focused on internet governance, digital rights, and digital security. So welcome, Eileen. So curious what you were thinking about.

Well, first I just wanna say thank you to the organizers. You've created such a rich symposium, and I'm really honored to follow all of the amazing speakers in this panel.
I think what I wanna talk about is a really natural complement to what's been said, and will hopefully round out this panel, because I'm gonna focus primarily on authoritarian states. And the first point I wanna make there is that there's no question AI has been a game changer when it comes to new risks to privacy and civil liberties in authoritarian contexts, and it has turbocharged all preexisting forms of repression: both mass and targeted surveillance, censorship, the spread of propaganda. And contrary to the original expectation that it would be impossible for repressive states to control the internet, AI has enabled a whole new level of state control over communications infrastructure and the information realm, both on the restriction side, scanning and filtering forbidden content and dissenting views, and in the other direction, in AI-enabled generation and amplification of favored content, flooding the zone with the favored narratives of repressive states and controlling civic discourse. If I were forced to pick one single invasive AI-enabled technology that really bothers me, I would point to all the AI-enabled social engineering capacities that shape citizen motivation and behavior, like China's social credit system, because these not only violate privacy and civil liberties, but they really undermine human agency and go to the heart of human dignity. All that said, the issue that really keeps me up at night is the larger threat posed by digital authoritarianism as a governance model, an entire techno-social system that promises control and security for the state, as opposed to the liberty and security of citizens. And so my core point would be that rather than see the digital authoritarian threat as a series of discrete apps used for repression, for which we need workaround circumvention tools, which we do, we also need to see the problem through the lens of system rivalry. And the concern I wanna put on the table here is that the digital authoritarian model of governance is spreading around the world, competing with democracy as a form of governance. And I don't think it's hyperbole to say we are in a geopolitical and narrative battle over the governance model that will dominate in 21st century digital society. That battle is being waged on a variety of levels: diffusion of technology; diffusion of values, norms, and concepts; propaganda; economic coercion, sometimes referred to as sharp power; and even concerted efforts to influence standard setting bodies, tech standard setting, where the potential for repression can be embedded into tech protocols for the future. And all of these things come as a package deal. That leads to the question: where is this showing up in the world? The clearest example is China, whose digital authoritarian influence is being felt on multiple layers, and I will really briefly mention five. The first is just role modeling very effective use of AI-enabled repression at home: smart cities, Panopticon-level control in areas deemed security threats like Xinjiang, very effective control over the information realm, as I said, the social credit system, and also building a sovereign digital currency through which they will gain substantial new repressive power. The second layer is China asserting its digital authoritarian influence abroad in the international trade and economic development realms: exporting repressive technologies, spreading those capacities, normalizing repression through digital tools around the world.
And also through broader economic development efforts such as the Belt and Road Initiative and the Digital Silk Road, establishing bilateral relationships to build entire infrastructure systems in the developing world, through which they gain leverage over fragile states and weaker democracies for decades to come, and gain new sources of data that can be sucked back to Beijing. Third, China is really active in the international normative arenas, flooding the zone of multilateral diplomacy in traditional bodies like the UN Human Rights Council, where it's stunning the arguments they are winning, luring the majority of delegations to support their narratives, but also, as I said, in tech standard setting bodies like the ITU, where interoperability standards for the future are being set and China has pushed their preferred protocols, especially over IoT. And then fourth, I would mention influence in the global marketplace of ideas: spreading propaganda about the weaknesses of democracy, with so-called wolf warrior diplomats attacking the competence of democratic governments, spreading concepts like cyber sovereignty, which is really an updated version of a longstanding authoritarian position that there should be no external criticism of what happens within sovereign, now cyber, borders, certainly not on the basis of international human rights law. Final point, the fifth layer where we see China's influence: we have to recognize that all of this influence starts with massive investment in cutting-edge technology, a public commitment to win the AI race against the US by 2035, deep investment in other emerging tech like quantum computing, and, as I said, a push to be first in the sovereign digital currency race. It's important for us to recognize, and it's hard to hold these things together, but China recognized very early that dominance in the tech realm translates into power in every other realm: military, geopolitical, economic, and even normative. And they are on a mission to dominate the global order, and I think democratic stakeholders need to see this as an existential threat and meet the competition with democratic values-based leadership.

Thank you. You certainly put a lot on the table there, Eileen, and maybe I can ask you something similar to the prior questions about points of intervention. On the one hand, this is a whole complex ecosystem you've described, with lots of different manifestations of it. And I'm wondering if there's a particularly fruitful place of intervention that you would single out, in mode or venue, or whether, the way you ended up with the fourth and fifth points, it sounds like this recurses to just being government-to-government stuff, at which point it's just the grand dynamics of international relations: if there's one state acting in a particular way, then it's up to other states ultimately to come up with a way of pushing back or shaping, if they think that's out of line. I'm just curious if you have any other quick thought on points of intervention.

Points of intervention. So I don't think it's only about states, and we can't disregard everything that has preceded about the role of private sector companies in this ecosystem, which I completely agree with. I'm just emphasizing a different threat. In terms of intervention, I think the first point we need to see is we can't beat something with nothing. There's this digital authoritarian model that's very aggressive, and on the democratic side, we have basically failed to provide a compelling alternative.
And this goes largely to what Ron was talking about: within democratic societies we have failed to institute constraints on government use of data and technology for repression at home, and we have failed to regulate technology companies to protect citizens against violations of human rights.

And if we were to do that, do you think that would potentially spark a race to the top, or would it just spark a kind of divide, in which there are jurisdictions in which there are limits and other jurisdictions in which there are not? And to tack onto that, I've certainly seen discussion about the fact that China actually, at least with respect to commercial privacy, has been racing a little faster to the top on that front.

Yeah, I wouldn't give them credit on commercial privacy. But to the larger question, I think we have to win the geopolitical battle, so we cannot be in the minority. We need to develop a model of democratic governance of digital society that lures the majority of states to that vision. And three buckets here. The first piece is we gotta get our own house in order and figure out what this looks like at home. Then we have to play in the international diplomatic realm and invest in values-based international leadership, in the Freedom Online Coalition; there's gonna be a Summit for Democracy. We've got to lure the world to this vision. But the third bucket, and it's interesting, the National Security Commission on AI did a very good job of holding these things together: the normative leadership, what you do within democratic societies, needing an international framework and an international digital democratic initiative to lure people, to export democratic infrastructure around the world to compete with authoritarian infrastructure. But we have to also win that tech innovation battle, because of this other point: if authoritarians win the AI battle, the quantum battle, even the sovereign currency battle, that will give them power, and the world will move in that direction. So we have to hold those things together.

And it's so interesting. Another point of departure that I'll just note is, of course, how we characterize all of this, and the uses of martial metaphors; there's a great area to explore about whether we are in a battle, and how many there are. I counted four in the past minute or so. But before we explore anything like that, or turn to some of the questions that have accrued in the queue, from the sublime to the less sublime, I believe we are about to reveal the long-awaited CLE code for those of you who've been with us on this journey so far. So there it is: Romeo, Charlie, Lima, Sierra, 1225 will unlock your ability to not have to attend another CLE in the zero-sum game of keeping up, at least in New York State, with your continuing legal education requirements. Thank you, Ruben, for revealing the course code. And with that said, I think we should turn to questions. And Tess, you might have wanted to offer one, or I can draw from among what we've seen already in the queue so far. So I'm gonna just jump to one, which is, fittingly perhaps, anonymously lodged: Is there political will in the international community and among NGOs to negotiate a multilateral treaty governing AI, to regulate technology companies and to safeguard human rights? How would we, or how do we, begin to negotiate a multilateral treaty based on a universal declaration and modeled, say, on the UN Charter?
And I would just take that to be a question for anybody who wants to speak to it. I see Ron has already unmuted his mic to stake out a possible answer, but Chinmayi is also interested. Is the treaty path the path of interest to pursue? And if so, what would its scope be, given how generally we've been talking about technology and authoritarianism? Is AI the right way of trying to put a handle on whatever it is you think we need a treaty for? Ron, go for it, and then we'll hear from others.

Okay, great. And I'm glad I jumped in quickly. What I'll do is use an example from a parallel area, very complementary to the one we've been talking about, that I spend a lot of my time at the Citizen Lab investigating, which is the very acute problem of the use of commercial spyware by governments to engage in all sorts of human rights harms and abuses worldwide. This is top of my pile these days. And often you hear people say, why can't there be some kind of treaty, some kind of international law that would prevent this from happening? Which is, I think, an objectively rational thing for people to suggest. The problem is it's very complicated and difficult to get there. And that's because the world is a very messy, complicated place, and you have vested interests that will work against those efforts. So understanding the process by which we get from, hey, there's a problem, we need to regulate it, to then actually getting those regulations in place needs to be studied very carefully. This is where someone like Eileen, with her experience in international forums, would have a lot to say. I just wanna make one quick point that struck me in listening to everyone's comments, to remind everyone that liberal democracy, social democracy if you will, human rights, these are all very fragile human social constructs. They're not fixed in nature. They can very easily go away, just as they came around. And in fact, if you look historically, they're actually very fleeting. I think we live in a very dangerous time right now; listening to Eileen's remarks at the end about the model of China illustrates this very well. We can't take these things for granted. We need to mobilize around them, starting perhaps where it's most easy, in countries where those practices are still very strong, relatively speaking.

Got it. Chinmayi?

It's always dangerous to speak ahead of Eileen on her own terrain, but basically I agree. I think that the international community, as Ron describes, has been pushing for a treaty, but the danger, as he says, is getting governments to agree to be accountable, and then finding ways for companies to be accountable, because that isn't hardwired into international law yet. But if Eileen is willing to step in, I feel like I should defer to her, because she can describe this in more detail.

Yeah, so I agree with Ron's point that the idea of getting a multilateral treaty sounds rational on an objective level. I mean, it would be great. I just don't think it's very realistic. And so the questioner asked, is there political will? I don't think so, not at this moment. The US government itself is kind of allergic to international treaties at this point. We haven't even been able to ratify the Convention on the Rights of the Child. So I don't think that is a realistic move.
That said, I really agree with Ron's point that the social norms constructed in the post-World War II era, the international human rights law framework itself, are social constructs. As an advocate, I recognize we're at a dangerous moment and these could be fleeting, but I am desperately holding onto them, and I am trying to advocate for continued reliance on this framework. I don't think we're gonna get a better set of principles; this one speaks very well to all of the concerns about AI. It was internationally negotiated. It's recognized around the world, even if not adhered to. The hard work we need to do is to articulate how to apply it in the digital realm, sort of along the lines of what happened post-Snowden, when civil society came together and drafted those 13 Necessary and Proportionate principles for surveillance. And that's the kind of work that needs to be done with respect to every aspect of AI.

Got it.

One last little point I have to throw in there, beyond multilateral approaches to governance: I think in the digital realm we need to advance multi-stakeholder processes. Governments alone, we should not be trusting them; they can't get it done. We obviously can't trust the private sector to govern themselves. I really think this has to be a multi-stakeholder governance process.

And just for those who maybe aren't familiar with the vernacular here, multi-stakeholder process has a very specific meaning. Do you want to just give the tweet-length version: what is a multi-stakeholder process?

It's in contrast to multilateral, which is governments only. Multi-stakeholder means you've got stakeholders from different sectors at the table, especially when it comes to internet governance. Obviously you need the private sector. It's sort of what happened at ICANN, where the technologists were leading on a merit basis: what was the code that would work? And we need to bring that into the normative realm, especially to give a voice to civil society, because there is kind of a democracy deficit, not only at the UN and in multilateral fora generally, but when it comes to global challenges. We gotta find a way to get citizen representation at the table, citizens' interests represented.

Got it. This might be a good question to put to Professor Arewa. Might changes in corporate board governance address some of the issues of control that you raised? For example, having a tiered board system like the German co-determination model, with stakeholder and employee representation. Of course, I guess that still assumes that boards can hire and fire CEOs, which in the example of a company like Facebook may not be so true; I think the CEO can hire and fire the board. But I don't know if there's a corporate governance angle on this worth exploring that you'd wanna speak to.

I think two things. Corporate governance changes can maybe get us part of the way there in terms of thinking about how companies actually operate with this model that Facebook and Google are based on, which is not what we typically have assumed about how boards and shareholders and management relate to one another, because, although the Google founders have left management, in Zuckerberg's case he's all of them in one at the same company. So I think rethinking that model would be useful. I don't think it's gonna get us very far, as you mentioned, because he controls either way. So he's gonna determine what happens. He can elect the board.
He can determine a lot at Facebook without anyone else's input. I don't think the German co-determination model is gonna fix it necessarily. I think it would be difficult to import that to the US, and also, again, since he controls the company, it's not gonna fix it. What I take from Professor Donahoe's remarks is that we do need to think about the values that are embedded in our activities in other countries. This is particularly true in Africa. In the United States, we see ourselves as being a democracy, but when we look at the activities of both private and public actors in many African countries, they don't support diffusion of that democratic model; there's been a lot of support of authoritarianism by both government and private sector actors. So there's this issue of cultural values. I think China is very aware of the cultural values it's trying to export and encourage other people in the world to follow, and I often fear that that will become the norm in Africa, because I think it would be a catastrophe in Africa if that model, particularly with respect to ethnic surveillance, were to be exported to African countries. But by the same token, I think both private sector and public sector actors from the United States aren't really fully honest, at least with themselves, about what cultural values they are exporting there, because often they are authoritarian values, when you look at the relationships between the private sector and African governments, as well as between the US government and many African governments, because the US has traditionally supported a lot of authoritarians in African contexts.

Thank you. Here's a nicely focused question, and one that sort of starts to edge into the technical. Does anybody have a view on certification for certain applications of AI systems in the absence of broader-based regulation, kind of like International Standards Organization norms, or something whereby you could, maybe not fully kick the tires of something you're wanting to implement, but have some ability for an outside organization, maybe a multi-stakeholder organization, to learn something about it and certify it as applicable for certain uses and not exceeding certain limits? Is there any way in which that might help, asks our questioner. Anybody wanna take a swing at that? Ron?

Well, yeah, this falls in line with what I was talking about earlier, which I realize is a very simple concept, but as a political scientist at heart, I need to remind people of the concept of legal and political regulatory restraints, and what you're describing is a form of restraint. We need to ensure that there are independent outside bodies, in a variety of sectors, coming from a variety of stakeholder backgrounds, that are able to examine, peer inside, and hold to account the various actors that control our lives. This is the principle of republican liberal democracy, right? It's not something that we need to invent. We don't need to come up with some cool new cyber theory. This is something that goes back centuries, and it's all about holding people to account when they're in positions of power. There are many different variations, many different models; what's being described here is one of them. And yes, I think it could help. It wouldn't solve the problem, but it would help.

Anybody else wanna weigh in on that question?
I can just add a little piece, which is that the concept of human rights impact assessments is another vehicle for adding constraint, on both government use and the private sector, if they establish processes for self-evaluation of the human rights impacts in development, in deployment, and after deployment: what are the effects in the real world? It's a little bit of a parallel concept to this idea of certification, but it's an ongoing process that I think should be mandated for the private sector. And I think it has to be embraced by governments as well, because there's a lot of procurement and use of data and technology where they're just not thinking through the human rights impacts.

And do you think environmental impact assessments have, I don't know, what's their reputation? Have they worked, or have they worked kind of obliquely, in the sense of just providing a speed bump to stuff that should be slowed down, rather than through the substance of what the assessment says?

Ron should speak to this; it's a big chapter in his book. But I think it's at least a speed bump, and it's better than nothing.

Uh-huh, got it. Let's see, here's another question whose first stop should probably be Ron, given the Citizen Lab's pioneering work here. It says: in the meantime, while we await more protection for users, we're seeing AI often being weaponized to surveil and then persecute human rights defenders. How can we guard against or respond to this threat? And Ron, I would invite you to even give an example. I mean, the time that the Dalai Lama showed up on your doorstep and said, my laptop's acting funny. I assume we're allowed to talk about that; I didn't say which Dalai Lama. But if you can speak to an example of this modality, which is not just extrajudicial, it's just such a tough scene out there, where your own computer might have something downloaded to it, and you've seen it time and again. I don't know if there's a particular narrative you wanna share about that, and what should be done from a policy perspective?

Sure. So most of the issues that we're dealing with in this bucket don't have to do with AI per se, although potentially they could in the future. We're talking about something quite basic, actually, that relates to what I described earlier. All of us carry around these devices 24 hours a day. This is something new, very invasive by design; it follows us around, always on, but usually has some kind of insecurity. Meanwhile, there are companies that have sprouted up offering government clients the ability to get inside those devices. To the point now, in the latest iteration just this week, we disclosed to Apple the issue behind this emergency security patch: taking advantage of a flaw in iMessage that even Apple's world-class engineers weren't aware of, to silently commandeer any of the 1.65 billion Apple devices worldwide simply by sending code to that device. No interaction on the part of the target, no tricky email attachment that will socially engineer you. You just need a phone number.

Could they tell us what the CLE code is, at this point?

Well, maybe they could. I don't know what that code is. But there is no defense against this type of attack. And what we are seeing as a consequence, not surprisingly, is a tsunami of human rights abuses around the world. To me, this is the most acute crisis of global civil society right now, because we are looking at what I describe as despotism as a service.
Just think about that for a minute. How to solve that problem is very, very complicated, because every government benefits from this marketplace. The marketplace itself is highly secretive, almost entirely lacking in safeguards of the sort that I've described: no public transparency and accountability, secretive contracts, shell companies, kleptocrats investing in the marketplace. This is the worst of the worst. How do we remedy it? Well, we need to start somewhere; we don't just throw our arms up in despair. We do what we've all been describing here: find ways to bring better accountability, oversight, et cetera, over this marketplace. There are ways to do it. It's not going to be easy, and you can't solve it by just saying, hey, we need a global treaty. We need to work locally. For me, that means North America. Let's start making sure that the NSA, the CSE, GCHQ over in Europe, and so forth bring some order to this marketplace. That's a good, tangible, concrete way to begin: advocating in places where advocacy can work, frankly. And then you hope that it begins to build momentum. You change the world one bit at a time.

Got it. Well, we have approximately two minutes left before we have to close. So of course I'll ask just an impossibly fractal question to bring us in for a landing: fast forward 20 years, which is a long time in technology time. Is there anything you'd wanna share that you could plausibly say would be a salient difference in the landscape of what we've been talking about between here and now, whether dystopian or utopian, something you want to happen? Just a brief, tantalizing form, since we have so little time left. Maybe we can work backwards from that. Eileen, anything you wanna share on that front?

I am an optimist by nature, and I am struggling. I am struggling to hold on to that. The one thing that I am hoping gets initiated is a process sort of along the lines of what Ron was talking about: start in North America, start in Europe, GCHQ. Democratic governments need to engage in a serious process along the lines of what civil society did post-Snowden. There are tensions between democratic allies because they disagree on what protection of fundamental rights actually entails, and we need to work through those differences together and come up with a shared view.

Got it. Kind of model the world you want to see. Chenmai?

Also utopian, which is not characteristic of me usually: maybe a different shared language in terms of human flourishing that will bring most countries on board with what Eileen is saying. And an agreement that we need to stop this race toward the bottom, because it affects all societies around the world, and a sort of bar below which these technologies will not sink.

Got it. A better language for that, where innovation might not be the only word in the dictionary in that zone. Chenmai?

I think I tend to be a bit less optimistic, but I'm gonna try to channel my optimistic side and say I think it's important that we think about a model where we practice what we preach, at both the public sector level and the private sector level. So if we really have these democratic values, we should practice them at home. And I think we do on some level, but we have some problems with these models in the places where they originated. I think we should really endeavor to practice them at home as well as export them.
So that means really stopping the model of supporting authoritarians by both private sector and public sector actors, and thinking about ways we can instill those values, or at least key aspects of them, throughout the world. This would entail inclusion, thinking about how democracy should operate in different contexts, and working with people at a local level in a way that takes account of everyone, not just elites, because I think there's a big problem in many countries with how both the private and public sectors interface with existing elites.

Great, a great counterpart to Eileen's observations. Ron, real quick. Oh, you're muted. Damn it.

I imagine a world 20 years from now where the libertarian self-interest, profit motive, and greed that dominate our world right now are displaced by other-regarding behavior, a care for nature and the world around us, inclusion and diversity.

Wonderful. Well, what a great note to end on, especially after, as has been observed, some of the pessimism of the past hour and 15 minutes, certainly calibrated to our times. At the risk of unduly leaving somebody out, I just wanna thank Paul LaCasse, Tess Brinchman, Ryan Goodman, Elizabeth Watkert, Rachel Goldbrenner, Ruben Langevin, Will Marks, our panelists, and anybody I've forgotten for putting together such a rich discussion that has just scratched the surface of so many complicated issues besieging us right now. And thanks to our attendees for sticking it through; I'm amazed everybody didn't stream to the exits once they had the CLE code. I can't think of a better endorsement of the discussion than that. So, a wonderful way to set off this three-session symposium. We look forward to carrying on these conversations, and indeed to reviewing some of the balance of questions in the Q&A queue that we didn't have a chance to get to. Thank you all again very much.