Welcome this morning, afternoon, or evening, wherever you are. Welcome to the second event in Security, Privacy and Innovation: Reshaping Law for the AI Era. This symposium is co-sponsored by the Reiss Center on Law and Security at NYU School of Law, by the Berkman Klein Center at Harvard University, by Just Security, and by the National Security Commission on Artificial Intelligence. The symposium is an effort to convene experts to examine how the legal frameworks that govern public and private action must adapt to the demands of AI. The first event in this symposium, held last Friday, examined AI-enabled surveillance and digital authoritarianism. Next Friday, we will explore patent eligibility reform as an imperative for national security and innovation. Today, I am extremely happy that we will focus on constitutional values and the rule of law in the AI era, an extremely important and timely discussion. How are AI-enabled technologies changing the threat landscape? What safeguards do we need to protect constitutional values? Where do we need Congress, the courts, and the executive branch to take action?

Before we begin our session, I would like to provide some information about the CLE credit. This event has been approved for one credit hour in the areas of professional practice category for New York State CLE credit. At a certain point in the program, roughly a bit after we've ended the discussion, we will pause to display and read aloud a CLE course code, or possibly several codes. Those seeking CLE credit need to record this code and submit it with an attorney affirmation form. Attendees received a link to the attorney affirmation form in their reminder email for the event, and the form will also be sent after the event has concluded. One last remark: this event is appropriate for both newly admitted and experienced lawyers.

I will briefly introduce myself. I'm Julie Owono, an affiliate at the Berkman Klein Center for Internet & Society at Harvard. I'm joined today by Glenn Gerstell, a former NSA general counsel and senior adviser at the Center for Strategic and International Studies. Thank you, Glenn, and welcome. We're also joined by Aziz Huq, the Frank and Bernice J. Greenberg Professor of Law at the University of Chicago Law School. Welcome, Aziz. And we are joined by Riana Pfefferkorn, a research scholar at the Stanford Internet Observatory. Welcome, Riana, and welcome everyone once again.

So without further ado, I suggest that we transition directly to our subject. I would suggest that we take a step back first, and I will ask Glenn to help us situate what we're talking about when we question this compatibility of U.S. constitutional values and the rule of law with AI. So my question to you, Glenn: what role does AI currently play in national security, and what constitutional rights and ethical challenges does AI raise in this context?

Julie, thanks, and thanks to my distinguished fellow panelists, to the two universities, to Just Security, and to the National Security Commission for sponsoring this very important symposium, which, as you said, Julie, is the second in the series. This one is about, as you said, the Constitution and the rule of law, and perhaps in no sector other than national security are those issues so directly and importantly implicated.
To me, as someone who was a technology lawyer and who most recently spent several years in our intelligence community, if I had to sum up the significance of artificial intelligence in the national security sector in just three words, they would be: critical, pervasive, and problematic.

I say critical because in many parts of the private sector, one could argue that the adoption of artificial intelligence is useful or even optional: it improves efficiency, keeps a business commercially competitive, or is socially desirable, such as using AI in medical applications or to help understand and mitigate climate change. But in national security, we really must use artificial intelligence in order to address threats posed by new technologies and by adversaries' use of artificial intelligence. Ever-changing geopolitical circumstances and the competition for emerging technology mean that we, the United States, have to keep our capabilities at the cutting edge to face our adversaries. The cost of mistakes in this area, whether by failing to embrace the technology fully or by misusing it in a way inconsistent with our standards and values, could endanger our national well-being and our security. So our national security agencies really have no choice but to embrace the use of artificial intelligence, which is why I say it's critical.

I said pervasive for two reasons. First, there are changes underway to the definition of national security that we can all see. It used to be that national security meant worrying about the rise of communism, the Cold War, and nuclear weapons around the planet, and then, after 9/11, focusing on counterterrorism. But now, in the aftermath of this most recent pandemic, global supply chain disruptions, and climate change, we're seeing all sorts of requirements for national security to have a broader aperture. National security now touches on every aspect of our well-being due to technology, whether biological or environmental or every aspect of our digital lives. The significance of that is that most of the information relating to that broader aperture of national security is open source information. It's not classified information; it's not dug out of Russia's secret networks. That leads to the second reason it is pervasive: with the digital age, as we all know, comes much more data to process. The sheer volumes of open source data that will be needed for this wider definition of national security are far greater than any analyst, even at the biggest security or spy agency, could analyze. We need artificial intelligence to make sense of it, to analyze it. Data collection methods are also going to need to be upgraded to deal with this, and it's going to require really unprecedented levels of cooperation between the national security sector and the private sector. That kind of collaboration, and what the intelligence community needs to do itself, raises all sorts of important ethical questions about how to collect and process this vast amount of data.

Which is why I say it's problematic. It's imperative to use artificial intelligence quickly, comprehensively, and efficiently, precisely at a time when the tool is evolving in unforeseen ways and having applications we haven't yet forecast.
And yet, at the same time, we don't really know what we want in privacy. I'm sure the other panelists will comment more on that. Both what we want to do and the limits we might impose on how we do it sit in very uncertain territory. We're trying to measure both parts, and we don't really have a good yardstick for either; we're using a spandex tape measure, if you will. So putting this criticality, pervasiveness, and problematic nature together means we have a lot to discuss, and with that I'll turn it back over to the panelists to pick up on some of those threads.

Thank you very much, Glenn, for giving us this extremely important big picture. I've taken a few notes, as you've seen. We've talked about competition and the need to stay on top, and about the need to collaborate with private companies, specifically around the issue of data collection. I would like to go to Riana. We briefly touched with Glenn on the issue of privacy. Can you tell us a bit more about the implications for the Fourth Amendment, of course, but also for other rights, including in criminal prosecutions, when it comes to the use of AI?

Yeah, sure. Good morning. Thanks for having me here. I think we're going to end up talking a lot today about the Fourth Amendment ramifications of AI-based, algorithmic tools in criminal investigations. One clear area of concern that courts seem to be thinking about is whether AI can give rise to probable cause for the issuance of a warrant at the investigatory phase, given that we don't necessarily understand how a tool was designed, how an algorithm works, what the inputs were, how good the data was, or what the parameters are that are set within it. Then, after you move past the investigatory phase and into the prosecution phase, these tools can give rise to concerns under the Fifth and Sixth Amendments: the possibility that a tool will encode and reproduce biases in a way that evinces unlawful discrimination raises equal protection concerns, and defendants have the right to a fair trial and to confrontation of the evidence and witnesses against them, which is strained to the extent that AI is not the sort of evidence we're used to in terms of being explainable. It can even raise Eighth Amendment concerns, to the degree that excessive bail, excessive fines, and punishment out of proportion to what is actually appropriate to a particular case could be recommended by a faulty algorithm. So I think we're going to talk a lot today about the critical role of judges, and of defense counsel as well, in standing up and more critically examining how these tools work, in order to avoid infringing upon criminal defendants' constitutional rights.

That's extremely important, and thank you so much for reminding us that court proceedings are also affected by the use of AI. We've certainly heard some examples in public discussions, some quite chilling examples, and I would like to follow up on that subject with Aziz. Once again, welcome. Since we are talking about court proceedings: how do constitutional ideas of equal protection and due process actually translate, or perhaps fail to translate, when one moves from human to AI-dominated modes of decision making?
Well, thank you so much, Julie, and thank you for your terrific framing and moderation of this debate, and thanks to all of the institutions who have put together this important panel; I'm really appreciative of being folded in. What Julie has asked me about is the way two values that in the U.S. constitutional context are embedded in the text of the Fourteenth Amendment, equal protection and due process, apply to, say, the criminal justice and national security issues that Glenn has described, and whether the introduction of AI tools, rather than human judgment, changes the way we should think about equal protection and due process. I'll focus on equal protection.

To start off, it's important to say that both equal protection and due process issues are rife across the domain of criminal justice, where race is obviously an important issue, and of national security, where questions about ethnicity and about the basis for targeting arise for domestic security forces, the police or the FBI, who conduct counterterrorism investigations domestically, but also internationally with respect to the detentions that are still going on at Guantánamo. Equal protection and due process concerns are often raised or available even if, under current U.S. law, litigants or claimants may not have a procedural vehicle to press those rights, and that's something I think we're going to come back to later in the discussion.

So how do these rights translate into an era where, at the tip of the spear, it's a machine making a judgment using some ML technology, tool, or process rather than a human being? Consider our legal regime for equal protection. Under present equal protection doctrine there are really two rules that coexist. First, it's not permitted for government to act on the basis of racial intent. Second, and separately, it's not permitted for government to act on the basis of a racial classification; that's the issue in cases concerning affirmative action, like the Harvard case before the Supreme Court at the moment. Notice that the concerns often raised about AI tools, particularly in the criminal justice context Riana mentioned, are concerns about race, but they are not concerns about either intent or classification. For example, many of the concerns pressed against predictive tools in the bail context involve the disproportionately high number of false positives observed within Black populations in comparison to white populations. Under current equal protection doctrine, that kind of disproportionality is not a concern.

So how well do the ideas available under equality law apply to the machine learning context? I think the answer is: not very well. Take first the question of intent. It's rarely the case, in the design and implementation of machine learning tools, that a particular individual acts with the quintessential invidious intent. Rather, historical experience suggests that the designers of artificial intelligence tools will often neglect, or simply be ignorant of, the different experience of a minority group, or of women, both groups that are not well represented among programmers. As a consequence of that negligence or inattention, that failure to think hard about the kind of historical data being used to train an ML tool, you have serious negative consequences emerging.
The concept of impermissible intent in constitutional law does not give us any traction with respect to that issue. Now think about the issue of classification. The Supreme Court in the last decade has become increasingly skeptical of the use of race classifications across a wide range of areas, including the criminal law domain. But why, or when, should we care about race classifiers when they're used by machine learning tools? Does the fact that a classifier uses race as a feature of the training data make it impermissible under the equal protection clause? It's hard to see why that should be the case. We know already that the failure to include a trait like race or gender can lead to serious and large error rates that affect the marginalized, subordinated group: if race turns out to track something that is in fact real in the world, and you don't account for it, you get more false positives or false negatives, as the case may be. We also know that a rule barring a machine learning designer from using race as a feature may well have very little effect, because there are so many things in the world that are correlated with race. The famous example is the Amazon hiring algorithm, which was aimed at hiring engineers and which singled out résumés with the names of women's colleges on them and threw those out. Women's colleges were highly correlated with women, and women were rare among the historical pool of engineers being hired; therefore, the machine threw out CVs bearing the names of women's colleges.

So our equal protection doctrine is woefully underequipped to deal with a world in which machines using ML or AI technology are ranking and classifying individuals, and we need to start thinking hard about another model of equality, one that focuses on how our governmental decision-making processes carry forward and entrench patterns of historical disadvantage. I think that's a good place for me to stop, but I'm happy to come back in questions to the due process issue.
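To make the proxy problem Aziz describes concrete, here is a minimal sketch in Python, using synthetic data and hypothetical feature names (it assumes numpy and scikit-learn are installed, and is not the Amazon system or any real bail tool): even when the protected attribute is withheld from the model, a correlated proxy reproduces the disparate false-positive rates he mentions.

```python
# A minimal sketch of proxy discrimination: synthetic data, hypothetical
# feature names; illustrative only, not any actual deployed system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Protected attribute (never shown to the model).
group = rng.integers(0, 2, n)

# A proxy strongly correlated with group membership, like the name of a
# women's college on a resume, or a ZIP code.
proxy = group + rng.normal(0.0, 0.3, n)

# Historical labels that encode past disadvantage: at equal merit,
# group 1 was flagged "high risk" more often.
merit = rng.normal(0.0, 1.0, n)
label = (merit + 1.0 * group + rng.normal(0.0, 1.0, n) > 0.8).astype(int)

# Train WITHOUT the protected attribute: only merit and the proxy.
X = np.column_stack([merit, proxy])
pred = LogisticRegression().fit(X, label).predict(X)

# False-positive rate by group: the disparity survives the omission,
# because the proxy carries the group information back in.
for g in (0, 1):
    negatives = (group == g) & (label == 0)
    print(f"group {g} false-positive rate: {pred[negatives].mean():.2f}")
```

Under this stylized setup, the printed false-positive rates differ sharply by group even though the model never sees the group variable, which is exactly the pattern that current intent and classification doctrine fails to reach.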
That's absolutely fascinating, Aziz, and we see two sides here. On the one hand, Glenn reminded us of the world we're living in, with fierce competition over the adoption of AI to respond to new threat models; on the other hand, we have systemic difficulties and frameworks that are probably not adapted. We'll get back to that, Aziz, because my next question will be what to do about it. Glenn, I would like to come back to some of your initial remarks, including the challenges raised for the Fourth Amendment when we decide to rely increasingly on AI in our societies. We've relied a great deal on the Fourth Amendment to address concerns of civil rights and privacy thus far in the digital revolution. Why can't we continue to do so with AI, or can we? I wanted to ask you.

Okay, thanks, Julie. That follows very naturally from Aziz's comment about the inadequacy of the equal protection framework, as he put it, to help us navigate the proper use of AI. And I think we'll see the same thing, as I'll discuss in a minute or two, with the Fourth Amendment, which after all is the most fundamental element of our Constitution relating to privacy. From a constitutional law point of view, it is the source of our notions of privacy, even though, as we all know, in the 54 words of the Fourth Amendment there is no mention of privacy. The amendment does, however, provide important guidelines, rules, and boundaries for the government in undertaking surveillance and searching for information, as well as in querying and analyzing that information; all of that is implicated by the Fourth Amendment. But the amendment was adopted, of course, in 1791, before the digital age, and it's an inadequate compass to guide our society through the consequences of technology. In particular, it doesn't really give us a guideline for the notions of privacy as we need them today. And of course, the amendment applies only to the government, not to the private sector. I'm not advocating that it apply to the private sector; I'm simply observing that the vast amount of data about our personal and commercial lives these days is in the hands of the private sector, not the government, and yet there's no comparable limit on the private sector. We don't even have comprehensive privacy legislation in the United States that would address this.

So as we continue to forge ahead in the adoption of new technologies, I think we really haven't confronted, as a U.S. society, what it means to have privacy in the digital age. Now, with other technologies, whether railroads, telephones, or electricity, regulation lagged as they developed (and they took a few decades to really become impactful), but we ultimately figured out how we wanted to regulate them and what the societal norms were. We were able to reach what is, for our society, an appropriate balance between regulation, between public and private, and over how we want the technology to behave in our hands. We haven't done that for the digital age, because it's basically only about two decades old; take your pick, we can argue over its exact start, but it's quite recent.

The principal case dealing with the Fourth Amendment in this context, the most recent one of course, is Carpenter v. United States, decided just a few years ago. While it does provide guidance in this area, I think it actually shows how little guidance the Constitution, through the Fourth Amendment, is able to provide here. After all, nine justices produced five separate opinions in that case, and some of those opinions had very different conceptual ideas about why the case should be decided one way or another. And by the very nature of our judicial system, which doesn't allow for advisory opinions, judges are forced to deal with the particular set of facts before them, a particular technology. Because the technology is evolving so rapidly, Fourth Amendment cases are often expressly rooted in the particular technology before the court; the judges say so themselves. Indeed, the Chief Justice in the Carpenter opinion expressly said: this is a narrow decision. It's a narrow opinion; it applies to this particular case of tracking cell phone geolocation data for more than seven days.
It doesn't necessarily say anything directly about anything else. We can read all sorts of things into it, but that's just individual speculation. So the problem of a case-or-controversy regime, in which the judicial decision and its rationale are based on the particular facts before the court, is acute in an area where technology is rapidly developing. That's not a problem in, say, contract law or tort law, where the principles enunciated in a particular case aren't limited to just that case. No one says that a contract ratification case announces principles limited only to contracts printed on blue paper, which happened to be the subject of that particular case; the principles are generally applicable. Same with tort law: we can apply negligence concepts from one particular fact pattern across an entire range of fact patterns, and it feels right, intuitively correct, internally and intellectually consistent. That's not true when the technology keeps changing. These decisions are inherently backward-looking, which feels like the wrong approach when addressing new technology.

I give a guest lecture, as Aziz knows, once a year at the Harris School at the University of Chicago, and at the beginning of it I give the students, who are all really sharp and come from every side of the political spectrum, the fact patterns of several Supreme Court Fourth Amendment cases without the decisions, and ask them to rule one way or the other. About half the students come out the way the particular case did; the other half, equally bright, come out the other way, suggesting that in this particular area the Fourth Amendment doesn't really give us the guidance it needs to in the digital age. I'm not suggesting we abandon it, and I'm not suggesting we weaken its implications at all. I'm simply pointing out its inadequacies and our need to address artificial intelligence rulemaking and norm-making in a different way.

Thank you very much, Glenn. Riana, I'd like to come to you on two aspects. The first is related to what Glenn was just saying: do you think the privacy framework, the Fourth Amendment's reasonable expectation of privacy, as you rightly put it when we prepared this session, is not adapted? Do we have to give up on it, basically? And the second, still related to something Glenn mentioned about contract: are there other bodies of law that come into tension with the constitutional issues raised by AI? I thought you might want to bring up something on that. Thank you.

Sure. To pick up on what Glenn was talking about: we have often found that the Fourth Amendment framework we have become accustomed to doesn't necessarily keep up neatly with technological advances. It's been a central preoccupation of the courts to ensure that technological advances do not shrink the amount of privacy we are traditionally entitled to expect, going back to the times of the founders, when obviously the world looked quite a lot different than it does now.
When we got the formulation of the reasonable expectation of privacy test, in the late 1960s, within 10 or 12 years we were already starting to see how it might not fit a modern world, with the development of the third-party doctrine cases of the 1970s regarding bank records and phone metadata. Even amid a dawning realization that a digital age was coming, with the use of computers, and that we would have little occasion to decline to participate in a thoroughly mediated world, the Supreme Court limited the applicability of the Fourth Amendment warrant requirement in situations where people hand over information to a third party in order to get the business of daily living done, basically. The response we saw from Congress to those 1970s cases, following what looked like a greater trend toward privacy-protective cases in the 1980s, was to pass a comprehensive framework for the protection of our electronic communications in the digital age. That collection of statutes is now itself showing its age and fraying around the edges, some thirty-odd years after the Electronic Communications Privacy Act reorganized and extended the law under President Reagan.

So I would tend to agree that if we want the level of privacy protection that, as a society, we can hopefully come to some agreement on having, and to the degree that courts may not consistently hold that there is a warrant requirement, or that other constitutional doctrines can adequately cover the different contexts in which we see AI tools arise, it may be necessary to get out ahead of those issues, which may already be galloping ahead of us, and to pass laws to codify and clarify what people's rights and expectations should be in these settings. I think we have seen time and again that Congress has to step in at some point, or that other legislative bodies do, because otherwise it is entirely possible under the reasonable expectation framework that we will gradually see a diminution of privacy. We've seen plenty of scholarship, from scholars such as Orin Kerr, on equilibrium adjustment theory: the idea that as technology advances and society changes, the Fourth Amendment is equipped to keep up. But I think we've also seen a line of cases involving the use of technology, such as the Kyllo case, coming up on 20 years ago now, demonstrating that as a technology becomes commonplace which was once highly sophisticated, expensive, and rare, that change can affect how people subjectively and objectively experience and expect privacy as the technological environment in which we live changes. And so to the extent that Katz and Kyllo and their progeny leave room for the diminution of privacy, for example in public, where I think we really are confronting a need to totally reinvent the doctrine of how much privacy we can expect in public places given the advent of AI tools that can collect a large amount of information about us and synthesize it together, it does seem that it is now incumbent upon legislators to try to find some way to act. I think we can talk about the difficulties of what that action should look like.
But I do at least want to note that in some jurisdictions, if not at the congressional level in D.C., we have seen efforts to set forth what those privacy and other protections ought to look like at the state level. For example, in California, where I am, we have CalECPA, passed a few years ago, which regulates state law enforcement with regard to things like requiring a warrant to get location data. Instead of having to run every single variant up the flagpole, as Carpenter has caused us to do (okay, what about six days of historical location data? what about real-time location data, prospectively?), the California legislature said: we're just going to pass one ring to rule them all, basically. Similarly, we've seen efforts in some municipalities and in one or two states to regulate the use of AI tools in contexts that impact people's livelihoods, their rights, and their lives. But it's questionable, I think, whether we will see a similar move at the federal level, or even whether those local and state laws are going to do what they set out to do: provide better outcomes by helping human decision makers incorporate these new tools into their workflows. But I think we will be able to talk about that more going forward.

To quickly address your other question: we've also seen, especially in the criminal prosecution context, how the use of non-disclosure agreements and contract terms like those Glenn was alluding to can impede the exercise of somebody's Sixth Amendment right to a fair trial, to the degree that even courts and police agencies themselves may be prohibited, by their contract with the vendor from whom they buy these tools, from disclosing or explaining how the tools work. That gets in the way of criminal defendants who might seek to challenge a tool's accuracy and reliability, or, as Aziz was saying, to look for evidence of discriminatory intent in its design, and to test those issues in court. We've seen some decisions, luckily, that have said no, you can't use trade secret law or contract theories to trump these constitutional rights. But I think we will only continue to see these issues come up again and again.

Thank you so much, Riana. At this stage in the conversation we've grasped how poorly adapted many of the current frameworks probably are, and you've mentioned some existing procedures, including warrants, that could alleviate that. But I wanted to ask Aziz, who touched on due process earlier in this conversation: do the existing procedural mechanisms, including Title III warrants and the FISA framework, especially on national security matters, actually provide useful frameworks for enforcing constitutional norms when it comes to AI tools for inference and prediction? And what sorts of constraints should be applied if these are not working?

Thank you, Julie. We've been talking until now, and Glenn and Riana have elaborated carefully, about rights. But rights are claims against the government that the government ordinarily does not wish to see vindicated, and rights are therefore inefficacious without remedies, without some mechanism to enforce them. Julie's question tees up, I think, two different points about remedies.
The first is that the current system of remedies largely fails to protect individuals against the government with respect to privacy, discrimination, and other serious human or constitutional rights violations. The second is that even if it worked better, it would be conceptually ill-suited to the AI world. Let me elaborate both points.

In the ordinary context of government acting coercively against individuals, there are large swathes of governmental activity where it is infeasible to obtain a remedy of any sort against the government. For example, the Supreme Court over the last two decades has dramatically narrowed the availability of after-the-fact tort suits against the federal government, called Bivens suits. The result is that it is extremely hard to challenge the government's action on the ground that it was unconstitutional if you're seeking damages. At the same time, the Court has made the remedy of excluding evidence obtained illegally under the Fourth Amendment available only in a narrowing gyre of cases. The Court has created, for example, an exception for instances in which the government acts in good faith, and the government acts in good faith if there is no prior opinion stating that what the government is doing is unlawful under the Fourth Amendment. The problem with a good-faith standard is that litigants know they won't win anything if they bring an exclusionary motion, because of the good-faith doctrine. This saps the incentives of litigants at the cutting edge of the law to challenge Fourth Amendment violations. The absence of challenges leads to greater uncertainty in the law, and greater uncertainty means that the good-faith standard swallows the rule when it comes to novel questions of Fourth Amendment law. Technology, of course, raises among the most important novel questions of Fourth Amendment law. Therefore, at the technological margin, the Fourth Amendment is sapped of effectiveness by the good-faith doctrine.

Or consider the civil remedy available under the FISA statute. That remedy is up before the Supreme Court in a case called Fazaga. The government's submission in that case is that when a plaintiff invokes the civil remedy for unlawful surveillance under FISA, the government can assert the state secrets privilege, and that this immediately results in dismissal of the case. Accepting the government's position, which the Roberts Court seems to me likely to do, will in effect eliminate the possibility of civil remedies under FISA. So one should start by recognizing that the landscape of remedies for constitutional and human rights in the United States is exceedingly impoverished. I should confess I have a book on this point, The Collapse of Constitutional Remedies, coming out later this year.

The conceptual point is this. All of those remedies turn on a one-to-one correspondence between, on the one hand, the person against whom an intrusion is lodged in the first instance, and, on the other, the remedy or relief that is available. The person who is searched is the person who is able to seek a remedy.
And that one-to-one connection comes apart, for reasons we have already touched upon, because inferences can be drawn from large pools of data that do not necessarily include your data, but which can be used, together with publicly available facts about you, to draw inferences against you. The one-to-one correspondence between intrusion against person A and harm to person A breaks down. The fundamental conceptual framework that underpins the warrant requirement, the exclusionary remedy, and the constitutional tort doctrine of Bivens does not work in the AI context, and a new framework is needed. Maybe we'll talk about that, but it's a provocative place to stop.

It's extremely interesting, and of course I'm interested in Glenn's thoughts on this, especially as a former NSA general counsel. You're not speaking as such, but your experience will help us better understand your point.

Before going to the NSA, I didn't have a full appreciation of why, in many cases, it was important for the government to keep things secret, to keep them classified. Only once I was really inside, in a classified environment, did I see the harm that occasionally resulted from leaks of classified information. I'm not being a hawk on this point; I recognize there are competing arguments. Aziz made some very good points, and I completely understand and appreciate his comments on the absence of remedies. This is a very tough problem, with good arguments on both sides, which is why it continues to be an enduring problem. If it were easy, we would have solved it. I'd add to Aziz's excellent analysis the issue of standing. Very often there are plaintiffs who say, I think I might have been caught up in some surveillance or some kind of illegal activity, and the court says, well, in order for you to have standing to sue, you have to prove that you were in fact aggrieved, in fact injured. And the plaintiff says, well, I don't have access to that information, because it's secret. So that's the bind, and obviously, Aziz, you've touched on that as well.

The challenges in this area are very difficult. It sounds like we have a bit of a consensus that, between the limitations of the equal protection concept, the limitations of the Fourth Amendment, and so on, maybe the Constitution isn't providing the full, robust intellectual framework that we need, and that legislation is needed, presumably federal, since we certainly don't want a patchwork of 50 state laws in this area, along with self-regulation. I think that's where we're going to go. Julie, more specifically to your question: if we look at the laws on the books right now in the national security sector that relate to surveillance and searches, there's a set of laws about wiretapping and the circumstances under which law enforcement agencies, principally the FBI, can obtain information about telephone calls, a little bit about the internet, but not so much.
The same goes for other radio interceptions. The principal statute governing the ability of the intelligence community to undertake surveillance of United States citizens wherever they're located (because the Fourth Amendment applies to U.S. citizens whether they're abroad or on domestic soil) is the Foreign Intelligence Surveillance Act, which is very mechanical, rooted in the type of collection rather than in what's happening or how the information is being used. I'll just make a quick comment here: we saw the fundamental issue of how the Fourth Amendment applies in this digital age arise a couple of years ago, when Congress adopted some restrictions on querying data, on what it takes for a law enforcement or intelligence community analyst to go through data and look at it. Does the mere fact that a machine is sorting through data, looking for a name, a characteristic, or something else, and happens to come across "your" data in electronic form, mean that you've been searched? Does it mean there's been some surveillance undertaken? This fundamental question of what it means to have a search in the digital age is absolutely critical to how we're going to apply artificial intelligence. I would simply say that nothing in FISA or our current laws really addresses this issue, and that is a major gap we're going to have to address. But first, as I keep coming back to, we're going to have to reach intellectual agreement on what it means. Are we really violated when a machine looks over a computer record? That's the issue: what is privacy?
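To picture the querying scenario Glenn raises, here is a minimal sketch in plain Python, with hypothetical records and a hypothetical selector (illustrative only): software scans every record, and an analyst only ever sees the flagged matches.

```python
# A minimal sketch of automated selector matching: hypothetical data.
records = [
    {"id": 1, "text": "wire transfer discussed with ACME FREIGHT"},
    {"id": 2, "text": "birthday plans for Saturday"},
    {"id": 3, "text": "meeting notes, attn ACME FREIGHT logistics"},
]
selector = "acme freight"  # a name, number, or other identifier

# The machine "looks over" every record; only matches are surfaced.
hits = [r for r in records if selector in r["text"].lower()]
for r in hits:
    print(f"record {r['id']} flagged for analyst review")

# Record 2 was machine-scanned but never flagged and never read by a
# human. The open legal question is whether its author was "searched."
```

The sketch is trivial by design: the doctrinal puzzle is not the code but whether the scan of the unflagged record counts as a search at all.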
What exactly do we mean when we say privacy? I was talking about this with friends recently, and expectations can indeed be extremely different from one citizen to another, let alone from one nation to another. Thank you so much, Glenn. Riana, are you on the same line, that the guardrails are not adapted? What should be done, then? What's your take?

The purposes for which information is collected and used in the national security and intelligence context can be very disparate from those in the criminal investigation and prosecution context, and we've seen how the same secret sources and techniques may get used on either side of that line. This is where we come back to those questions of due process, fair trial, and confrontation rights: if a particular tool is in use on the national security and intelligence side of things, the government will not want it disclosed when it is also being used to finger somebody in a criminal investigation. We have seen cases before, not necessarily involving AI, of novel technological techniques used to locate people who had hidden their true location, using a tool that relied on the exploit of a flaw in the browser the person was using. That tool was used against hundreds of people who were prosecuted for visiting one particular Tor hidden service, and their true locations were revealed. When the people being prosecuted for their alleged offenses tried to challenge it in court, the exact workings of how the tool unveiled their true IP addresses, despite their use of the Tor browser, set up a problematic conflict in a lot of these cases: on one side, defendants' rights to understand the tool that had been used against them and how the evidence against them had been collected, whether it might be inaccurate, or whether the exploit might even have opened up additional flaws or altered data that should not be admitted against them; on the other, the sensitivity of the tool and technique, where eventually the government classified the exploit so that it would not get disclosed. That led to additional discussion of whether protective orders are adequate, whether defense counsel needs to obtain a clearance, as is done under CIPA, and, generally, whether the government can go forward with these prosecutions if the court rules that the tool has to be disclosed. In at least one case, after the court ruled that the tool was privileged under the law enforcement privilege protecting sources and techniques, but was also material to the defense, the government was put in the quandary of having to dismiss the case against somebody accused of a heinous crime, because it deemed it better to drop that particular case than to be forced to disclose how the tool worked. I think we will continue to see this come up with AI tools: the ability of investigators to show their work, and to disclose that information to the court and to the defendant pursuant to their constitutional rights and the criminal procedure rules, will keep coming into conflict with secrecy wherever the same tools and techniques are also useful on the national security and intelligence side.

Thank you very much, Riana. We are right on time to take a minute to screen the code for the CLE credit. Here it is; we'll leave it up for a minute before we move on to the next theme and discussion. Let me read it as well, to help you: Security, Privacy and Innovation: Reshaping Law for the AI Era, virtual symposium, fall 2021, and the course code is RCLS 9757. I'm sure that was totally useless, because everyone is taking a screenshot, but I wanted to read it. Okay, I was actually asked to read it, so it wasn't that useless. Fantastic.

We're slowly transitioning toward the Q&A, but before we do that, I have some additional points I was hoping we'd have a chance to discuss. Aziz, we've talked about the challenges and the probable inadequacy of the existing frameworks, but Glenn, in his introductory remarks, also mentioned the necessary relationship between the government and private companies working together to tackle some of these challenges. So I wanted to ask you: how does the introduction of AI change the balance of power between the state, large firms, and individuals, and how should we conceptualize the problem of power here, in terms of rights, in terms of principles, and so on? Thanks, Julie.
I think this is a nice place to maybe end our discussion, because your question raises the possibility that, in thinking about the way AI is influencing the relationship among the state, powerful digital firms, and individuals, we should not be analyzing the problem through the lens of rights, which are very much focused on the relationship of government to isolated, discrete individuals. We should instead think about power, and we should think about the question of power in a more fluid and non-binary context in which there are multiple actors that can exercise power in ways that are sometimes complementary and sometimes offsetting.

Why do I say that? Because one of the principal effects, socially and institutionally, of the most recent spate of machine learning tools, let's say starting from the work of Hinton forward, is to dramatically raise the value of large pools of data. The ability to acquire large aggregates of data (there's a scale dimension there), and the ability, the technical expertise, to extract from that large pool of data a prediction tool that can be applied out of sample, suddenly becomes valuable in a way that was previously not the case. This technology comes upon the scene in a social context in which the principal holders of data are the government and a small set of private companies acting on the basis of their commercial incentives, perhaps not always with the interests of consumers or citizens in mind. Both the government and that small category of companies benefit from economies of scale, economies of technical expertise, and a certain form of opacity in the exercise of power that comes from the imbalance of knowledge between consumers and citizens on the one hand and the government and large firms on the other.

So we are living in a different world than we were, say, 30 or 40 years ago. There is a new form of power, and that power is not concentrated solely in the government; the threats posed by that power are not concentrated solely in the government; they are dispersed across large firms and the government. I think the core question for this period is how we think about the plurality of ways this new digital, inferential power operates, and about the threats it poses to our core moral, legal, constitutional, and normative values, when those threats come from more than one place; when the way government exercises digital power can sometimes piggyback on, and redound from, its interactions with private entities; and when private entities and the government can sometimes offset and check each other, as we saw, for example, with the dispute over searches of the Apple phone a few years ago. We have developed two separate conversations: one about what Shoshana Zuboff calls surveillance capitalism, and one about what Bruce Schneier calls the Goliath that is the state that has our data. What we have failed to do as a community, and maybe I speak here of legal academics in particular, is to think about how these dynamics of power interact, overlay, and either reinforce or undermine each other. And I think that's a conversation that is well worth having.
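A minimal sketch of the inferential power Aziz describes, with synthetic data (it assumes numpy and scikit-learn, and the feature relationships are invented here): a model trained on one pool of people yields inferences about a person who never contributed any data, which is exactly where the one-to-one correspondence he discussed earlier breaks down.

```python
# A minimal sketch of out-of-sample inference: synthetic data only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 10_000

# Training pool: publicly observable features (say, location and
# purchasing patterns) that happen to correlate with a sensitive
# attribute (say, a health condition). Purely illustrative numbers.
public = rng.normal(0.0, 1.0, (n, 3))
sensitive = public @ np.array([0.8, -0.5, 0.3]) + rng.normal(0.0, 0.5, n)

model = LinearRegression().fit(public, sensitive)

# A new person, absent from the training pool: the model still yields
# an inference about them from public facts alone. The link between
# whose data was collected and who is affected has come apart.
new_person = np.array([[1.2, -0.4, 0.9]])
print("inferred sensitive attribute:", round(model.predict(new_person)[0], 2))
```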
Thank you so much, Aziz. I feel this is where we often find ourselves these days when it comes to innovation and society: we need to have this conversation, but we fail to do so, and afterwards we're in a situation where the harms are happening, the harms are there, and we don't know where to turn. But thankfully we have platforms such as this one to motivate ourselves to look further into these issues. As we go to the Q&A, I encourage those of you in the audience: please do not hesitate to ask any questions you may want to discuss with the panelists.

I wanted us to look at what's coming next. What is the future going to look like in this environment? My question is this: various countries have announced national plans for adopting, but also for dominating competition in, AI; you touched on this earlier today. What challenges does this represent for U.S. national security, not only from a constitutional perspective but in its other ramifications?

Thanks, Julie. Your question goes to a key point that was very much part of Aziz's comment earlier, about the significant data that is going to be amassed not only by the government but by the private sector. I just want to spend a second on that and then get more specific on your question. With the advent of 5G, the internet of things, and the increasing digitalization of our world, we can't imagine the amount of data that is going to be amassed in the hands of the private sector in the future; it will dwarf whatever any government is ever capable of. The questions that Aziz, the other panelists, and you, Julie, have raised about how we need to manage this, and whether our laws are currently adequate (it sounds like we have a consensus that our current framework is inadequate), are going to be thrown into high relief when we consider them in the context of national security, for at least three reasons.

On a very simplistic level, our intelligence community is now going to have far more targets, far more areas of interest, that it needs to keep track of for our national security and national well-being. We now need to worry about everything from crop genetics around the world, to climate change, to shipping logistics, to outbreaks of global health concerns, and so on. We're going to have many, many more target areas that we need to keep track of in order to understand them better for our national security.

Second, other countries are using technology, in particular artificial intelligence, in breathtaking and novel ways. We've certainly all read reports about how China uses artificial intelligence, including facial recognition and facial characterization software, against the Uyghur minority in its far western provinces, and how that raises human rights concerns.
We are simply not fully appreciating what it means for a country like China to apply a whole-of-nation effort, embracing its private sector, its state-owned enterprises, and the government itself, toward one goal, in this case dominance in AI. They've said so; it's not a secret. It's stated in the Chinese Communist Party's reports that they want China to be the dominant player in artificial intelligence within a couple of decades, as well as in quantum computing and other related technologies. At a minimum this poses competitive issues for us, and at a maximum it may pose existential national security concerns.

And then finally, the third reason, in addition to the fact that our adversaries are galloping ahead while embracing a whole-of-society approach, is the point we've been discussing: we need to address these questions and implement artificial intelligence in a prudent, wise, sound way precisely at a time when it is evolving and innovating very rapidly, and we don't have a yardstick or guardrails to show us the best, most appropriate, and most prudent ways of conforming its use to our national standards and values. This will require extensive coordination and collaboration with the private sector, again something with which we in the United States don't have a lot of experience; we have a very sharp dividing line between the private sector and the government. Our legal system does not look at this at all the way Europe does, and certainly not the way countries like Russia and China do. This is going to raise some very profound questions. I'll give you one tiny example and stop there. There has been some recent concern about whether the government, through its spy agencies, can purchase data on the open market pertaining to individuals' locations, shopping habits, whatever: information gathered from open sources by private companies in absolutely legal circumstances. Is it okay for the government to simply purchase that data and then run artificial intelligence analysis on it? Or does that implicate privacy concerns because it's being done by the government rather than the private sector? All important questions; no easy answers.

Thank you so much, Glenn. Riana, briefly, on the future, and then we can take some of the questions that have been asked. Thank you.

Sure. I think there are at least three things that policymakers need to concern themselves with. One, picking up where Glenn left off, goes back to my earlier point about whether we need to rethink what privacy protections people have in public, where traditionally there has been a very low level of protection, but where, as Aziz was cogently explaining, the large volumes of data that can now be gathered about us may change the power balances and call for a change in doctrine. Another is whether there are some applications of AI that should simply be off limits, whether to governments or in private hands. There have been many challenges to the use of facial recognition, for example, arguing that even if we put quick guardrails or laws in place, that's not enough: there are going to be some domains where this should just not be used at all. And if we agree that we should draw a bright line somewhere, which applications should it cover?
And the third is a tension the AI and ML community is trying to deal with: the tension between the explainability or interpretability of an AI model or algorithm and its accuracy, where something that is more accurate may be harder, or even impossible, for the very people who built it to explain. If the whole point of deploying AI technologies is that we expect them to be effective, which hopefully encodes an expectation of accuracy, how do we deal with these conflicting goals of wanting or needing an explanation of how they work, for due process concerns, for probable cause analysis? The very models that are more explainable may not work as well as intended. That's an area of computer science that is going to continue developing, and we'll hopefully see whether that tension can be resolved. So those are the three things I would point to.
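A minimal sketch of the tension Riana describes, on synthetic, deliberately nonlinear data (it assumes scikit-learn and is illustrative only): the interpretable model exposes its reasons as coefficients, while the opaque model typically scores higher on this kind of data.

```python
# A minimal sketch of the explainability/accuracy trade-off.
from sklearn.datasets import make_moons
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_moons(n_samples=5_000, noise=0.3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

simple = LogisticRegression().fit(X_tr, y_tr)
opaque = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# The simple model can be "confronted": each coefficient is a stated
# reason for its decisions. The ensemble of boosted trees offers no
# comparably compact account of itself.
print("explainable coefficients:", simple.coef_[0])
print("logistic regression accuracy:", round(simple.score(X_te, y_te), 3))
print("gradient boosting accuracy:", round(opaque.score(X_te, y_te), 3))
```

On data like this, the opaque model tends to win on accuracy by several points, which is the due process dilemma in miniature: the better predictor is the one that cannot explain itself.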
I'll go ahead and read other questions that were asked in the chat. One attendee would like to hear your perspective on the potential for altering AI data in criminal investigations, giving the example of recent allegations that a gunshot-detection vendor altered gunshot data at the request of law enforcement. There's another great question, and I'll let you choose: what role can or should regulatory agencies play in protecting consumers? Alan Rao asks whether the FISA Court should appoint an amicus to serve as a technology expert to help explain and assess algorithms and machine learning. And also from Alan, just as a remark: the Supreme Court in Whalen v. Roe in 1977 rejected extending Roe v. Wade to informational privacy. But yes, there are questions on the role of regulatory agencies. What's your perspective on altering data in criminal investigations, and on the FISA Court appointing an amicus? Aziz or Glenn, shortly, possibly. I mean, briefly.

I'll take the FISA one, but why doesn't Rihanna go ahead with the criminal one? It sounds like that's an area she would focus on.

Yeah, I think this definitely brings up the points I was making earlier about how important the Confrontation Clause is going to be when we see the use of AI in court contexts in criminal prosecutions. To the degree that somebody has altered the data retroactively, I'm not sure how that would affect a tool that had a particular data set at point A if you alter it at point B, after it had detected or inferred something in between. But it goes to the importance of being able to make witnesses available, to question them about the data that goes into a particular tool and how the tool works, and to try to detect these kinds of potentially malicious tampering, not just on a case-by-case basis but also, as Aziz was talking about earlier, with regard to disparate impact and the need to show discriminatory intent for equal protection purposes. I think it is also going to be important to continue to hold vendors and the agencies that use these tools accountable, so that we are not only querying the data and the tool but also querying the humans who are involved with the tool, to ensure that this kind of miscarriage of justice does not happen.
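Rihanna's point about detecting retroactive alteration has a well-known technical complement: tamper-evident logging. Here is a minimal sketch in Python using only the standard library; the record fields are hypothetical, and this is not a description of any actual vendor's system:

    # A minimal sketch of tamper-evident record keeping via a hash chain.
    import hashlib
    import json

    def build_chain(records):
        """Each digest commits to the record and to every earlier digest."""
        digests, prev = [], "genesis"
        for record in records:
            payload = prev + json.dumps(record, sort_keys=True)
            prev = hashlib.sha256(payload.encode("utf-8")).hexdigest()
            digests.append(prev)
        return digests

    def verify_chain(records, digests):
        """Recompute the chain; any retroactive edit changes all later digests."""
        return digests == build_chain(records)

    # Hypothetical detection records, not real vendor data.
    shots = [{"sensor": 12, "time": "02:14:07", "label": "gunshot"},
             {"sensor": 12, "time": "02:14:09", "label": "gunshot"}]
    digests = build_chain(shots)

    shots[0]["label"] = "firework"        # a retroactive alteration
    print(verify_chain(shots, digests))   # False: the edit is detectable

If digests like these were published or escrowed with a third party at the time of collection, a defense expert could later check whether the data presented in court still matches what the sensors originally recorded.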
Thank you, Rihanna. Glenn, on the FISA Court question?

Sure, just very quickly. Most of your audience is familiar with it, but perhaps some background is warranted. There is a special court, the Foreign Intelligence Surveillance Court. It sits in Washington, D.C.; it is a secret court, the only one in the United States that conducts all of its proceedings in a classified environment; and it consists of judges appointed by the Chief Justice from around the United States to hear applications and matters arising under the Foreign Intelligence Surveillance Act, enacted in 1978. This court looks at very technical questions about the collection of data by the United States intelligence community (the FBI, the CIA, the National Security Agency), evaluates them, and decides whether, in effect, a search warrant is needed. It does not issue a search warrant as such, but something very comparable. In order to do that, the judges really need to understand the technology, and to assist them they have a panel of amici, friends of the court, who can help with both the legal concepts and the technical ones; if need be, they can reach out to particular technological experts for additional advice. I might add that a companion to that is the Privacy and Civil Liberties Oversight Board, a government agency that looks at questions of surveillance, particularly in the counterterrorism context. Several years ago it appointed a chief technologist and advisor to assist the board with some of these technical questions, because, as has been apparent over this past hour, we can't really have a full understanding of our privacy notions unless we understand exactly what the technical aspects are, what is being searched, and what is being surveilled. So I think the court currently has adequate advisors in this regard, but of course it can always use more.

Thank you so much, Glenn. And Aziz, would you like to speak to the role of regulatory agencies, please?

Yeah. I would just note that the circuit courts, which are the lower courts of appeal, have taken different views about Whalen, and there are subsequent cases, such as NASA v. Nelson, that are at least ambiguous on the question of informational privacy. Maybe I'm reading too much into the case law, but I am a little more optimistic on that single point, at least as the law stands. The idea of a regulatory agency for these tools was raised a number of years ago in a really terrific article by Andrew Tutt, who argued that we should have an analog to the Food and Drug Administration for algorithms. To my mind, if it were politically feasible to do something like that, and I should be clear that it is not, what would be warranted is something that aggregates existing technological expertise in the government, particularly the expertise that is driving innovation, and links that expertise and innovation motor to decisions about when and how AI tools should be adopted, under what circumstances, domestically or internationally, and whether certain tools should simply be off the table. Think here, for example, of the use of the genetic manipulation tool CRISPR with respect to human DNA; that is, and should be, off the table. The agency that could do that would probably have within it what is now IARPA, which is part of the intelligence community, and it would probably end up looking a little like either the CDC or NIST, the National Institute of Standards and Technology. That's a great possibility, but I don't see it as a practical political reality anytime soon.

That's perfect, Aziz. And on top of that, you answered the other question about what's off limits. So I would very much like to thank you all again for a very rich conversation. I've taken plenty of notes; I was here to learn, and I have, so thank you so much, and I hope it's the same for the attendees in the audience. Please don't hesitate to come back next Friday; we're going to have another session, and I can't remember what it's about because I don't have my notes anymore, but please come back. Yes, I think that's it. Have a wonderful rest of your day and weekend, and see you soon.