We're going to have a discussion now with two renowned experts on the topic of privacy, AI, and racial disparities, and what's happening in terms of evolving legal principles. Today we're joined by Katherine Forrest. Judge Forrest is a former U.S. District Judge for the Southern District of New York and a former Deputy Assistant Attorney General in the Antitrust Division of the U.S. Department of Justice. She's currently a partner at Cravath with a significant focus on high-tech issues, including those related to AI, the digital environment, and big data. Judge Forrest is the author of a book I commend to you, When Machines Can Be Judge, Jury, and Executioner: Justice in the Age of Artificial Intelligence. It came out this year, and again, I think it's an excellent read. She is a technology columnist for the New York Law Journal and an adjunct professor at NYU Law School. Also joining us this morning is Professor Heidi Feldman. Heidi Feldman is a professor of law at Georgetown University Law Center. She also has an appointment in the philosophy department at Georgetown University. She is the founder and director of Leaders from Law, a training and orientation program for progressive law students and lawyers considering elected office. Maybe training those who will challenge some of us someday. During the last couple of years she has served on the Department of Homeland Security Oversight and Accountability Project Task Force, organized by the Center for a New American Security. She regularly comments and publishes on topics related to technology and privacy. So, just to set the stage, we're going to be talking this morning about both the promise and the hazards of technology: technology that uses data collected on our daily lives both to categorize us and to make predictions about our behavior. We know that this type of technology holds tremendous promise for social welfare, consumer products, health care delivery, and other fields.
It also presents real risks as we are all increasingly surveilled. It was interesting to hear comments from my colleagues this morning, General James and General Tong, about their own personal experiences with that. But this is happening more and more. And we also know that these tools can embed and further deepen systems of bias and oppression. Those are risks that really implicate important legal questions of privacy, due process, discrimination, and equal protection under the laws. My office, like many, has been learning more and engaging more on these really heady topics. Just last week, we called for the immediate removal of race from formulas that health care providers use to estimate kidney function. Recently, the National Kidney Foundation and the American Society of Nephrology came out with information showing that certain formulas, which were baked into the machine learning of these health care provider systems, were falsely making black patients' kidneys look healthier than they actually were, which meant those patients were less likely to be directed into the kind of care they actually needed. It was an instance where systemic racism was embedded in the health care system and then further exacerbated and compounded by the use of race in an actual electronic medical record. It's just one example, really the tip of the iceberg, in terms of the use of predictive algorithms in the health care space. Again, some of these tools may be incredibly helpful in ensuring that patients get the care they need and in reducing the implicit bias that can be inherent in individualized determinations by providers, but others may build in systemic racism and only serve to reinforce cycles of inequity and disparity in the health care field.
So it's important that we engage with one another, that we learn from one another, and that we learn from our esteemed panelists and those contributing to this wonderful conference as we think about these difficult questions. I am grateful to the team in my office: Abby Taylor, our Chief of Civil Rights, and Sarah Cable, who heads our Data Privacy and Security Division and works with many of your teams on these issues. So we look forward to our panel, and I suggest we just dig right in. I'm going to begin with a question for Judge Forrest, and that is: could you give this audience an overview of AI tools, what they are and why they matter? Yes. Well, thank you. Thank you, General Healey, for having me here to speak about issues that are, I believe, among the most important for our communities today. We're in the midst of a technology revolution that is really quite extraordinary, and it's one that's in some ways happening behind the scenes. The example you gave of the nephrology situation is, I think, emblematic of the ways in which artificial intelligence is infusing our daily existence without our even being aware of it. So what is AI? AI is software. A lot of people think of artificial intelligence as having to be robots, or somehow in machines, but it's really just very sophisticated software. It's software that learns, and that, I think, is what distinguishes it from what would otherwise just be characterized as high-tech software. And you'll hear me talk today, throughout this panel, about two very basic principles that go to the issues you've already flagged, and that were, in fact, flagged a little bit by the last panel, of which I just caught the tail end. The first is that humans are the progenitors.
We are the makers, the designers of artificial intelligence, and therefore whatever we come to the table with, our biases, our pasts, our histories, our views of the world, gets embedded into what we make, and that has always been the case. So humans are the progenitors of artificial intelligence. Second, because of the way AI learns, and I'll talk about that very briefly and at a very high level in just a moment, AI has the potential, indeed the great potential, to make the past prologue. In other words, AI, as you'll hear me say, works with data sets. Data sets are a snapshot of history. And what that means is that the past, if we've got a past with, say, structured inequalities, can become prologue, because it is our past that teaches our artificial intelligence what it needs to know to proceed. There are some AI tasks, things like recognizing a cat or a dog, where that's not particularly important. But for other AI tools, where you're trying to determine whether a particular individual is going to get a social benefit or not, or get a loan or not, or is going to recidivate or not, the past being prologue becomes extraordinarily important. So what does AI do? I'll try to do this very quickly. It is software that makes predictions. We see it everywhere around us. It's in the health care field right now, making predictions about whether a tumor is cancerous or not, or whether certain COVID patients might be more or less likely to survive based upon a series of characteristics. It's all over our phones. Every one of us is probably walking around with a smartphone; very few of us aren't. And our phones are full of AI. Siri is an AI tool. The AI in Spotify tells us what kinds of music we might want to listen to, and on and on. AI is in manufacturing. It's in fulfillment, making sure that our tools keep on working. AI is used for customer service.
It's used right now to determine who might be the best employees, who might actually fulfill, for a corporation, its hopes and dreams as to the ideal employee. And right there, I assume many of us can already see the kinds of difficult issues that might raise. AI is also used throughout our criminal justice system, and perhaps we'll talk about that a little later. So AI matters because we're dealing with matters of life and death. We're dealing with issues of liberty. We're dealing with issues of resource allocation. That's happening today. It's not happening sometime in the future; it's not happening five years from now. It's happening with tools in use today. So how does it work? Very briefly, just so we know enough to talk today in an intelligent way: AI is essentially made up of math, math that works with data sets. So think of a recipe. AI is like a recipe for making bread, only it's not bread you're trying to produce. Instead of the best recipe for bread, you're trying to predict recidivism, or resource allocation. The recipe is made up of inputs: flour, yeast, water. And those inputs are selected; somebody decided how to make bread and selected those inputs. And those inputs are weighted: so many cups of flour, so many teaspoons of yeast, so much water. Those inputs don't come from a golden tablet where somebody magically says these are the right inputs to go into a particular AI tool. They're either chosen by the AI tool itself, in a complicated kind of machine learning, or they're chosen by humans. And they can be adjusted, and sometimes are adjusted, by humans. So just very briefly, what does that mean? It means that when AI is actually being taught how to work, it is sometimes using the weightings that the humans have given it.
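To make the recipe analogy concrete, here is a minimal Python sketch of a prediction as nothing more than weighted inputs. The feature names, weights, and numbers are hypothetical, invented purely for illustration; real tools are far more complex, but the principle that someone chose the inputs and their weights is the same.

```python
# Toy sketch (not a real risk tool): a prediction is just a weighted sum
# of inputs, like a recipe with measured ingredients.

def risk_score(person, weights):
    """Weighted sum of the chosen inputs for one individual."""
    return sum(weights[feature] * person.get(feature, 0) for feature in weights)

# A human (or the tool itself, during training) picks these weightings.
weights = {"zip_code_risk": 0.75, "prior_arrests": 0.20, "age_factor": 0.05}

# Hypothetical individual: high zip-code score, no prior arrests.
person = {"zip_code_risk": 1.0, "prior_arrests": 0.0, "age_factor": 0.5}
print(round(risk_score(person, weights), 3))  # 0.775 -- zip code dominates
```

With zip code weighted at 75%, that single input drives almost the whole score, which is exactly why the choice and weighting of inputs matters so much.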
So for instance, a lot of folks these days understand that including race as an input in, say, predicting recidivism is not appropriate. So they take race out, but they might include zip code, and they might weight zip code at, say, 75% or 20% or 10%. And there are serious issues about what weighting is given, why, and who's making those determinations. So AI then works with a data set. It takes these inputs, it takes these weightings, and it uses a data set to teach itself what patterns it recognizes using those inputs: okay, I know I should look at zip codes and mental health and this, that, and the other thing in order to determine recidivism, or to determine the best recipe for bread; I'll look at every bread recipe and pick out the best one. It uses a data set to make that determination. Who chooses the data set, where it's chosen from, the time period of the data set, whether the data set reflects a historical moment that we may not be proud of or that we may be very proud of: these are all issues and questions, but they go into the makeup of our AI tools. So our AI tools are full of the human biases that made our data sets what they are, good and bad. And they're also made up of the human biases that make our inputs what they are. So AI is critical because it's making life-and-death decisions for us today. It's assisting human decision makers, but it's also something that we don't fully understand, and we need to. Thank you so much for that. I love the analogy to a recipe for making bread; that helps me better understand all of this. And also the point you make about the very real role of human direction and intervention: it's not just machines out there acting. Fortunately, it's animated by human beings ultimately, and so that gives us both opportunity for good and, I guess, opportunity for harm. Also the prevalence: who doesn't love their Siri or their daily playlist from Spotify?
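The "past becomes prologue" point about data sets can be sketched just as simply. This is a toy illustration with made-up numbers, assuming a pattern-learner that does nothing more than mirror the base rates in its training data; it is not any real recidivism tool.

```python
from collections import Counter

# Hypothetical: one neighborhood holds a small share of the population, but
# over-policing has put it in 80% of the arrest records used for training.
arrest_records = ["neighborhood_A"] * 80 + ["neighborhood_B"] * 20

# A naive pattern-learner simply mirrors the base rates in its training data,
# so the historical skew becomes its "prediction" about the future.
learned_risk = {place: count / len(arrest_records)
                for place, count in Counter(arrest_records).items()}
print(learned_risk)  # {'neighborhood_A': 0.8, 'neighborhood_B': 0.2}
```

Whatever produced the skew in the records, the tool faithfully reproduces it; that is the sense in which the past, encoded in the data set, becomes prologue.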
I do, guilty. But clearly there are so many other uses out there, and we're going to get into that. One of the things you mentioned is that AI is software. We just heard a little bit about biometric tools, surveillance tools. Can you speak to that? They're also based on AI. Yes. In the last panel, there was a very interesting discussion of facial recognition tools, which are a form of AI, and in that panel there were, I think, some very important examples of the ways AI tools can make mistakes depending upon how they've been trained. If you train a facial recognition tool on a largely white data set, full of Caucasians, you're going to have issues when you try to use that same tool to recognize people with variations in skin color. Biometric tools come in really three types, and I know that Heidi is going to talk a lot about some of the issues with biometric tools, so I'll just introduce them here. They can be physical biometric tools, behavioral biometric tools, and emotional biometric tools. We're pretty familiar, I think, with the physical ones: things like your retina, your fingerprints, your voice, your face. These are physiological biometrics that can be captured, and captured in a way that's unique to you. But very importantly, the forefront of biometric tools these days is advancing toward behavioral and emotional biometrics. What does that mean? It means that right now, today, and these are actually deployed, and China actually has the biggest deployment of behavioral and emotional biometric tools around, a tool can recognize gait, the way you walk. It can recognize the particular configuration of how you smile. And for emotional biometrics, which are closely related, it can recognize whether or not you have what I'll call a tell.
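One way the training-set skew just described for facial recognition gets caught in practice is a composition audit run before training. A minimal sketch, with hypothetical field names and numbers chosen only for illustration:

```python
from collections import Counter

def composition(records, field):
    """Each value's share of the data set for a given field."""
    counts = Counter(record[field] for record in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Hypothetical training set for a facial recognition tool: heavily skewed
# toward lighter skin tones, the kind of imbalance that degrades accuracy
# for everyone underrepresented in the data.
records = [{"skin_tone": "lighter"}] * 80 + [{"skin_tone": "darker"}] * 20
print(composition(records, "skin_tone"))  # {'lighter': 0.8, 'darker': 0.2}
```

A check like this does not fix a skewed data set, but it makes the skew visible before the tool learns from it.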
Do you have a particular way of exhibiting surprise, fear, or concern? If you build up a database long enough over those kinds of biometrics, you can build a database that can predict, perhaps better than a polygraph, whether someone is lying. So AI tools are doing the same thing there that they're doing in my other examples. They've got a series of inputs, though they may be inputs based upon the contours of your face or the way in which you walk; those would be the inputs I talked about, like the bread recipe. The database might be you: all of the ways that you show emotion, all of the ways that your face presents itself in different situations, in the dark, in different kinds of light, things like that. But it's also matched against databases full of other people. So Heidi will talk about the very important implications of this, but biometric tools allow for detection of a person, verification of the person, and then ultimately identification: detection, verification, and identification. It's all based upon algorithms, all based upon data sets. So we come back to some of the same principles that I've introduced. Fascinating, especially the bit about predicting the emotional and the behavioral. So turning to Professor Feldman: we're lucky you're both a lawyer and a philosopher, because this gets into some pretty interesting discussions. Hearing about these tools, what do they say about how we should think, differently or not, about privacy and the role of the law in protecting all of us? Thank you, General Healey, and thank you to the organizers and my co-panelists, and to the very tolerant audience, partly on Zoom and partly in the room, for tolerating the format.
So I just want to set the stage a little bit for our discussion in response to your question. A discussion about AI and privacy, which can be related to a discussion of bias, is a slightly different angle on the ways AI has ethical and legal implications that are particularly relevant for state AGs and their staffs. Katherine has done a great job of explaining AI; I'll put it at a very high level and say that AI, data analytics, and machine learning are processes that iteratively collect data and information about people, aggregate the data to produce generalizations about people, and then deploy those generalizations for use by decision makers in and across many different spheres. The iteration and aggregation cycle is enormous. It's very hard for us as human thinkers to get our minds around how broadly AI can operate over a wide range of data sets, integrate the findings, and detect patterns that we never expected AI to find. And AI doesn't have a conscience, and it doesn't have a certain type of reflection, so it doesn't say, gee, I wonder if that pattern is actually picking up on a proxy for something that is ethically or socially suspect. AI just detects the patterns and then goes to work correlating those patterns with the matters the AI is working on, whether for purposes of prediction or other types of management. So that's my little gloss on AI. Now, with regard to privacy: Anglo-American law starts with a conception of privacy that is historically focused on physical spaces. The two most prominent ones in our history have been the home and places of worship. Basically, you get a legal conception of privacy generated in response to the issue, in Anglo-American societies, of keeping the state out of the home, originally for purposes of preserving the control of the patriarchal head of the household, and out of houses of worship, so that people can choose their ritual and thereby pursue their own salvation according to their own consciences. I'm being descriptive; I'm not taking a position on theology. That conception of privacy still influences how we think about privacy. At the same time, we have a slightly modernized understanding of privacy, in which we sort of understand that it doesn't really make sense in modern society to think about privacy as starting from boundaries around homes or houses of worship, because we have a more individualized notion of privacy. In a sense, we have this idea of the individual carrying, if you will, a bubble of privacy around them. So if you're in your car, you have an expectation of privacy; you may have an expectation of privacy on your phone. But notice that this is all still very rooted in physical spaces over which you exercise personal control. The challenge of AI is that, as others have mentioned, it's everywhere, and it's collecting data in locations that are amorphous and that we think of as public spaces. By operating in public spaces, AI can sort of reach into our individual expectations of privacy, our zone of privacy, and gather information about us. But even more importantly, I think, when thinking about privacy, it can interfere with our freedom to make various decisions, to have various opportunities, even to think in certain ways. So I just want to very quickly give you two thought experiments to prime the rest of anything I will say today. The two contexts I will be focused on are consumer protection and workers' rights in the employment setting. So first, imagine you went to a store. Picture a physical store, or maybe a marketplace of some kind: what we used to think of as a mall, or a farmer's market. You go there, and when you get to the doorway there's a whole bunch of data checkers who give you a really complicated survey before you go in, and they decide which door you're going to enter. The door you enter determines which stores you'll see. Then at each store, you again go through a tremendous survey, and you're steered to certain racks. Based on what you look at or pause at on the racks, and possibly how you move through the space, your future path to the till is further adjusted. Plus, all the price tags are changing in response to every single rack you look at, every single time your attention wanders. Possibly video of you is being collected, or there's a whole bunch of people observing you, making notes, and looking to see when you look happy and when you look distressed. So you think that you're just going around perusing the wares, but in fact you're being acted upon at a very personal level throughout the experience. That's one thought experiment to have in mind. Another thought experiment I like to think of as the employer stalking example. Imagine that your employer had the resources and the ability to hire a fleet of private detectives per employee, and those private detectives followed the employee everywhere: went through their garbage, saw where they drove, looked at their online activity, gathered up information about their families, all without doing anything unlawful. And then, on the basis of that information, they created algorithms and other tools that let the employer know: are you the sort of person who is likely to see a therapist? Are you the sort of person who's likely to see a chiropractor? Are you the sort of person who is prone to taking long breaks during the day to go for walks outside? On and on, and in much more fine-grained predictions. This is what actually happens: big businesses buy data sets from a wide range of sources that allow them to begin to think about their employees, and interact with their employees, in this way. And finally, what is going on both in the retail sector and in the employment sector is that every time you interact in one of these environments, the employer is gathering more data about you, aggregating it into data sets, and selling it back to the producers of AI. So that's the iterative process, which sort of keeps the entire mechanism of surveillance and intervention in individuals' pursuit of their goals ticking over. Well, fascinating. Moving from the iterative data collection process to what you say about deployment by decision makers: could you comment on the risks and benefits of these tools in a general sense, and what you think AGs and their teams should be aware of? So here are some simple benefits that we may take for granted. Most of us have spam blockers. Spam blockers protect our privacy because they detect things we don't want to look at, things that are distracting, not worth our while, or potentially upsetting, and the spam blocker prevents us from seeing that stuff. But of course, to the extent that we control our choice of spam blocker, or, to take a related example, an ad blocker on the web, we are having some say in what we're not being exposed to, whereas in my other examples, businesses and managers are deciding what you get to see. And insofar as businesses and managers are doing this, it implicates anti-discrimination statutes, fair hiring statutes, and unfair trade practices statutes, so it's not just statutes directed explicitly at discrimination on the basis of race or gender or ethnicity. Sorry, I thought that was me in the back; I thought I did something to the computer. Okay, back to topic. Insofar as managers are controlling what people get to see, what people get to think about, what educational opportunities you get on the job, you don't have any control over how the technology is shaping your opportunities,
either at the micro level, to pay attention to certain things (that's my spam blocker example), or potentially at a larger level, to sign up for certain training opportunities or to look at certain positions within your company. And insofar as we regulate, for the public good and for the protection of individual rights, the ways in which one group of people with a lot of power, whether because of economic relationships or because of historical circumstances, shapes the opportunities and avenues for the development, thought, and growth of other people, we need to attend to the new generation of ways and means that powerful people can deploy on more vulnerable people as they use these technologies. And may I just comment for one moment, and maybe have Heidi speak to this a little bit: taking your employment example, just to name it, there are also some potential efficiencies in the employment area. While the employer may certainly be watching and monitoring all kinds of behavior, productivity levels, where the employees tend to congregate, how they move through a particular area of the factory, for instance, some of that can lead to efficiencies, and the efficiencies can lead to, for instance, a potentially smoother set of task allocations, or cost savings, which might then go into the coffers of the corporation or come back as benefits for the workers. So I think there are efficiencies, and there's a balance to be struck between the very real concerns you've put forward about surveillance in the workplace and the potential good to be had from those efficiencies. And then, on surveillance of the population, I'll raise a different example: the possibility of a way of monitoring the public that could result in saving lives, being able, for instance, to preemptively determine that this is a potential terrorist coming over the bridge, who has the characteristics or the license plate that we recognize, and we can tell from the whizzing and whirring of our pattern-recognizing database that that's what it is. So there are these potentials, and I'm just curious how you weigh the privacy implications, which are so real, against the potential efficiencies. So first I'm going to complicate the question. To me it's not just privacy versus efficiencies. There are also examples where what looks like an intrusion on privacy can benefit the very population of people, not the overall efficiency of a system, but the very population of people whose privacy we might, at least intuitively, think is being intruded upon. So for example, it's not just that you get efficiencies that lead to lower prices or higher returns to shareholders in the workplace or retail setting. You can also detect patterns about people that give you information about ways to reorganize shift work so it is more convenient for people. You can get insights that suggest what sorts of health insurance benefits are most important to offer, benefits you never even thought about before, because you can detect needs people have by getting what amounts to intimate knowledge about them. So the trade-offs, both between aggregate social benefits and protecting privacy as we intuitively understand it, and between benefits to individuals, not in the aggregate, and privacy retention, are very complicated. The first thing I would say is that lawyers are used to thinking about those sorts of trade-offs. All of our jurisprudence around elevating individual autonomy is focused on balancing that against legitimate interventions in autonomy. So lawyers are uniquely suited, once they understand what's being traded off against what, to thinking about devices for regulating how the trade-offs are managed in practice. What I would encourage people to do, when you encounter a particular situation that involves the use of technology, is to work through in a very fine-grained way: what is the form of intervention and interference at the individual level, where we locate the idea of privacy? What are the uses, not only those any individual actor is putting the technology to, but the best uses from a social perspective? What are the most pernicious uses? And how do we regulate so that we safeguard against the pernicious uses and allow the beneficial ones? I think of this as lawyerly engineering. I realize that I haven't said, here's a magical formula for trading off privacy against social efficiencies. I know, Katherine, that one of your areas of interest is deontological thinking compared to utilitarian thinking, and obviously that's lurking in the background. I don't think I can speak to that, certainly not in the time we have today; I can't resolve all of that. Well, fascinating, thank you for that. We talked earlier a little bit about bias, and I'm wondering, Katherine, if you could maybe explain to us how algorithmic bias works, what it means, how it can and should be addressed, and also whether there's a particular case study that you think is relevant for us to learn from. Yes, and I think this is something the courts are dealing with right now, and I think we are going to see some of these cases actually make their way to the Supreme Court, I would say, in the next five years. I don't necessarily think it'll be before that, because the Supreme Court likes to let technology become more fully formed than it is right now, but we're going to see, and we are already seeing, a lot of algorithmic bias cases coming up in the appellate courts. So what is it? Well, first of all, I think that when AI
was first something that people started to become aware of they believed that it was science they believe that artificial intelligence was based upon high tech software and it meant that AI was going to somehow have the magical key to being right about whatever task it was designed to address but the reality is as I've already said humans are the progenitors we are the makers the designers originally of the AI tools and it's very important and I don't want to get too much into the complexity of it but to realize that while we design the initial AI tools there are complicated many many many different types of AI tools that themselves can actually take wherever they started and make themselves into more and they can through a sort of iterative regenerative process recursive process make themselves into a tool more sophisticated than the original human being designed or potentially even imagined so part of our issue with algorithmic bias is we know where we started but there is a point at which AI can become sort of black box AI and we don't necessarily understand all that goes into where we end up and I'll talk about that in a moment but what can we do at the beginning and this I think is something that we do have control over still and it's an issue that is an issue for today not tomorrow for people to be thinking about because there's always a danger that down the road people will say well AI just we just lost control of it we're using these tools now they're so complicated it's really just too late to turn back the clock so how does it happen the bias happens through the things that I've already introduced through the inputs where a human being is choosing the inputs or the AI tool is using using a data set to select inputs from recognizing by its review of a data set what's important that can then be overridden if you will by a human and sometimes they are it can actually then be adjusted by the human as we've talked about and sometimes there are real adjustments 
and there have been cases where real adjustments have occurred so there is each step of the way the ability for the human to have some interaction but I want to spend just a moment on the data set and then give you a couple of quick examples of some case studies the data set as I've mentioned is the potential for our past to become prologue past becoming prologue and that's because and I'll use for instance right now the criminal justice area as one example but we can use an example in the financial services area we can use it in the social benefits area but in the criminal justice area a lot of the AI tools that are deployed are using arrest databases not conviction databases but arrest databases if that database is chosen from a time when there was for instance a stop and frisk police practice that was in place and if you were of the view that that stop and frisk police practice resulted in the over arrest of a particular population for instance young black men then your database is going to be overpopulated in the same way so that the past history of the arrests becomes the data set that the AI tool then learns from so the AI tool that's designed to predict recidivism will then use for instance inputs such as well I'm looking at I the AI tool I'm looking at this data set I notice that most of the people in this data set happen to be black I notice they happen to be young I notice they happen to have an educational level of x I notice they happen to have parents who happen to have arrest records which is also one of the inputs in some of these tools and then the structured inequalities become then the tools that then become the predictive devices now that same kind of issue can occur in the financial services area where you're looking at where were loans given where were loans defaulted upon and then you can think about the reasons for that maybe there weren't enough jobs in a particular area maybe there were usurious loan rates in a particular area but that data 
set of loans for a particular area can become the data set from which the AI tool then learns, so the past can become the prologue. Now, in the last session I know you've already spoken about facial recognition technology, so I won't go through all the studies people are now familiar with. But in the employment area there was a very useful study of a tool that had been designed by Amazon to assist in workforce recruitment. It looked for the best possible employees who might fill various jobs and become managers. Amazon designed a very sophisticated tool, and it turned out that the tool was using a database in which 64% of the data set was male and 70% of the managers were male. The past was becoming prologue in that tool. Amazon then stopped using that data set altogether, stopped using the tool, and redesigned it. So that's another example of where this comes into play: it's the inputs, it's the outputs, it's how they are adjusted, and it's the data sets.

Thank you so much for that. Turning from criminal justice, Professor Feldman, what about other contexts in which this comes into play? Are there helpful examples of how AI, data analytics, and machine learning raise ethical and legal concerns related to privacy?

I've already alluded to some of these, but I'm going to raise some concerns and then talk about at least a few ways that state AGs are uniquely positioned to address them, even though we don't have perfect answers about the balances we want to strike, which we were talking about before. One of the areas I'm very interested in is consumer protection, particularly under the various unfair trade practices acts that different states have, and I know that all of the states whose offices are represented here have some version of those. They have, as you know, very broad language about what counts as an
unfair or deceptive trade practice. "Unfair" is the word I want to focus on here. The channeling of consumers on the basis of this extensive and encroaching data is a form of manipulating consumers, and insofar as we think some types of consumer manipulation rise to the level of being unfair or deceptive, that's the sort of thing AGs' offices could be taking action on right now, under their existing authority in the area of consumer protection. The reason I think that's so important is that the more the seller, the producer, the provider of goods or services is steering people, the more they're affecting their access to goods, the quality of goods, and the pricing of goods. That's often done on the basis of suspect categorizations from the past, even if that's not the intent; as Katherine has identified, those can be the patterns that get picked up. I see we have another attendee in my background; the cat will hold forth in a moment. So all of the concerns that animate our worry about the power disparity between the providers of goods and services and consumers are amplified when providers, be they of consumer goods or health care, are able to manipulate consumers into particular market segments. Now, I just want to point out that the very same tools allow manufacturers to make special offers to people they are particularly interested in reaching. So if you are working with manufacturers, service providers, or retailers who provide goods and services, you can't just go in and say, hey, don't do an analysis that gives you information about people's preferences, or even potentially about their skills and capacities, because you don't want to completely choke off the manufacturer's or retailer's or service provider's ability to reach the consumer in beneficial ways. So this goes back to: if I were in an AG's office, one of the
things I would be thinking about is: what can I do to further our collective sense of what these tools are, who's using them, how they're using them, and what the pitfalls and benefits are? AGs have the ability to gather information and to convene people for educational purposes, to begin conversations, in this case I'm thinking of the consumer protection world, with sellers. They don't have to immediately litigate or draft proposed legislation; they can begin to develop guidelines and standards, working with different stakeholders in this area. Those guidelines and standards might become models for what other states could do, and they could become models for what should be legislated. But the point is that AGs are already involved in certain domains. Another one, which I won't elaborate on, is supervising nonprofits: you can imagine, based on our discussion, that lots of solicitation done by nonprofits is based on the sort of AI we're talking about. So insofar as you're already operating in these spaces, there's a unique opportunity, and I regard this as an ethical imperative, to come back to the basis of the question. There's an opportunity for AGs to actually improve the legal intervention in these areas in a methodical but not hastily conclusive way. This would be, I think, a very interesting role for the state to be playing. We're in uncharted territory: we have unsettled notions about privacy, and we have limited understanding of all of these AI tools and how they actually play out in practice. What is the ethical responsibility of a state's chief legal officer under those conditions? What I'm saying is, it's to leverage the power and authority one already has, bearing in mind the background information being discussed at conferences like this one, and to use the flexible yet extensive powers of your offices to begin to wrestle with these problems in discrete
settings.

Well, I was taking notes, so we are engaged. Just some personal background: a few years ago we were getting complaints from women who were inside abortion-providing health care facilities. In the same way that you may be walking down the street and get a coupon on your phone for a cup of coffee at the store around the corner, great, wonderful, they found me, that's geofencing, that same technology was being used to target and send messages to women as they sat in the waiting rooms of those abortion care facilities. To me that was an easy one: that's wrong, implicating all sorts of issues, whether health care or women's access to abortion. We went after that company and ended that practice. Some things are less clear: Amazon and Amazon Prime and what they prompt and offer in terms of goods and services, whether groceries or, you name it, everything's on Amazon. Thinking through, as you say, how that information comes to be collected and then utilized is something I think we all have concerns about and want to learn more about. Thank you for pointing out the power of attorneys general to educate consumers and to convene as we move forward and try to chart a path forward on how to think about this evolving technology. Which gets to my next question: this concept of aggregating data. Data may be good in the collective, in terms of what it affords for predictive analysis, but it runs up against individualized determinations. Katherine, I know you've talked in the past about Social Security determinations, for example. Putting it to both of you: can you talk a little bit about how AI could be fair and accurate in relation to the individual? Can it be?

I think that last question is a very important one: can it be? There's a difference between accuracy and fairness with
AI tools. There can be a fairly sophisticated algorithm with very high predictive power behind it, one that may be able, based upon an aggregate pattern, to achieve a high level of predictive accuracy for the majority, but for the pinpoint observation of one person it might not be fair. Let me back off and describe that in other terms. Our American Constitution is based upon concepts of the individual. It's based upon a concept of due process, of decision-making about resource allocation, criminal justice, every way we can think of in terms of life and liberty where the government is involved. It's based upon individualized decision-making, and we are very attentive to that. AI, because of its use of patterns, as you've now heard, defaults into a utilitarian framework, not intentionally so; it's just the framework that applies to it. That means it's looking at what is accurate for the most people, for the biggest group. The utilitarian philosophy, which America long ago decided not to follow, held that whatever is good for the most people is the path to be followed. America threw that out, because you can obviously run into all kinds of problems, such as deciding to put anybody between the ages of 18 and 35 in jail because they commit the most crime. That's not going to work. So we don't use utilitarian thinking as the basis for our decision-making in a lot of very important areas. But can AI be made more accurate as well as more fair? I think the answer is yes. The question is that for each tool there needs to be an assessment, a conversation about the interplay between the tool and the human decision-maker who layers onto the tool the individual facts of the person before him. There are certain decisions that probably should never be left entirely to AI, at least for the foreseeable future. There may be a time when they can be, but for the foreseeable future it might
be a combination of efforts. There may be some input from an AI tool, but with a fairness overlay: individual discretionary decision-making must, very importantly, continue to be empowered and continue to be required. We cannot, for instance, take away the resources that allow individual decision-making to occur solely in the name of efficiency, on the theory that machines can do it quickly. A machine can make a decision on resource allocation very quickly, accurately for many people, but 20 percent of the people may not get a fair decision. If we decide to eliminate the resources for the staff required to exercise human decision-making, we may be shortchanging certain individuals within our population. So we've got to look at the tools, understand what they're doing, and understand the extent to which those tools need to be overlaid with our human discretion.

Thank you. Heidi, would you speak to that?

Very quickly: I'd like to invite Katherine to come to Washington so we can have a long dinner to discuss all of this, but here's what I would say at the dinner table that we're all now conducting over Zoom. I think the picture of the role of aggregate welfare and individual interest embedded in American political and legal traditions is perhaps not always so clearly tilted in favor of protecting the individual. There is an awareness of the need to protect the individual, I'm totally with you on that, but, and I know you don't mean to suggest otherwise, people in AGs' offices, depending on what part of the AG's mission they're focused on, are sometimes assigned to think about aggregate welfare. Take criminal justice: yes, you want to protect the rights of the accused, of the people you're pursuing, but the whole mission is meant to improve aggregate quality of life; otherwise we wouldn't do it. We don't necessarily think that we're
just standing in the shoes of the victim, who could otherwise protect their own interest; we think we're improving overall social welfare by reducing crime. The reason I'm raising that is that AI might be used to focus on aggregate considerations in the name of promoting overall welfare. The interesting question, I think, is: under what circumstances might AI be used to help decision-makers treat individuals more fairly? That, I think, is an underexplored question. Because here's the thing: what AI does is give you knowledge; that's why it's intelligence. Insofar as that knowledge gives you insights into particular individuals, which it does, can you use those insights to be more fair? We concentrate, I think, on the areas where we threaten fairness, where we lump someone in with a generalization in an unfair way, or where we gain information about someone for a large social purpose but the way we get that information doesn't seem fair to that individual. But imagine, in a benefits hearing, that you could use an AI-assisted tool to offer people benefits they themselves don't even think of asking for, and you could do that without having to go through some extensive individualized fact-finding process, by starting with a set of AI-generated recommendations. It's just an example.

I think you'd find that we have a lot of agreement there. I'm a big believer in the utility of AI tools. I'm also a believer that we have to do three things, if I were to reduce it down to three to-dos for all of us. One is disclosure: know when we're using an AI tool, and require information about how that tool has been constructed, who designed it, who tested it, and against what. There are a lot of AI tools being purchased by the normal purchasing organizations of a lot of companies; the purchase goes through their chief technology officer, not necessarily through
the subject matter experts. So: disclosure, disclosure. Number two, reject the black box. When people tell you they can't explain it to you, reject it. The GDPR in Europe has rejected the black box, and in many respects it's a piece of legislation that is ahead of us: it requires the logic to be disclosed to an individual as to whom an AI tool has been used. Disclosure; reject the black box. There will be some black-box aspects, but there can also be an explanatory-logic aspect. Three, convene national conversations about what's being used. Standards can be developed and reliability can be assessed: what level of reliability do we want, and therefore what standards do we want? There are national conversations that need to be had. This should not be done for individuals, where it's affecting human welfare, without them. It can be done for games, and it can be done for companies' products, but when we're talking about the communities around us, about resource allocation and criminal justice, there need to be national conversations about accuracy and reliability. So, Heidi, I think you'd find that we have...

Oh, I don't think of this as a fight; I regarded it as a deepening of the conversation.

This is great. We have a few more questions before we turn it over to the audience. Those are great suggestions, Katherine; you hit on some things we can do. Is there anything else you want to say about how we can make sure these tools are used in a more ethical and legal way?

So, not a disagreement with Katherine, but a footnote: in this country, when we call for disclosure, we often accept a very cosmetic approach to it. Think about shrink-wrap licensing of software: there's a lot of information put up there, you just click through, and it's disclosed, but only in some very formal sense. So I was very intrigued by Katherine's emphasis on what has to be disclosed and how
understandable it has to be. I think that's a real opportunity for AGs' offices to contribute their expertise, because AGs' offices are often engaged in explaining complicated matters to the populace at large. Sometimes it's breaking down law; sometimes it's encouraging one actor in society to be clearer with another, as with various state provisions around, say, real estate and banking, where you require the entities involved to explain things clearly, as best they can, in ordinary language. You can see that this is going to require AGs to get some sophistication with the technical language, so they can actually monitor whether the disclosures are telling people something they could meaningfully protect themselves from, or choose to opt out of. The related point is that disclosure can give people the opportunity to opt out, but I think state AGs are very familiar with when opt-outs are a realistic tool people can use to protect their individual interests and when they're not. In Europe, and Katherine and I didn't plan this, but it's actually a point of convergence, there are discrete subject-matter regulators, and particularly in the employment context still-powerful unions, where people are focused on the inadequacies of giving individuals, say workers or consumers, let alone criminal defendants, information and then saying, oh, you can opt out. This is where convening matters: thinking about how we define the thresholds that protect vulnerable people who can't opt out, essentially allowing them to be opted out through the development of our standards. And AG Healey, I was thinking of your example of the company you went after for tagging people in abortion providers' offices. When you said that's an easy case, I was wondering: was it an easy case because you said
leave people alone when they're in that geographical space? Or were you able to go in, look at the technology, and say: this use of the technology is going to be systematically problematic, at least as you deployed it, so that you can set a standard that reaches beyond what I agree is an easy and egregious case? A standard like that lets people think about the cases somewhere between that egregious interference with someone in a setting or a moment where they ought not to be interfered with, and a setting where we think it's perfectly fine to be reaching into someone's everyday life and influencing them. I think that's where the sort of conversations Katherine referenced are important to have, both among all the stakeholders, as I said, and also between AGs' offices and the producers and providers of the tech. I think you can get the attention of these tech providers and basically get them to disclose to you and educate you about the ways in which this technology works, so that you can act on behalf of the most vulnerable segments of the population.

Yeah, I appreciate that. That was about five years ago; we resolved it relatively quickly, and at that point we were only beginning to learn about the pervasiveness of machine learning, AI, and the like. A lot has evolved since then. We don't really have technologists on staff, though it's something I think we're considering, along with developing people's facility, understanding, and learning so that we can be effective in our jobs, whether it's protecting consumers or protecting against discrimination. The point about getting educated is something we're here to do, and we really benefit from your comments today; the discussion will keep going.
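As an aside on the mechanics: the geofencing described above comes down to a platform checking whether a device's reported coordinates fall within a fixed radius of a target location. The sketch below is a generic, hypothetical illustration of that check, not any vendor's actual implementation; the coordinates and function names are invented for the example.

```python
# Minimal sketch of a geofence check: is a device's reported location
# within a target radius of a fence center? Hypothetical, for illustration.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in meters."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_geofence(device_lat, device_lon, fence_lat, fence_lon, radius_m):
    """True if the device's reported position falls inside the fence."""
    return haversine_m(device_lat, device_lon, fence_lat, fence_lon) <= radius_m

# A device reporting coordinates roughly 45 meters from the fence center
# falls inside a 100-meter fence and would be flagged for targeted messaging.
print(in_geofence(42.3601, -71.0589, 42.3605, -71.0589, 100))  # prints True
```

The point of the sketch is how little is needed: once a device shares location with any app in the ad ecosystem, a few lines of arithmetic decide whether it is "inside" a waiting room, which is why the legal questions turn on collection and use rather than on technical sophistication.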
Thinking about AGs, I'm also thinking about judges, and I'm wondering, Katherine, with your old hat on: can the courts handle this? How are they going to? Are they equipped to address these really meaty and difficult topics?

Courts have shown an ability over the history of our nation to adapt, through the common law, to new technologies. Sometimes there will be a big new technology, like the internet, and people say it's not going to work, the courts will never get it. Now, there is typically a legislative effort that assists the courts with certain aspects, some decency legislation for communications, some copyright legislation geared toward the internet, so there can be legislation around it. But with the combination of our courts and good, thoughtful legislation, the courts can do the right thing. The issue we're facing right now is the pace of decision-making in the courts versus the pace of innovation. An average case can take anywhere from a year to two years in the district court, and of course there's an appellate process, while the pace of innovation is far greater than that, so we have a bit of a disjuncture. That said, we're seeing cases around the country right now challenging algorithms and the output of algorithms. One of the cases that inspired a lot of people to get active in this area, and so did a lot of good, even though in my personal view it came out the wrong way, was Loomis v. Wisconsin, which challenged, on due process grounds, a defendant's right to understand the tool that had been used to assist a judge in determining a sentence. The court found no due process violation, but the case spurred a lot of additional, far more
sophisticated activity, where people became much more knowledgeable, and there are now a number of cases percolating up through the system, though none of them, as I've said, has really reached the highest court. In the public employment arena there are cases challenging the use of teacher-assessment tools, where the courts have been able to grapple with those difficult areas. And in the public benefits area there's a case called K.W. v. Armstrong, an Idaho case, where a human adjustment to the weighting of one of those inputs I talked about was found to raise a due process problem, and the benefits determination, actually made by a more local benefits organization, was found to be arbitrary and unreliable. So the courts are able to grapple with it, but they need the help of thoughtful legislation, and they need the help of guidance, and that's your area, not mine.

Thank you for that. I think what we're going to do now is open this up to questions. We have one from the chat, and then I'm going to turn to my AG colleagues, so many of whom have been doing really important work in this space. One thing we're looking at and have concerns about is also raised in this question from Paige Boggs: you talked about disclosures; are there additional considerations for the ethical use of AI technology in the context of young people? Are disclosures sufficient in that context?

The quick and dirty answer is: as challenging as it is to come up with a disclosure that does the work you want it to do around protecting individual privacy or decision-making, it's that much more complicated for children and teens, and obviously the younger the child, the more complicated it is. But also with teens,
the issue is how they even process information. Do they process information about abstract risks to, say, their privacy, or to privacy-related considerations? Do they weigh those risks against the more immediate benefits that seem to be promised by operating in any given setting? Notice that I'm not just saying teens and children are more vulnerable; I'm saying that young children are perhaps not in a position to process disclosures at all. So then we have to think about how free they are to enter spaces where AI is acting on them, and I think it's fairly free, without parental supervision. And if it's without parental supervision, or the supervision of another responsible adult, what is the role of the state in regulating how AI acts on children? With regard to teens, there's probably going to be a much more sophisticated regulatory problem: teens are not incapable of absorbing information, but they don't absorb it the way adult cognizers do, so the whole question of what constitutes an adequate disclosure regime for them is more vexed.

I'll take a slightly different angle on that: we have to have a way to let our children and teens leave their algorithms behind. Think about Facebook and Instagram, which are driven by algorithms; that's what they are. You look at something on Instagram, and if you look at puppies, you see puppies; if you look at puppies in water, you start to see more puppies in water. I look at a lot of puppies. But you become your algorithm, and the problem is that teens need a way to grow; they need to be able to outgrow who they have been. One of the things we've got to do is allow our teens to leave their algorithms behind. So one of the things I would like to see, one of the things I would value, is the ability
to have a different kind of algorithmic regulatory scheme geared toward children and teens. The reason is that for a young girl who is perhaps seeing too much of a particular type of at-risk behavior through Instagram or Facebook, it would be nice if she could leave that behind when she has done her personal work and wants to evolve into somebody else, rather than seeing the same things follow her because of who she was. We are our algorithms, and our children are far more vulnerable to the algorithms they have been assigned, so we need the freedom to leave them behind.

That's fascinating, thank you.

That's a great point, Katherine, and this is the sort of thinking that is so urgently needed, because you could easily imagine some form of that being made available to adults too; people do grow through their lives, or want to feel that their future paths aren't being dictated by what the algorithms are exposing them to. And I just want to gloss Katherine's point about puppies and water. What happens with these algorithms, and I think people know this, but in this context it's really important to think about, is that the machines detect patterns such that you'll start seeing puppies in water and ads for menthol Juul pods and ads for going to cosmetology school, and you won't necessarily see ads for scholarship opportunities at other types of schools. I'm making this very wooden, but the point is that the algorithms detect patterns of interest that don't necessarily answer to the sort of growth opportunities we particularly want younger people to have. So I think Katherine's point is very well taken.

Well, I think that is a perfect segue to opening up this discussion to my colleagues. The great NAAG team is going to help me figure out how we're going to do this. I
know people can write in questions, but for folks in the room, if an audience member has a question, we'll give them the mic and you'll hear them.

Great, thank you. Maybe while people are gathering their thoughts, I'll make a very quick point about something state AGs could be doing in the area of tech. They could be talking to their different state officials about how their states' departments are using this tech; they could model responsible use of AI and encourage the state to do so. And they should find out what sort of data sets their different state departments are selling to developers. Notice that states develop really powerful data sets that are sought after in the private sector, so one of the things you have the opportunity to do is bargain for disclosure about the use of those data sets before you let people buy them. It's not that you never let people have them, but you can bargain for a certain level of transparency, either in the final product or in the intended use. I just wanted to highlight the way in which states are themselves users of AI and suppliers of the basic building blocks of AI, and that gives them opportunities to influence what happens with regard to responsible uses of AI.

Great point, thank you.

Hi, this is Leevin Camacho from Guam. And hi, Maura, your room looks great; I give you a 10 out of 10.

Thank you, my friend, great to see you.

We're one of a handful of AGs' offices that actually has original jurisdiction over criminal prosecution, both adult and juvenile. When I started practicing, I was a defense attorney, and if the judge took the bench with a cup of coffee, I would look at my client and say, you're going to jail, because the judge is probably not in a good mood. But over the last five or ten years we've moved to these tools, you know, the scores and
risk assessments that Katherine was talking about. Our office runs a restorative justice program for juvenile offenders, and we're having interesting conversations with the courts where our scores are not aligned with the court's in terms of who is a high-risk offender and who would be eligible for these types of programs. So when you say to reject the black box, that's just very hard for me; I'm not good at math, which is why I was a literature major and became a lawyer. For those of us who are not sophisticated, and who don't necessarily have Tish James's resources, or New Jersey's, to build out a 17-million-dollar pretrial release system, what do you recommend for smaller offices in evaluating these tools and making changes where needed?

That is a terrific question, and I don't want to make anybody buy my book, but I would suggest that my book, When Machines Can Be Judge, Jury, and Executioner, goes through several chapters on the different types of tools and talks about exactly the problems with certain kinds of tools and some of the ways the debate has developed. One thing you can do is figure out which tool you have. Do you know the name of the tool? It doesn't matter; there are lots of different kinds of tools, with different kinds of disclosure about what each tool actually embeds within it. In my view there's a due process problem when you don't get disclosure of the inputs into the tools. There's one tool, for instance, that uses a database that is quite old and that is based on the kinds of arrests I talked about before. And there have been lots of studies now; ProPublica did perhaps the most famous one, on the COMPAS tool, but that's only one of many tools, and COMPAS, by the way, has a response to each of the points ProPublica raised. Still, ProPublica did a very in-depth discussion
of the ways in which that tool is deficient. Each tool has its own peculiarities and shortfalls, so if you don't have the particular technological expertise, the requirement, I think, is to get disclosure of the literature surrounding the tool and then find out what they are using. Is race one of the inputs in that tool? Is age? Is the mental-health status of the person's parents one of the inputs? Find out, and ask the court for it. One more thing, very briefly: a lot of companies don't want to let people know what's in their tool because of confidentiality, but that's what protective orders are for, and protective orders ought to be able to take care of it. I could go on and on, but I know we don't have time. I'd love to talk to you about it; if you ever want to give me a call, I'd be very happy to. And buy the book, since I can say it.

Maura, let's see, I think the camera's back there. How are you? It's great to see you. This is odd, but I'll turn it back around. It's Tish, and it's great to see everyone. It's Tish James. And Heidi, where's the cat?

The cat does what the cat wants and is not controlled by any algorithms, so the cat is gone.

But Heidi, the algorithm and the AI are really critically important; they would suggest that you don't like dogs.

Well, actually, what they should suggest is that the cats have created the algorithms that govern my life.

That's great. So Katherine, you mentioned before the GDPR, the General Data Protection Regulation in Europe, and Europe has agencies focused on protecting the data of European citizens. You've both talked a lot about the role of AGs; what about the role of the FTC?

Okay, so the FTC has a big role to play.

Yes, so go ahead, speak a little bit about the FTC and what it can do in protecting the privacy of citizens, what it can do in principle, or can do if it's not
captured um what you can speak to any of those two points katherine do you want to uh no you go ahead you go ahead heidi we've got a lot to say right so i'll just say two very quick things this is right the usual challenge which is the f tc could develop nationwide uniform uh standards and regulation to deal with data privacy and maybe even uh the use of uh discriminatory data in the set in in the marketplace right that's they they conceivably they and it would might be and and manufacturers will certainly and sellers of service providers will all will all say that that's what they want they would prefer a uniform standard i think and i you know obviously my question uh telegraphed this that the likelihood of the f tc emulating something like the european data regulatory agency is unbelievably slim um and so i actually think that to the extent that states can begin to address dare i even suggest it that that state attorney generals end up having component parts of their office dedicated to data fairness and data privacy so that we have a bunch of little i you know i'm thinking of the little f tcs but the little uh the little uh uh and i only say little because it's at the state level uh but state level agencies or departments of ag's offices focused on this that might eventually get the attention of the f tc because the counterbalance to the problem of capture on the uh powerful side of the equation is that if there does start to be state by state regulation that's meaningful those people will sometimes shift over right as you know and push for federal legislation and then the state's approaches to the problem can become models for nationwide uh action so i you know it's not the f tc has the jurisdiction to deal with some of this i just think that uh they're unlikely to for political reasons really to play a leadership role can't really make us agree yeah no i i don't i don't know i think there's a lot to be learned from the new administration i actually have some 
more hope that there'll be a some convening of people to discuss these issues there's already a lot of going on in the consumer protection area obviously that's what their primary one of their primary mandates and so i actually believe that there will be a role for guidance i don't they're not obviously not going to legislate but i think that the guidance that they can give the kinds of conversations they can have could be extraordinarily useful and i wanted just to mention that right now what we've got is for general heli in summerville massachusetts uh they've got uh an individual sort of ban on the use of facial recognition technology in spring massachusetts they've got uh some outlawing of facial recognition technologies in boston there's also an issue of an outlawing of the use of facial recognition technology except with permission in new york we've got uh the biometric identification law that i believe went to effect in july so there's a lot of legislative activity that's happening around it's not that there's nothing uh but that uh you know the ftc's guidance could be very helpful to have us all start to coalesce around some reasonable standards and katherine just thought last question of just a follow-up these standards and thresholds and prohibitions do they apply to law enforcement in any of those jurisdictions as far as you know well yes new york does so new york actually has and i happen to know if you know a fair map about not the new york biometric law but right now to use facial recognition technology the nypd actually has to seek permission to use it in terms of uh identifying uh criminals the problem is that while that applies to the nypd there were 10 000 requests last year and 10 000 grants so uh it does as apply because this is the place thank you um do we have more questions from our live audience is this called a zoom bob yeah at least i'm a voice general haley i want to thank general haley i want to thank you and judge forrest and professor 
feldman for a wonderful panel and uh the sun is shining in brilington vermont and we're going to get outside well you get out to lunch gentlemen great to see all of you and i know crap i'm hiding benefited tremendously from this we'll come back to you as we seek to find a way forward in this new and evolving uh fast paced world thank you thank you all so much thank you for having us and good luck