Hello and welcome everyone to our virtual event, The Intersection of Federal Privacy Legislation and AI Governance. We're delighted to have you with us today. My name is Prem Trivedi and I'm the policy director of New America's Open Technology Institute. This event brings together experts on privacy and artificial intelligence to discuss how federal privacy rules would address harms that stem from the misuse of data that powers AI systems. Before we get to the panel, it's my pleasure to introduce our keynote speaker, the chair of the House Energy and Commerce Committee, Representative Cathy McMorris Rodgers, representing the fifth congressional district of the state of Washington. Chair Rodgers has been an instrumental part of the push for comprehensive federal privacy legislation, first as ranking member of the committee in the 117th Congress and now as chair in the 118th. Chair Rodgers, thank you for your leadership on privacy issues and for sharing your thoughts on privacy and AI with us. I'll turn things over to you. Hello, everyone. I want to thank the Open Technology Institute and New America for hosting this timely and important event. AI has the potential to usher in a new era of innovation. This technology will solve problems we once thought impossible. It has the potential to drastically improve Americans' lives and grow the economy. That said, AI also raises serious concerns about how bad actors can exploit this technology and abuse it. As AI gets deployed, we must think about how to ensure accountability and American leadership. The best way to start this effort is by laying the groundwork to protect people's information with a national data privacy and security standard. A key element of this is ensuring the safety of the algorithms used by online platforms, which serve as the instruction manuals for artificial intelligence.
By requiring Big Tech to examine how their algorithms are developed, trained and deployed, and how they're impacting the people that interact with them, we can address the harms we've seen come to light. Through this, we can provide Americans with greater transparency into how their data is analyzed, how these systems identify patterns, how they make predictions, and how their interactions with online platforms are used to determine what content they see. Trustworthy algorithms are essential components in the responsible deployment of AI, and failing to enact a national data privacy and security standard puts our country at risk of ceding AI leadership to China and heightening the risk of the over-collection and misuse of our most sensitive information. We need to prioritize strengthening data security protections to safeguard people's information against threats. Enacting a national standard will ensure greater public trust in AI, which will help ensure US leadership in the future of innovation. We continue to work on the Energy and Commerce Committee to enact comprehensive privacy and data security legislation to put people back in control of their personal information. We're building on the momentum of last Congress, when the committee advanced comprehensive national data privacy and security legislation with a near-unanimous vote, 53 to 2. To date this Congress, we've held seven privacy-related hearings. We're also continuing to work to find a path forward on other online protection proposals like KOSA. As you know, KOSA is still being worked on in the Senate. I see it as complementary to the comprehensive data privacy and security standard, which is necessary for building foundational privacy protections for Americans. It's important that we're getting to the root of how data is collected in the first place. Overall, we must strike the right balance with AI.
One that gives businesses the flexibility to remain agile as they develop these cutting-edge technologies while also ensuring the responsible use of this new technology. A national standard for the collection and handling of data will provide businesses, creators and every American with clear and understandable protections wherever they are. We encourage stakeholders and advocates like you to work with us. Our doors are always open for constructive feedback as we continue to build support for this important legislation. Thank you very much, Chair Rodgers. I am David Morar, senior policy analyst at OTI and the moderator for this event. As our keynote speaker has emphasized, privacy protections are crucial for making sure that AI is governed in a way that balances protecting innovation and guarding against the underlying risks inherent in deploying and using AI. At the Future Security Forum hosted by New America in September of this year, NTIA Administrator Alan Davidson, in conversation with New America Vice President William Corral, mentioned the crucial importance of having comprehensive federal privacy legislation in place as the first step in governing AI. More recently, with the release of the White House Executive Order on AI, the fact sheet that the administration published has a significant point about the necessity of passing federal legislation to protect the privacy of adults and kids alike online. We will get to important questions about this and more with our fantastic panel, and I'd love to bring them up now. In no particular order, we have Sarah Collins, Director of Government Affairs, Public Knowledge; Willmary Escoto, US Policy Counsel, Access Now; Brandon Pugh, Policy Director and Resident Senior Fellow, Cybersecurity and Emerging Threats, at the R Street Institute; and Frank Torres, Civil Rights Technology Fellow at the Leadership Conference on Civil and Human Rights. Thank you all for being here, really appreciate it.
So first, as a first question for you all, let's start with the Executive Order on AI. It places a strong emphasis on civil rights and, as mentioned, the fact sheet accompanying it specifically notes the importance of passing comprehensive federal privacy legislation like the American Data Privacy and Protection Act from last year. What are your thoughts on the EO broadly, and how does it align with the principles of comprehensive privacy, specifically with last year's ADPPA? Let's go with Sarah first. Sure, thank you for having me, David. So it was really exciting to see the AI EO finally come out, when we just needed a U and we'd have had all the vowels. And you're right, it did call for comprehensive privacy legislation, which Public Knowledge is very excited about. We have been big supporters of the ADPPA and of passing federal comprehensive privacy legislation generally. And we were really happy to see some very strong continuations from the AI Bill of Rights in the AI Executive Order. Some of my colleagues in the civil rights community that you've invited here can speak much more fully on that than I can. But a note of caution here, right? Calling for comprehensive privacy legislation is sort of a low bar. We have been doing this now for, I'll be generous and only say five years. I think the real number might be 15, but five years. I don't think it's a particularly political position to take that we need this. You'll note that the Executive Order, because it's an Executive Order, focused a lot also on privacy-enhancing technologies, which we think is a fine thing to continue to focus research and implementation on. But again, the AI Executive Order really points back to Congress and says, hey, we would really like comprehensive privacy legislation here. We're going to continue doing what we can do in the privacy space, but it's really the responsibility of Congress to get us privacy legislation. Sure, David, I'm happy to jump in here.
Like Sarah and the good folks at Public Knowledge, we were very pleased to see the White House AI EO come out. We are also pleased to see that it was kind of a continuation of the tenets that they included in the AI Bill of Rights. It's great that, simultaneously, Congress is also contemplating and really digging into both privacy and AI. I think when we think about comprehensive federal privacy legislation, it's certainly centered on the ADPPA and the provisions in it. And folks may recall that the ADPPA covers not just privacy; there are also some strong AI protections in it as well, calling for what's now in the OMB guidance, in the executive order, and in the discussions in the AI forums that are going on in the Senate around prohibiting AI that doesn't work. And by that, I mean AI doesn't work if it's biased or discriminatory. That means there's something going on, there's something wrong. I mean, even if you take civil rights out of the equation, if the AI isn't working, then something is wrong and it shouldn't be used to make especially important decisions, either in the commercial setting or by the federal government. And of course, the AI EO applies primarily to the workings of the federal government. So, great to see the continued call for comprehensive federal privacy legislation. Our view of what that should look like is what's in the ADPPA. Our task force at the Leadership Conference spent a lot of time digging through the provisions, working with other members of our coalition to ensure that that language is as strong as we can get it. I'm encouraged also that, almost since the get-go, privacy legislation has been a bipartisan thing. And it's great to see that continue. We were wondering what would happen with the change in Congress, and certainly what we've seen is a continuation, and we heard that from the chair of the committee today.
So just one correction, and then I'll pass on the mic: it's not five years, it's 10 years. I've been working on this issue for about 25 years. Folks may not remember, but Senator Hollings, back in the day, had the first privacy bill in the 90s. And folks might not recall, but there was a bipartisan bill, the Kerry-McCain bill; they worked on privacy legislation together. And so privacy legislation is not new. It's had its ups and downs, lots of learnings. In the meantime, we've had things like the GDPR happen, and companies actually now embrace the thought that there could be federal privacy legislation. We just need to get across the goal line. Thank you, Frank. I apologize, I just dated myself there, but that's okay. No, no, Frank, I appreciate it, because I have to kind of remind myself too. I haven't been working in this space as long as you, but it's one of those issues where it's hard to find someone who says, I don't want comprehensive privacy legislation. I grant the actual substance is where some of us may disagree, but it is reassuring this year to have both Chair Rodgers as well as, as stated, the EO, or I should say the fact sheet of the EO, calling for that. I would say, I do like to bring myself to reality. I am an optimist, but we have seen the White House call for this in the State of the Union, along with the national cybersecurity strategy, which called for the need and action on privacy legislation because of the security angle, and we also have the president's op-ed. So thrilled to see the continued push. I do hope they don't just put that in a fact sheet and let it end.
I'd love to see the administration continue to be a supporter and take it a step further and actually be a champion for what a comprehensive privacy law would look like, because, at least for myself and R Street, like both your organizations, we definitely think it's critical for consumers, for industry, as well as for security. So I would say maybe a few follow-up points on the EO. It has a theme throughout in terms of having AI enable innovation and competitiveness, and also, to its credit, it does reflect that there are risks and benefits with AI. I agree with those sentiments, but I think it's key that we actually make sure that they're reflected as we move forward, because I am fearful of the scenario where it just becomes so regulatory, where we take such a broad approach to it, that America falls behind, especially to some adversarial countries that we may not see eye to eye with, or, I think worse yet, we allow other countries to kind of take the lead on AI regulation and we either follow them or we're trying to catch up 10, 15 years later, like we see with GDPR five years later. So I do like to see that reflected. Another point I would raise is, you know, the EO is largely reflecting on what federal agencies should do. Some of that is going to trickle down in terms of best practices that industry may pick up. There are some calls for federal agencies to develop guidance that may or may not become regulation and enforceable later on. But I am concerned that some federal agencies may take it too far, because right now there are some broader calls and then it largely defers to the federal agencies to kind of pick up. I think a good example of that is the FTC. There's a strong call for the FTC to act in this space, but it's not really clear how far they may exercise their authority.
And at least my personal view is, I generally like to see Congress making more of the political and policy calls than some of our federal agencies. Not saying there's not room for agencies; there certainly is. It's just, I think it should be a little more measured than perhaps what one extreme resulting from this EO could be. So I don't want to sound like I'm taking a negative approach to start. I do think there's a lot of benefit to what's been done here. I do think the implementation is going to be key, though. There's a long road ahead as this moves forward. Willmary? Yeah, I echo the same sentiments. I think we all agree this is definitely an initial step, but it shouldn't be the only step. And Access Now definitely applauds the Biden-Harris administration's efforts, and the executive order is sweeping. We do feel it took seriously the concerns of the civil rights community and really leveraged the US government's influence as a major consumer of AI technologies. And it does so much to underscore the need to address the multifaceted impacts of AI. And it reinforces the importance of international cooperation in tackling these harms. And so the caveat, and something that we've advocated for in the EU, is warning about adopting a risk-based approach, which we have done. We do feel that it really won't ever be sufficient to protect fundamental human rights. It creates a lot of loopholes in the systems and upside-down burdens of proof. And as we've mentioned here, there is gonna be a need for augmented kind of congressional action. You know, there are some criticisms; I won't get too deep into it. I know that you asked kind of for my full take on it. So I do think that overall it is gonna still require more input into how the implementation is gonna happen.
You know, when you compare the OMB memo and the EO, there are still some gaps and some questions that are kind of left outstanding there. And when you talk about really protecting the lives of brown and black people impacted by AI systems, there are some measures that really need to be considered. Activists and advocates have been calling for the ban of certain technologies. And when you think about what the EU is doing internationally, their AI law is set to ban, I believe, real-time facial recognition in public, and our executive order really only asked for kind of a review of how we're using AI in the criminal justice system. So it really kind of falls short on some of the stronger language. Also really great to see, and I know you asked about the ADPPA and how it matches up, it was great to see, as I said, those civil rights provisions, as well as, as Sarah mentioned, the privacy-by-design aspect and some of the algorithmic auditing requirements. I know Sarah also mentioned earlier how it really enshrined the core principles of the AI Bill of Rights, which is something many folks here on this call sent letters to the White House and the OSTP calling for. So it's definitely worth positive reinforcement, right? But I definitely think that we are still gonna need leadership in Congress to keep pace with really everything that's going on. And I will give a quick shout-out to VP Harris's remarks as well at the UK AI Safety Summit, focusing on, sorry, I got wrapped up here in my cords, current harms, not existential threats and risks. So it is great to see our leaders in government really paying attention to our warnings about that and the real heartbreaking harms that we're seeing from AI. Yeah. Thank you. And definitely agree, though, that it's kind of kicking the can down the road a little bit in regards to how to address AI in the workplace and housing and other things too.
So I agree with that notion, as well as, yeah, kind of passing the baton over. Before this started, I was feeling really hopeful, and have been feeling hopeful, about whether Biden's remarks will revitalize and motivate Congress to pass this law. But as we've heard, we've been calling for this for a while. So it is really hard to say, but try to stay optimistic. Thank you. So to sort of piggyback off of what we were just talking about, could you all speak a little bit about the importance of something that has been a part of the ADPPA and other legislation, which is algorithmic assessments as part of privacy legislation, right? Algorithms are clearly the main engine for artificial intelligence products and services. So if you all would want to tackle that a little bit. Frank? Sure, yeah, I'm happy to jump in here. I mean, certainly a call for algorithmic assessments, or risk assessments really, is critical and key. I mean, listen, Director Chopra has said this in the context of financial services products: if you can't explain how a decision gets made using an algorithm, if you can't show that the algorithm is not biased, then you shouldn't use it. I think his term was it should be banned, or at least, I guess, it's banned until you can show or test or prove or explain how the thing works. And I think that, to me, the way you get there, the way you show that, is to do the assessment. You kick the tires, you make sure that the system that you want to use for that particular purpose is actually fit for purpose, that you can monitor it, that you tested it in real-life situations, that you took care in terms of the training data that you used, that you didn't use training data drawn from just one set of the population, that you really tested it using the best sort of data sets. And so assessments become very, very important.
Certainly NIST has started down this path with their risk management framework. But we also all know that AI is unique in that it can be very, very sector-specific, which is a blessing and a curse, right? Because what we've seen in the past, in some instances where harms have occurred, is somebody trying to take an AI system that looks really cool and shoehorn it into an application that it was never intended for, never tested for. And so understanding the capabilities and limitations, the appropriate and inappropriate use cases, being able to make the call, like in the case of facial recognition, that something just shouldn't be used at all, all of that should go into the risk assessment. And just three comments real fast on some of the prior discussion. Brandon, thank you for raising, as others have raised, you know, what about innovation, what about competition? We don't want to kind of jam those up. We understand that those are very important. I think some people like to cast it as an either/or, and I think we can do both. That's why I like the way Chair Rodgers teed this discussion up by saying, listen, we need trustworthy algorithms, trustworthy AI, trustworthy technology if we want to get to a place where we're innovating in a way that allows us to compete. And I don't think we in this country accept innovation for innovation's sake, because I can say I'm innovating and I've created this super cool school bus that's gonna revolutionize school buses, but we wouldn't just put it out on the street with our kids in it and trust our kids' safety to this new innovative school bus that's gonna raise US competition around the world unless it was tested, unless we knew that it was safe, unless it was trustworthy. And I think what we're calling for in terms of AI and algorithms is that same sort of consideration.
We wouldn't put a drug on the marketplace unless it was tested, unless it was safe. You could, but if word got out and people died, or bad things happened, then it would have the opposite impact. It would actually hurt the credibility of industry to do what they wanted to do and compete. So I just wanted to call that out as well. Yeah, sorry. I was just gonna say, I do agree especially with one point Frank raised: I do think, like many things, there needs to be a balance. I think even with the ADPPA last Congress, it wasn't perfect for everybody. And I don't think we ever will reach, it's just impossible in my opinion to reach, a privacy law that every single person is happy with in every single provision. There's always give and take. And I think we actually saw that play out with the impact assessment piece last year. Some people didn't want to have it reflected at all. Some wanted to have it apply very broadly. And I do think it's a matter of striking the balance. So for instance, is there a scenario where we do impact assessments for certain types of algorithms, but not all of them? And I think there was some ambiguity last year in terms of what actually is a high risk or consequential risk. We don't want to have these assessments done on every algorithm, especially the ones we've been using for 10, 15 years without even thinking of them as actually an algorithm or AI, versus some that are very high-risk, whether it's credit decisions or housing decisions. There's a distinction there. And I know it's hard to thread that needle. And I am fearful of the approach even the EU takes at times of putting too much in the category of what could be consequential risk. There's definitely a downside there.
But relatedly, I would question, if the privacy law has algorithmic impact assessments in there, how are they actually being used? So we know from the ADPPA last Congress, they would go to the FTC, and the FTC would develop guidance to help with following them. But how is the FTC safeguarding them? Because I know that's a real concern: companies are providing this information and being transparent, but perhaps it gets disclosed by bad actors, or maybe competitors use it for a competitive advantage, or is it going to be used for enforcement actions? A lot of those questions were up in the air and, I think, are extremely important to clarify as we move forward with a privacy law. Yeah, I think you raise some really great points, Brandon. I think that there are some concerns. Well, number one, I mean, there are so many things I want to say. So number one, algorithmic impact assessments are vitally important. I don't think they're just a component of privacy legislation, but really a cornerstone to safeguarding human rights and data in the digital age. And we constantly say, right, the right to privacy is a fundamental human right. And as AI systems just continue to be ubiquitous, it becomes more and more important to really integrate these assessments to understand what's going on behind the scenes, everything that Frank laid out about what we want to understand better about impact assessments and how they can impact our everyday life. And so we've seen that this is gaining traction as a crucial component of AI governance. Brandon mentioned the EU, and I really wanted to raise also a concern that when you reserve AIAs for these high-risk applications, there could be misclassifications that could essentially allow for evading proper oversight.
So a lot of questions arise, I'd say, about the scope, the methodology, the best practices to prevent these algorithmic impact assessments from becoming another tool that can mask human rights and other abuses. And to piggyback off of Brandon's point about safeguarding these assessments, that's something that came to mind as I was reading a bit of the OMB memo as well: who will have access to the results of these tests and these assessments, especially when you consider the data-sharing agreements that the United States government has with other governments, and how misidentifications or misuse of data can lead to harm of people in other countries, people on the move and the like. I think that overall it's definitely a really important tool to be able to mitigate biases and ensure that certain groups aren't gonna continue to be subjected to social inequalities and things like that. But when you get down to the nitty-gritty, I think that's where the questions need to be teased out a little bit: how is it gonna be implemented? Who's gonna be overseeing it? Who's gonna be protecting this? And what information do we think can really address the potential for discriminatory outcomes, and what mechanisms are we gonna establish to rectify those issues? So it's definitely a component that can really help empower people, too, to make informed choices about sharing their data, and just really put the control back in people's hands and be able to mitigate those consequences on privacy. So I wanna zoom out a little bit on AI impact assessments, because I think it's useful to situate why these impact assessments have suddenly become a really hot regulatory tool. We've seen them in the California age-appropriate design code. There are impact assessments, I believe, being discussed in the AI Act.
The GDPR has data privacy impact assessments as well. And you often see this turned to in the regulatory context, I believe, for two reasons. One, it provides flexibility. For data impact assessments, usually the trigger has to do with either the scale of your business itself or the scale of the data you collect; whatever threshold you set, once you cross it, it's like, great, you need to look at your practices and see if they are harmful along whatever vectors you'd like to insert here. The reason I think AI impact assessments have become so attractive to legislators is because of the breadth of applications AI can be used in, right? You can think in certain sectors how you might have proscriptive rules about certain applications, even certain tech being used, certain levels of confidence in predictions, that sort of thing, but each sector, and probably each sub-sector of that sector, would need different regulatory rules. So the AI impact assessment, I think the reason it gets turned to a lot in these comprehensive bills, is because it requires companies to look at their processes and at least document the thinking behind them. I do take Brandon's point quite seriously about what happens after this: you don't wanna incentivize people to lie on these things, and you need to make sure, if they're going to be turned over to a regulator, that they're safeguarded, because there's lots of company data involved. The points are taken, but I don't think impact assessments in the tech space are going anywhere. It's too flexible, and it solves a lot of problems that legislators have, which is, one, it's really hard to create really, really precise rules at the legislative level, and two, at least in the US, we're just not interested in creating, at least right now, a digital regulator, an AI regulator, a privacy regulator that might be able to do this on their own. So without those two things, you sort of are left with impact assessments.
So I will say, even if the ADPPA isn't the bill, which would be sad, but it is what it is, I still think any bill you're going to see that touches on these processes will always have some sort of impact assessment component. And can I just interject here? Prior to joining the Leadership Conference, I did spend some time working at a tech company, in part helping to develop that company's, what's now become their, AI standard. And in terms of the risk assessment, in terms of the questionnaire, if you will, that gets put to the engineers: Have you thought about the capabilities and limitations? Have you thought about the training data? Do you have the documentation? Have you tested this? Is there a test? Can we develop a test? Have you tested it in real-world settings? What are the capabilities and limitations? If you develop something for the education space, is it appropriate to use it in the criminal justice space? And the answer may be yes with some tweaks, or no. But the risk assessment process itself is almost like a forcing function to get people to think. I mean, I remember back in the day, when you developed a great new product, you quote-unquote dogfooded it. You tried it out amongst your friends and family and coworkers just to kick the tires and see if it worked. In today's world, it's like there's no more beta testing. You just put it right out there in the marketplace, and people pick it up and use it for whatever application they want to use it for. With AI, and Brandon and I agree, there are some use cases where, if it's trying to sell me a pair of socks and I'm not interested in socks, no harm there. I just say shame on you, retailer, for trying to sell me socks if I'm not interested, because your algorithm's off. If it's making a credit decision or a job decision or a decision about my health care, then it needs to be a little bit more perfected and a little bit more fit for purpose.
And simply, what I found was that going through that kind of risk assessment process and thought process really helps. It helps avoid and address and identify issues around bias and discrimination, but it also helps improve the product. Thinking about the impact; talking to the impacted populations, or the populations that should be impacted, the communities in which this tool will be used; talking to the disabled community: can we make this also with some functionality that will help the disabled community? Going through that process actually helps improve products and gets people to a place where these things just get built into how things get done, and the end result is a lot better. Absolutely, Frank, just one last point. I mean, I love how you summed that up, and just the importance of the impact assessments and everything they can do. And Sarah, I agree with you. I don't think they're going anywhere. And we've seen that tech companies like Microsoft and Facebook have been conducting HRIAs, human rights impact assessments, to address these types of risks. I know Microsoft publishes an annual report that I think addresses the human rights effects of its tech and the risk mitigation strategies. And I know that Facebook had commissioned one to evaluate its role in the genocide in Myanmar. But I think what's really important is to see what has worked and what hasn't. And with these kinds of case examples, like with Facebook, I know that there was a criticism that even that impact assessment was found ineffective at uncovering the human rights harms of their tools and at providing mechanisms to mitigate those harms. So thinking of how to keep perfecting it, just how to keep making it better for people, so that people can trust these systems more, so that people can trust the government more, and companies more as well. Especially when we're talking about systems that impact housing, employment, education and so much more, or systems that are tracking and surveilling us.
So I love those points you two just made. I couldn't have said it better myself. Yeah, just a quick follow-up, David. I know you're probably like, stop talking already. But we're a talkative group. This is good. I think you raised a good point about what industry has done already. I think we all understand this, but there is a myth out there that AI is a wild, wild West, not to overuse the phrase. Between this AI governance framework, building on top of their privacy and cyber ones, as well as the voluntary commitments we've seen from industry, as well as the self-imposed restrictions we've seen in industry — not saying all of those are perfect in every sense, but there has been a lot done. It's interesting to see where industry has been leading in this space. And I think it's really key that they stay extremely involved as a crucial part of this as we move forward, whether that's additional EOs, whether it's follow-on agency action or whether it's action by Congress. Because we'd not be helping ourselves if we didn't include what they've done already. Yeah, thank you. I will note that there's also a consortium of industry actors called the Frontier Model Forum that I think is trying to develop some sort of industry best practice codes for folks that are building frontier models. So there's definitely something going on there. But I wanted to very quickly change gears and talk about the international context a little bit, because as we all know and you all have brought up, the US is kind of in a lagging position in terms of legislation, specifically on data privacy and protection, but also relative to what the EU is doing with the AI Act and what's happening in the UK. So how do you see data privacy and data protection meshing with the AI governance space broadly, looking at the international context?
Obviously, both the issues that arise and also the legislation that's already out there that we don't have here. Willmari? I'm happy to jump in. I was thinking that this was a great question. I think what we're seeing internationally really is a pursuit of a harmonious coexistence between AI, individual data protection rights and data protection on a global level. It's really hard to have a conversation about AI governance without talking about the data that's fueling those systems. And I think the significance of how these frameworks interact, individually and collectively, becomes really clear when you look at, for example, the GDPR in the EU and how, I believe, the French data protection authorities are using the GDPR to tackle some of the AI challenges that they're seeing with ChatGPT. So on a high level, if we want safe, responsible AI, we need frameworks that go beyond just these technical functionalities and tackle the entire life cycle of personal data — collection, storage, processing — and individual data protection rights. Because you could have a data privacy law that doesn't allocate data protection rights, like the right to delete or the right to access; essentially, you could. So I think we just have to be really careful how we look at that. We're seeing countries like China and India that are already regulating data, and data protection and privacy is a part of their conversation. Whether it's the UK or Japan, nations really want to regulate AI for the risks. And when you look at the Bletchley Declaration on AI safety, I think there were 28 countries that agreed to, what was it, understand and collectively manage potential risks. I did a scan to see how deeply they are talking about data protection and data privacy, and they do; there was language recognizing that the protection of human rights, privacy and data protection needs to be addressed.
So it's definitely on people's radar. There's definitely a baseline awareness and understanding that they do go hand in hand. Thank you. Anybody else? Oh, Brandon. Yeah, just maybe a quick follow-on. I won't be nearly as substantive or insightful, but I'd raise two points quickly. One is being mindful of how security ties in. I do think we do a disservice to our country in terms of national security if we don't act and do something when it comes to privacy, and potentially even with AI. I mean, we're allowing countries that don't have the same values as us to act. Take China, for instance: it's no secret that they want to be the world leader on AI. Initially it was by 2030; now we even see quicker estimates than that. And many of their use cases are ones the US would never even consider, but that seem likely to be commonplace there. So I do think there is a risk of letting certain countries get ahead of us, and potentially of how we could fall behind from a defense perspective. But from an industry perspective, I think it's equally important for the US to act, especially when it comes to privacy, not even getting to AI yet. I understand a lot of companies are going to default to the GDPR or to requirements around the world in the absence of a US standard, and those aren't always the standards that are the most American. There are definitely things we can pick and choose and learn from the GDPR, not saying it's all bad, but there are certain parts that aren't ideal either. So I do think that's really the key to having American leadership in this space, and really doing so in the short term. Thank you, Brandon. Sarah and Frank, do you want to jump in on this? So again, I'm going to twist the question a little bit to suit my own needs, which is that what I think this teaches us about AI regulation is we shouldn't be thinking about AI regulation.
The tech policy space has been protecting human autonomy, fairness and dignity, pursuing innovation, preserving competition — there are all sorts of values we've been talking about for a really long time. And I think a comprehensive privacy law baseline does that, and also provides both assurances and guardrails for AI without ever actually having to say the words artificial intelligence. So what I will say is, as Senators especially get very hung up on generative AI and what to do there next, I would urge caution. We don't have what I would call a technology regulatory framework that makes sense, and in the worst of all possible worlds, everyone suddenly jumps to an AI law. And it's like, well, we didn't do privacy or anything else first. Think about it for an AI company: if suddenly you have all of these data privacy protections built into your work, but your sources of data don't, that's a provenance nightmare. And it actually might hamper your development. So that's my only urging: there is a bit of an order of operations to this. And my concern in the US is that we've become so enamored with thinking about AI that we'll forget about all of the tech policy questions that we haven't answered yet, because they're hard and difficult and they're going to take a lot of work and compromise. Yeah, and Sarah, that's a really good point. And to me, all of these things build on one another and they're linked, right? You need the data privacy protections as well as the AI-specific work to look at what's going on around AI. I think in terms of what's going on around the world, especially in Europe and in other countries, both on privacy as well as AI specifically, we've learned collectively a lot from that process: what the tolerances can be, what works, and what may not work so well or could be improved.
And so I think it gives us a strong base or foundation from which we can build. I do think a lot of companies have defaulted to the GDPR, like many companies do to whatever California does, in terms of where the bar gets set. And I think that, listen, if you're going to be dramatic about it, woe be to the company that doesn't look at the EU's draft AI regulation, even if it's not passed yet, and get some sense of what they ought to be thinking about doing when it comes to building their algorithms and their systems. You can quibble about the details, but you're going to have to test it. You're going to have to be able to explain it. There are certain things where the handwriting is on the wall about what you're going to have to do. And I think risk assessment certainly is part of that. The ongoing monitoring will be part of that. The documentation will be part of that. Protecting data and security will be part of that. So I think that's where it's going, not just in Europe but around the world. Singapore has done a lot of thinking about this in terms of guidance, and other countries as well. So we're not necessarily unique in that regard. Thank you. So then it seems like the next step, at least from our conversation, is looking at what's happening or what can happen in Congress. So what does the panel see as a path forward on comprehensive privacy, looking at ADPPA, last year's blockbuster, I would say? What are the prospects there? How are we looking? Yeah, I guess I'll start us off. I can see Congress proceeding in four different ways. One, we can act on more niche pieces of privacy legislation, whether that's kids-specific, like COSA first — not saying, to Chair Rogers' point, that we couldn't do COSA and then later on do ADPPA. I think that's one avenue. That's where the Senate has traditionally been.
Two, I think we could act on a comprehensive privacy law, much like ADPPA, that has provisions that relate to AI — like algorithmic impact assessments, civil rights, data minimization — provisions that have value independent of AI but directly connect to it. Third, we could act on an AI omnibus bill that has privacy included. I know there are some calls to see something like that, an overarching AI law; fortunately, they seem to have become fewer and fewer over time. And fourth is to act on more of a privacy-specific law. I know not everybody was happy last year even with having certain AI-related provisions in ADPPA. Where I personally stand, and I think this gets at one of the points that Sarah raised most recently: we can't act on AI or consider AI without having a foundational data privacy and security law. I think that's a misstep. That's really the first step. Before we act on COSA, before we act on anything AI, I like to bring myself back to two things. First, remembering that AI is one form of technology. Yes, we could address privacy in the context of AI, but there's going to be a new form of technology the next day, and we have a whole host of other technologies that don't really have any baseline when it comes to privacy; we'd have only addressed it in one area. Then secondly, to really sum AI up, it is all about the data. So if we don't have any baseline for how we're minimizing data, how it's collected, who it's sold to, and the basic rights that people have, how are we really addressing AI? Yes, we can target it in very niche contexts and debate what high risk is, but at its core there's no foundational approach to data. And I think that's the real issue here. And that's why an ADPPA, or something like it, is really needed now. Well, arguably years ago, but I'll settle for now. So I completely agree with Brandon's assessment, but I'm going to be a bit of a bummer here.
I am not really hyped about this Congress. It has shown time and again that when it could get something done or not get something done, this Congress in particular has chosen to not get something done. Just look at the past week of yelling about a one-month CR. As policy people, when we get very into our niche areas, substance and weeds, it's really important to do that work, but as director of government affairs at PK, I have to look at the broader landscape. And right now, with the amount of dysfunction that's happening and the refusal to really figure out a plan for what the rest of governing is going to look like for this congressional year, I'm just not sure that anything is going to get done. But maybe that's just because it's cloudy today, so I'm in a bad mood. Thank you. Thank you, Sarah and Brandon. Frank and Willmari, do you want to jump in on this, or do you want to move to something else? Oh, good morning. So I'm kind of on the same wavelength as Sarah, to be honest with you. I don't envision, unfortunately, that we're going to see movement on ADPPA, with it just being pushed forward in these last two months that we have. As I was preparing for this panel, I was thinking, you know, I'm hopeful. We know we have bipartisan agreement, like Brandon said, on more narrow aspects of privacy regulation: kids' privacy, as well as reining in data brokers, and the TikTok debacle and protecting our data from foreign adversaries. The solution isn't to ban TikTok; it's to pass comprehensive federal data protection regulation. And so I think that throughout this past year we've seen folks get riled up about data and the uses of data by AI systems and the like.
And again, President Biden's call for a federal privacy law could revitalize the discussion, especially as advocates keep applying the pressure. But 2024 is the year of elections, and there are so many ways our data can be used against us; we're at major risk, especially when you think of how generative AI can supercharge mis- and disinformation. And a lot of members of Congress are hyper-focused on election mis- and disinformation because they're members of Congress, so we see that come up a lot; there's that personal interest there. But there are hundreds of thousands of people at risk the longer we don't regulate AI, or how our data is being used by AI systems. So we really need these guardrails. The last point is, I am interested in seeing what the FTC will do. We're still waiting to hear about that commercial surveillance rulemaking, and what will happen with the OpenAI investigation, which gives you a little bit of insight into what they're looking at and how they're looking to apply their existing authority there. So it's definitely something we're monitoring, and we'll continue to look for other avenues for how we can really safeguard people's rights. The FTC has it tough; they don't have a lot of funding, and we've been calling for funding for a really long time. So keeping that in mind, I'm really interested in that OpenAI investigation. I think that can give us some insight as to what we can do, along with seeing the implementation of the AI EO, what comes from it, and how folks are thinking of maneuvering these challenges. Thank you. I was going to add that there's more than just the next two months — we also have the next year — but then, Willmari, you reminded me that it's an election year.
So it's even less likely that things will happen, sadly. So then, with very few minutes left, what do you all think the strategy is going forward? In the absence of ADPPA passing — we at OTI have been strong proponents of comprehensive federal privacy legislation, but in a hypothetical, let's say ADPPA doesn't pass or doesn't even get reintroduced — how are you thinking of next steps? Would it be putting it into a broader AI bill, like Brandon mentioned, instead of a standalone privacy one, or looking more towards the FTC, like Willmari mentioned? What are your thoughts on the strategy in that sad hypothetical? Well, I'll jump in here. I think it's all of the above. I think it's looking at where the FTC can do what they can do, both in terms of putting out guidance to industry, but also in the cases that they bring, right? To send the signal to industry as to what's acceptable and what's not under the laws and regulations that we have. Even getting something like ADPPA reintroduced also sends a signal to industry: here are the parameters. The fact that it's got bipartisan support — yeah, it's not law, but it kind of lays out some good markers there. Certainly looking at state activity as well, the activity in California and the other states that have passed or are considering privacy legislation, is another avenue.
And the other thing is working directly with the companies to say, hey, there's enough of a knowledge base now — whether or not they're complying with the GDPR, which provides some level of protection, or other things they could be doing that won't cost them in terms of innovation and competition but would hold them in good standing with consumers — and calling out where there are bad practices. So I think it's going to take a village to get a comprehensive bill across the goal line. I don't hold out hope for this year, but I think as the years go on, more and more momentum gets built up for it. Just one last thing, because I know we're running out of time and we didn't have a chance to touch upon this, but I'd be remiss if I didn't talk about innovation. There's innovation, I think, in terms of how some of the companies and others think about it — competition, building up the business and doing all these things — but when it comes to AI and you think about all the opportunity there, could we turn innovation into something like equitable innovation, or innovation for equality, where we use AI and data to actually expand opportunity for people? Is there a segment of the population that's being left out? Will AI help us identify them, and also identify ways to expand the opportunities available to those segments of the population and those communities that may need the help and may not be identified? Is there a way to provide better-priced products and services to people? Not to increase prices for them, but actually to figure out a way to use AI to decrease the cost. And so I think equitable innovation is something that also needs to go into the mix when we talk about AI and the use of data. Sure, thank you. Sarah, you were also about to jump in. Frank covered it, basically.
I will say you're probably going to see more small-bites legislation. We've already seen the Fourth Amendment Is Not For Sale Act and a lot of interest in data brokers. This is obviously not a small bite, but Section 702 reform is really, really important and needs to happen. So that would also be a privacy win, even if we don't get a consumer-facing bill. So there are a lot of avenues that Congress has been sniffing around, and I think we could see some of that, and see it used to rebuild momentum for a bigger law. Yeah, just maybe as a final thought, since I know we're coming up on time: I do think it's really important to continue events like this and have conversations to keep the pressure on for a comprehensive data privacy and security law, because, as Sarah showed with a few examples, there are so many competing demands even outside of privacy and security. Even if Congress wanted to do them all, they just don't have the time. So it's a matter of keeping this front and center and really letting all the different types of groups — four or five of us are represented today — know that this is important to us. So I think that's critical, so we don't let this fall by the wayside. Secondly, I think we did an awesome job today at this. So many times AI conversations just go to the doom and gloom, and it's all about fears and how it's going to be the end of civilization as we know it. And I don't want to diminish that; there are risks, and there are things we need to think about. But there are just as many if not more benefits, even coming from a privacy angle. I love researching and speaking about how AI can actually further privacy: what are some innovations we have currently, and what are future ones that can emerge, to better protect privacy? And then, equally important, how can they work to address some of the risks we've elaborated on today?
I think it's critical to strike that balance, and for all policymakers not to forget that. Willmari, any last words? Yeah, I'll try to say something new, but I think Brandon, Sarah and Frank really summed it up. It is important to continue having these conversations, to apply pressure and find avenues where we can drive change, whether that's through the Fourth Amendment Is Not For Sale Act or Section 702 reform. Frank mentioned monitoring state developments, and that's very important. I just want to add that there are very serious risks with some industry-backed state laws that we've seen, which could really undermine the overarching goal of data privacy and protection. So it's even more of a reason to continue pushing for federal regulation to set a baseline and prevent a patchwork of conflicting state laws. To your question of bundling AI regulations, as we're closing up: my position is that attempting to bundle it into a single bill without any type of standalone data protection legislation is a really dangerous proposition. It could really risk undermining data protection and privacy. I know we've heard the GDPR isn't perfect, but it's still a compelling case study of how AI legislation is meant to build upon pre-existing data protection regulation. Data protection is not just a feature of AI regulation. It's the bedrock, and the two are interconnected; separating them could have detrimental consequences. I'd like to think of privacy and data protection legislation as the guardian of individual rights, creating the boundaries that we need. And if we don't have that, people are still going to be vulnerable to misuse of AI. So I just really want us to be cautious about intertwining those aspects; we could risk overlooking the nuanced complexities of the different domains within them.
And I always think of this metaphor I came up with, a kind of metaphorical ascent of AI governance as mountain climbing, going up Mount Everest. I was reading a mountain climbing book, and the way I envisioned it in my mind is: we're on this journey to AI governance, and we're facing challenges and running low on oxygen, and we discover these two oxygen tanks, and they represent the central roles of data protection and AI legislation. You need both to regulate harmful AI effectively. So before we even think about AI systems, like Sarah said, we really need to establish these data protection regulations. It's like the air we need to breathe in AI governance. So I'll leave you with my weird metaphor, which I came up with because I was trying to be crafty for my niece, who is 10 years old, so I try to come up with fun ways to explain the importance of these concepts. If it sucks, I'm sorry; please give me ideas for better metaphors. I think that was a great metaphor. Thank you, Willmari. We are a little bit over time. So on behalf of the Open Technology Institute, I want to thank our panelists, Sarah Collins, Willmari Escoto, Brandon Pugh and Frank Torres, for joining us, as well as to thank Chair Rogers for serving as our keynote speaker, and to thank you all for tuning in. Thank you very much, everyone. Hopefully we'll see you all soon. Goodbye.