You're alive. Great. Thank you, Matthew. Good morning, everyone. I'm Tim Briglin, the chair of the House Energy and Technology Committee. It is Wednesday morning, April 28th. This is our nine o'clock hearing. In recent weeks we've been doing this work on Thursday mornings; today we're doing it on Wednesday morning. We're gonna be turning back to this committee's work on artificial intelligence and automated decision systems. We're gonna have two hearings this morning, and our first hearing is gonna be a little more focused on issues related to privacy, which is relatively new to this committee on the AI and automated decision systems front. We have three guests this morning: Sarah Jordan from the Future of Privacy Forum, who's calling in from Northern Virginia, and then two guests from the Attorney General's office, Charity Clark and Ryan Grieger. Again, I wanna welcome you all here. Thanks for being with us. I really appreciate your time. We're gonna turn first to Sarah Jordan. Sarah, I understand there are some documents posted to our website. So for committee members, as well as for folks who might be listening in the public: if you go to our committee's website under today's date, you can find the documents listed there. Sarah, why don't we rely on everybody to pull that document up on their own, and if we have difficulties, then we can screen share. We'll go from there. Anyway, welcome, Sarah. Thank you for being with us this morning.

Great, thank you so much. And thank you, Mr. Chair. I really appreciate the opportunity to speak with you all this morning. Good morning, representatives and members of the public. I'm Dr. Sarah Jordan, Senior Counsel for AI and Ethics at the Future of Privacy Forum. FPF, the Future of Privacy Forum, is a nonprofit dedicated to supporting consumer privacy leadership and scholarship. Our mission is to advance principled data practices in support of emerging technologies.
First, I'd really like to take this opportunity to commend you all as a legislature for your continued attention to issues of artificial intelligence, which is a truly important issue of significant concern for privacy, data protection, and civil rights. As you may know, Vermont is not alone in considering this issue. Automated decision systems legislation is also being actively considered in California, Colorado, New York, New Jersey, Washington State, and Maryland. In addition, the Federal Trade Commission has decades of experience enforcing the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act in the context of automated decision-making. Overseas, the European Commission recently unveiled an extensive proposal to regulate AI. Given this broader context, today I'd like to offer some general observations and recommendations about regulation in this arena, which I hope will be truly helpful to you as you consider how to move forward with AI policy. As a general matter, it's crucial that any measures adopted serve to increase transparency and accountability without overburdening vendors, legislators, or oversight bodies, or creating compliance barriers for new technology. As we see it, there are at least four key areas of opportunity for legislators across the US to clarify and improve the technical and conceptual expectations incorporated into current AI and automated decision system legislation: one, clearly defining the automated decision-making systems of concern and clarifying their relationship to artificial intelligence; two, attending to the needs of the many audiences to artificial intelligence and automated decision systems; three, ensuring an appropriate timeline and resources for complete compliance; and four, including appropriate provisions to mitigate unintended consequences. So let me start with point one: clearly defining the automated decision systems of concern.
ADS — automated decision systems, or algorithmic decision systems, or algorithmic decision support systems; there are many permutations — are part of the fabric of our daily lives, whether used to reroute internet traffic due to unexpected surges in data flows or used in situations that materially impact a person, such as when they're used to grade students' tests or diagnose health conditions. It's important for regulation to focus on those areas which present a heightened risk of harm to natural persons, and to avoid over-broad or vague definitions of ADS. Appropriately defining what's considered to be an ADS, or an algorithmic or automated decision system, is an important step for designing implementable regulation and oversight. Clarifying which ADS influence decision-making in an adverse, discriminatory, or harmful way is also an imperative part of this, particularly for regulations that invoke a tiered method for ranking which systems will need to go through different forms of review — say, self-assessment or third-party assessment — or different levels of scrutiny. It's essential that legislators define ADS in ways appreciable by systems engineers, the public, and legislators themselves. A key component to be clarified in legislative efforts to define ADS is a definition of automated decision-making that accounts for the wide range of tools used to create the outputs of automated systems. In particular, legislators should be careful to define algorithmic or automated decision systems with respect to the techniques used to manipulate data to arrive at a decision — whether through statistical modeling, artificial intelligence, or machine learning — that produce a score, classification, recommendation, or any other simplified output designed to support or replace human decision-making through automated extraction of data.
As a part of our efforts at FPF to educate policymakers like you on the various types of AI and machine learning that make up the automation in many, but not all, automated decision systems, we built an infographic, which all of you have to hand already. This infographic, the Spectrum of Artificial Intelligence, shows the many forms of artificial intelligence that are used in decision systems deployed by governments and private firms today. We shared this infographic with you, and I'm happy to take any questions about it after the conclusion of this more formal testimony. In order to situate ADS in this context, it's probably best to realize that many automated decision systems rely on methods of data analysis that do not fit visions of extraordinarily complex, inexplicable artificial intelligence as perpetuated in the AI hype cycle that we see in the news today. In most cases, it is explicable forms of AI that are used, rather than more opaque examples — say, machine learning or neural network-based systems. Rules-based or symbolic AI, which knits together logical if-then statements to determine whether a combination of data points meets a decision threshold, is an often-used and explicable way of determining how a decision could be made. Knowledge engineering, which creates a network map of pathways between documents and their components to help us understand complex regulatory instruments like tax codes, can be explained by computer scientists in much the same way that social network analysis can be explained by social scientists. Natural language processing that helps consumers navigate websites through interfaces like chatbots may be powered by extremely large language models, but many are not. In fact, commonplace natural language processing techniques such as dictionary-based search or question-and-answer wizards stand behind many of these systems — say, unemployment office chatbots or even vaccine finders.
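The rules-based, symbolic style of ADS just described — logical if-then statements knitted together against a decision threshold — can be illustrated with a short sketch. The rules, point values, and benefit-eligibility scenario below are hypothetical, invented purely for illustration; they are not drawn from any actual system discussed in this hearing.

```python
# A minimal sketch of rules-based (symbolic) AI: explicit if-then rules,
# each contributing points toward a decision threshold. All rules and
# thresholds here are hypothetical.

RULES = [
    ("income below limit", lambda a: a["income"] < 30_000, 2),
    ("resident of state",  lambda a: a["resident"],        1),
    ("dependents in home", lambda a: a["dependents"] > 0,  1),
]
THRESHOLD = 3  # points needed for an "eligible" decision

def decide(applicant):
    """Apply each rule, tally points, and record which rules fired."""
    fired = [(name, pts) for name, test, pts in RULES if test(applicant)]
    score = sum(pts for _, pts in fired)
    return {
        "eligible": score >= THRESHOLD,
        "score": score,
        "reasons": [name for name, _ in fired],
    }

result = decide({"income": 25_000, "resident": True, "dependents": 2})
# Every decision carries its rule trail, so a reviewer can read off
# exactly why the threshold was or was not met.
```

Because each rule is an explicit, named condition, the system's explicability falls out for free — which is the point the testimony makes about this end of the AI spectrum.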
Recommender systems, which help customers find products that fit their preferences, often lean on machine learning that employs regression techniques readily understood by legislators and economists alike. Even image processing powered by neural networks can be explained through the use of saliency maps. The primary challenge, then, for legislators striving to understand the techniques of automated interpretation of data incorporated in ADS and algorithmic systems is really to grasp the landscape of possibilities and the vocabulary. We hope that the infographic we've provided, and the associated white paper, will help you accomplish these goals of learning more about the techniques and technologies that stand behind the automation in automated decision systems. Automated decision systems, as we've pointed out, are pervasive, and thus there are many audiences to them. So our second point is that we truly need to attend to the needs of the audiences to ADS when we're trying to define oversight measures for them. What each of these audiences — whether individuals, groups, offices, or organizations — needs from ADS evaluation is comprehensible information which allows them to understand when automated decision systems make decisions that affect them, how their data is used in these systems' data processing, and how they might take action to protect themselves or others from any adverse effects that ADS introduce into their lives. Effective legislation will need to address the information needs of these multiple audiences of users and subjects of ADS.
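Circling back to the regression techniques Dr. Jordan notes behind many recommender systems: a minimal sketch shows why they are "readily understood" — the fitted coefficients can be read off directly. The feature values and ratings below are made up purely for illustration.

```python
# A minimal sketch of the regression techniques behind many recommender
# systems: fit a least-squares line to past (feature, rating) pairs and
# predict a preference score for an unseen item. Data are invented.

def fit_line(xs, ys):
    """Ordinary least squares for one feature: returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

# Hypothetical history: how a customer rated items with a given feature value.
feature = [1.0, 2.0, 3.0, 4.0]   # e.g. a normalized product attribute
rating  = [2.0, 3.0, 4.0, 5.0]   # the customer's past ratings

slope, intercept = fit_line(feature, rating)
predicted = slope * 5.0 + intercept   # score an unseen item
# Here slope = 1.0 and intercept = 1.0, so the unseen item scores 6.0.
```

Because the model is just a line, a legislator or economist can interpret the slope as "each unit of this attribute adds one point to the predicted rating" — the kind of transparency the testimony contrasts with opaque neural methods.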
In January of 2021, as you may well know, the National Institute of Standards and Technology, NIST, offered guidance on explainability in artificial intelligence that may be truly useful to states as you craft the criteria for impact assessments — one of the potential oversight tools proposed — which will be useful for the multiple audiences to automated decision systems using AI and similar technology. The NIST guidelines suggest four principles for explanation: one, that systems should offer accompanying evidence or reasons for outputs; two, that systems should provide explanations that are understandable to individual users; three, that the explanation given should correctly reflect the system's process for generating the output; and four, that the system should only operate under conditions for which it was designed, or when the system reaches sufficient confidence in its output. These principles could be met through any number of mechanisms within, say, an impact assessment framework, but they constitute an effective baseline for the components that ADS or AI impact assessment tools must encompass to meet regulatory goals. But as I pointed out at the beginning of this presentation, we need to ensure that there is an appropriate timeline and resources for compliance. Proposed legislation, such as that presently considered in Vermont and elsewhere, suggests wide ranges of tools and timelines for implementation of oversight of automated decision systems. Some bills already assume that ADS in use by government, private actors, and other public-private partnerships have been appropriately documented, characterized, versioned, and cataloged in locations that are easily accessible by those persons who are accountable for submitting oversight documents like impact assessments.
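As one hedged illustration, the four NIST principles just listed could anchor an impact-assessment review by treating each principle as a checklist item and surfacing the gaps. The field names and the `assess` helper are hypothetical — a sketch of the idea, not anything taken from NIST's actual guidance documents.

```python
# A sketch of the four NIST explainability principles as an
# impact-assessment checklist. Keys and summaries are paraphrased;
# the checklist mechanism itself is invented for illustration.

NIST_PRINCIPLES = {
    "explanation":      "system offers evidence or reasons for its outputs",
    "meaningful":       "explanations are understandable to individual users",
    "accuracy":         "explanations correctly reflect the system's process",
    "knowledge_limits": "system operates only within its design conditions "
                        "or at sufficient confidence",
}

def assess(submission):
    """Return the principles a submitted assessment has not yet addressed."""
    return {k: desc for k, desc in NIST_PRINCIPLES.items()
            if not submission.get(k)}

# A hypothetical submission that documents only the first two principles:
gaps = assess({"explanation": True, "meaningful": True})
# gaps now names the two principles still unaddressed.
```

An oversight office could use something of this shape to make review uniform across the many audiences the testimony describes, though a real assessment would of course require evidence, not booleans.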
However, the complexity of these systems themselves, and the lack of prior requirements for tracking automated decision systems' development, design, and deployment — whether in contracts or in other procurement decisions — means that some organizations may not readily have access to technical documentation for their systems in use, and thus may not be able to readily comply. Government offices and vendors to governments may not be aware that the systems they use every day to improve throughput, to extract efficiencies, or to create more effective program monitoring really fall under the definitions of ADS as proposed in current legislation. Governmental organizations in particular may not realize that they're using algorithmic or automated decision systems that are provided as part of a package of services from vendors. Also, government offices may suffer from rather siloed procurement or development strategies, and may not realize that they've built or developed overlapping ADS with codependencies that will create challenges for creating oversight mechanisms for both or all of the offices using them. Thus, ensuring that ADS legislation is robust to the somewhat messy reality of documenting ADS in government or private business, including identifying those dependencies, will require legislation to attend to supportive mechanisms, such as provisions of time or expert personnel. A fourth point is that we must include appropriate provisions to mitigate unintended consequences. Many of the proposed bills envision a new or expanded oversight office with the responsibility to design and review organizations' compliance with the expectations penned in ADS legislation. Creating or expanding these offices will present states with some challenges, including identifying and attracting appropriately qualified personnel.
The personnel needed for these offices must be able to meaningfully interpret algorithmic impact assessments, and they will need to do so in an environment of high sensitivity, privacy concerns, publicity, and technological change. As observed in many of the state and federal bills calling for STEM and AI workforce development, the talent pipeline is limited, and legislators should address the challenges of attracting qualified personnel as a key component. Finally, protecting the public from adverse decisions by automated decision systems may perversely raise privacy and security risks. Bills requiring handovers of training, test, or validation data, or which require making public the source code for each of these systems, may inadvertently open up opportunities for data breaches, exfiltration of intellectual property, or even attacks on the algorithmic systems, which could in turn create harmful situations for individuals who are interacting with those systems. As state bills progress, we would encourage deep collaboration with privacy engineers, privacy professionals, data protection officers, and cybersecurity and IP lawyers, in order to ensure that these assessments will not produce unforeseen perverse risks. We strongly and truly encourage lawmakers and agencies to continue to solicit the input of the business community, as well as civil society organizations and nonprofit organizations like FPF, as you work to develop these pieces of legislation further. We've sent resources to the committee, and I'd really be happy to talk about any of these further. I truly thank you for your time and attention this morning. Thank you, Mr. Chair.

Thanks, Sarah. I don't know if FPF is working directly with — you noted maybe a half dozen states at the beginning of your testimony that are doing work on AI right now. Are there granular recommendations that FPF is making, kind of in collaboration? And I'm not sure.
I think of you guys as a think tank, essentially, that is supporting work that maybe some of these states are doing. And again, this type of work is so nascent in a policy sense that states and legislatures don't have kind of internal resources to effect this type of work. The state of Vermont created a task force two or three years ago — some technology folks involved in that, some commercial folks, some folks with a legal background — but that was kind of the initial salvo. And so one of the things that we're considering as a state is potentially having a permanent commission in place that looks at these issues from a commercial standpoint, from a government technology standpoint, looking at privacy issues, a whole host of things. So to get to my question: one of the things that I think you said in the early part of your testimony was, essentially, be cautious in terms of how you proceed from a regulatory standpoint. From a macro sense, that's helpful, but has FPF come up with an "if you're going to regulate, do this" or "before you regulate, do this" type of recommendation as you're working with states? That's my general point.

Yeah, in our work with states thus far, I personally have worked most with California on design and recrafting parts of AB 13. We've also had the opportunity to work with Washington State on SB 5116. And in that context, we've provided some input and feedback on specific forms of regulatory oversight mechanisms — how they may need to be adapted to address not only the reality of algorithmic design and cataloging of models and data sets, but also what the ideal components might be for things such as impact assessments. As to how granular our advice has been: it really has been different per state. The California bill, AB 13, was quite narrowly tailored to financial instruments and financial organizations' use of automated decision systems, while Washington State's was quite broad.
Thus, obviously, our advice has varied. We're happy to continue to work with those states, as well as Vermont, to help you identify what potential caution points may arise in the regulatory instruments or the regulatory objectives that you have — considering that objectives may change, and will likely be what drives the instruments that you choose for regulation.

Yeah, and I'll just say — I don't recall if you had awareness of this before joining us today — we essentially have two bills, at a high level, that we're considering. One relates to a point that you talked about, which is that initially you've got to know what you have from an inventory perspective, and one of the bills in front of us very specifically relates to that. The other is much broader in terms of, as I said, a permanent commission. And I can't remember if it was testimony we took or some of the feedback we'd gotten from the task force that had been working in 2019 and 2020, but one idea was a precursor to specific regulation: essentially putting in place a code of ethics that people can comply with, or look to as kind of a North Star for how one operates in this field, before something more granular like specific regulation is put in place. But those were a couple of things that this committee at least has on its radar screen.

Yeah, in our initial feedback, one of the things we noticed is that there are overlaps between H.263 and H.410, but the definition of AI was not a point of overlap. Creating those overlaps, so that there's a little more harmonization, may be an important move to begin with.
We recognize that you've advised including a code of ethics, and we would certainly have some feedback on how to create mechanisms for harmonizing that not only with the many different forms of AI principle statements that are out there — at last count there are over 250 around the world — but also doing so in ways that are conscious of the current requirements for, say, professional engineers and their codes of ethics, and ensuring that those harmonize in a way that is neither duplicative nor prohibitive to the way in which engineers need to create these systems.

Thank you. We've got a couple of hands up from members, and I'm gonna go to Representative Rogers first. Go ahead, Lucy.

Thank you, and thanks for taking the time to share some of your expertise with us. I had a couple of questions, mostly on that part three — the implementation of oversight of automated decision systems — but before I ask, I was wondering if you would, kind of for our record and for the committee and watchers on YouTube, just give a little bit more background about the Future of Privacy Forum, to kind of orient us. I'm not sure everybody necessarily knows who you are, where you are, how long you've been around, what your overall mission is. If that's — I don't know, maybe I'd ask the chair if that's appropriate at this time. Okay, so I get a thumbs up.

So the Future of Privacy Forum has been around for 11 years. Our CEO and founder, Jules Polonetsky, is preeminent in the privacy world, primarily working on issues of data protection and privacy law early on. Our mission, our divisions, et cetera, have expanded in many ways over the past couple of years. I joined FPF in January of 2020 to help augment our work on artificial intelligence, as well as the review of secondary use of data from researchers and from corporate spaces, in order to fuel the design and development of AI systems and to advance overall research goals.
Our AI team is presently three people, but we have a robust and quite large education privacy and student data division. We also have people who focus on the global remit of privacy, so we have offices in the European Union, primarily in Brussels, with experts in European Union legislation, GDPR, et cetera. We have a legislation team that focuses on both federal and state privacy law within the US — as you'll know, there are many different privacy bills coming out this year, so we're tracking all of those. We also have individuals who are experts in health law and health data. So we do a lot of policy work, but we do a lot of technology work as well. One way you can think about it is that we speak both tech and wonk — we do both sides. But one of the things that is probably most notable about FPF is that we are data optimists. We're typically there to try to identify a neutral ground between what are considered to be strong, say, private or corporate concerns and strong public or advocacy concerns. We strive to find some place between those that can be a workable solution for privacy and data considerations. Does that answer the question?

Yep, it does. I just think it's helpful to have a little background. And where are you located?

We are in Washington, DC — that's our home. However, like everyone else, we have in some ways scattered to the winds in the last year.

Thank you. My other question — just, the part of your verbal testimony that's labeled part three: you talked about some of the challenges of kind of documenting AI, which as you know is a large part of one of the bills that we have, and the challenges of even knowing what counts as AI or not. And I guess, in listening to that part of your testimony, I was wondering if you could share a little bit more.
Is that coming from a place of kind of your perspective sharing with us that it's more or less a fool's errand, or from a place of: it is important and it is a valuable piece going forward, but just making sure we know it's not necessarily as simple an undertaking as — I guess, from your experience in other states, if you could speak to whether you see this as an important piece moving forward or not.

It is absolutely an important piece. It is utterly crucial for states. It's also crucial for businesses. So my reflection, and why we've included this here, actually comes from working with private industry, as well as working with states who are trying to identify all of the different systems that they have and how those systems interact with one another, in order to build what is either the company's enterprise-level AI or the state's sort of enterprise AI. One of the things that's widely recognized in the AI community is that building model registries is an utterly crucial task, regardless of where you are — and that includes even in research spaces. As someone who comes to the AI side from a background in computational social sciences, one thing I can tell you was always a struggle, and is a consistent struggle for people with my background — people who use AI as a tool for research — is that oftentimes you don't necessarily have adequate tracking for the models that you are using. That is a consistent challenge for reproducibility in research, but also for replication and for ensuring the trustworthiness of the research outputs. That ports to the enterprise environment, because oftentimes companies may not know exactly which models are being used. There are things built into one system that become part of another larger model system, which becomes part of another larger model system.
In fact, the way most ADS or algorithmic decision systems are built looks a little bit like Matryoshka dolls, where one small component is built into a bigger component. You may see the overall largest doll without necessarily recognizing each of the ones nested all the way down to the smallest system. So: absolutely crucial. We recognized — or from my perspective, I immediately recognized — that that was a strong component of this bill. And I think that is, one, unique in terms of the legislative landscape, but also, two, signals that you are aware of the messy environment that ADS already is.

Okay, no, thank you. That's really helpful perspective to have. I think also, you know, one of the things that I've become aware of that makes me think this is the right time to be passing this bill is that we don't have a huge number of AI systems right now. So it seems like, with all of the challenges you outlined, it could be a good opportunity to kind of work through some of those challenges at a time when the number in our inventory might be more like 10 or 20, whereas in a few years it might be more like 100 or 200. You know, and so just thinking that through is helpful. I guess my final question, and then I will turn it over to other committee members: coming out of that part of your testimony about kind of an awareness of some of the challenges this might bring — if you had a chance to look over the language of the bill, and I'm not sure if you did, are there any specific recommendations to our language that you would suggest to kind of meet some of those challenges? And again —
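The model registry and nested, Matryoshka-doll component structure described above could be sketched, in a minimal and purely hypothetical form, as a dependency-tracking table. Every system name, version, and field here is invented for illustration — real registries track far more (training data, owners, intended use).

```python
# A minimal sketch of a model registry that records which nested
# components each ADS relies on, so hidden dependencies can be surfaced.
# All entries are hypothetical.

registry = {
    "eligibility-screener": {"version": "2.1", "depends_on": ["risk-score"]},
    "risk-score":           {"version": "1.4", "depends_on": ["nlp-intake"]},
    "nlp-intake":           {"version": "0.9", "depends_on": []},
}

def flatten(name, seen=None):
    """List every component a system ultimately relies on, outermost first."""
    seen = seen if seen is not None else []
    if name not in seen:
        seen.append(name)
        for dep in registry[name]["depends_on"]:
            flatten(dep, seen)
    return seen

chain = flatten("eligibility-screener")
# chain -> ["eligibility-screener", "risk-score", "nlp-intake"]
```

Even a table this simple would let two offices discover that their separately procured systems share an inner "doll" — the codependency problem the testimony flags.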
There are certainly areas where I think we could offer some advice in terms of clarifying the language — making that language harmonized not only with other pieces of state legislation, but with other discussions of model registries that we know exist in the private industry ecosystem, or even those that exist amongst proposed federal legislation. We're certainly happy to provide that legislative support and tailoring. I think it would take us into some parsing of words that I'm not sure everyone has the endurance for here, and I know you still have two speakers, but I am very happy to speak with you and work on this offline, because yes, we do have some considerations and advice.

Oh, wonderful. Thank you. I'll speak for myself and say that I, at least, would greatly appreciate that. Thank you.

Happy to do so.

Thanks, Lucy. Representative Sims.

Yeah, thank you so much for this testimony, Dr. Jordan. I really appreciate seeing our work in this broader national context and getting your expertise and perspective on it. As our chair mentioned, our current bills focus kind of primarily on setting up a commission to provide oversight on this topic, and look maybe more narrowly at government procurement and use of AI systems. So I'm particularly interested in hearing a little bit more from you about this question of attending to the needs of the audience, and what role, if any, the state might play — beyond just our own state government operations — in regulating the disclosure of when systems in the private sector are used, so that audiences have transparent use of technology. You mentioned the NIST principles as possible guidelines, but I'd love to hear more thoughts, since maybe we haven't spent as much time thinking about the state government's role in transparency around use of systems in the private sector, and our regulatory responsibility or role in that situation.
Sure. So let me start with the question you had related to procurement and use. One of the things that I think is probably important here is to realize that governments have that choice of make or buy in terms of algorithmic decision systems, just as you have that choice for almost everything else. Whether or not you've engaged with, say, development of these systems as a public-private partnership — perhaps with the University of Vermont or other research institutions — determining what sort of transparency requirements you would like to have going forward for these arrangements may be a good place to start. What, in the future, would you like to see as the statements, the clarity, around the systems that you co-build or purchase? And then, working backwards, identify what you need to know about the systems you presently have in use. Identifying whether each of those systems is used in the way it was intended is, I think, also an important question. Is it used for the design that you specified in procurement contracts? Because one of the things that's interesting about AI and machine learning is that it is fluid, it is adaptable, and oftentimes we can take a system from one venue and use it in another. Identifying whether that's been done, and tracing the history of ADS use from procurement to final use, may be a very useful task for your commission to undertake. Not only would that show you what you are ultimately interested in using the systems for, but it will also show you where future procurement contracts need to focus in order to make sure that future uses of ADS are transparent. As for requiring disclosure of private systems through public agencies — I'm not sure I completely follow where you'd like to go with that. Perhaps you could provide a little bit more context for me to respond to.
Or again, I'm happy to work with you offline to narrowly tailor that language to the objectives that you have.

Yeah, so I mean, I think we're trying to tease out what is appropriate oversight by state government and state policy in the private sector, recognizing that we're just a state within a federal landscape. We wanted to understand what other states are doing in this area and the appropriate roles for us. And I guess I'm wondering about some of these suggestions around establishing principles — that systems should provide explanations that are understandable to users about when AI is in use. I think I've heard other testimony around allowing opportunities for review of decisions that are made. So if, let's say, I'm applying for a job and my application has been screened by a private company using AI, does that company have to disclose that I was denied the job and that the decision was made by an automated decision-making system? And is that something that the state of Vermont should consider regulating or requiring within our state boundaries?

Sure. So there is a lot of activity in those areas you just mentioned, such as hiring and, obviously, within education. Determining whether states would like to take on that role is actually a paramount question, because while there are federal bills being proposed to help track some of these — and who knows what might happen with them — the question is whether the states are going to be the first laboratories of democracy to get that done. It seems as if that may be the likely path: state action in particular venues is where we learn. So, as mentioned, AB 13 in California is tailored towards financial systems and the automated decision systems there. There are bills in other states — for example, in the Northeast — that are trying to restrict the use of automated decision systems in hiring.
There is a state role that is tailored to the constituents' needs within that state for oversight. However, the question of whether or not states wish to take up the expense, whether personnel expense, time expense for legislators, or time expense for agencies, to be able to oversee these systems, particularly in private use, is an important question to ask, because this is not likely to be an easy system to build. Obviously, in your H.410 bill you've already proposed a fairly extensive AI oversight and AI committee. Whether or not that particular committee structure is the appropriate choice for reviewing things like private use of ADS systems as it pertains to the citizens of Vermont, that is an important question to ask. The last thing I'll say about this is: ask if there are specific and special conditions pertinent to the citizens of Vermont where an ADS system is often used. I don't know what that context might be, but asking whether there are particular things that your constituents are concerned about, where ADS systems are used for their particular needs, and determining what those citizens will need to know about those systems to be able to appropriately navigate with them and/or around them, I think, is an important first place to start. Pay attention, too, to what's going on in other nations. For example, you mentioned the right to review of decisions; seeing what comes out of the right to an explanation in the GDPR may be a place for you to begin to look, to see what the difficulties or the challenges are and what the ultimate promise, the final return, is on being able to do that, and whether the costs and the benefits come out in favor of your state doing that. But again, I'm happy to drill down very deeply into what it is that you would like to see regulated that pertains specifically to the needs of your citizens.
Great, like Representative Rogers I'd love to take you up on that offer. So thank you. Sarah, before we move over to Charity and Ryan, I just have a general question about, maybe it's the lobbying, maybe it's, I don't know if sides are being formed at the national level or at the state level, what you're seeing in the different states that you're working with. So an analogy we've talked about in this committee in the last month or so was that as the internet grew in importance over the last quarter century, legislative and regulatory bodies suddenly woke up, I don't know if it was five years ago or more recently, to the fact that there is no regulatory oversight, certainly none that's effective. The genie's out of the bottle, and right now there's a scramble on some of these issues, and who knows if we'll ever get our hands around them from an AI and ADS perspective. I think some states are trying to get ahead of that right now, so that in 2030 there's not the same realization that we had in 2018 that we have missed the boat on this. And are there battle lines being formed, with, let's say, corporate interests from an ADS and artificial intelligence standpoint on one side, and, maybe this isn't always where the battle line is, but for the sake of this conversation I'll say more of a regulatory, privacy advocacy group on the other, trying to support the privacy interests or address the negative effects that we're concerned about with AI, and there are many positives, but what are those battle lines? Who's on one side and who's on the other? We haven't seen those really start to form in Vermont yet, but I'm guessing they've formed already in California, and I'm sure they've formed in Washington, and what are the interests on either side of those? Because we're gonna see it here. Yes, you will eventually see the battle lines being drawn in your state.
It is probably easiest to characterize those who are often against ADS or AI regulation bills, and it will be very difficult to actually pin down who is encompassed in this, because the form of AI that's being discussed, or the sector in which it's being applied, sort of drives who's included in the "please don't regulate this space" camp. But that do-not-regulate camp has a couple of different flavors. One is that regulation, even if it's regulation in the advancement of privacy and data protection, sort of a minimum form of regulation, the argument is that it'll stymie innovation; that any type of regulation you institute will drive tech companies from your state borders, will drive them from your cities, and you will not reap the benefits associated with having them sited there. Also, there are some arguments that regulating AI will be impossible, and this speaks to Lucy's question about model registries: that we've already built this tremendous ecosystem, and going back and asking us to trace the provenance and the versions, et cetera, of AI as enacted right now is so difficult and so cumbersome that you would essentially be asking for modern life and convenience to stop in order for us to achieve this. Another argument here is that the IP, the intellectual property, loss that would come from making visible all of the different training data, models, et cetera, is so tremendous that it will damage the profitability of firms or damage their ability to compete on a global or even national landscape. A third part of this is that by creating any form of regulation you'll slow down investment, and that in order to spur, say, venture capital pouring into artificial intelligence, we must keep it a regulation-free environment in whichever state. And so that is kind of the do-not side of this. Again, it's difficult to characterize who is included in this, because it does vary by sector and it does vary by state.
On the other side of the coin, for people who are very strongly for regulation, it often comes from the perspective of do no harm, right? The argument is that AI or automated decision systems, algorithmic decision systems, however you characterize them, have caused harm. Therefore, if states wish to prevent harm, particularly to specific populations, they must shut down the uses of these systems almost immediately. Another argument on the for-regulation side is that there needs to be an extraordinary level of transparency around these systems, because offloading the cognitive decision making by states, by judges, by any number of state officials to systems actually displaces the source of legitimacy and authority of the state. That's a little bit more of an academic argument. And finally, that it is impossible to get a satisfactory explanation from these systems, as one may ordinarily have gotten through pursuit of appeals in the court system, explanation from officers, from agents, et cetera. So AI, by being used, displaces what we expect from a state. Those are two poles. They don't necessarily speak the same language. And one of the challenges of crafting legislation will be identifying a middle ground where you're able to speak to both of those sides: those who are worried about investment, IP, and innovation, and those who are worried about harm, transparency, and explicability, and making sure that we square something between those. Thank you, that's helpful for what's coming down the pipe. So thank you. I don't see any other hands up right now. And so thank you, Sarah, and thanks for being here today. Charity and Ryan, I wanna turn to you again to help us through this conversation. Thank you for joining us today from the AG's office. Thank you for having us. I'll just begin by kind of orienting the committee as to where the attorney general's office intersects with this kind of work.
I am the chief of staff, and as chief of staff I oversee our legislative work, but I also came from the consumer unit; I was an assistant attorney general before I was chief of staff. And that's where I met Ryan Krieger, who is our subject matter expert on this. So let me just begin by listing off some of the ways in which our office deals with privacy or has knowledge about privacy issues, including AI. The first, of course, is the Consumer Protection Act. And that is an act that we recently used to sue an AI company called Clearview AI. I'm not sure if you are familiar with that company, but Ryan can probably speak more to that case, being one of the attorneys working on it. We sued them, I think that was the last live press conference we did before we went into quarantine last year. So it's been a little over a year since that lawsuit was filed. We also have a couple of other pieces of legislation that are relevant and, as the chair pointed out, very recent. One is the data broker registry, and the other that comes to mind is the data breach notification act. And again, if you are more interested in privacy broadly, Ryan can speak to these things, but we of course see a lot of issues around data breaches at our consumer assistance program, where people are calling for assistance with that and getting help with identity theft as a result of data breaches and that kind of thing. Also at our consumer assistance program we have the small business initiative, so we have worked to educate small businesses in Vermont about data breaches and about protecting themselves and the data that they may hold onto from customers or employees. And then, shifting gears a little bit, we also have a lot of expertise about discrimination issues in our civil rights unit, which oversees and deals with discrimination in the employment context. So the folks who work there have a lot of knowledge about discrimination in a general sense.
And then there's our internet crimes against children task force, which is located in our criminal division, and they work on issues around what was formerly called child pornography and is now known as child sexual abuse material. And they use AI in a very, very limited capacity, which essentially is to use images that they already have to quickly identify whether a child who might be in danger is located among those images. So they actually use that technology in this very narrow context. So those are kind of some of the ways that our office intersects with this work. And as a result, probably, of all these different areas, we have a subject matter expert in Ryan, who has been in our office for over 10 years. He literally teaches the class on privacy at UVM in his spare time. He's really incredibly knowledgeable about these issues. So I'm really pleased that he's here today and can provide more context, more information. We're not here today to weigh in on the bills that you mentioned specifically, but if you would like us to, we can come back later or do that in writing, whatever you prefer. But with that, I think I'll just turn it over to Ryan, and I'm sure you'll have lots of questions for him. He's very knowledgeable. So I'm glad that we could be here this morning. Thanks. Thank you, Charity. My name is Ryan Krieger. I'm an assistant attorney general in the public protection division of the attorney general's office. Not to contradict what Charity just said, but I am not a subject matter expert on AI specifically. It is an area that the whole attorney general community is very aware of and is studying and trying to get up to speed on. It is an issue with significant consumer protection implications, but I believe you are probably gonna hear from other witnesses, including the witness you just heard from, who know much more about the actual technology of AI and things like that.
I thought that I would give you kind of a higher-level overview of some first principles around consumer protection and privacy, and a broad overview of what some of the issues are with AI in particular that concern us. And I apologize if I'm reiterating things that you already know, but I will just say the big issue with AI that we're concerned about, that everyone's concerned about, is, well, there are a number, but I think bias is probably one of the biggest ones. Government entities and companies want to have consistent systems for making decisions, and AI often promises a kind of objective approach to decision making, which can take bias out of the process, because we all know that human beings are biased. We know that there is institutional racial discrimination that can kind of be baked into decision making. So, often coming from a very good place, entities don't want to make biased decisions. So if they can rely on a computer system to spit out a number and say, this is the course of action you should take, then they're likely to do that. And I think people are more aware of it now than they used to be, but of course, we're now learning that the AI systems that are spitting out these numbers are often also biased. Now, whether they're more biased than the human beings would have been making the decision in the first place, or less biased, or exactly as biased, is a question that I think a lot of people are wondering about. But one of the classic examples, which you may have heard, or if not you'll probably hear multiple times, involves a ProPublica investigation of a criminal justice algorithm that was used in a number of places, but they studied its use in Broward County, Florida. And it was basically used for determining sentencing recommendations, whether to give bail, what level of sentence to give.
And it basically ended up incorrectly labeling African American defendants as high risk at nearly twice the rate of white defendants. And it tended to undercount the likelihood that white defendants would re-offend. And the explanation for this was that, generally speaking, the way AI works, and I attended a conference at Harvard Law where they explained that when you hear the word AI, sometimes it's easier to just think statistics. I mean, basically you have an algorithm that is trying to find correlations across massive sets of data in order to spit out an output. And so AI is generally trained on a training set of data. And if the training set of data has biases baked into it, then it is going to spit out a biased outcome. So if you train a system that is making criminal justice determinations on a training set in which African American people tended to get much worse sentences than white people, then it's probably gonna spit out a similar bias. And I suspect that that's one of the big concerns that this committee and the state of Vermont have in thinking about our government use of AI. We don't want to make that same mistake. We don't want to have systems that are going to create additional biases like that. And of course, part of the challenge here is, again, because it's math, because it's a computer spitting out a number, people using the systems tend to trust it. It seems more objective, when really it can just be reissuing some of the same biases that bedevil us all the time. So when we talk about AI, there's that kind of decision-making recommendations piece. It's a broad category. Facial recognition falls under AI because machine learning is used to develop facial recognition. Last session, there was a law passed pretty much prohibiting the use of facial recognition by government entities. So I think that has been at least dealt with for now.
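The "AI as statistics" point above can be sketched in a few lines of code. This is a hypothetical toy, not the COMPAS system or any real model: the groups, labels, and rates below are all invented for illustration. A "model" that does nothing more than learn the rate of a high-risk label per group from biased historical decisions will faithfully reproduce that bias in its own predictions.

```python
from collections import defaultdict

# Hypothetical training data: (group, was_labeled_high_risk) pairs in which
# past human decisions labeled group "a" high-risk far more often than "b".
training = [("a", True)] * 60 + [("a", False)] * 40 \
         + [("b", True)] * 30 + [("b", False)] * 70

def train(rows):
    """Learn P(high_risk | group) -- statistics in its simplest form."""
    counts = defaultdict(lambda: [0, 0])  # group -> [high_risk count, total]
    for group, high_risk in rows:
        counts[group][0] += int(high_risk)
        counts[group][1] += 1
    return {g: hi / total for g, (hi, total) in counts.items()}

def predict(model, group, threshold=0.5):
    """Flag as high risk whenever the learned group rate exceeds the threshold."""
    return model[group] > threshold

model = train(training)
# Two otherwise-identical individuals, different group: the model flags one
# and not the other, purely because of the bias baked into the training labels.
print(predict(model, "a"))  # True
print(predict(model, "b"))  # False
```

Real systems use far richer features and models, but the failure mode is the same: the output can only be as fair as the historical labels it was trained on.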
AI is also used for predictive analytics and scoring generally, which is used by marketing companies and advertisers and insurance companies and banks and pretty much everyone to kind of build profiles on individuals. I think from a consumer protection standpoint, this is something that people are kind of worried about. This has real privacy implications. When we passed the data broker law a few years ago, we testified about some of the scoring of consumers that data brokers do, and AI is part of what goes into this. And so companies would score people based on, well, some of the scores were likelihood to have an addictive personality, likelihood to be suffering from dementia. I mean, when you take a large enough data set and you feed it into a computer, you can come up with some very interesting analyses of people, and if people knew that these judgments were being made of them, they may feel very uncomfortable with those outcomes. So I think one concern about AI is that it kind of superpowers the ability to get into some of these major privacy concerns around what companies know about us and what they're trying to learn about us. So these are some of the chief concerns around AI. Now, based on some of the conferences I've been to, I think another issue that comes up with AI is self-driving vehicles. I don't think that's what we're too concerned about at this point; I don't think that's something we need to dwell on too much. But just to speak to some first principles: the issue is probably about definitions, and how are we gonna define this? And agreed, definitions are hard. You wanna try to get it right. But I would suggest that if you do decide to pass a law, there will be a lot of discussion about definitions. I'm sure you will be hearing from a lot of the companies that will be directly impacted by such legislation talking about definitions.
When we did the data broker law, we spent a lot of time talking about definitions. Charity mentioned the Consumer Protection Act. This is currently the main law that governs privacy in the United States across the board. There are sector-specific laws, like COPPA and the Fair Credit Reporting Act, but the only law that applies to all businesses is section five of the FTC Act, or its state equivalents, the consumer protection acts. The FTC Act was passed in 1914, and section five says unfair and deceptive acts and practices in commerce are illegal. That's the definition, okay? What should businesses not do? Unfair things and deceptive things, that's it. That is a law that we have had for over 100 years to regulate all of industry. And when that law was initially passed, that was intentional. It was intentionally a very broad definition, because the drafters knew that they could not list all of the individual things that a business might do wrong. And if they had tried to, they probably would have missed poor data security. They probably wouldn't have thought to put that in there in 1914, right? So unfairness has kind of worked as a definition going forward. If you think about it the other way, if you go overly narrow: I think we can all agree that robo-calls are a bit of an issue, right? So the main law governing robo-calls was passed in, I believe, 1991, the Telephone Consumer Protection Act. And that basically put huge restrictions on the use of auto dialers. And it had a very specific definition, a definition which a lot of companies have been able to get around and essentially do robo-calling without using something that falls within the definition of auto dialer. In fact, earlier this month, the Supreme Court came down with a decision in a case in which Facebook had been sued for basically sending text messages to people that they did not want.
And Facebook was able to argue that the technology it used to create these automated text messages was not an auto dialer, and therefore they could continue to do what they were doing. So if you go too narrow or too specific in a definition, it might be a definition that works for exactly how the technology works right now. But first off, the technology will change fairly quickly, and it could even perversely incentivize businesses to adopt technologies that fall outside the definition. This is something we see with the Fair Credit Reporting Act, which is basically an act that tries to get at predictive analytics. I mean, your FICO score is basically a predictive analytic that says, what is your likelihood to repay your loans? What is your likelihood to be responsible with your finances? That's all it is. And so the Fair Credit Reporting Act goes after that specific predictive analytic and leaves behind hundreds of other predictive analytics. The same companies that are providing your FICO score are also providing a number of other predictive analytics, and they are intentionally trying to design their businesses so that they do not fall under the Fair Credit Reporting Act, so that they do not have to give notice of adverse actions, and they don't have to do the level of credentialing and all the other things that the Fair Credit Reporting Act requires. So I'm not suggesting that you go super broad in your definition. I'm just saying that there are arguments for being specific, but also for having a definition that is malleable and that can take into account what happens in the future. Another thing that I heard is: telling people, give us an inventory of exactly what AIs you're using right now, would be a heavy lift, because people may not know, right? And absolutely. But I think we should understand that we are coming into an industry and area that up until now has been essentially completely unregulated.
So there has been no incentive necessarily to have that transparency, and certainly no requirement to have that transparency. So of course there's not gonna be that transparency. But I would not argue that we should not try to have more transparency just because there is not currently transparency, right? At one point, companies did not know whether there was lead in their products, including products that went to children, because there was no requirement to know whether there was lead in children's products. And after laws were passed saying, we don't want this anymore, there were now strong incentives for companies to figure out whether there was lead in their products. They may not have known on day one, but they were incentivized to figure that out. And I think that this area is one that would definitely benefit from more transparency. I shouldn't say I think; I've heard advocates argue that companies should have to be more transparent about their data sets, that they should have to be more transparent about how their algorithms are developed. And there will be arguments on the other side saying that there are intellectual property issues and things like that. But that is something that I think the committee may want to consider going forward. The general ethos of consumer protection, I would say, would be: more transparency is better. And if there are concerns about intellectual property and things like that, you know, those can be addressed. There are a lot of government entities that deal with very sensitive areas, banking, the Food and Drug Administration, where there are trade secrets and intellectual property, and there are confidentiality mechanisms to be able to deal with some of those issues. But I don't think that at this point the committee is considering doing some sort of broad regulation.
They're thinking about doing an inventory and trying to get their hands around the issue, which definitely should be the first step to figuring this out. Now, again, our office, especially the Public Protection Division, does not regulate government use of anything. We deal with what is going on in commerce. So we're just giving kind of broad suggestions from what we've learned about privacy, generally speaking. And I'm happy to answer questions. I can talk a little bit about what we've done with the Clearview AI case, which is a facial recognition issue. But I'm really here to lend our expertise to the committee, whatever we can help with. Thank you, Ryan. Thank you, Charity. Representative Sims has her hand up. Go ahead, Catherine. Yeah, thank you for this, Ryan. I'd love to hear more about Clearview. You know, again, not something that our committee has spent as much time on, but I'm very interested in this issue of facial recognition and mass surveillance and the implications for us here in Vermont. Sure. So Clearview AI is a company that has screen-scraped three to four billion photographs from social media all over the internet and created a massive database, which they have applied facial recognition technology to. And by the way, I want to be very careful in what I say here, because we are in the middle of active litigation with Clearview AI. So I'm going to talk about what we have put in our complaint and what has been made available in public sources. And this came to light because of reporting by Kashmir Hill at the New York Times. A big article came out in January 2020. Interestingly, they first came across our radar here in Vermont after they filed in our data broker registry. And in the question in the data broker registry that asked, do you collect information of children, they said yes. And that kind of put up a red flag and made us look closer at it.
And, you know, dot, dot, dot, we ended up suing them. We are actually the only state that has sued Clearview AI. We are the only government entity in the United States that has brought an enforcement action to date. There have been some class action cases; the ACLU is suing them in Illinois. Other countries have been a little bit more aggressive. Canada has essentially banned them at this point. There are investigations going on in the European Union; England and Australia are teaming up on an investigation. But basically, when the reporting first came out in 2020, it looked like they were going to market this technology pretty broadly. And what the technology actually is, is essentially an app on your phone that you can upload a photo to, and it will spit back the photos that it finds in its internet-based database, and kind of link to where those photos showed up. So what this means is you could see somebody walking down the street who you do not know, take a photo of them, upload the photo to the app, and maybe it brings back their LinkedIn profile. So now you know the person. And now you could do a little bit more searching and find out all sorts of stuff about this complete stranger that you've just seen on the street. So you may be able to imagine the potential for abuse of a product like that. And this is a technology which has been possible for a while. Google could have done it, Facebook could have done it. The CEO of Google several years ago kind of famously said, and we cited this in our complaint, that they developed this technology and chose not to implement it because of the implications. And the notion of Google not wanting to implement something because they thought it was a bridge too far shows you how much this was a bridge too far. But those ethical concerns did not impact the folks at Clearview AI.
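For a sense of how such an app works mechanically, here is a heavily simplified sketch of the general technique behind face-matching services: each photo is reduced to an embedding vector, and a query photo is matched by nearest-neighbor search over the database. Everything here is invented for illustration, the vectors, URLs, and threshold are made up, and this is not Clearview's actual system; real deployments use learned face embeddings with hundreds of dimensions and approximate-nearest-neighbor indexes over billions of photos.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors (1.0 means identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical database: face embedding -> URL where the photo was scraped from.
database = {
    (0.9, 0.1, 0.3): "https://example.com/profile/alice",
    (0.2, 0.8, 0.5): "https://example.com/profile/bob",
}

def identify(query_embedding, db, min_similarity=0.95):
    """Return the source URL of the closest stored face, if it is close enough."""
    best_url, best_sim = None, min_similarity
    for emb, url in db.items():
        sim = cosine(query_embedding, emb)
        if sim >= best_sim:
            best_url, best_sim = url, sim
    return best_url

# A new photo of the same person yields a nearby embedding, so the lookup
# links the stranger in the street back to a scraped profile page.
print(identify((0.88, 0.12, 0.31), database))
```

The privacy problem is visible even in the toy: once the database exists, one photo is enough to pivot from an anonymous face to everything linked from the matched URL.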
Now, I should say that since the litigation was brought, they have claimed that they have stopped marketing it to private individuals. They want to limit the sale to law enforcement and to private security services, whatever that means. Shortly after the articles came out, there was a data breach in which the list of customers was leaked, and it was determined that companies like Macy's, the NBA, a lot of companies that you wouldn't have thought would need this kind of security use of facial recognition, were using it. Interestingly, after we sued, it came out publicly that the only state that did not have any law enforcement using Clearview AI was the state of Vermont. And the way Clearview AI had been marketing its product was basically giving out free samples to anyone who wanted it. So a lot of law enforcement agencies, police, state troopers, government agencies, even an attorney general's office in another state, were using Clearview AI without their leadership knowing that they were using Clearview AI. And with some frequency we have seen articles come out: the NYPD is discovered to be using Clearview AI and immediately prohibits it; the LAPD is discovered to be using Clearview AI and immediately prohibits it. The state of Virginia just issued a statewide prohibition on the use of facial recognition, which was instituted because one of the legislators discovered that law enforcement in Virginia was using Clearview AI, and so they proposed a bill which, interestingly, was passed unanimously, complete bipartisan support for prohibiting facial recognition. We of course were ahead of the game on this one. We were actually the first state to prohibit facial recognition across the board, both by executive order and by legislation. So we were ahead of the curve on that one.
But in terms of use of facial recognition by police, our concern there, the argument we've heard is, well, isn't it okay if it's just used by police? And our office's response is no, it's not, for a number of reasons. One, the way Clearview has been providing it has just been irresponsible. No guardrails around the use of the technology. In fact, there was a marketing email that went out in which they recommended that users go wild with the product, just search on whoever they want to. And these are police officers who are supposed to be using the product for very specific uses. If police officers are getting licenses to this without their offices even knowing, then they have a very, very powerful tool over which there aren't any sort of controls, and Clearview did not seem interested in implementing those kinds of controls. And this is kind of, writ large, what the issue with privacy is in general, which is to say that techies are gonna tech. They're gonna build stuff that is cool and that stretches the limits of what is possible. And that's great, we want companies innovating, we want them doing that, but they're not necessarily thinking of the privacy implications and the ethical implications. And frankly, the market incentives don't encourage them to do that. There's venture capital to be had if you can come out with something for which there's a market and which people will pay money for. And the reason we're in this position is because we do not have any sort of privacy regulatory structure in place to say otherwise. So companies are gonna kind of run out ahead of the curve, and they're gonna develop this stuff until someone can come back and say no, and we're in a very reactive mode right now. We have to wait until the thing happens and then becomes public.
And then we have to say, no, no, no, you shouldn't have done that two years ago, which is hard to do now that it's kind of embedded, now that there are maybe hundreds of law enforcement agencies using the product, and they're saying, oh, but now everyone's using it, this thing which maybe we shouldn't have built in the first place. And that's where we are right now. That's why it's so important to have some sort of broad privacy regulation, whether it's at the state level or the federal level, so that we can get ahead of these issues before they become embedded and give rise to that argument. And arguably, a lot of the privacy practices that we see right now started 10, 15 years ago. If we had known 10, 15 years ago that they were gonna do what they're doing now, we probably would have said, oh, no, no, we have an issue with that. But there's been this kind of slow creep, we have been very late in discovering it, it's been very opaque. And so now we're here, and you hear, well, the horse is already out of the barn, it's already out there, so, you know, sorry, you're not changing business practices now. And it's like, well, no. I mean, people were hiring children for labor for hundreds of years before we said, no, this is wrong. So, I mean, we're only about 15 years in here. So it is not too late to say we don't like the trend, we don't like the way this is going. And, you know, if you want, not scare tactics, but to really understand where this could go, you could look at how this technology is being implemented in some other countries with more authoritarian regimes, to see some really scary stuff, the worst-case scenarios. I mean, you don't have to speculate about this stuff; it's already being implemented in some countries. And so, you know, that's what we're here for, to head this stuff off before it goes too far. Ryan, thank you for that overview.
And some of the things you touched on, I think, are things that the legislature will also be wrestling with, in terms of the fact that there are few institutions more siloed, unfortunately, than the Vermont legislature. But in terms of some of the commercial aspects of this, some of the constitutional privacy aspects of this: in spite of this committee's title, we actually only have jurisdiction over technology as it's used within state government, not, broadly speaking, technology in society or the commercial aspects of technology. So one of the bills that we're looking at, H.263, is very specific about the inventory of technological systems and automated decision systems used in state government. We've had testimony from the Agency of Digital Services. There are somewhere between 1,000 and 1,500 software systems that the state employs; I think 1,200 was the number, but that's a rough number. And also, to the extent that they've been identified, automated decision systems and artificial intelligence really are only employed in a pretty small number of those. So at this juncture, the sense we have is that it's much better to get our arms around this inventory now, as opposed to three years from now, when this type of software will be expanding geometrically. But the thing that I want to bookmark for the Attorney General's office is that there's a part of H.263, from an inventory perspective, that looks at whether an automated decision system makes decisions affecting the constitutional or legal rights, duties, or privileges of a Vermont resident. And then later in the bill, I think, it talks about other issues related to the legal rights, duties, and privileges of folks impacted. And there are questions as to whether the Agency of Digital Services has the capacity to make some of that assessment.
And to the extent that the AG's office might be involved in directly looking at some of these software systems that we currently employ in the state, again, to look at what constitutional issues are at play here, or how these affect the legal rights of citizens of the State of Vermont: the question is how we want the Attorney General's office involved in looking at these things. Do we want to replicate those resources, or make sure they're available, in the short term or the long term, in the Agency of Digital Services? Is it important that we have those capabilities involved in decisions about what type of technology we use in state government, so that those legal and constitutional issues related to discrimination are front and center? So that's more of a human resource decision that I think we've got to make in state government. But I just want to bookmark that as something that's in this bill, and we might be further reaching out to the AG's office as to how we assess those things, and whether we need to pull the AG's office in as a resource in state government for taking a look at and assessing those things. Because frankly, I don't think we have that capability at the Agency of Digital Services at the moment. So that's just a flag I want to put up. Representative Chase, did you have a question? I did, thank you. Thank you, Ryan. You mentioned a couple of things: with the autodialers, having legislation that's too narrow and prescriptive, and with Clearview, perhaps being a little bit behind the ball, having legislation that may be a little too broad and doesn't exactly keep these companies from doing unethical things, or whatever.
From your position, could you elaborate a little on how we can tread that line? Perhaps, to what degree does legislative intent at the beginning of bills create a structure by which a court could look at a situation and say, okay, this particular situation wasn't identified two years ago, but it is clearly against the intent of what was being sought? I would suggest that legislative intent certainly is important to consider, but from a legal standpoint, it's kind of the last-ditch backstop. You want the language of the bill to say what the bill should be. Legislative intent comes in when the language is so confusing that the courts have to go to some other source; the plain language is what they're gonna look to first. If you want a law that is going to get ahead of the issue, I think it makes sense to have a law that is based on principles, on broader notions of what should or should not be done. I'm not suggesting that we necessarily adopt the GDPR, I'm not suggesting anything one way or the other, but the GDPR is an example of a law that really just lists very broad principles, an enormous number of them. It does not try to go too far into the details, which I think has caused an enormous amount of frustration in the business community. So that is the downside. And it's also a very broad, sweeping law, but it does try to get at everything. In fact, it has a section on automated decision-making, Article 22 of the GDPR: "The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her." That broad, just that statement. And then there is some other language in there.
So as far as the law in Europe, which a lot of companies here already have to comply with, there is this attempt to address AI. Again, our office does not have a position one way or the other on whether that's the way to go about it, but I think it does make sense to try to get ahead of it. There are a couple of different ways you could go about doing that. One would be to have a law with those broad principles. Another would be to have some sort of equivalent of the data protection authority that each country in the EU has; California's latest law actually creates an equivalent of a data protection authority in California. Interestingly, the city of Portland, I understand, has a commission which has to approve any Portland government use of anything that might have privacy implications. I don't know that any other government entity has created something similar to that, but that's another model to potentially look at. You've talked about having a commission that will look at AI. Obviously, these are very resource-intensive solutions, but at the end of the day, these things are gonna have to have people with dedicated resources and expertise to look at them. That said, we do already have kind of an ad hoc way of addressing these things. During COVID, I was pulled into meetings when they were talking about contact tracing and what the privacy implications of contact tracing might be. That is what we all do here in state government: we lend a hand. Even though health and AHS is not my area, we all help each other out where we can, and so I think that if there's a legal analysis to be done, someone in the office, whether it's GTAL or consumer protection, will lend their expertise. If it's such an overwhelming thing that these analyses have to happen every day, then it becomes a resource issue, and maybe a new position has to be created, a new FTE. Thank you.
And that's Portland, Oregon, Maine, or one of the others? Oregon. Oregon. Thank you. Sarah, Charity, Ryan, thank you all for being here with us. I really appreciate your time, and we likely will be reaching out to you again in the future. But as we table-set for the work that we're doing to craft a piece of legislation, this is really helpful, fundamental information, so I appreciate your time. For members, I think we're gonna take a three-minute break before we start our 10:30 hearing, just for your good lumbar health.