I think we're at the early stage of algorithmic regulation and of reining in the free hand that tech companies have had over the past decade or so. What we need first is an inventory of AI systems as they're used in government. Is your police department using facial recognition? Is your court system using criminal sentencing algorithms? Is your social services department using an algorithm to determine access to healthcare or food assistance? You need to figure out where those systems are before you can know where to ask for more transparency.

When we're using taxpayer dollars to purchase an algorithm that's going to make decisions for millions of people, there needs to be a risk assessment of who could be negatively impacted. For example, Michigan purchased the MiDAS system for over $40 million; it was designed to send out unemployment checks to people who had recently lost their jobs. It accused roughly 40,000 people of fraud, many of whom went bankrupt, and the algorithm was wrong. That system obviously wasn't tested enough before it was deployed.

Specifically in the finance industry, banks are allowed to collect race and ethnicity data on mortgage loans. I think we need to expand that so they can collect the same data on personal loans, car loans, and small business loans. That kind of transparency, allowing regulators, academics, and others to study the decisions these systems have made, is necessary to hold those companies accountable for the results.