Thanks for joining us for this important conversation. My name is Suresh Venkatasubramanian, and I'm thrilled to be here representing the White House Office of Science and Technology Policy. OSTP is a team charged with advising the President on issues of science and technology. We work with agencies and departments across the US government, making sure that every American can take advantage of groundbreaking research, new discoveries, and critical innovation, from our health and environmental quality to our security and shared prosperity. In this session, we'll be focusing on the technologies that touch every part of our daily lives. These tools have great potential to make life easier and fairer for individuals and communities. At the same time, we're seeing what can happen when powerful new technologies go unchecked, with a growing body of evidence showing that the harms of AI and data-driven tools often fall unfairly on vulnerable and historically excluded communities. President Biden has summed it up well: this could be a moment marked by peril, or we can make it a moment of promise and possibility. That's why the administration is fighting to get development right, to ensure new technologies are rooted in our common values: equity, accountability, justice, and integrity. We need a roadmap, a set of democratic principles, to ensure that all Americans can share in the benefits of innovation. Today's event is one session in a multi-part series to engage the American public in the process of developing that roadmap and ultimately creating a bill of rights for an automated society. Over the coming months, we're bringing together experts, practitioners, advocates, and government officials to discuss the risks, harms, benefits, and policy opportunities of automated technologies. And we're amplifying the voices of communities in the process. In addition to participating in the series, there are several ways for the public to weigh in.
You can email us about how artificial intelligence has made an impact in your life. You can take part in a listening session about your experiences with biometric technologies, or respond to our request for information by January 15, 2022. And with that, I'll hand things over to our host for today, the New America Foundation, and their CEO, Anne-Marie Slaughter. Suresh, thank you. It's an honor and a pleasure to be co-hosting this event with OSTP. New America believes in fostering an open and transparent dialogue on developing data-driven technologies rooted in democratic values, strengthening, as you said, equity, accountability, justice, and integrity. And we applaud the US government for taking a leadership role in pushing this conversation forward. We have many old friends in the audience, but for the folks who are less familiar with our work, New America is a think tank and a civic enterprise dedicated to renewing the promise of America. We often work at the intersection of policy and technology, not only on policies regulating technology, but also the ways in which technology can be used to solve public problems. And as one of our board members, Todd Park, a former chief technology officer in the Obama administration, says, our goal is to have a technologist at every policy table. So equitable access to broadband and digital public infrastructure has profound implications for the health, economic security, and civil and human rights of billions of individuals and families. These are not luxuries. They're like electricity, a fundamental necessity needed for education, remote work, healthcare, and virtually every aspect of life. From New America's perspective, at a minimum, there are three prerequisites for realizing the potential of emerging technologies. First, people. Users must be centered in the design and deployment of solutions, whether we're talking about broadband access or benefit delivery systems. Second, cross-sector collaboration.
This work requires a broad community of social innovators empowered to openly assess the opportunities, risks, and gaps created by new technologies. Third, a framework of rights. As the work of Ranking Digital Rights demonstrates, we must apply the framework of international human rights, painstakingly developed over decades, to the digital world. New America has looked at all these issues not only in terms of problems to be solved, but also opportunities on the horizon. Despite incredible technological advances that have improved many, many aspects of our daily lives, during the peak of the pandemic most governments struggled to deploy immediate and accessible economic relief, to administer effective contact tracing systems, or to manage basic vaccine distribution programs. Those kinds of shortcomings erode public trust. Many of our loved ones and neighbors had more confidence in their ability to receive a box of masks and hand sanitizer in the mail overnight than to understand their eligibility and claim status for pandemic-related unemployment insurance. For the public sector to be effective in the 21st century, we will need to pursue simultaneous initiatives to develop digital public infrastructure that increases trust, accountability, and the security of critical digital systems. We can all agree that algorithms and systems that make decisions with huge social implications would benefit from more transparency, democratic input, and accountability before they are developed, rather than addressing the harms after they occur. Relying solely on the private sector to develop and administer the foundational digital systems that we use to connect, transfer, and share information will not lead to equitable results. We must intentionally elevate and center the voices of the constituencies most affected by the problems that algorithms are being deployed to address.
Together, open societies can deliver on the gains and promise of digital transformation while respecting human rights and privacy, if they are committed to techniques that put people first. Better solutions for digital identity, digital payments, health data exchanges, and social protection data exchanges are all within our reach, but we have to be deliberate about who they are being designed for and who they are designed by. In the same spirit, the White House Office of Science and Technology Policy is inviting public comment about the use of artificial intelligence and other data-driven technologies. New America is excited to engage with this process, and you should be too. It's on all of us to ensure that emerging technologies reflect and represent democratic values. And with that, I will hand off to Michelle Evermore. Thank you. Welcome, everybody, and thank you for being with us. Before we start, I want to begin with the most basic question: why are we here, and what's the point of having this discussion? We're here because the issues we're addressing today have gone unaddressed for too long, and it's time for us to work together to articulate some rules of the road, a democratic vision for our automated society. It's time to situate technology development and use in our values: equity, accountability, justice, and integrity. We can make this vision a reality, but we can only do it by partnering with a wide range of affected stakeholders, most particularly the American people. So today we want to focus on the impacts new technologies are having on our most vulnerable populations. We're going to talk about the tech that's being used in the social welfare sector and ask whether it's living up to its promises. We're going to evaluate the impact these technologies are having on democratic participation, whether they're making it easier or harder for all people to access what they need to thrive. Of course, these questions aren't black and white.
As government is modernized, as key elements of our democratic society, like voting and social services, are digitized, two things are happening at once. We're seeing more opportunities for empowerment and more challenges to participation. There are more ways for people to vote, to get around, to live affordably, but new barriers have emerged, often unintentionally. Today's panel of experts will discuss these important issues and help us determine whether there are tools to reconcile these tensions. So I will introduce our panel. Blake Hall is the CEO and founder of ID.me. Karrie Karahalios is a professor of computer science at the University of Illinois Urbana-Champaign. Christiaan van Veen is director of the Digital Welfare State and Human Rights Project at NYU Law's Center for Human Rights and Global Justice. Julia Simon-Mishel is a supervising attorney at Philadelphia Legal Assistance. Dr. Zach Mahafza is a research and data analyst at the Southern Poverty Law Center. And Khadijah Abdurahman is a Tech Impact Network research fellow at the AI Now Institute, UCLA C2I2, and UWA Law School. I have some prepared questions for the panel, but audience members should feel free to direct your questions to Slido. So let's get started. Christiaan, you have researched the use of digital technologies in social welfare and development globally. Can you provide us with an overview of how technologies are being used in this realm and how they may affect democratic participation generally? Thank you, Michelle. And thanks to the White House Office of Science and Technology Policy and to New America for organizing this event today, which is a very important one, I think. So I direct the Digital Welfare State and Human Rights Project at NYU Law, where I also teach on digital government and human rights. And my project undertakes research globally into digital government, including in the area of social protection and welfare.
And we translate this research into action, for instance in the form of strategic litigation. And we host a platform for human rights activists, students, academics, policymakers, and others who are working on these issues. Just to start off, the focus of the global human rights movement on digital government transformation in the area of social welfare and development is still very recent. I myself got involved in these debates only in 2017, when I organized a country visit to the United States, actually, for the United Nations Special Rapporteur on Extreme Poverty and Human Rights, Philip Alston. We then investigated, with the help of Virginia Eubanks, algorithmic scoring in homeless services in California. And that led to a subsequent focus in the United Kingdom on Universal Credit, the UK's digital welfare system. We subsequently wrote a report for the UN General Assembly that came out in 2019 and has been highly influential, which focused on the digital welfare state globally and emerging human rights issues in that context. Now, national governments are rapidly investing in the digital transformation of their domestic welfare systems, both with newer and also with less fancy technologies. And the promises there are manifold: promoting easier access to government, including for previously unserved populations; reducing costs and reducing fraud levels; improving government efficiency by integrating data silos; improving government transparency; and allowing government to build the foundations for a digital economy. And various parts and functions of the welfare state, in both developed and less developed countries, are affected. This goes from applying for benefits, verification of the identity of beneficiaries, the assessment of eligibility claims, and the calculation and payment of benefits, to the classification of needs and risks. Welfare states are changing across the board.
And these developments are not just taking place at the national level in countries around the world; there's also an international dimension. So first, digital government transformation is increasingly central to official development programs. One example is government digital ID systems. The World Bank, supported by Western donors, is globally promoting digital ID systems rolled out by governments in the global south. And these digital ID systems are rapidly becoming gateways to social rights. Secondly, the digital welfare state, both in the south and the north, is often designed and implemented with the assistance of foreign technology vendors, consultancies, and big tech companies. And I think that's a relevant dimension when you think about regulating AI in the United States; there's that international dimension to also take into account. Now, while these digital transformations hold great promise, as I said, I'm deeply concerned about the negative human rights implications I've come across in my work. In many cases, these government innovations have happened without adequate legislation and oversight, helped by the fact that they've often been treated as merely technocratic matters. That also means that the functioning of these digital systems is often shrouded in secrecy and kept from the general public, from media, and also from legislators. And the outcome has then often been increased inequality and discrimination, especially affecting groups protected under international human rights law, like low-income groups or people who are disabled. Now, let me wrap up by saying that I think that the White House initiative to develop an AI Bill of Rights is an opportunity for the United States to show both moral and technological leadership in the area of social welfare and development, but also beyond. But I see two major problems. The first is that the EU is way ahead of the US in terms of regulating AI, including in the area of social welfare.
So the draft EU AI Act, for example, designates the use of AI in social welfare as a high-risk area. The US, on the other hand, at the federal level, has stood still for the last four years in terms of regulating AI, including in this area. Then a second problem that I see is that, while the US is a world leader in terms of developing private technologies, the US is far from a world leader when it comes to developing government technology in welfare. And the long lines of people looking for unemployment assistance at the start of the pandemic here in the US are just one rather embarrassing example. So let me wrap up by saying that I fear that hammering on China as a geopolitical rival in the field of AI, and having a sort of proxy AI war with China, as the National Security Commission on AI wrote in its final report in March of this year, only complicates these problems. There will be continued pressure on this administration to minimize regulation to allow private AI sector development so it can compete with China. But that undermines America's ability, I think, to have effective moral and legal leadership, while also distracting government attention from improving digital technologies in government itself that enable the realization of human rights and equality. Thank you. Thank you so much, Christiaan. So data analytics and algorithmic systems are increasingly used in social welfare and development programs to assess and mitigate risks. But in some contexts, risk is difficult to define and measure, and child welfare is one of those contexts. Khadijah, can you explain how risk-based technologies are being used in the child welfare system and why the use of such technologies in this sector is concerning to parents, researchers, and advocates like yourself? Well, Michelle, thank you so much for having me, and thank you to my co-panelists.
I did want to just highlight how much I really appreciate Christiaan's comment about Philip Alston's report, because I do think it's important to recognize that automated societies are not inevitable. And what I really appreciated about that report is how it highlights how automated decision systems are experimented on the most marginalized populations prior to being generalized to the rest of society. And so when I'm thinking about predictive risk modeling in the child welfare system, or what many advocates and scholars, building on the work of Dorothy Roberts, call the family regulation system, I can only answer this question building on the Black women who came before me. And so what I discussed in my recently published paper in the Columbia Journal of Race and Law, "Calculating the Souls of Black Folk: Predictive Analytics in the New York City Administration for Children's Services," is that predictive risk modeling essentially relies on artificial intelligence and machine learning to augment, or add another layer to, existing decision aids that are based on administrative data already collected by government agencies. And the origin of this administrative data has to be traced to the enslavement and conquest that are at the heart of American state formation. And this is well documented by scholars like Daina Ramey Berry in "The Price for Their Pound of Flesh: The Value of the Enslaved, from Womb to Grave, in the Building of a Nation," Simone Browne in "Dark Matters: On the Surveillance of Blackness," and Rashida Richardson, who is now at the White House, in "Confronting Black Boxes: A Shadow Report of the New York City Automated Decision System Task Force." Her title is a double entendre, pointing to the opacity both of the machine learning systems and of the government itself. She highlights how this dirty data produces feedback loops directing policing agencies back to the same predominantly Black and Indigenous neighborhoods that are overrepresented in the data.
And I think it's very important. I think we are swinging from a techno-solutionist backlash to a techno-nihilist backlash, and everyone feels that this is inevitable, and we get very ground down talking about variables, and we forget that these are human beings, these are families. And so 20 years ago, Dorothy Roberts wrote the book "Shattered Bonds: The Color of Child Welfare." And so I think it's important to remember that family regulation system data is not merely a set of variables; these are racialized lives being classified and sorted through a system that claims to be for protection and safety but in actuality is decimating our communities. And more recently, in her review of "Automating Inequality" by Virginia Eubanks, Dorothy asked us to look at the new systems of surveillance and social control being created by algorithmic systems, even as we know classification has always been a centerpiece of white supremacy, of enslavement, and of the post-Reconstruction era. I just also wanted to highlight that my friend, my colleague, and amazing community organizer Joyce McMillan has been screaming for 20 years, since her children were snatched up by the New York City Administration for Children's Services, that our communities need support, not surveillance. And so, just in conclusion, predictive risk modeling is not just coding over the cracks of a broken system, as some have proposed. It is expanding new forms of surveillance while ignoring the concrete demands of the most impacted communities, who need to be brought into the discussion not just at the bottom of the decision chain, but very early on, when models are being selected. And it is not inevitable that these systems be imposed. And I think we have to put on the table not just whether the family regulation system should be abolished, but whether we need to have an automated society at all. Thank you. Thank you so much for all of that.
One particular area where technology has the potential to decrease administrative burden is in the receipt of social benefits related to the COVID-19 pandemic, such as pandemic unemployment insurance benefits. Julia, this is an issue you have worked extensively on in the state of Pennsylvania. Can you please share the opportunities and challenges of using digital platforms to enable the receipt of benefits during times of crisis? Are there particular issues to which we must be attuned to enable the equitable distribution of social services during those periods? Thank you, Michelle, for asking us to join and for these helpful questions. And of course, thank you to the White House and New America for hosting this event. One thing that became very clear during the pandemic is that our unemployment insurance system is still one of the most vital safety nets that supports communities during times of economic crisis. But we also saw that aspects of it are deeply broken, in many ways because our technology is outdated. But before we start talking about how technology can improve things, I want to just say that we have for years represented workers as they try to access benefits and appeal decisions. But during the pandemic, we also became de facto technology customer service, because we need to remember that not everyone knows how to navigate technology or has access to the resources to make best use of new technologies. And so I just want to say, first and foremost, that no matter what bill of rights we have, no matter how we design technologies, we will always need off-ramps. We will always need ways for people to contact people. And we will need support in communities that people can go to when they have to access these technologies. In the unemployment system and other public benefit systems, benefits need to be paid quickly. So there's always this question of how we can reduce administrative burden to do just that.
But unfortunately, a lot of times when we are looking at how to reduce administrative burden, we are focused on the administrative burden of the agency. We're looking at how to make agency staff's job easier. And a lot of times that actually conflicts with how to make the system more accessible and how to decrease administrative burden for front-end users, those people who need to actually access benefits. And so I really appreciated that Anne-Marie started off by talking about the need for user-centered design, because that is so central here. Folks can talk about how they're offering a system that is dynamic. Let's say they're going to have dynamic questions on an application, and that makes their system so much better. But if my client can't understand the dynamic questions, if they can't follow what is being asked, if they don't understand the questions, then you might be collecting more information, but you're certainly not collecting good information most of the time. And so we have to think about who we're considering as we design these technologies. We also have to focus on what technologies we want to build. So often we are taking technology and using it to further support old business practices, right? We are ingraining the way we have done things forever in technology, instead of thinking about how we can use technology to improve the experience, to broaden access to different populations. A lot in the unemployment world is focused on identifying fraud. I can tell you that I have seen many different vendors and many different states focus their technology on what sorts of algorithms and automated decision-making they can use to identify fraud. But at the same time, we also have folks who, let's say, receive a disqualification from benefits, and because of that disqualification they're not receiving any income and could therefore perhaps need access to food stamps, to medical assistance, to other types of benefits.
Why are we not focused on how automated decision-making or algorithms can push things to folks who need them, at the time that they need them, right? Who can broaden access? Why aren't we using these tools to think about how we can holistically help people, and not just how we can identify supposed bad actors? Because, as was just said, oftentimes when we're identifying bad actors, we are really using imperfect data to further harm Black and brown communities in this country. And so we really need to think about how we can better use data to improve access. And one thing that I think we should note in all of this is language access. I love that everyone wants to just throw out these great new technologies left, right, and center. But if our technology is so great, why can't we offer it in multiple languages? Why can't we make it language accessible? Why isn't it always mobile responsive? So again, really centering that person in the technologies. And one thing we can't forget in all of this is that those who are trying to access benefits, especially in times like the pandemic, are often experiencing trauma, right? And trauma affects how we navigate the world. And so it's really important that when you're designing technology and testing technology, it's not just about how anyone on any regular day is going to interact with the system, but how somebody who has experienced trauma, who's struggling in life in that moment, can still navigate that system and not be unduly harmed. So thank you so much, Michelle and others, for that question. And I look forward to discussing further. Great, thank you so much, Julia. So one potential advantage of the emergence of digital technologies is the ability for individuals to use digital interfaces to access social goods and services, especially those provided by local, state, and national governments. Blake, your organization is building on this possibility to lower administrative burden.
Can you please speak to what you see as the major possibilities accompanying data-driven technologies when it comes to accessing social services? Sure, thank you, Michelle, and I'd like to say thank you to the White House Office of Science and Technology Policy and New America for hosting me today. I agree with what Julia just said, and I want to build upon her comments: the reduction of administrative burden is all about reducing it for the individual, which, if you can do that, also reduces it for the agency. And at the same time, there are deeply rooted structural barriers that are going to take time and investment and a team effort for us to knock down, like access to broadband internet, digital literacy and skills, and making things more accessible for folks who speak English as a second language. So building on things like the FCC's Lifeline program is a really important thing to do. I want to rewind back to 2011, when the Obama-Biden administration announced the National Strategy for Trusted Identities in Cyberspace. The idea behind that initiative was quite simple. Instead of making individuals create a new login and then verify their identity with a credit bureau, often like Equifax, over and over and over again, why don't we just verify the individual once and then let them take their verified data and login and have single sign-on to multiple agencies? And in fact, there is significant overlap between agencies, between Medicaid and Medicare and USDA and SNAP and unemployment. So if you can do that, not only can you increase access, but you can lock the access and equity gains in by enabling single sign-on for our communities that are most in need of civic services. And I was really captured by that. And we won two grants from NIST along the way. We started working with the VA team and human-centered design folks in the United States Digital Service over at the VA in 2015 and 2016.
And what we realized very quickly was that the concept was good, but how do you enroll in such a system? Almost all government agencies today use data brokers or credit bureaus to verify identity online. Unfortunately, there are 50 million Americans who don't have a presence in credit records. And if you live overseas in a foreign country, maybe Japan or Germany most often if you're trying to interact with the VA, or maybe Mexico or Canada for unemployment, you're completely excluded from these systems. And so those communities are either completely left behind if they don't have an in-person office to visit, and that was especially true during the pandemic, or, even if they are able to go in person to one agency, they then have to go in person to all the other agencies that they need to go to as well. And we just found that status quo to be simply unacceptable. So we built a product called virtual in-person proofing. NIST calls it supervised remote, and we worked with NIST to revise the standards. And one of the first gentlemen that came through was an 81-year-old veteran from Japan whose wife had Alzheimer's. He wasn't in records because he lives in Japan. He didn't have a VA facility to go to because he lives in Japan. And when we got him through and verified him through video chat, he wept. It was an emotional moment when we restored this man to the benefits that he earned while serving this country. But unfortunately, at that point in time, in 2018, our login wasn't at Social Security, wasn't at the IRS, wasn't at unemployment agencies like it is today. So he was still blocked from access to those agencies as well. And so the promise of having a single sign-on and single login is that once the identity is portable, even if you had to verify through video chat, you now have equitable access along with more affluent Americans, who are more likely to have a presence in records and to be able to verify the first time in a self-serve channel.
So, two things: today we announced in-person verification, which doesn't require a mobile device, at 650 retail locations across the country that folks can go to, which speaks to Julia's point about off-ramps. That capability was ready at the beginning of the year, but unfortunately, with the pandemic and the stimulus benefits, access and equity and security weren't really considered; it just wasn't funded. So it was like, well, how do you fund these off-ramps? It takes money to staff hardware, to train folks, and to get them ready to serve our communities that are most in need. So thank God for the American Rescue Plan and for grant funding, so that DOL now has the resources necessary to invest. And for policymakers, it's really important that we make the investment upfront. And on language accessibility, I've seen video chat agents interact, and to have a Mandarin speaker interact with a native ID.me agent who speaks English is an awkward and not optimal experience. And so what we've done for that is: our login interface now supports 13 languages. Our self-serve flows support English, Spanish, Haitian Creole, simplified Chinese, and traditional Chinese, with four more languages, Tagalog, Hindi, Armenian, and Vietnamese, coming in early 2022. Not because a law will or will not be passed, but because it's the right thing to do. And our trusted referees now speak 12 different languages and support interpreters to help folks get through. So I want to make clear: no identity left behind is as much a journey as it is a destination. It is going to take time and effort to get there. We're certainly committed to accountability and transparency.
But when you look at unemployment, where the unemployment rate went from 3% to 15%, and a lot of states had reduced staff and 1980s technology, the only way that they had a fighting chance to even meet the demands of legitimate claimants and to fight the fraud was to automate as much as you could through these access and equity initiatives, to shrink the problem down so that they could serve folks with the limited staff they had on hand. Because if you're built to support 3% unemployment and it goes to 15%, that's difficult. So thank you very much for giving me the time to share my thoughts. Thank you, I appreciate that. On the topic of community participation in building democratic structures: Karrie, you've been deeply involved with UIUC's Community Data Clinic, described as a hub for developing community-centered solutions for a data-driven world. Could you tell us more about your work in this space and what you've learned about the challenges of engaging communities in the work of democracy? Thank you, Michelle. And thanks to New America and the OSTP again for organizing this event. Again, I'm a computer scientist. I research how computation mediates interaction in face-to-face and online settings. And I audit algorithmic systems and I build new systems in this space. I want to say that I've personally benefited from meeting with somebody at a tech law clinic on the East Coast. And it was just wonderful to talk to someone about the situation I found myself in, explain it when I needed to, and get expert feedback and support. At the time, I was thinking a lot about auditing in the context of housing. It became apparent that the needs of communities in big metropolitan spaces on the East and West Coasts differed from those of the Midwest, and hence we needed different approaches to address those needs.
And talking to colleagues at the university and in the community, I was lucky to find myself working with communities of people smarter and better connected than me. And many of us coalesced around the community data clinic, directed by Dr. Chan, that's centered around the different data needs and specific data uses in our communities. And these data needs are unfortunately under-met by today's data infrastructures. So the clinic has several goals. One is education, in the form of courses centered around social responsibility, to create scholars who understand and critically think about what it means to create and inhabit algorithmic communities. Courses involve interrogating existing institutional and civic infrastructures, have visits from local community leaders, and include field trips to local community spaces. And several central Illinois organizations want to share data and work with our students to clean and aggregate the data, create visualizations, and make sense of the data to make plans for what to do next. Another goal is community-centered collaborative response, where local, interdisciplinary, and intergenerational community experts, members of the community, and researchers in the clinic come together. Here the intent is that local community organizations lead, describing and showing what is needed for the communities they work with. I want to note that, as a researcher in social computing, I conduct a lot of studies, and I could run a study and try to recruit participants in the community. But even in a study that lasted a year, I would never build the rapport and trust that community leaders have with their members. And as you can imagine, there are many local organizations, and they work incredibly hard. During the pandemic, for example, when it was wicked hard for anybody to find toilet paper, Urbana Township provided that and much, much more. And they helped many people find housing for those in need, before and during the pandemic.
As such, that's why they lead the initiatives in our collaborative projects. In the context of housing, many community leaders see bias firsthand, even without algorithms in the picture. And many groups we spoke to said they don't really need algorithmic systems for this purpose: they knew who needed help, they knew them personally, and they could formulate plans for how to help them. Now, this is not to say that larger communities don't have algorithmic needs in this space or in other spaces; here, however, not everyone does, and other computational needs emerged. For example, they wanted systems to help aggregate up-to-date data to help people find resources. While programs existed to provide smartphones for people, it was hard to keep databases and web pages up to date on where to find certain community resources, like food, for example. And data aggregation and presentation was a theme for some groups. We've also responded to public health departments here and elsewhere, and we're lucky to have a forward-thinking public health administrator in Champaign County with extensive experience managing health outbreaks, who planned for and hired many contact tracers, hired translators, offered in-home vaccine opportunities for folks who couldn't get to vaccination sites, coordinated with the university to plan strategies for student arrivals, and more. And the university had created a contact tracing app, but with the health department's tracing infrastructure in place, mobile device contact tracing just wasn't a priority. What was requested by several health departments were computing tools to aggregate and help make sense of the data around COVID, to know where to focus their energy moving forward, along with concerns about how to manage privacy while needing concrete demographic information to better know how to help the community.
In other, outside projects, we collaborate with public health departments in other states and counties to compare and contrast their experiences. And I should note, there's a stark difference in the size of our respective communities. Bigger communities have different needs, and that's why having distinct clinics in distinct spaces is helpful. And people are in talks with us to replicate this model. Now, I may be biased here, but based on Virginia Eubanks' work on automating inequality and discussions here, I'm partial to the idea of smaller clusters of local community organizers who know the infrastructures and the people in the communities they work with, who can efficiently offer help and possibly know shortcuts. But there are challenges. For scheduled meetings, we often tried to go to people's spaces if we were not on Zoom, and the local community organizers' phones were ringing nonstop. It was clear that even while talking to us, there were many other pressing things they had to do, and we wanted to be respectful of people's time. Also, at times we've met with members of the community, especially if they needed help, but didn't want to burden them with expectations of finding their own solutions. There's also a lot of invisible labor involved: forming partnerships, understanding needs, and following up after the fact to assess the outcomes of the collaborations is very, very time-intensive. There are challenges with data access, for example with health data, and challenges in seeing how we can help offer funds to the people who are working with us, for taking the time. And in my interactions, automated aggregation techniques aren't perfect. Just this morning, I received a message from someone asking me how to help them classify narrative changes on social media after policy changes are made locally or globally. How we represent aggregated data, whether via interactive visualizations or inferences, is an open problem and depends on specific cases.
And again, there are organizational trade-offs between small clusters and large clusters, and I can imagine conflicts arising with this decentralized approach. But I want to end with some positive outcomes. We partnered with Project Success, the school district, the State of Illinois, and PCs for People to provide internet hotspots and laptops to some families in town in time for the school year. And access, as many people have mentioned, is critical; for example, many housing authority websites requested that documents be sent electronically. New community apps have been prototyped by local community organizations with clinic students. Public workshops and town halls on campus and in parks and libraries led to new, much-needed, parent-led East Central Illinois organizations around learning differences. And we continue to build relationships with public works, Cunningham Township, the public health department, public health experts, United Way, civic leaders, housing authorities, extension schools, autism programs, parent groups, the city of Chicago, and other similarly minded community-centered groups in the U.S. and internationally. Great, thank you so much. So, the past five years have brought significant attention to the role of technology in democratic participation, particularly voting. 2016 brought new attention to the political implications of social media. 2019 brought attention to the use of automated technologies in voter registration after mass purges of state voting rolls. And 2020 put an intense spotlight on the role of voting machines. Though these issues have made many news headlines, it's not clear how they're affecting voters on the ground and efforts to ensure voting rights are protected. Zach, can you talk about your work with voters, both how they are reacting to the use of technology in the political process, as well as how you're using technology to increase local communities' understanding of the process in redistricting or other efforts?
Gladly. So, as others have mentioned, thank you for having us all here. These are really exciting, really important conversations we need to be having. As far as redistricting is concerned, the big thing that popped up this cycle is general public involvement with the use of technology known as geographic information systems, or GIS. This is basically a fancy way of saying mapping software and mapping programs. Redistricting is part of the core maintenance of our democracy. As our population changes, every 10 years we track that, and we try to make sure that where populations grow, shift, move, shrink, et cetera, they are being represented at the federal level, at the state level, and at the local level. So redistricting tries to account for these population shifts. Well, that requires carefully drawing maps. In decades past, it was very difficult for the general public to access this information; the census might make it available, but it might not necessarily be obvious where to go for the next steps. The explosion in open-source tools and alternative data sources that people can utilize to help create their own maps has really created this opportunity for people to become more empowered in this process. They're actually able to engage more directly with their communities and figure out ways they can organize to make sure that their political interests, their social interests, and their economic interests are actually going to be represented. The consequences of this are generally pretty positive. You have people that are now more engaged with their community. You have civil society having a much more robust debate, and connections are beginning to form. People begin to know their neighbors. They begin to understand the needs of their community. But there are a number of challenges that have popped up as we've been working through this particular cycle.
There are a couple of different ways that people approach this technology. Some people are all in: they're incredibly excited, and they're happy to embrace it. But there are other camps that aren't necessarily as comfortable with it. One thing we have noticed is there's sort of a generational divide. Older people tend to want to engage in this process, but they might be hesitant. They can see the power of big data and of this technology, but because they're not necessarily digital natives, they might feel hesitant about engaging with this process. Conversely, we see a lot of digital natives who are very comfortable with this technology. It's quick for them to adopt it, learn it, begin to apply it, and create their own maps. That's great, but they don't necessarily feel that it's relevant to them. There's this disconnect, where these young people feel they don't necessarily need to be involved with the political process, that it does not directly relate to them. So there's this disconnect between two different parts of our society in how they're engaging with this technology, and there's some tension there. And there's a third group that is a bit uncomfortable with this, not because they're necessarily opposed to citizens being empowered, but from the perspective of: what does all this data mean for us? As we see data sources and census data become much more readily available, as computational, statistical, and other mapping programs and other forms of software like this become more readily available, the question becomes, well, what does this mean for my privacy? What does this mean for me as an individual citizen? On the one hand, you see people becoming more empowered, but the question becomes, well, what do they do with that power? What do they do with this information and this knowledge? So there's this interesting swirl of different perspectives on how people engage with this.
What our particular practice group has tried to do is engage at the community level. We've tried to work with people directly to help them make sure that their political interests and their political concerns are being met. We've been particularly interested in making sure that we preserve minority communities in the Deep South, to make sure that they have a seat at the table. Historically, they've been very marginalized. You don't have to go very far back to find that there were still a lot of different rules being put in place that limited who could access the polls and who would even be allowed to vote. So we're trying to engage and help correct these problems that have been lingering from decades prior, and trying to fix this through direct interaction with communities. This might involve training. This might involve just giving feedback to people, trying to help them understand their community or how they might engage with the redistricting process. So there are a lot of interesting opportunities here, but it's a very interesting space, right? If we look back and compare this to, say, the explosion of social media a decade ago: as people start to realize the potential, the question becomes, well, how do we utilize this information? How do we utilize this technology? And we're at an interesting threshold. GIS is incredibly amazing. We can do a lot of things with it. During the pandemic, people were creating maps helping people identify: where are the food banks in their community? How many hospitals are open? How many have available beds? On a global scale, people have been utilizing American remote sensing technology and GIS to track things like the genocide in Myanmar, and to track pollution issues in parts of the United States and parts of Latin America. So this technology has a really high ceiling to make the world a better place. But the question comes back to, well, how do we use it?
How do we engage with our communities? So that's the interesting tension that we've seen as we've engaged in this space on voting rights. It's an opportunity to empower, but a question of, well, how do we engage with this going forward? One other thing I'll add about the voting rights element, and then I'll yield the floor back. We've noticed that it's very useful to have this technology to identify places that are engaging in gerrymandering, places that are trying to draw the lines in a way that might favor particular politicians or particular political parties. But while this technology enables us to detect that, it can also be utilized by individuals or groups to draw maps that try to hide this particular aspect. They might be aware of the statistical tools used to detect potential issues with racially polarized voting, or to detect maps deliberately designed to be unfair in terms of partisanship. So the knowledge that these tools are there can lead people to draw maps that enable them to engage in this behavior while still obfuscating it. So it's an interesting yin-and-yang sort of moment. There are positives and negatives that come with this, so it requires very careful consideration of how we go about it. Some of this requires data literacy. Some of this requires a greater degree of involvement and engagement from the general public. And some of this requires a proliferation of clear standards for what data is appropriate and how we want to engage in these particular spaces. And with that, I'll go ahead and yield the floor. Great, thank you so much. And thanks to all of our panelists for this great initial round of questioning. I'm seeing a lot of really good questions coming in on Slido. I'm going to ask one last question and then turn to those. So, for everyone: in light of the various issues you've all discussed, what do you think are some of the policy needs in your areas of concern?
And are there any policy interventions that may be promising to pursue? I'll go ahead and chime in. I think the NSTIC strategy is working, and OMB M-19-17, which is on the books, calls for a shared login service to reduce the burden of identity proofing on the public when they access government agencies. And so already what we're seeing is folks logging into the IRS to access the advance child tax credit and to manage their Get Transcript, and into Social Security. The pre-verified percentage is 18%, and that number is steadily increasing; it was 11% just two or three months ago. The worst time to try to verify the entire country is during a public health crisis, because the in-person channel is gone and government agency offices are closed. If we can invest and prepare for the future and get the digital equivalent of driver's licenses issued to folks, the incremental burden of verifying somebody in a period of crisis will be much lower. And to Julia's point, when somebody is in a period of trauma and kind of fight-or-flight, that's the worst time, because they're panicking. And so we need to continue to build in those gains. OMB M-19-17 calls for choice, because we care about innovation and equity, having both federally provided and commercially provided solutions. It calls for measuring equity and tracking progress: what verification pathways are available (online, in-person, video chat), and then ease of use, what languages are supported, where those retail locations are, and how accessible they are for folks in, say, rural parts of the country. The third thing is just better data validation, extending data validation.
I know GSA is working on this, like with Social Security, and on getting some of the data brokers out of the mix. That could increase self-serve pass rates by 10 to 15% for folks who then don't need to go to video chat or go in person, often folks who are missing from financial records, which disproportionately means historically disadvantaged communities. Then the last thing is that there just needs to be an independent oversight and accreditation body that makes sure that folks are accountable and transparent, and that the claims that entities are making, both public and private, are vetted by a neutral body so that we all have a sense of truth. And if all those things can be true, we'll have a transparent and accountable ecosystem, and we'll also have innovation that happens with appropriate oversight and guardrails. Again, if I could just add: I think it's important to think about where and when to use algorithms. If the data isn't representative, in some cases of smaller clusters like we talked about earlier, maybe you don't need to. I would also like to see policies around data and privacy more specifically. I've seen many people claim that because they're using differential privacy, everything is set and you don't need to worry about anything anymore. However, it's not that simple, and in some cases people do need exact demographic information, so it'd be good to assess harms in this space. On housing websites, some of them cluster and put labels on homes. I would love for there not to be a label that says "safe neighborhood," because it just reinforces stereotypes. And I could speak longer about advertising and the need for regulation around that in housing and elsewhere. I would love to see it made very explicit when algorithms are being used, with notifications when they change. In our work we found that people behave differently when they know that algorithms are intervening, and many just don't know. And clear messaging is important around this.
I can tell you, as a computer scientist: we often spend 90% of our time on the tech and 10% on the messaging, if that, and we need to change some of that and have clear, unambiguous messages. And just from personal experience, in a series of 10 workshops with vulnerable communities around automated decision making, what they really wanted was representation from people like them in the decision-making process, and processes that incorporated cultural competency. They wanted a space for improved communication and argumentation around decisions that didn't reinforce a sense of learned helplessness. But the most prevalent theme that emerged was the need for compassion in algorithmic interfaces: a signal that says, I hear you, I see you, and what you suggest needs to be taken into account. And they even wanted channels for mental health support. Now, it's hard to operationalize all of this, but it points to a direction. I was just going to build off of what Keri said. I think when we're thinking about where algorithmic systems are being implemented: there was a report that recently came out of the ACLU showing that automated decision systems, some kind of algorithmic decision-making system similar to the Allegheny Family Screening Tool, are being implemented in over half of the United States. So this is already happening. It's happening among the most vulnerable people. It's happening mostly to people who identify as being part of the internal colonies of the United States, people who are the descendants of slaves, people who are Indigenous to America. It's being experimented on those who have the least safeguards. And I think this is a space that is very ripe for policy interventions. So right now, we know that predictive risk models are already being fed administrative data.
There needs to be a policy intervention or remedy where people can ask for recourse, have their data removed, and have some kind of repair in that situation. The response could be a spectrum, a tiered response: some kind of financial reparation, knowing exactly how much of their data was used and what it was used for. And also, I'm actually not against predictive risk modeling. The issue is that a lot of the predictive risk modeling, particularly in the family regulation system, assumes that the danger lies within families and within communities. So, for example, we see these predictive risk models using data from local law enforcement or police departments; we see that data being used to predict danger in the community. We do not ever see data about the New York Police Department, or any other local law enforcement, on the degree or rate of frequency with which they abuse, brutalize, and murder people in the community. Similarly, we don't see that about the Department of Homeless Services, even though conditions of poor safety and poor health quality are usually a product of these government agencies, rather than of a lack of contact with them or of people accessing services. And the administrative burden is disproportionately on people who receive public assistance, right? Multiple people mentioned Virginia Eubanks' book. What are the three case studies that she's looking at? Coordinated housing entry in California; the Allegheny Family Screening Tool, predictive risk modeling for child welfare in Pennsylvania; and third, Medicaid in Indiana. If you are paying out of pocket for health insurance or for health services, they are completely unaware. If you don't have food in your fridge, but you're not using food stamps or SNAP, they are completely unaware. So the administrative burden is on a very hyper-targeted, specific group of people. And that's where we need to intervene, and that's where we need to shape policy.
And I think when we talk about impacted communities, this gets very vague and very amorphous. Look, we need infrastructure. And I think, similar to big tobacco taxation, where the taxation went to fund community-based organizations to do public messaging and science communication, we should really invest in the community being able to take part in this decision-making. Because if you are an academic, or if you are part of a civil society organization, your life is subsidized in order for you to participate in these forms of decision-making. If you're in the community, you're already being surveilled by the city agency, taking care of your kids, being financially precarious, and your access to these conversations is very limited. So the too-long-didn't-read is that there are many, many different policy interventions that we can put into play, but it requires stepping back and not looking within each socio-technical system, but looking across them, and beginning from the demands of the community, not beginning from the moment that the organization you work with arrived. I can only build upon that brilliant answer, which is so applicable to so many of the people that I work with on a day-to-day basis. I think about this often in terms of guiding principles, right? If we're thinking about how we want to build and shape and consider this technology, the first of these, to me, is that, given where we're at right now, automated decision-making and algorithmic assessment should only be used in situations where the result provides a positive consequence, right? It gives something to someone they wouldn't have otherwise had, such that if it goes wrong, they are not in a worse position than they were before. So often we are using these technologies as a way to take things away from people, maybe take away a benefit, take away their freedom, right? So if it goes wrong, they're in a worse position.
And so I think we need to be really careful about how we're using these technologies, given what we currently have access to. Second, technology can't block access to public programs, right? If you're using technology, there has to be a way for somebody to still access the program if they can't navigate or get through that technology. You cannot block access. And third, as was just said, so much of the data out there, A, is unknown, and B, is wrong. We have so much bad data that it's vital, especially from a legal perspective (I could have used the phrase "due process" ten million times on this panel), that folks know what data was used to make a decision. Anytime there's algorithmic decision-making, folks need to know what data was used and how it was evaluated. Because otherwise, A, they can't contest it, right? With a black-box decision maker, you have to know how the decision was made in order to have any due process rights to contest it. And second, if we don't share what data is being used, we have no way of getting individualized feedback on what data is wrong in our systems. So I think of those three points often when we're talking about policy interventions that would include algorithmic decision-making. Building off of that point, and to an extent building off of what Blake mentioned earlier, there is a data consistency crisis, for lack of a better term, across the country. You have a bunch of different states that are all reporting similar data, but they report it in different ways. You might have different units being utilized. You might have different time periods being utilized. So when we try to compare different issues between states, oftentimes you can very quickly run into problems. Sometimes you'll have a variable that's just outright not recorded by one state, but recorded by its neighbors.
Other times they will be providing that data, but it'll be behind a bunch of different paywalls. There is a degree of challenge here, because there's a transparency issue that ultimately sits at this intersection, right? If data isn't consistent between states, or if it's not readily reported, or if it requires going through a bunch of different types of agencies, it can make it very difficult for people to navigate the world they're living in. We're in a world of global commerce. It's important for us to be able to understand what our neighbors are doing and what we're doing. If I'm a business person in one state, I'm going to want to understand what's going on in my neighboring states as I'm transporting goods through them. The same applies to voting rights. If I'm trying to understand what's going on inside my community, I need to be able to understand, well, how are they reporting? Who is labeled as an inactive voter? Who is labeled as an active participant in the process? If that's different from county to county to county, it can be very difficult to actually gauge whether there are issues with voter suppression or issues with access to the polls. So there is a consistency issue that desperately needs to be addressed. This applies across the states, but it also exists at the federal level as well. One other element I'll tack on here is that I think we need to find more ways to get the general public to engage. Forums like this are fantastic, but this is going to appeal more to professionals, people that are already directly engaging in this space. We have millions upon millions of people that are directly impacted by all this big data and all this technology, but they don't know that all these issues are going on. They don't know that there's an issue that's directly affecting them.
We have even more people, digital natives, that have very strong opinions about how this needs to be dealt with, but we're not engaging with them at the general-public level. Like this forum: this is great, but we need to have an even broader conversation, especially when we have decision makers that are, and I'm going to put this diplomatically, not necessarily equipped to understand some of the challenges we're dealing with in this space. When you're yelling at the CEO of Google about why your Apple phone is giving you specific ads, that represents a fundamental misunderstanding of some of the problems we're facing. So we need the general public that is directly interfacing with this to get involved, particularly the youth, and we need to make a genuine outreach effort to them. We can't just hope that someone will get Harry Styles to make a beautiful song about how it's important to have data transparency, or rely on some public outreach campaign to pay a video game streamer or some VTuber like Gawr Gura to come in and say, "I really care about voting rights. That's poggers." That's not going to get the kind of people we need to engage in the process to actually care about and understand the issues that are in front of them. So there needs to be even more outreach in this space if we're going to actually have success here. I think this is a good first step, but we need far, far more. I just wanted to build on something that Julia spoke about at the beginning, and that is that we're not just talking about AI here, but also about much simpler forms of digital interaction between the state and individuals. And I think the big difference between what we see in the private economy and what we see in government is that even though digital technologies promise individualization, when governments design, for instance, digital interfaces, it's all about averaging instead.
So: how do we design a digital interface, for instance, for people to apply for benefits, that the average user can deal with? The problem, as she I think very convincingly explained, is that a lot of people are not seen as average, and so they are excluded in that process. And that amounts to a lot of people in this country alone. What that means is that the burden of interacting with the state is shifted from the state to the individual, and from the individual onto a lot of organizations that help them out. But those are often organizations that are completely underfunded. The people in question, as was said, are also dealing with a lot of different crises in their lives most of the time. And so that's not only deeply unfair, it's also highly inefficient, and it leads to mass exclusion. The problem then is that you can't resolve that with design alone. I mean, you could do more outreach to user groups and see what would work for them, but ultimately there will always have to be, as Julia said, offline alternatives. And that costs a lot of money. And there we have a conflict, because a lot of these digitalization efforts on the part of the states are cost-saving efforts. So how do you square, on the one hand, as a state, wanting to save money by going digital, with, on the other hand, actually having to spend more money on certain groups that need additional assistance, assistance from humans, not just from computers? All wonderful points. So we have just about eight minutes left and many, many wonderful questions from the audience, so I think maybe we have time to get to one of them. I'll put this out there: what mechanisms have you seen be effective at getting government agencies to put racial equity as a priority in their work and in these system designs? And again, we only have just a couple of minutes left. So. In my experience... Oh, sorry. Sorry. I was just going to say: none.
I was saying we are failing. We are failing greatly. We have some Black faces in high places. We are in the middle of a pandemic, and the vast majority of people who died were poor, were disabled, were Black, were Indigenous. We're failing. That was it. You know, I would just say virtually every civil servant that I've talked to cares about serving all their constituents and granting access, and equitable access, to make sure that we live up to our collective ideals as a country. But what it requires is a lot of education. And that's why I'm so excited about this OSTP effort, because once you bring the data and the metrics to say, this is what the current status quo looks like, and these are the groups that are just completely left behind, especially during a pandemic, that's not okay. And then recognizing that there's ways to move forward. You know, inevitably, once folks see it, they act on it. But there also is what Christian talked about, the funding constraint: even having seen it, they're constrained by their budgets. And that's really where the federal government in particular can help and say, not only are we gonna fund digitization and self-service and better security, at the same time we're gonna invest in equitable access, in the off-ramps that Julia mentioned, to make sure that all folks, especially individuals who need the most help, you know, mentally challenged adults and folks who just don't use the internet very often, still have pathways to access. And I think they didn't have a roadmap for that. And hopefully this conversation leads to better outcomes and more equity in this country. I think focusing on community building and communications is probably going to be core for this as well. Part of the problem that Khadija just raised, right, is this is happening. These disasters are happening to people on a daily basis, but we don't hear about them.
We hear about them, you know, occasionally they'll become salient, but then they become forgotten in the background. We need to start putting the suffering and tribulations of our fellow countrymen front and center. We need to start thinking about what these people are going through. We need to start actually caring and empathetically taking steps to make sure that people are not going to suffer, that they're not going to be denied access just to have basic food, but that they're going to get access to health services and utilize this technology to make that available to them. And I think the best way to make that happen is to start making it front and center for our decision makers. Here are the people that we're leaving behind; here are their faces. We've got to do something. We've got to connect with these people. They don't deserve that. No one deserves that. It's a basic human right to be taken care of. And I think building those connections and making that interpersonal connection is one of the paths forward that we're going to have to use. Yeah. And when you do find a community leader that is exceptional, you know, find ways to offer them more power, and to support them, and to encourage policies for diverse groups of people working together. I think some folks in government may care about this, but I think it falls really low on their priority list in a completely unacceptable way. And I think that not much we're going to say or do here moves it up, outside of laws, litigation, and recourse that is more expensive for them than not doing it in the first place and addressing the issue. And so, you know, we can talk about kind of being more buddy-buddy and how we want to integrate equity issues, but at the end of the day, it can't be an option. It has to be a requirement, and we have to especially give those who are harmed recourse in that situation, to get back what was lost and to be involved in determining what the future looks like.
So, any other final thoughts? I really want to, you know, take the last three minutes to drill down on how we can get to racial equity. I mean, I just wanted to add that we're in a political climate where people are not even allowed to mention critical race theory. People don't even want to have the basic foundation of how this country was formed, on the backs of enslavement and genocide, taught. You know, public schools are the mechanism by which we're supposed to be producing this democracy, and these basic facts are banned. And that's not even where critical race theory is taught, but I'm saying basic, even just the idea that people are teaching about enslavement, basic facts of this country, are banned. And so how can we talk about equity in creating material forms of recourse when the knowledge production on which that is predicated is being excluded from all of our halls, from K to, like, graduate school? I think I want to build on the earlier point made about how we need laws and we need litigation. And so, as I said in my initial remarks, I think we have to look at what other countries elsewhere are trying to do in terms of regulating the use of AI by government. I mentioned the EU's initiatives earlier. I'm not saying they are perfect, but at least they take seriously this idea that we need to legislate these matters. There need to be clear rules and obligations for governments when they use digital technologies, including in the social welfare area. But there's a broader point to be made there, which is that this is not just about legislating, but also about the involvement of everyone in a society in these discussions, because this clearly affects everyone. And that goes to exclusion and the point that you just made: who is involved in that process of legislating this?
Is it just gonna be a rather elite affair of certain CSOs and other actors who are usually involved in legislative debates, or can we turn this into a bigger public debate where everyone is involved? Because I think that's highly necessary. These are not technocratic or marginal matters. This is about a wholesale overhaul of how our government functions, how our societies function, and everyone should have a say in what that should look like. All excellent points, everyone. I've really appreciated this entire panel. There are so many wonderful thoughts and ideas. So with that, I will turn it over to Suresh Venkatasubramanian to close us out. Thank you, Michelle. And thank you so much to all the panelists for sharing your ideas and offering your expertise. And thank you to everyone who's tuned in from across the country to be part of this discussion. In the coming months, the administration will continue hosting engagements like this one with partners and experts across the federal government, in academia, civil society, the private sector, and communities all across the country. As this panel has said over and over again today, technology can only work for everyone if everyone is included. So we want to hear from and engage with everyone. Before we close, I want to invite you to share your feedback with us on any of the issues we've discussed today by emailing ai-equity@ostp.eop.gov. We're grateful to you for being with us, and we look forward to your continued engagement in the weeks and months ahead as we work together to create a bill of rights for our automated society. Thank you.