Here is Lilian Coral, who is the Vice President of Technology and Democracy here at New America, and we're going to consider what the future of AI accountability will look like. So I'm going to hand it over to Lilian, and she will introduce one of our keynotes. Thank you. Hello, everyone. Hi, Alan. Hi. It's my great pleasure today to have this wonderful conversation. Thank you for joining us. Joining us today is Alan Davidson, the Assistant Secretary of Commerce for Communications and Information, who also leads the NTIA as its Administrator. Alan, a little bit about you just for this audience, although everybody knows who you are. Alan has over 20 years of experience at the intersection of public interest advocacy, technology, and the law. Obviously now, as the Administrator of NTIA, you oversee a lot of the issues that are at the forefront of the American public debate, AI being one of the most important ones that we'll be talking about today, but other critical ones as well, like our Internet infrastructure, which we care a lot about at the Open Technology Institute at New America. Prior to being at NTIA, Alan was a senior advisor at the Mozilla Foundation, a global nonprofit that also promotes openness, innovation, and participation on the Internet. And obviously Alan has a long history here at New America, where he was once the director of the Open Technology Institute and also vice president of technology policy. So thank you again for joining us. Oh, it's so great to be back here. Thank you. So today is an interesting day to talk about AI. I know that somewhere in D.C. there is also a strong cohort of industry leaders having an important conversation on the future of AI and AI governance as part of Leader Schumer's AI Insight Forum. Can we start the conversation here just talking a little bit about how you at NTIA think about balancing the need to police AI and to optimize AI?
Obviously, a lot of the debate feels like extreme polarization: is it really great for our society, or will it lead to our extinction? How do you set the tone or lay the groundwork for your own thinking around this? Excellent question. Well, actually, let me first, by the way, start by saying it is really, truly great to be back here, back at New America, back at this forum. And I have to say congratulations to its leaders. I very vividly remember what I think was the first forum that we did in this partnership between New America and ASU, and admiring the quality of the conversation. I felt in some ways it was the best of what this community can bring to furthering our thinking on the hard policy issues in front of us. So congratulations to Peter and to our colleagues at ASU on the continuation of the forum. And it's an honor to be here today. So, terrific question. And I think the answer is going to be yes, and both, right? The truth is that the administration is committed to the idea that responsible innovation in AI is going to bring enormous benefits to humanity, to people. At the same time, we know that there are very real risks, and that those risks need to be dealt with, need to be dealt with today, real risks that we are seeing today, if we're going to realize that promise of AI. And to unpack that, the president said it himself very clearly. I think he said we have to be clear-eyed and vigilant about the threats emerging technologies can pose, but that there will be enormous, enormous potential upsides as well. And I think that encapsulates our approach, right? The rise of these machine learning techniques has been pretty incredible. It's been coming and building for years. I think many of us have been surprised by how fast some of the most recent developments have been. And we see this is going to transform our society in many, many ways.
And the benefits are very clear. You look at something like medicine: disease detection, drug discovery, access to healthcare information by sets of people who might never have had that access. All of that is not just on the horizon. It's happening now because of these techniques. And that's just one area; precision agriculture, climate change, all of these things will benefit from these new machine learning tools. At the same time, we see real risk, right? It's long-term risk, and it's also immediate risk. And we need to address that risk if we're going to realize those promises. Do you think that this notion that it will lead to our, you know, extinction obfuscates or makes it difficult to really address the concerns and the regulations that need to be in place? Well, it's a great question, because I think what we see is really best thought of as a spectrum of risk, right? And in some ways, it might be easy to try to bifurcate it between the ends of the spectrum. There are longer-term risks, and we've heard pretty clearly from fairly senior people in the field about their worries about some of those longer-term risks. And I think we've taken an approach in the administration to say we need to address those. They're real national security concerns, safety concerns, and we should look at those in the long term. But I think we also benefit from bifurcating that and saying that those should not hide the fact that there are immediate risks too. The immediate risk category is a different track. We are seeing real potential harms to privacy, to security, concerns about human rights and civil rights, concerns about civic discourse, concerns about bias in these systems and about equity. And those do need to be addressed.
You may have already answered it in that remark, but are there specific issues with how NTIA and the administration are thinking about the concerns around large language models? Is there already some framework you're using to categorize these issues or risks? Yeah, it's interesting. You know, it's funny, I'll just say that for those of us who've been following this space for a while, it's not even clear that the large language models are going to be the most impactful or economically consequential developments in this space. But they've really captured the public's imagination. And they're being used already today. Certainly my kids are using ChatGPT. I hope it's not on all their homework, but they're using it. And so it behooves us to be thinking in a clear-eyed way about what those risks are. And that work has already begun, and it's been in progress. From the administration's point of view, OSTP, the Office of Science and Technology Policy, put out a Blueprint for an AI Bill of Rights. We have worked with companies, which I'll talk more about, to get commitments from them to address some of the harms that we're seeing today. And I do think that there are these really immediate questions that need to be answered about the large language models and how they're being used and how they're impacting our society. And I'll just say that at NTIA, we're thinking a lot about accountability in that space. We launched a project about a year ago, which I'll talk more about, to think about how we make sure that models are accountable and do what they say they're going to do. Yeah. I mean, it's interesting you talk about your kids using it. Right now it has captured the public imagination. And I'm a longtime native Angeleno.
I mean, we're seeing the real dynamics so early on in the development of the technology, that real conflict between the use of AI and what it's doing to, say, the creative economy. The Hollywood strike is very much centered around issues of how AI will impact a whole industry, which is one of the reasons why our country, I think, has been so economically and globally powerful on a cultural level. I know that the use of AI and copyright law are definitely not within your purview, but I'd like to get your thinking on how NTIA, and Commerce more broadly, think about balancing these large data sets and their use in helping that innovative industry and economy grow, and at the same time, how we protect the rights of creators whose work is being absorbed by these models and then used to generate new content at literally no or very little benefit to the original creator. I think it's a great example of the kinds of dislocations that are coming, and the pace of it is, in some ways, what is probably quite surprising. I will say this is a very hard set of issues around intellectual property, particularly for large language models, but also just generally in the training of models and what models produce. The question of who owns what a model produces, the question of whether an AI can own a patent, or what that looks like. There are people who are thinking about these hard legal and policy questions right now. I'll just say that at the Commerce Department, our colleagues at the Patent and Trademark Office have launched a number of inquiries, both on the copyright side and on the patent questions, that are really interesting, and that also think about how we promote more innovation in the United States, how we provide the kind of intellectual property protections that will promote more creative uses.
And I think this question about artists is quite a real one. We're seeing it play out right now. And in some ways, that will have to be resolved. I don't have a great comment on the actual conflict that's going on right now in Hollywood, other than to say I think it's a very good example of the kinds of dislocations, or changes, that we're going to see. And we need to think about how, across the board, these new tools are going to affect work. And in some ways it comes in bursts. Ten years ago, I think many of us probably would have predicted that driverless autonomous systems would be here within a decade, that we might not need truck drivers, that our kids might not even learn to drive. That has been slow to come. It turns out some of these problems are harder than we think. We've made amazing progress, but we're not there yet. And so it's sometimes difficult to predict, but there is, as I say, a burstiness, a non-linearity to these steps forward that I think is part of what's unsettling here. Yes. I love that example of autonomous technology because, yes, four or five years ago we thought we'd be seeing those cars on the road any time now. And we know that progress has been made, but there's definitely been a plateau in the development. But also, is the infrastructure really ready to support it? And is our policy infrastructure ready to support it? The policy, yes. So that's a good question, jumping ahead to how you see your own policymaking. I know that you've had inquiries yourself at NTIA about AI governance. Can you, one, give us an update on where you see the agency's role in policymaking over the next year? And, two, what are you hearing from the inquiries that you've made to civil society and others about how we should be governing this space?
Well, it's an interesting time to be working in this area. And I should say, at NTIA, our statutory mandate by law, our role, is to serve as the president's principal advisor on telecommunications and information policy. What does that mean? We are basically policy advisors. We're not regulators. We're not here to regulate. But we don't necessarily think about what the law says today; we think about what the law ought to say. And so we have been engaged in this project across the board of thinking about how we as an administration should respond to these new developments, which have been coming for a while. How do we engage internationally? We haven't even touched on the fact that there are giant issues internationally to think about: national security issues, a strong desire to make sure that innovation happens here in the U.S. and with our trusted partners around the world, and that we're building AI in ways that promote our foreign policy goals as well. All of that, for us, is going to be a far-flung enterprise for the administration. There are a number of big things that the administration is working on. At NTIA, I mentioned one of the interesting things we started on. We started on this about a year ago, before all of the ChatGPT excitement. Yeah, it takes a while to do some of these projects. We embarked on a project on AI accountability. This is just one example of an area, but AI accountability is this question of how you make sure that models are actually behaving the way they say they're behaving. A key step to making sure that we can actually put rules or guardrails in place is being able to understand what a machine learning model, what an artificial intelligence system, does. And it's a lot like financial audits. If you think about financial audits and financial systems, if you're going to represent that you've done X, Y, or Z financially, there are a whole set of rules around that.
There's a policy backstop. There are standards for accounting. And so we embarked on an inquiry to ask what it would look like to create that kind of ecosystem for AI, so that we would know what the standards are that people should be measuring against, and then figure out how you actually do that measurement. We put out a request for comment in the spring. We got over 1,400 comments, which for our little agency is a ton. We touched a nerve, and people are very interested in this question. We got a lot of comments back, and I'll just say I think there's a keen interest in putting tools in place so people know how to figure out what accountability looks like, and then making sure that we think about where we might need policy backstops, so that when companies or developers, whether private sector or public sector, when anybody represents something about how their AI system is working, we've got tools to measure whether that's true. That's just one example of what I think is a full-spectrum approach to these issues. You know, one of the things we really care about is the connection between public interest policy and technology, right? It's not just about advancing open technology; it's really about ensuring that it's serving the public interest. And while a lot of these technologies tend to feel like they'll benefit society, we know there are risks, as we've talked about. And so, as you're engaging the public more broadly, how do we actually engage the residents of the U.S. who are interacting with AI? I mean, the conversation right now gets really focused on generative AI, but the reality is that AI-supported tools have been in the mix for decades, certainly in cities, and we see the issues with them, whether it's the bias in image-based technologies or elsewhere. So how do you, I mean, are there things that make you optimistic?
Are there challenges you see to having a more broad-based conversation where the public can really be engaged? My concern is that the general public is not really part of the 1,400 who get to give their input to NTIA. And it does seem like this is a different kind of technological moment that does require more public input and debate. Absolutely. I think it's essential that we have an inclusive conversation, so that we are building systems that reflect these real impacts on society. We develop at our peril without that input, right? And it's ultimately important for developers, and it's important for us. If we're going to meet our goals as a society, we've got to have that equitable conversation. So how do we do it? I think part of it starts with good storytelling, with good narratives to help draw people into this debate. And I think people are seeing it and feeling it. You mentioned some of these other advances. We've had machine learning in our lives for a while. Facial recognition is a terrific example. And I have to say it is quite surprising to me as an observer how far and how fast we have moved to bring these tools into everyday life. There have been, as a great example, the stories about local police departments using facial recognition tools and improperly arresting people, right? Taking people into custody, taking them out of their lives. The biggest thing that the state can do is deprive you of your freedom. And we're doing it based on a set of tools that we know have shown bias against people of color, that don't work as well. The pace of that change, the pace of that adoption, just shows how much we need to do to, first of all, educate and inform people to understand the limits of these tools and put good guardrails in place. But I think those stories help us bring people in.
And even though, as you say, large language models aren't necessarily the longest-running or most impactful thing, I think we have to lean into this moment. This is a teachable moment for us. This is our society's moment. People have stepped up. Everybody I know was talking about ChatGPT this winter and spring. Admittedly, I run with a set of really nerdy people. But I think a lot of people were talking about it, and we have to use those moments to bring people in. The majority of those 1,400 comments we got were from individuals: people worried about their jobs, people worried about their security and privacy. And I think we can bring people in. And if there's anything that's encouraging here, it's that we're actually a bit earlier in this discussion, and I do feel this way, than we were with some of the previous generations of technology. So there has been some conversation in communities across the U.S. around facial recognition technology, contentious enough that a couple of years ago it pushed some companies to stop allowing police departments to use the technology. When we think about the public, when we think about public interest and equity, how do we incentivize? I mean, are we in any position to really incentivize companies to take that view in mind first and to really focus on serving the public good and equity, not just to assume that their products benefit society, but to actually center a lot of their development around these kinds of values and principles? The truth is, and this is what will help us, it is good for businesses and developers more broadly to be thinking about these issues up front. The success of their products and of their companies is going to depend on their ability to get ahead of these issues: privacy, security, the impact of bias in the models that they choose. And I think we need to do more work to lift that up.
I don't want to be naive about the work that it takes to make sure that people understand that. But I talk to leaders in the developer community, and I think in boardrooms across the country people are starting to ask the hard questions, as they did with cybersecurity before: how are we getting ahead of these issues? Are we actually doing the right thing and making sure that this isn't going to come back to haunt us, in terms of our reputation or in terms of our liability? I think there will be more to do. The starting point is that public attention that drives developers, companies, and others to really think, we've got to get ahead of this, and that's partly why it's important to talk about it. And then there are these accountability frameworks that we are trying to put in place in government, starting with tools that show people how they can measure their risk, then moving to commitments and codes of conduct, and ultimately, perhaps, to rules on the books that people actually have to obey. That will be the path to making sure that companies are thinking about this. And it strikes me, it's important to say, I think at New America one of the opportunities here is also to just create more dialogue between industry and civil society. Because the conversation is happening, perhaps, in the boardrooms, but sustaining it is going to require a lot of convening and constant dialogue over the next five, ten years of development, so that we're working, in essence, in concert, to make sure that whatever harms are emerging, we can identify them quickly and address them and have some level of accountability, but also create more perspective and trust. The problems are really hard. I mean, that's part of the challenge for us. Some of these are very obvious, some of them are harder, and the solutions are not so easy either.
So it's early days, but this is the part where we really need thought leaders, and honestly it's going to be groups like New America who can step up and represent the broad views of the public, and also have this dialogue with developers, with companies, with the technical community, because we need it. And I think a lot of people are asking the question, what does good look like? They don't even know, right? That's where we can help. And by we, I mean us. Yes. Well, I have a couple more questions I wanted to get out of the way before we open it up for audience questions, but you've been around this industry for a long time. Is there something that strikes you as different in terms of how we're approaching this moment versus other waves of development? What gives you a little bit of... I mean, these are really hard problems with so much unknown. It's going to take a while, but what gives you hope and confidence? Well, I think the biggest thing is that we're engaging relatively early. I have been in the technology and policy space, particularly around the internet, for a while. If you compare this to the early days of Web 2.0, cloud computing, social networking, even the development of the internet, I'd argue we've leaned in more here than we did in any of those places. We've learned some of the lessons. Part of it is that the developer community has learned. I used to have this great chart of when each company hired its first policy person. Microsoft, it was like employee number 20,000. Google, it was like employee number 3,000. Twitter, it was like number 400. And if you look at the relatively small companies doing some of this very impactful work today, they're hiring policy people, they're getting themselves to Washington. They're part of this dialogue that's happening on Capitol Hill right now. And I think they're engaging earlier, which is great.
I think also we have a much more sophisticated set of civil society players than we've ever had before, New America and others out there really digging in hard on how we make sure the public is represented here. And I have been involved in teaching over the years, and I look at the next generation of computer science students. This is what they want to be engaged in. Not just the development of AI; they care about the impact of what they're building. And that to me is the biggest reason for hope. We're building a generation of young people, and it's partly what they care about, who really want to make sure they understand the implications of what they're building. So that is the biggest reason for hope: that we're early on. But it's going to be hard work, and there's a lot to do. I would not be doing my job if I didn't mention, to that point, that New America has an amazing public interest technology program with universities across the U.S. that's focused on just that: trying to build the next generation of technologists who have the rounded perspective to really think about the implications and the ethics that go along with building these powerful tools. And it is extremely encouraging. I think probably the one thing that we need to worry about a little bit here is that we don't really have the luxury of time. We don't. And that is a little bit different than it felt 15 or 20 years ago in the internet space. The internet was relatively new then. When I started working there, it was 40 million people online. Now there are 5 billion people online. The pace of change and the uptake of these technologies is very fast. And so we need to be figuring these issues out now. There's a sense of urgency as well. So I'm glad we're engaging early, but we also need to be acting quickly. And I'll just say, the administration is working on this, right?
At the highest levels, we've worked on this set of commitments from companies. We're engaging with our international partners, working with them on global solutions. And we are actually thinking about what the legislative world should look like. Just today, Senator Schumer is convening this very high-level group of CEOs. That's exactly what we should be doing, right? Educating ourselves and thinking about what rules we need to put in place. Well, that is a great opportunity to talk about something we've been advocating for a really long time, which is comprehensive federal privacy legislation that really sets us up to address not just the existing technologies out there that amass and use all of Americans' data, but also the new technologies, and a lot of these LLMs in particular. It feels like the moment says we need to act, and there's some urgency that's been building up, but there is a little bit of inaction. I don't know what you think about our inability to get comprehensive federal privacy legislation out the door, as the Europeans have been able to do. And more generally, with the kind of urgency we have, there are still foundational pieces of technology policy legislation that we don't have in this country that need to be advanced. I know it's a complicated question, but what will it take for us to really take these kinds of actions on? It's a great question, partly because it tells us something about how we as a society are able to react to some of these technologies. I will say it is surprising, for those of us who have been in this space for a while, that we don't have a comprehensive federal privacy law in this country. Not to date myself, but if 20 years ago you had said that it will be 2023 and there will not be a federal privacy law on the books, I think people would have said that can't be, right?
This is going to be too important. We know these things are going to affect people, and we know, by the way, the data shows us that the public wants these rules as well, right? And so, well, the administration has said, the president has said, we need to be doing more in this space. We would really benefit from a comprehensive federal privacy law, partly because it is the right thing to do to protect consumers and have a baseline where everybody is protected, but partly also for our global leadership. I mean, I've worked with a lot of people, talked to a lot of other leaders around the world, thinking about this question of how we're going to address these hard issues. And it's hard when the U.S. itself doesn't even have a baseline privacy law, and we have to think about our leadership moving forward. So I'm hopeful that with the right incentives we can move quickly. I do believe there's a strong sense that the moment is upon us. We have to act, we're starting to act, and there's going to be a lot more to come in the coming months and years. I would love to open it up for questions from the audience, if that's okay with you. Yeah. Thank you, Alan. It's good. I mean, I think we're all happy that somebody as well informed, and with as deep a background as yours, is effectively the point person on AI on the commercial side in this country. But I wanted to, and you may not be responsible for some of this, but I wanted to get your thoughts on this question, which is: we saw the news media business essentially, more or less, get destroyed by the fact that social media companies were giving away its product for free, and no business can survive if its product is being given away for free. And so, picking up on what Lilian said, one of the reasons we have a very prosperous country is because of copyright and patents. And if you look at the large language models, essentially they've hoovered up 170,000 books.
The authors involved have objected, and that's just one example in terms of creative content. So I mean, this seems like a very big problem. Alan, you said time is sort of running out because this thing is moving very quickly. So what are the safeguards that realistically could be put in place for content creators, whether they're writers or artists or any other form of content creator, including developers themselves? It's a terrific question. It's a hard problem. This is one where I suspect that the law will play a major role, and will have to fairly quickly. And when I say that, I mean not just new legislation but also the interpretation of our existing laws. I think there's a lot of work being done at the Copyright Office, and I talked about the Patent and Trademark Office at Commerce, to try to get ahead of that, and there are hearings on Capitol Hill to think about. But I suspect we're going to see litigation, and that litigation will give us some insight pretty quickly about how our current legal structure will work. And then we'll have to go from there. I'd also say there's an international dimension to this, right? Math does not stop at the border. Innovation in AI is happening all around the world. And part of what we're so keen to do as an administration is, yes, work domestically, and think about the companies that are leading here and getting commitments from them, but also work with our partners. We're going to start with the G7. There's a big effort, the Hiroshima Process that's been kicked off by Japan, to come up with codes of conduct around AI and then move out from there. The Indian government has been a big leader in the Global Partnership on AI. We're very keen to be working with the UK; they have a big safety summit coming up. And we'll see more and more of these international efforts to try to get ahead of these big problems. I'm getting the Peters today. Hey, I'm Pete Singer.
Great to see you again, and thanks for a wonderful talk. I wondered if you could speak about your own personal view, and maybe the administration's view, of the effect of this on jobs, not merely in an economic sense, but in a political and societal sense. There's a variety of contention on how many jobs will be replaced or redefined or reduced because of this. We've got estimates all over the place, but what I was struck by recently was a poll that came out about two months ago. The way it was labeled in the media was: only 14% of Americans worry that they'll lose their jobs to AI and robotics. Only! That's one out of seven people in this room. And then there was a Gallup poll that came out earlier this week that said actually it's now up to 21%. So can you speak to what your view on this is and how the administration is thinking about it, particularly in terms of the very clear social anxiety that also has real potential security effects? I think a starting point for us has to be to engage in that conversation and to really understand that there will be impacts, but also to try to be clear-eyed about where they'll be and how we might mitigate risk. But I think the biggest thing is starting by understanding that there's going to be a lot of benefit, and that that benefit in itself will create economic opportunity. This is not a perfect example, but I was speaking to the head of a local private school here about how they were handling the rising use of ChatGPT, and he said, well, this is just like the calculator, right? We are going to have to change the way we assess people. It's not the end of education. It's not something that he was planning to ban in their school. It was something that they were going to embrace, and they were thinking about how it was going to change the way they do evaluation. You know, students use calculators today on their math tests in high school.
We still learn arithmetic. The point is there are going to be these benefits in unexpected ways. There's a different view of many of these things, and not to be dismissive of concerns, because they're real, but there's data out there that also shows a tremendous amount of job creation comes with these new technologies. So we have to figure out how to make that pivot to help people respond, and also recognize that our predictions are not necessarily so good. We talked about driverless cars a little while ago. Ten years ago we might have said, oh my gosh, we need to embark on a massive project to rescue the trucking industry. That's not here yet. It might come. We need to be really clear about it, and we need to prepare for it. But right now it's very hard to find enough truck drivers, and it's going to take a little while, we think, for some of these things to happen. So I do think there's a level of preparation, being ready for this, and then also trying to make sure we're benefiting from the upside. The last thing I'll say, because I think this is such an awesome question: I was just down visiting Miami-Dade College in Miami. It is a giant college system, an ASU-like system, just massive in its scale. I think they have 100,000 matriculated students. And they invited me to see their AI lab, and I'm like, you don't even really have a PhD program. What is this AI lab going to look like? And I went, and I was kind of blown away, because their approach is: we're not here to train PhDs in creating frontier models. But what we do think is that every business in America is going to need somebody who knows how to use machine learning, who knows how to use ChatGPT, who knows how these basic tools work. And we are going to be the place that trains people with an associate degree to go out there and be a user and a smart business leader on the use of AI.
And I think it's brilliant, because the fact is we need to stop thinking about the frontier models as the only thing, and about job dislocation as the only thing. We need to be thinking about how we train a next generation of workers to use these tools across the board, whether it's radiologists reading scans or people using ChatGPT in small businesses across the country. That's the retooling that we need to do and start getting ahead of, and that's how we're thinking about it. Florida is a great model. That Southeast Florida region is the largest producer of Latino engineers in the country.

This question makes me want to ask: can you describe what efforts, or, if they're not already in place, what kinds of investments we can be making to really raise digital readiness, especially for kids? I think what you're describing is the augmenting capability and potential of AI. But as we know, a lot of our children, especially children of color and poor children, have the least access to a lot of technology, and even to the basic tools to be successful in today's economy. What I'd love to see is a country where we're investing early on in equipping all of our kids to be successful in this moment. From your vantage point, and in the collaborations across the federal government, where do you see the opportunities to do more of that?

Well, it starts with the investments that we've been making in STEM education, or STEAM education, across the country. We really do need more investment and more help for people, and for diverse communities, to start at a very young age learning about these tools, the tools they'll need to be able to succeed in a very STEM-oriented world.
I think we also need better digital literacy and media literacy, and there's a lot of work that's been done on that, including some great work done here at New America. But that's got to be very real. And you see groups, I'll just shout out Common Sense Media as a great example, who've put out toolkits and are thinking about this: how do we embrace this and make sure that we're teaching digital and media literacy? Part of our efforts at Commerce, at NTIA, as we think about building out broadband infrastructure, is that we've been given big grants to do work on digital equity. And our view of digital equity is that it's got to include thinking about the workforce of the future, helping communities that have been left behind, and particularly vulnerable communities, making sure they have the tools to thrive online. So there's a lot of investment going on right now, and like I say with the Miami-Dade example, we need to take a very expansive view of this. It will be partly about investing in those frontier models, in the PhDs who are creating the newest technologies, the big NSF investments in those academic leadership centers. But it's also about how we reach a very broad cross-section of America and help them understand that they're going to be part of this revolution too.

Yeah, I often say no child should graduate high school without digital literacy training. That is the new civics curriculum of this country, and we should figure out how to make that happen. Alan, this has been a delightful conversation. I don't know if we have any more questions from the audience, but with that, we thank you for your time.

I'll just close by saying this is a moment of great opportunity but also some peril, and I think there really is a big global conversation going on. Are these going to be technologies of openness, or used by closed societies? Are they going to be technologies of freedom, or of control?
And how do we create more equitable outcomes as we embrace this new machine learning revolution? The way we get there is through conversations like this and the engagement of leaders like the ones we have here today. So I'll just say thank you for the conversation. Thank you. Thank you.