 Good morning. I should mention, I think it's all commendable that you made it here at 9AM on the Saturday morning of DEF CON. So I think you're enthusiastic enough right now. Archie is the head of the White House Office of Science and Tech Policy. And she is also the president's chief science and tech advisor, which is a far more impressive title than me. Archie, welcome to DEF CON. This is not your first DEF CON. This is not my first DEF CON. I got to come in 2016 when I was running DARPA and we did the DARPA Cyber Grand Challenge. And how was that experience? That was an amazing experience. I don't think I was hacked, but how would I know? Okay, can you hear me now? Okay, I got it. I was here in 2016 for the DARPA Cyber Grand Challenge. So why is somebody from the White House here this weekend? We're here because I don't know if anyone's noticed, but AI is happening. It's been happening for a long time, but we're having a moment where it's burst onto the scene and it's come into people's lives in more ways and more visible ways. And President Biden's been really clear that it's part of this inflection point that we're at in history. He's very clear that we're in a moment where the choices that we make are going to shape the decades that are ahead in very powerful ways. And we all know that information technology has created this moment. It has been one of the big factors that's created this moment. And now comes this huge accelerator with this next generation of AI. So AI is an urgent and a critical focus for everything that we're doing at the White House and across the administration. And part of that means getting the talents of a group like this, putting this hacker culture to work to help us get to a future of safe and effective AI. And that's why I'm here. And speaking of which, next door, there are thousands over the course of today and yesterday and tomorrow, people getting to work on that GBT and the other AI chatbots. Tell us a bit about that challenge and why is the White House back in something like this? Yeah, we found out that the organizers were putting together a generative red teaming talent at the AI village. This was AI village coming together with seed AI and main intelligence. And we learned about this a few months ago and we said that is exactly what we need at this moment in time. If you think about it, think about how helpful red teaming has been in getting cybersecurity dramatically advanced from where we were. Now we've got AI technology, which is, you know, in many ways, the technology itself is much more complex. And then, of course, how it comes out into the world is the way humans interact with it. So you have complex, opaque, difficult to understand technology. And then you have human beings talk about complex, opaque and hard to understand. And it's where they meet. That's where things are happening. And we know how powerful this technology is. President Biden keeps coming back to the fact that it's so powerful and we absolutely have to harness its power. But to do that, we have to manage and mitigate its risks. And red teaming is going to be a core part of how that happens. So when we found out about that effort, we said we're all in. We worked with those organizers to bring the major AI companies to the table and to get them to agree to participate and have worked with them along the way, including with a pilot that they did at Howard University to bring up the challenge. And I cannot wait to go see how it's going down. I hear their line. So I think it's probably doing really great. Yes. We've spent a lot of time there the past few days. And just, I'm sure, pretty much everybody here is familiar with what's happening. But essentially, they're trying to get the chat apps to say things and do things they shouldn't. So whether it's spewing misinformation or hate speech or giving instructions on how to commit a crime, all that sort of thing. How do you get them off the rails, which no can happen. Exactly. So with all that, I mean, what is your kind of dream? Oh, thank you very much. What is the dream scenario, the dream outcome of this event here this week? Yeah. Short term, what we want to see come out of this is really practical. I mean, this is being done responsibly when these systems get derailed. That information is going to go to the creators of those AI bots so that they can fix them and get better, keep getting better. So that's the immediate goal. But this is the first time we've had an independent evaluation of this sort. Of course, companies have worked on improving their systems. And of course, they've been out in the world and people have been using them and occasionally breaking them. But this is the first independent organized way of looking in a more systematic way at where these systems come off the rails and trying to both, number one, help the companies improve. But number two, really get an assessment of where we are and how good our guardrails and how easy are they to break. So that's today. And then the longer term ambition here is what we need to build as quickly as we can to a future where we have rigorous, robust red teaming across all the different kinds of AI systems that we have. And this, I think, is going to light a spark. That's part of how we get to this vision that we have here at the White House. And I think across this whole community of safe and effective AI. Have these AI apps, you know, there's been a rush of them, I guess, since last winter, you know, in terms of when it really came into the spotlight with GBT, the release last November or so. Has things been moving too fast? Have the American companies been pushing this out into the world without enough testing? Well, look, this community knows how fast technology can move. And that's been the whole history of the information technology revolution. In a really deep sense, that is a revolution that is about acceleration. And that's what this new generation of AI is doing is it's further accelerating the way we go into the future. And the way I think about it is we have this amazing accelerator. But it doesn't do you much good unless you have a steering wheel and brakes. And putting all of that together is how we're going to get to the place that we really need to get to. So, yeah, I think things are moving fast. And that's why we are moving fast at the White House. If you followed in October, just before the tsunami of chatbots came out into the world, in October, the White House released an AI Bill of Rights, a blueprint for an AI Bill of Rights. So this is a statement of values about how important it is to have safety and security, how important it is not to have discrimination built in, how important it is to protect privacy. Because when you, when the technology is moving this fast, there's no better time that it's so important to be completely clear about what our values are. So that's what that, that is the bedrock. And then as these new generative AI systems started bursting on the scene, we've moved very quickly. And again, the president is directly driving this because he knows what's going on. He sees the need to move very rapidly. Early in May, the president called CEOs of four leading AI companies to the White House. That was Open AI, Anthropic, Google, and Microsoft. The vice president spoke for an extended period with them very directly, called on them and said, told them that they had not just a legal but a moral obligation to make sure that their systems were safe before they went out into the world. And that meeting, we had a wonderfully open good discussion. That meeting kicked off a process that just a few weeks after that in July, the president announced the voluntary commitments by seven leading AI companies, those four plus Amazon inflection and meta. And those seven companies now have signed up to a first set of voluntary commitments. And I think that's a benchmark, a landmark commitment to get those seven companies to commit to something the president announced that from the Roosevelt Room. That's an important moment. And that's about the industry taking steps to do its part of the job, but we all have roles and responsibilities. And so the next thing that we're working on very actively right now is an executive order that the president will put out very quickly. This has been a very urgent effort. And that is about using all the laws that are already in place, but boosting the executive branch's ability to manage and to use and to harness AI. That's the next thing. While all that's going on, we're continuing to work on a bipartisan basis with Congress, which has also been on a steep learning curve and has been working quite hard on AI issues. And of course, this is a global issue, not just a domestic one. And so we'll continue to work with our international allies and partners. So those are all the things that we are doing. But again, I think it's going to take all communities. And that's why it's so exciting to see the kinds of things that DEF CON and the hacker community are doing as well. And so you mentioned the AI pledge commitment from the seven tech companies. And a few bullet points really. What is it that you've gotten these companies to commit to doing? Yeah, they have committed to do several things that I think are really going to be important for the future of how AI rolls out. One is a commitment to internal and external red teaming. So very much to the point of understanding how these systems break so that we can keep getting better at them. There are several commitments. The other one I'll highlight is a focus on an agreement to work towards watermarking, which can be one part of having, you know, just getting to a future where people understand whether the content that they're looking at is authentic or machine generated or manipulated or not. So those are a couple of the pieces, but it's a couple of pages. And I think it does lay out some important first steps for the companies. You know, obviously these are corporations and they're protecting their reputation and want to make money. And look, you know, the president himself has had particularly harsh words for a lot of the big tech companies. I mean, including Metta, Facebook, you know, even said during COVID that the platform was killing people because of COVID misinformation. Why should the White House trust these companies with a voluntary pledge like this? Well, a voluntary pledge is a great place to start. It's not far from the only thing that needs to happen. And that's why we're following with an executive order and working on legislative strategies. And I think we are in a moment where people understand that technology has already raced ahead very rapidly. And a lot of the dreams that we all had about information technology and social media, a lot of those have come true, but some nightmares have come with it. And we are in a time when that is visible and clear. And that I think is exactly why you'll see the White House continue to push the companies, hold them accountable, and take action as an executive branch and with Congress on legislation as well. All of those are going to be needed. So what is it? Can you tell us a bit about the executive order you mentioned that's coming from the president? When should we expect that? And what is it that's going to be in it that really could, you know, bring accountability here? Well, I don't want to get ahead of that process. I'll just say... Go on. We won't tell anyone. You know, I can't do that. Come on. But I'll tell you, you know, I joined the White House in October and everyone kept saying this is going to go on a rapid, you know, this is on a rapid accelerated path. It's actually not a normal process. It's just faster. It's just a radically different process because of the urgency that we're moving with. Let me just tell you the broad questions that underpin the EO. Look, I mean, we've got this phenomenally powerful technology. How do we get to a future where we're getting its benefits and managing and containing its risks? Lots of work to do. Number one, a lot of the issues that we are all very concerned about with the powerful new AI technologies, things like cyber crime, fraud, discrimination. A lot of these things happen to already be illegal. And so how do we do all the things that we need to do to regulate and enforce as AI becomes an accelerator of malfeasance? So that there's a whole class of things that needs to be done there. Why are we doing all of this work? We are doing all of this work to get AI right because it's going to bring us one of the most powerful sets of tools we've ever had to wrestle with the hardest problems that humanity faces. And we are in a time, the world and the country are facing a climate crisis. We have health outcomes in America that are unacceptable for a country of our wealth. We are still working to fulfill the American dream of making sure there's opportunity for every single person in our country. We have geopolitical and national security challenges that are very different than in the past. And we know that we have to continue to boost and build our economy so that it creates jobs that support families. Those are things that are not going to get done unless we harness the power of AI. And AI is going to open so many possibilities, but doing the work of the country and really tapping this power in a responsible way, that's another broad theme that we're exploring. And everything I just told you needs the tools and the methods to make sure we have safe AI and everything I just told you needs people. And so I would, by the way, I would just say to the amazing people who are here, if you want to think about places where you can really create a better future with technology and AI, think about public service because that's an opportunity. If you get to the right place at the right time with the talents and skills that you bring, you'll have a place where you can stand to move the world. And I would really urge you to think about that. Speaking of that skill set, some of it is lacking in Congress. In terms of, you know, there has been a obvious lack of tech literacy among many members of Congress. I think we have heard for years and years and years talking about bringing in some sort of regulations for social media, for safety, that sort of thing. It's clear that, you know, Congress has barely wrapped its head around social media. It's barely wrapped its head around the Internet, frankly. There's only so much a White House and all your good intentions can do through executive orders and whatnot. Do you, is there a realistic expectation here that Congress is going to actually do anything about AI and what could that look like? Yeah, I completely agree that we need legislation. The President has already been very clear that there are things we should be acting on already. Congress has made progress but not gotten across the finish line with legislation on privacy, with legislation about protecting our kids. And those are things that Congress needs to be acting on now. And so that's a very clear call from the White House. As we look ahead to this next generation of AI, I think that same urgency that we feel in the White House is very much something I see on Capitol Hill. And I see members and senators working aggressively and really actively to come up the learning curve to understand not only what's going on but what does it mean and how do they need to respond with policy. Majority Leader Schumer has been holding a series of briefings to bring a broad set of senators up to speed on AI matters broadly. And I had the opportunity to participate a few weeks ago in the briefing that he put together for senators on AI and national security. We ended up covering everything, not just national security. That was an amazing meeting. Over half the senators were in the room and we were in an environment where they were learning and they weren't legislating yet. They're in learning mode. And the quality of the questions, I think, I left very encouraged. I think if you'd heard the questions, you would have left encouraged as well. Because they were really parsing what the technology can and can't do and what its implications are from a competitiveness point of view, from the point of view of problems with bias in housing or other areas, from the point of view of privacy issues, from concerns about misinformation, from all the issues about safety and security. So I think I found that very heartening and I think there's a lot more work to do but that's we're going to be very committed to working on a bipartisan basis to make progress there. And on that issue of militarily and national security and also competition from China and elsewhere, of course, there was that infamous letter of calling for a pause. Fairly did the pause button on AI development as such in the public realm. What were your thoughts on that and how does that stack up against trying to protect the public from the potential harms of AI but also trying to remain competitive against different countries? Absolutely. The thing that everyone agrees on is we have to build a future with safe and effective AI. That is the point of all of these different perspectives that we're hearing. And again, if you step back and you think about this as the most powerful technological force of our times. And we're all living in a time where technology has already accelerated and changed the human experience in so many ways and now we can see this great accelerator on top of all of that. So the choices we're going to make now are going to determine what the future looks like. And every country around the world knows that. And every country around the world is working to develop and use AI in ways that reflect their own values. And there are so many arguments in the world of AI and in the world of AI policy, but I will tell you the one thing we are all completely clear on is none of us wants to live in a future in which AI has shaped a world that's driven by authoritarian values. That's not the future any of us wants. And that is I think why it's so important not just for our country, but for the progress of the world that we and our like-minded allies move very fast to use AI responsibly, but to use it in powerful ways that really solve some of our hardest problems and open up new opportunities. So that's I just want to never lose sight of that target. And every single time President Biden talks about AI, this is exactly what he talks about. And I've just really enjoyed every exchange I've had with him. He's completely clear. He is lit up about the possibilities here and he knows that's why we have to do the hard work to get it right. You mentioned the racial bias component of this and I think you'll see what's heartening when you go next door to the AI village right after this is a very diverse set of hackers working on the systems there. Look, obviously there's always a conversation about diversity across industries, etc. But it does seem particularly important here when you see, for instance, just take what we hear about facial recognition technology in states like Michigan and elsewhere when it comes to prosecutions. Why is this so important and how can having a diverse group of people working on this technology and red teaming is, how can that actually have positive impacts? Yeah. This is about values and the one of the most fundamental ideas of America is that every person gets a fair shot. And as our information technologies have gotten more and more intimate with us and with our lives. I mean, when I was a kid, you know, it was punch cards, right? The machine sat in a refrigerated room far away and it was locked away. Now we wear it on us. We are interacting with it all the time. It mediates our interactions with each other and with the world. Now, I think we really are at a point where we have got to stop thinking about it just as a technology and see it as something that is intimately connected with people and it is about people. So when I go to the AI village, one of the things I'm really excited about is that organizations like Black Tech Street have brought people to the table to be part of red teaming these systems. Houston Community College has shown up with students. And I think that is a perfect example of the different, you know, these are tools meant for everyone, but that means if we are going to figure out how they work as people try them and use them to try different things, we are going to have to have everyone at the table to red team them and that also means that the people who are building these systems, we need to embrace the entire universe of talent because that is how we are going to build things that really serve all of us. So when it comes to the pledge you got the companies to sign, and I'm not sure by a show of hands how many folks are kind of familiar with what the White House got to sign up to, but it is, you know, as you said and it is your job to say, you guys are on a journey to try and bring everybody along. It is my job and it is true too. But also, you know, the pledge I think any self-respecting cyber person in this room would say it is a bit wishy-washy. The companies could certainly wiggle their way out of a lot of things here. They are grading their own homework. But one thing in that that struck me as interesting and I think, you mentioned there is a thing in there about watermarking and watermarking, imagery that is essentially created using AI, which is great. It sounds like a great idea. But I am interested in the specifics to say, sure, you can stick a watermark on something that Dali spits out or you can even put something in the metadata but the second you screenshot it or something else or crop it there, it is gone. So when it comes to these actual commitments from the companies to be like, yeah, watermarking, cool. How in practice is this actually going to happen and how soon is it going to happen? That is all the work that is still ahead. I just want to say that the whole point of external red teaming is that companies don't get to grade their own homework. So that is actually why that particular commitment and the work that we are doing here is such an important step. Watermarking is a technical solution that aids in this problem that we increasingly, we already have and AI is going to accelerate of not knowing the authenticity or the origin of information. This crowd knows better than anyone how much technology can help and how inadequate it is as a complete solution because there is this other part, the human beings and that is always the hardest part. But I do think that watermarking is done right and it is not easy technically. It is not going to be easy as a practical matter to implement it. But I think we have done complex technical things through standardization and agreements among companies about how they will behave and how they will use information. So I think we have reasons to believe that we can bring that technology forward and that is what these companies and many others are working towards. So again, I think we all have to be really clear that we are not going to get to a future where everything is completely perfect. But I am really confident we can get to a future where the technology, the AI technologies we are talking about are safe and effective enough that we can use them and move forward. It is a terrible way to go through life. So we are going in and I am very conscious that you are not a political person as such on the campaigns, anything like that. But from a general perspective when we talk about misinformation and general AI and everything else, we are going into an election year next year here in the U.S. and of course many other big elections around the world next year. Specifically when it comes to how AI could be a threat to information and how we vote and understand about candidates and campaigns and a threat to democracy, what are your big concerns there? Yeah, let me I want to be very clear that I work for the president, I work at the White House and in that role I work for the country. I am not I do not work for one party or the other and so I won't say anything about this election. I think it's very clear though if anyone who's paying attention understands that we've had an erosion of truth and trust in information and I think we freak and now it's accelerating or the potential for great acceleration is here with generative AI and I think we've often tended to have conversations about will people believe things that are wrong and that is a huge concern perhaps even deeper and more subtle is when people don't believe anything because how does a democracy function when we don't have a common basis for truth and understanding and trust among parties. So I think this is a huge issue and again it's there are technical components and we talked about watermarking but the the challenge I think is much greater than just the technology and there's a lot of work to be done there. And on that you know you you are formally from DARPA, Department of Defense's I guess research is the best way to absolutely the place that does breakthrough technology and security and you know we've spoken to them quite recently about the people who are doing work there on deep fake detection of course that plays into a lot of the political environment but tell us a little bit about the challenge you launched with DARPA this week here in Vegas. Yeah that was a terrific announcement that we made from the White House. DARPA is launching a new challenge it's the AI cyber challenge and so we talked about the red teaming competition that's happening over two days here. The DARPA challenge that just got announced is in a sense it's the complete flip side it's about using the power of new AI technologies to go after really hard problems in cybersecurity and try to accelerate our ability to knit up software systems that are critical for everything that we do that you guys know better about than anyone on the planet that was announced a couple of days ago and it will roll out over a couple of years the White House again in that case helped bring the leading AI companies to the table to be participants and partners with DARPA in that enterprise and I just want to say from having led DARPA for a few years I know how powerful these challenges can be this this is an idea that is actually very deep in the DEF CON ethos is the way you solve really hard problems is you just let everyone go run at them in the most creative way possible but mobilizing that across broad communities when DARPA puts out one of these challenges the people who are always paying attention to DARPA and the federal government show up but a lot of people who are not paying attention normally also show up and I think that creativity that we tap is going to be very very effective on the cyber security issue so this is how we're going to do big things and I think people in this room can take part yeah please counting on it we're going to open up to some audience questions in it for the last 10 minutes or so but you know it's almost 9 30 a.m. so I can ask the the nuclear question what is your what is that what's what is the worst case scenario I mean what is the cliche what is it about AI that keeps the chief science advisor the president of the United States up at night I what I'll tell you I I do sometimes lose sleep and it's because AI's breadth it has so many applications that's what's exciting about it and that's the bright side and the dark side is that that means that its risks are so varied and yes there are risks of catastrophic harm national security scale harm if you think about cyber security vulnerabilities and critical infrastructure or biosecurity that's like decapitation but equally damaging is the slow cancer of embedded discrimination or privacy erosion or growing distrust in our society that erodes democracy so this is not a one and done and our view from the White House and very clearly from the president we are the United States of America we are not picking and choosing we are going to tackle all of these issues and get the whole the whole business of AI on a track that's safe and effective so I I do lose sleep at night but I wake up in the morning with hope okay very good I see but does anybody have a question you're gonna have to shout go ahead oh you got a microphone now oh we don't oh there is a microphone there we go um I'm in the healthcare space and one of the things we're dealing with is how do you prevent the personal details needed to say do diagnostic from not being absorbed into the model is now public information that's one of the things we're looking at because some of the early early examples were we ask this specific question and now that data that so-and-so lives here is now in the model how's that progress going or what are the thoughts on that yeah I think you put your finger on one of the really important issues we healthcare is an area where we have talked for years already about the power of health data to cure diseases and even understand the factors behind disease so that we can get ahead of them with prevention I would say the progress we've made is very modest compared to the size of the opportunity and these kinds of privacy issues are one of the reasons one of the barriers to getting there because of course we have to take that seriously so I think there are again this is an area where there are technical approaches and a huge number of privacy enhancing technologies that have made progress in research but again are still sparsely applied and that's this is one of the areas that you'll see the administration continue to focus on and to do the work that does the experimentation and ringing out of privacy technology so that we can get the value of healthcare data yep go ahead I'm Naushat Situ and co-founder of Blackbird AI we work on narrative and risk intelligence so really appreciate all this initiative from the president and the community has are doing like for the red teaming on the genitive AI and uh so let's assume in the best case the US companies where the government has control on or can make recommendation they comply and we built a good LLM Genitive AI with all the guardrails but this is something now accessible for everyone like anyone who has resources can build the same thing and it's easier to build it without the guardrails because we already built it right so the foreign adversaries they are going to build that and make it accessible open source it so access to those information like for you have the scale uh and accessibility of disinformation tool that you want to attack the government or anything anyone have access to that even like let's say these tools are have guardrails but they will have access to the other tools so the question I have is how can we stop that or what are the what are your thoughts like I understand like you're representing the president only but I want to understand as a country because it's not just president like we need legislation Don Sullivan already mentioned that so I would like to understand like what are your thoughts how can we actually stop it or what are our thoughts we might not have the solution today but want to hear your thoughts and understand yep thank you for a great question because this is the power of the technology and and whether it happens inside our borders or from outside these are the harms that that we are very very focused on so let's let's break it down uh number one we on the technology side uh the actions that we are taking working with companies but also with work that's happening inside of government uh how how can we make the technology as robust and as safe and effective as possible knowing that that's unlikely to be perfect and knowing I mean the the whole thing with there's a whole guardrail counter guardrail thing happening already right there was this wonderful universal jailbreak that Carnegie Mellon just published as an example and now of course everyone's gonna get better at that but I think it really just tells you that we're very early in in getting you know my hope is we get to where guardrails are so robust that it's so hard to break them that most people don't bother only DEF CON people will bother everyone else so just leave them alone it'll be fine it'll be pretty safe but I don't know yet if we're gonna get there so so there's more work to be done on the technology side and there's more work to be done on regulation enforcement and I think legislation as well because when bad actors or even by accident if people use these technologies to create harms of misinformation that is just that is dangerous or to discriminate in housing decisions or hiring decisions or in law enforcement in inappropriate ways or creating creating security threats of various sorts when any of those things happen those are things we have laws against and we have to be able to know when that happens to to stop those harms to hold people accountable and create an environment in which there are so many penalties for doing the wrong thing with the technology that people would rather not and they go on and actually use it for all the good things that are the reasons we're developing it thank you so much thank you you mentioned the Carnegie Mellon uh research paper there which were you know they by prompting stuffing they were able to get gbt and Lehman others to say things they shouldn't um I was playing around with that with the professors uh last week at Carnegie and we got of course it has some of gbt's trained on some of my information from the cnn website and things so it knows a little bit about me so we asked it to insult me uh and it was a very specific and impressive insult and I as it was you insulted that's my question um well I what I did say was I was like this sounds exactly like what people say to me on twitter and then I thought oh wait it has probably trained on twitter data so this all makes sense but anyway um oh oh we have two mics okay yeah I have a quick question well really not a quick question so please make it quick today you've touched on a couple things one is the current administration has brought leaders of ai to the table to uh voluntarily pledge to do the right things which is great that's a really great start and you've touched on uh in recent years there's been an erosion of trust in the government and some information which I think is saying it lightly but here's my question to the current administration and the u.s government as a whole what is the u.s government going to do to voluntarily pledge and assure they won't use machine learning and ai as a disinformation engine tactic model going forward not only internally but externally thank you thank you for asking that question the specific answer to your question is work that's happening right now the office of management budget is working through guidance for federal agencies that is very uh explicit about how to use this powerful technology in responsible ways the ways that advance our values rather than eroding them and that is the incredibly specific bureaucratic memorandum driven way that something as uh values driven as your question turns into the actions across government and that's something we take very seriously thank you for that yeah um the french government recently came out in favor of open source ai uh I wanted to know does the administration have a position on open source both as a way of uh mitigating risk and also spurring competition does it have an official position thanks uh you know uh what I I see what's happening with open source and ai today and if I were still uh in silicon valley being a venture capitalist I would say it is democratizing and if I were still in the defense department I would say it is proliferating both of those things are are true and uh very much to your point we know the power of open source uh for improving a core technology and I think we also understand the power of open source for creating incredibly powerful capabilities and putting them in the hands of of many many many people I want to step back from that and say the the what every one of these discussions is about is getting to a future where ai is safe and effective and um there are going to be lots of pathways to getting their measures on the technology side measure measures about how how the technology is used but that's the north star is how do we get to safe and effective ai and that's what we'll stay anchored on and is a terrible na ai or or nare can you tell us a little yeah absolutely um this is the national ai research resource uh it's there was a task force that was put together at congress's behest to uh advise the government about the resources and the capabilities that needed to be put together for publicly funded ai r and d and its recommendations uh from the task force the task force recommended a multibillion dollar undertaking by the national science foundation participation from many actors across government but the purpose of that is to build the data and the compute capabilities and to do the funding of many different uh university researchers of all different stripes uh in order to be building and um and improving ai technologies again this is part of how we get to safe and effective ai but it's also the market is going to take this powerful technology and go do everything for which there are market drivers but we have other things that this country has got to get after and using ai in powerful ways for our greatest national challenges and our greatest global challenges that's the work of public r and d and that's what nare is all about we were really pleased to see congress taking it up and there's legislation advancing that's something i'm very hopeful about yep hi i'm uh gregory o'connor and i had a question about if there is any kind of a tradeoff between accuracy and factualness and values so for example yesterday the ai red team spoke about a system that was putting out names of nurses have you asked to give me a typical nurses name and it would give a name identified as a female name most typically as somebody who's a technologist if i was making a system and i knew as a fact that 86.2 percent of nurses were females and then my system had an output that generated 86.2 percent of the names which were identified as females i'd feel success accuracy but if this uh you know the groups that are red teaming and so forth if they were looking for 50 50 then i have to code my system or build my system to meet an aspiration as opposed to the current statistical reality so how is that being handled or are you looking for other sorts of input in addition to this red teaming approach right now that would try to bring in some elements of factualness i think you put your finger on the choices that ai developers make and the red teaming is about shining a light on on where things are but i think that's a great example of choices in the design of these systems that we're all going to use thank you you had the systems be accurate rather than aspirational i i mean we're going to use these systems for so many things when i uh ask a chatbot to write uh you know a bruce springsteen song about something like it's not that's not about accuracy it's about creativity and that's a very different thing so i think it's really i think we have to stay anchored in what is the purpose and uh and often and people are using these things for medical information boy that had better be as accurate and clear as it can possibly be and i don't know if it is yet right so those are the things that we do worry about i know we're almost out of time so we'll try to get through these last three questions very quickly hi my name is mary key and i'm an information security manager for a bank and we've been dipping our toe into ai and a little bit of r&d and the more that we do it the more the business wants to use it and we're at a point that i mean information security knows what questions to ask but i can't trust the business to ask those same questions so how do you recommend implementing ai for commercial use in a highly regulated industry like banking oh an age old question yes and i mean you you all are in the trenches you're going to figure that out but i think it's a great example of you are in a high trust business i mean i'm not going to tell you the answer you know better than i do but someone was talking about health uh if it's healthcare if it's finance i mean these are these these and many other areas are places where trust is that is what your business is all about and i think it really points to the need to ring out these systems thank you it's on you you got it very quickly because i know that goons will start beating me up shortly yes um ashley and i lead a uh security testing team um so you know we've been talking a lot about uh doing red team against generative ai and i think the question that some people have since it is such a new technology is where do your teams start with that so do you start at looking at the content that you can get it to spit back out or are there other components and how do you approach that as an org yeah i think that's a very broad question and different companies or users are going to have different answers as they think about what they want to use ai for well at at the white house um the president has just been completely clear that a the power of ai is its breadth and what we as a country need to achieve is confidence and trust that ai can be safe and effective for whatever application that's going after and that that will continue to be our frame all right this question better lower socks off last question no pressure no pressure so good morning i work as part of a soccer blue team um the thing that keeps me up at night is a lack of guidance for my customers as to what they should and shouldn't do so is there initiative coming from the white house for that general education for you know best practices while this technology is still being developed you know it's so hard to get people to do what what you want them to do is technology is easy people are hard and again i think this is every company is going to have to figure this out for the particular purposes that they are using ai and i think your question really gets at the deeper point that these systems are not they are not just they're not just ones and zeros they're not just bits and bytes they are about technology and humans working together and uh that's that is the heart of the complexity that we're dealing with here it's the humans that are still going to screw us all right and also do the greatest things ever that's what it's all about um thank you so much for your time uh thanks all