Hey everyone. Thank you so much for that. So I am Perri Adams. I am from DARPA, and I have with me here today Michael Sellitto, who is the head of geopolitics and security policy at Anthropic. I have Heather Adkins, the VP of security engineering at Google. Vijay Bolina, the CISO and head of security research at Google DeepMind. Dave Weston, who is the VP of enterprise and operating system security at Microsoft. Matt Knight, who is the head of security at OpenAI. And Omkhar, I'm so sorry, Omkhar, we should have practiced this beforehand. Omkhar, can you all please give him a round of applause for putting up with me? He is the general manager of the Open Source Security Foundation, a project of the Linux Foundation. So now we got through that. I wonder if I have the largest panel at DEF CON this year; I think it took us a minute and a half just to get through names. So before we let them talk, I'll give a little background on what we're here to talk about today. So I'm Perri Adams. I'm at DARPA, where I run a number of research efforts that try to push the bounds of what's possible in computer security defense. And when I'm not doing that, I organize DEF CON CTF, and the finals are currently ongoing. So to put in a plug for that, you should absolutely visit the CTF floor. But one thing that I'm doing right now that I just announced at Black Hat is called the AI Cyber Challenge. And so if you don't know anything about it, I encourage you to visit aicyberchallenge.com, which has quite a bit of detail on what this challenge will be. And so I'm here at Black Hat and DEF CON to announce it for the first time. This is a White House backed initiative that puts nearly $20 million on the line to bring together the best and the brightest in AI and cybersecurity to create AI driven systems that can secure the software that all Americans rely on. So this is a competition to develop automatic systems that can find and fix vulnerabilities in code.
And so a little bit about the structure of this: we are going to put on a qualification event in May, and 20 teams will advance to the semifinals at DEF CON in 2024. The top five teams there will win $2 million each and will advance to the finals, which will occur at DEF CON in 2025. DARPA is also funding up to seven small businesses, at up to $1 million each, to participate. The application process for those awards will open on August 17th, and the application process for open registration, the open track, will start in November. And so this is a competition that is open to anyone, and I encourage you all to visit aicyberchallenge.com to learn more about this effort. The reason I want to announce it to this audience is that Black Hat and DEF CON bring together some of the greatest minds in this space, and I would love to see participation from you all in something like this. Now, I'm also incredibly gratified to have some really fantastic collaborators on this effort. Anthropic, Google, Google DeepMind, Microsoft, and OpenAI have come together to collaborate with DARPA on this effort and make their cutting edge technology available for participants to build on top of. Similarly, we're collaborating with the Open Source Security Foundation to help participants develop systems that can solve real world challenges, especially those seen in the open source community, such as supply chain security and the security of our critical infrastructure systems, which are built on top of so much open source code. And so today we're really here not to hear me talk, because I've been talking about this quite a lot lately, but to hear my panelists talk about what they're looking forward to in this competition and how they see it fitting into the broader context of the work that they do every day. And so I'm going to start all the way at the end of the table with Omkhar. So, Omkhar, help us set the stage.
What challenges does the open source community face in software security? And what do you see as wrong with the status quo? Is the microphone on? I could talk really loud. I think it is. All right. Thank you. Thank you, Perri. And let me first say, as a guy who first went to DEF CON at Alexis Park, just seeing the number of security professionals that are here, I think this is probably bigger than all of the attendees at Alexis Park the last time I was at DEF CON. So it's an honor to be here. As most of you in the room know, open source is everywhere. Open source is in safety critical systems, from planes to trains to mobile phones, routers, switches, computers. And as custodians of security within the Linux Foundation, home of things like the kernel (you may have heard of it) as well as other projects like CNCF and of course the OpenSSF, we believe that it is our public duty to ensure that our open source supply chain is secured by construction. That is not an easy thing to do. This is not enterprise software; we can't just jam a secure SDLC in and be done with it. So we're working with our community as best we can, and we're really looking forward to efforts like this one, where we're able to use cutting edge technologies to address vast classes of problems across open source in a way that is complementary to our communities. Simply opening thousands of PRs that don't follow the coding style of the project you're trying to fix, or aren't conformant with the culture of that project, is not the way you fix security. And we look forward to working with DARPA as well as our partners on this challenge. We're looking forward to some very innovative solutions to these systemic challenges that we've faced in open source, which underpins some of our most safety critical systems. Thank you. And so my next question goes to my three friends from Google and Microsoft: Heather, Vijay, and Dave.
Between Google and Microsoft, you all maintain some of the largest code bases in the world. How do you deal with some of the challenges that Omkhar talked about, at that scale? And how do you foresee the use of AI as applied to that problem? Do you want to start, Dave? Sure. Yeah. So I think that the challenge of finding vulnerabilities is really a challenge of scale. At Microsoft we use many different techniques. For example, fuzzing and dynamic analysis are a great way to find vulnerabilities at scale, especially in large code bases. We use compiler changes, platform mitigations, safer languages. I'm sure there are some Rust fans out there. Shout out to my Rust fans. But obviously we have a large code base that is often impacted by memory safety issues. So I'm really excited about the scope and the opportunity of this competition. I'm optimistic, but having worked in this space a bit, I know there are some real challenges here. So I think there are going to have to be some breakthroughs to really deliver. And I'm excited, and I think we can do this. You know, one of the areas that I think is really exciting is augmenting static analysis. I mentioned that fuzzing is a fundamental technique, but it has some limitations. Building tests and drivers, getting code coverage, a lot of that blocking and tackling is not particularly beautiful, and it's difficult to do. So getting that coverage is sometimes hard. Static analysis, on the other hand, particularly on native languages, can be challenging as well, just reasoning at that scale. And so being able to do things like automatically generate annotations that can then feed into existing static analysis systems, and augmenting those systems with some AI approaches, is something that I think will show up in this competition, if I had to guess. And it's something that we're working on internally that has been pretty fruitful in the early stages.
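As an aside for readers less familiar with the "building tests and drivers" work mentioned above: a fuzz driver is just the glue that feeds mutated inputs into a target function and records crashes. The sketch below is a deliberately toy illustration of that idea, with a hypothetical `parse_record` function and a planted bug; real fuzzers like libFuzzer or AFL add instrumentation-guided coverage feedback, which this naive loop only gestures at.

```python
import random

def parse_record(data: bytes) -> dict:
    # Hypothetical target: a length-prefixed record parser with a planted bug.
    if len(data) < 2:
        raise ValueError("record too short")
    length = data[0]
    if length > len(data) - 1:
        # The "bug": the length prefix is trusted without validation.
        raise IndexError("length prefix exceeds buffer")
    return {"length": length, "payload": data[1:1 + length]}

def fuzz_driver(iterations: int = 10_000, seed: int = 1337):
    # Minimal driver: pick a corpus entry, flip one byte, run the target,
    # and collect any inputs that crash it.
    rng = random.Random(seed)
    corpus = [b"\x03abc"]  # one valid seed input
    crashes = []
    for _ in range(iterations):
        sample = bytearray(rng.choice(corpus))
        sample[rng.randrange(len(sample))] = rng.randrange(256)
        try:
            parse_record(bytes(sample))
            # Naive stand-in for coverage feedback: keep inputs that parse.
            corpus.append(bytes(sample))
        except (ValueError, IndexError) as exc:
            crashes.append((bytes(sample), exc))
    return crashes
```

Even this crude loop finds the planted bug quickly; the hard part at Microsoft or Google scale is writing thousands of such drivers and reaching deep code paths, which is exactly where the panelists suggest AI can help.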
So I'm actually just really excited as a security geek to see where folks go. And I think there will be huge benefit to Microsoft and many of the other folks here if we can really tackle this challenge. So it's great to see. And before I hand it back to Google, just to build on what Dave said: I think about the use of AI the way he describes it, not simply taking large language models and throwing them at code and asking the LLM where the vulnerability is, but thinking about the ways in which not just generative AI but other machine learning based approaches can augment traditional program analysis tools like static and dynamic analysis, can take the hard computational problems and complexity issues in that space, reduce the search space, guide it, and provide augmentation in the places that traditional computer science approaches haven't been able to tackle. That's something I'm really excited to see in this challenge. So jumping off of that, I'll hand it to Google. You want me to go first? I think it's important to think a little bit about what the outcome we're trying to achieve is. I mean, imagine a world where most of the new code on the planet is mostly free of bugs, where all the code you maintain stays mostly free of bugs when you make new changes, and where, when you find vulnerabilities, they're patched and fixed very quickly. That's the outcome here. And we've been trying to do that as an industry for a long time. Dave mentioned a bunch of things we've been doing, from static analysis to manual audit to fuzzing. DARPA ran the Cyber Grand Challenge, what was it, 2015, 2016, and the winning teams there found vulnerabilities mostly through fuzzing. I think we are now in a new generation of software development where we are about to get a real leg up on this problem, fixing the whole software development lifecycle to give developers a high amount of assurance.
And so I'm really excited about the partnership with DARPA, because I think we're going to really challenge, in a very open, collaborative, community kind of way, what those solutions might be, whether you're an open source developer or a commercial developer. And I think we're probably going to invent some new things. I look forward to all the innovations we see there. Vijay, do you want to jump in? Yeah, great. I think one thing that's quite exciting about the software security space is that there's a large group of AI researchers who are also deeply interested in this space. The problem is quite tractable, both on the static analysis side and on the dynamic analysis side. Google has one of the largest fuzzing clusters in the world running across our code base, and we care about that space quite a bit. You've seen some tools that we've released over the past few years, especially in this space. And from a research standpoint, there are some very specific problem spaces that a lot of folks across Google have started to focus on. For example, using reinforcement learning to get deeper and better code coverage across our code base and across our integration and build pipelines, and optimizing for that specific outcome, as Heather mentioned. So that's pretty exciting. We've seen some pretty impressive results in that space as well. And then, to build on top of some of the things we're doing around fuzzing, we're using the latest and greatest state of the art models that we have to do things like fuzz target identification, and general work around our ability to identify and make sense of the things that our clusters find in a scalable way. Thank you. And so I talked a little bit about taking different AI approaches and combining them with program analysis techniques. But we have also seen really significant potential in a range of areas from the large language models that have been put out recently.
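To make "fuzz target identification" concrete for readers: the task is deciding which functions in a code base are worth fuzzing at all. The panel describes using state of the art models for this; the sketch below substitutes a deliberately crude signature-based heuristic (everything here, including the `demo` module, is invented for illustration) just to show the shape of the problem a model would solve far better.

```python
import inspect
import types

def find_fuzz_targets(module):
    # Crude heuristic stand-in for model-driven fuzz target identification:
    # flag functions whose signatures suggest they consume untrusted bytes or text.
    targets = []
    for name, fn in inspect.getmembers(module, inspect.isfunction):
        for param in inspect.signature(fn).parameters.values():
            if param.annotation in (bytes, str):
                targets.append(name)
                break
    return sorted(targets)

# Hypothetical module standing in for a real code base.
demo = types.ModuleType("demo")
exec(
    "def parse_header(data: bytes): return data[:4]\n"
    "def render_markup(text: str): return text.upper()\n"
    "def add(a: int, b: int): return a + b\n",
    demo.__dict__,
)
```

Here `parse_header` and `render_markup` get flagged while `add` does not; an LLM-based approach can weigh much richer signals (call sites, data flow from network or file I/O, past bug history) than a type annotation.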
And so to both Michael and Matt, Anthropic and OpenAI: how do you think LLMs specifically can assist with finding and fixing vulnerabilities in software? Yeah, thanks, Perri. I couldn't be more excited to be supporting the AI Cyber Challenge, and hopefully many of you as participants. It's a great vision that very much aligns with our own at OpenAI. We believe that large language models are going to be transformational for cyber defense, and we're extremely excited to have the next generation of security tooling built on our models. We've been working with the community on defensive security research through our cyber grant program, and we've had some just really amazing submissions from the community. We're giving away a million dollars to start, plus GPT-4 access and API credits, to get people started. And we're excited to give the same treatment to AI Cyber Challenge participants: you'll have access to our models, and you'll be able to use credits and so on to do your work. So I would just invite you to think about the ways in which you are limited as security engineers. These are the areas where language models can augment your own work and help fill in where maybe you don't have time, the parts of your workflow that are sort of drudgery, or where you have scale limitations. These are areas where language models are going to be able to fill in. And I just want to add a personal note here. I competed in DARPA's Spectrum Collaboration Challenge a couple of years ago in a personal capacity. It was some of the most fun that I've had writing software. It's a great way to get access to some really cool technology, work on some really challenging problems, and also be eligible to earn some money too. So I hope you'll consider competing. I look forward to supporting you. And thanks to Perri and DARPA for making this happen. Yeah, thanks.
So large language models like Claude can complement the wide range of tools already used by cybersecurity experts. In some cases, they can automate tasks that require human judgment, so that vulnerabilities can be found and fixed in a more automated, cost effective fashion. But I think the most important thing to think about is how quickly the technology is improving. Two years ago, nobody was using LLMs to help with writing code. Last year, a study came out that found that GitHub Copilot users were 50% more productive than people who didn't use it. And today, LLMs are useful assistants for researchers finding and fixing bugs: they can generate, interpret, and explain code, and suggest alternatives to it. But by next year's DEF CON, and certainly within two years when the finals are held, the base models will just be dramatically more powerful than they are today, in the same way that we've seen things change quite dramatically even in the last year. So we're really excited to support the AI Cyber Challenge and see what everybody's able to build on our models. This could be a really great opportunity to do something that makes a huge contribution to the open source community and to the security of the open source that underpins so much of our economy and society. Thank you. And I have to say, one of the parts of this challenge that I am so excited about, just to put a fine point on it, is that participants will have access to resources from our collaborators to build on top of. So they'll have access to very, very cutting edge technology from across multiple companies and will be able to show what's really possible when it's applied to a challenge as important as cybersecurity. And I also want to stress the prize money that DARPA is putting on the table: the top prize for the final winner in 2025 will be $4 million, second place will be $3 million, and third place will be $1.5 million, and that's in addition to the $2 million that each of those top five finalists will get.
And that's because DARPA sees this as an incredibly important initiative to push software security forward. So in our last three minutes, does everyone want to go down the line and give a 20 second snippet of advice for prospective AIxCC participants? Sure, I'll start. Language models are super sensitive to the prompt that you put in. So if the first prompt doesn't work, and the second, and the hundredth don't work, just keep trying. It could be as simple as adding a statement like "think step by step," and you might just unlock a huge set of capabilities that you didn't know existed in the model. Yeah, and in addition to thinking about how the LLM is working, think a little bit about how you as a developer code and where it would be most helpful. We call that the user journey of software development. Do you want it there as you're coding? Do you want it when you check in your code? Do you want it running afterward? Just think about all the opportunities that you might be able to target. I think everybody here will agree that two years is an eternity in AI, so there's going to be quite a bit of improvement in capabilities. Keep that in mind when you're thinking about your final solution throughout the competition; I think that's going to be important. You probably shouldn't be over-indexing on current capabilities, because there could be some substantial step changes in those capabilities too. Yeah, this is obviously going to be a significant challenge. I've looked a little bit at the rule set and the things that have to happen here, and I think creativity is going to be a huge part of this: thinking about how to nest together different solutions to get to this outcome, and ultimately get a high precision outcome. Traditionally, the problem with static analysis is not that it can't find bugs, but whether it can find bugs without the noise.
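The "keep trying prompts" advice above can be sketched as a simple retry loop over prompt phrasings. This is only an illustration of the workflow: `ask_model` is a placeholder for whatever model API a team actually uses, and the stub below fakes the observed behavior that a reasoning nudge like "think step by step" sometimes unlocks an answer.

```python
def find_working_prompt(ask_model, task, variants):
    # Try prompt phrasings in order until the model returns a usable answer.
    # ask_model: callable taking a prompt string, returning an answer or None.
    for variant in variants:
        answer = ask_model(f"{variant}\n\n{task}")
        if answer is not None:
            return variant, answer
    return None, None

VARIANTS = [
    "Find the security bug in this code.",
    "You are a security auditor. Review this code for vulnerabilities.",
    "Think step by step, then identify the security bug in this code.",
]

def stub_model(prompt):
    # Stand-in for a real model call: only "answers" when nudged to reason.
    return "possible out-of-bounds read" if "step by step" in prompt else None
```

In practice a harness like this would also vary temperature, few-shot examples, and output format, and would validate the candidate answer (for instance, by checking a proposed patch against tests) rather than just testing for non-None.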
And so I think thinking even beyond the specific challenges that are there, about how this is going to be deployed at scale, will be key to getting the results. I would encourage you to think about the places and opportunities where current tools are constrained, or where you are constrained in your own bandwidth. That is where the action's at. Solve for that and you'll have interesting results. And remember that the versions of these models that we have today are the worst versions that we'll ever have at any point going forward. These things are going to get better. Don't bet on today's foundation model capabilities, because the next ones may supplant them. So get ready for that. I have the luck of going last and trying to say something profound after all those wonderful suggestions. As I mentioned earlier, open source underpins the most critical infrastructure that we use today. The work that you're going to embark on using some of these leading edge tools will have profound impact. This isn't a term paper. This isn't a release. This isn't a new product. This is everything. This is all the marbles. So we really look forward to seeing the contributions that y'all as participants in the challenge will be making. And most of all, we look forward to leveling up open source supply chain security. Thank you all. And I would second all of that advice. Actually, I'll add one last thing to Dave's, which is that we will also be looking at the false positives that come out, because to create good tooling, you really need to be thinking about usability and the amount of noise. But to end with the obligatory plug: please visit aicyberchallenge.com, where there is all the information you need to understand how to participate. More information will be coming out this fall. Thank you all so much.