Ladies and gentlemen, our next speaker is Dr. Eric Schmidt. He served as Google CEO from 2001 to 2011. Under his leadership, Google grew from a very small startup to a large global company. He served as the inaugural chairman of the Defense Innovation Board from 2016 to 2020. And in 2021, he founded the Special Competitive Studies Project with a clear mission: to make recommendations to strengthen America's long-term competitiveness for a future where AI and other emerging technologies reshape our national security, economy, and society. Ladies and gentlemen, please join me in welcoming Dr. Eric Schmidt. I think we've got you on mute, sir. Perfect.

First, thank you for having me here. I'm in Seattle surrounded by boats, which I always think is excellent. In the last decade, I've had a chance to work with people all throughout the services and the people in our national defense. And these are the best of America, the men and women that serve our nation, represented by you all. I just have an enormous amount of respect for these people. I do not, however, have any respect at all for the system that we take these brilliant people and put them in. So let me start by saying: the people are incredible, and the stuff that they make you all do is not. And so I've emerged as a person who is thinking about how to rethink the way we do our military strategy and innovation. I got interested in it because I'm a computer scientist; I didn't really know anything about the military when I started. And I now understand that we have a shrinking national advantage, and that in the time of my youth, your youth, our youth, we were sort of the dominant player and everyone else was catching up. In my view, we're not leading strongly enough. We're still the strongest, but we're not leading in the ways that I care about. And that's my comment.
And I think it has to do with a change in how national security and conflict will go for the next, in my view, 1,000 years, at least 100 years, longer than our lives together. The doctrine that you all were raised on is basically hard power and soft power. Hard power is basically "you do what I want or I'll shoot you," and sorry to simplify many, many books on this. Soft power is the power of economics, influence, culture, and so forth. Both are important and occasionally necessary. But what I concluded was that we miss something, which I call innovation power. I got interested in this largely because every time I was in the Pentagon or the government, some general or admiral would talk about near-peer competitors; that's a phrase the military likes to use. And I decided I didn't agree with that. I thought that China was a peer, not a near peer. The question was, why did I believe this? Obviously its military budget is a quarter of ours, although that's increasing. It is not a global navy. It's untested in battle since 1979, and so forth. But what I realized was that China was defining the platforms that I cared about, and its strategy was to compete against my world and globally dominate. And that has enormous implications for America, because our model has been innovation and growth: we invent new industries, and then eventually somebody else can make them cheaper. But we are the innovators in the world. And so to me, this global race for technological superiority with an increasingly powerful China is one that we actually have to win, and I think innovation power is how we do it. I define innovation power as the ability to invent, adopt, and integrate new technologies. And I'm gonna argue that it's fundamental to our military hard power, as developing and fielding advanced weapons systems will strengthen deterrence and, if necessary, war-winning capabilities.
I'm gonna come back to Russia and Ukraine in a minute, because I think you can understand that there are limits to deterrence, and in particular that we don't seem to have a good doctrine to deter an oil-rich technocratic dictatorship, which is sort of one way to understand Russia. So in any case, technological innovation then allows economic leverage and allows us to set global standards, together with our democratic partners, the usual suspects: think Japan and Australia, maybe India, certainly the Europeans, Israel, and so forth. So to me, what we have to do is organize ourselves around the thing we do best, which is the government and the private sector working together to invent stuff that's new. And we are going to have to overcome the bureaucracy which our political leaders have managed to impose on the military, for all sorts of reasons, in the last 70 years, because it's getting in the way. The best example here is if you look at the impact of AI. So where are we in AI? It was largely invented in the US, on the West Coast of the US, in Quebec, and in Britain. These technologies are moving the fastest I've ever seen, and you all are familiar with ChatGPT and GPT-4 and the other competitors. The important thing is that the AI revolution is also, at the same time, an autonomy revolution, because it means that you can have devices that can "think," in a euphemistic way, in a local context. And there are all sorts of issues with this. One of the things that I worked on really hard is this question of human in the loop. We concluded, in our recommendations, which the military was happy with, that as long as the human presses the button and causes it to happen, it is okay for the weapon in this case to make its own decisions as to what it should do after it's launched, as long as those decisions are legal and legitimate and planned. There are all sorts of issues with this.
And the problem, of course, is that inside the military the adoption of AI is extremely slow. So I sit there and my friends say, well, what do you think about killer robots? And I say, we're not in the market to build killer robots because we're not moving fast enough. And I say that in a facetious way; I'm not suggesting we should have killer robots. But the reality is that our system is holding us back on something which we invented, and we need to make sure that it works. I was just in China for a week with Dr. Kissinger, who's my close friend. He was treated like a god, and we had access to all the top people in the country. I would say that China is still between two and three years behind us, but because of the success of ChatGPT, which by the way is not available in China except to the leaders, so that's wonderful, they understand the importance of this, and now they're putting an enormous amount of local money into it. So again, the cycle begins: we're ahead and they're gonna try to catch up. Now, I want us to stay ahead. You all do as well. That's this innovation point. So the combination of AI, autonomy, and sensors will change the discussion about war. To me, one of the things, if you think about the OODA loop, which everybody here understands really well, observe, orient, decide, and act, is that AI will allow the OODA loop to go around much quicker. And speed matters in conflict, right? Boom, boom, boom. You don't have any time. I'll talk about Ukraine in a minute. So the command and control systems that we use, which consistently rely on human decision makers, could be beaten by a competitor that has a more autonomous decision-making system. But again, we're nowhere near where we need to be to make these things happen. I have never seen the level of innovation that we're seeing now in AI, and I mean, I've done this for 50 years. So this is ours to lose, right? We are the driving force here, and I want us to organize it.
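The machine-speed OODA loop described above can be sketched in a few lines of Python. This is a toy illustration only: every function and value here is a made-up assumption for teaching purposes, not a description of any real command-and-control system.

```python
def observe(sensor):
    # Pull the latest reading from a (simulated) sensor feed.
    return sensor()

def orient(reading, history):
    # Fuse the new reading with recent history into a picture of
    # the situation (here, a moving average of threat scores).
    history.append(reading)
    window = history[-5:]
    return sum(window) / len(window)

def decide(picture, threshold=5):
    # Choose an action from the current picture of the situation.
    return "engage" if picture > threshold else "hold"

def act(action, log):
    # Carry the decision out (here, just record it).
    log.append(action)

def ooda_loop(sensor, cycles):
    # Each iteration is one trip around the loop. A machine can make
    # thousands of these trips per second, where a human staff takes
    # minutes per trip; that speed difference is the whole point.
    history, log = [], []
    for _ in range(cycles):
        act(decide(orient(observe(sensor), history)), log)
    return log

# Five machine-speed cycles over a stream of invented threat scores.
readings = iter([2, 4, 9, 9, 1])
decisions = ooda_loop(lambda: next(readings), cycles=5)
```

With these invented numbers, the loop holds for three cycles and engages on the fourth, when the averaged picture crosses the threshold; the structure, not the numbers, is what matters.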
So let me finish, and maybe you'll have questions; I don't wanna run too long on this. I got interested in the question of how to win the war in Ukraine, something we all care a lot about. And I think I understand military doctrine reasonably well now, as a civilian. When I saw the Russians carpet bombing apartment complexes full of little old ladies and children with their 152mm artillery, which is imprecise at that distance, I just couldn't take it. It was just too upsetting to me. I'm sure you felt the same way. Literally, I had an emotional reaction; maybe this is because I've spent so much time with you all. It just made me too upset. So I went to visit Ukraine. I'm going there this Sunday for my third visit, and I've been studying how you use these principles to win in Ukraine. The summary is this: imagine we're all in the Ukrainian army, or navy, or whatever, and our commander says we're gonna go. We have to get across a five-kilometer dead zone, and that dead zone has tanks, mines, artillery, armed drones from Iran, and so forth. Let's assume we manage to get across this, which I think is highly unlikely. When we get to the other side, there's some bunker. We use our hand grenades or what have you, we kill the opposition, and then the Russian lines behind us bomb us and we're all gone. So this is the definition of courage and/or craziness. We need a different solution, and that solution is drones. I've now committed, and I can talk about this at some length, to how we're gonna win in Ukraine using drones, and among other things, I think this will prove, sorry to be so arrogant, but this will prove that innovation power, as opposed to traditional hard power, is how you win. So thank you very much, Adam.

Thank you, Dr. Schmidt. I'll start off with some questions for Dr. Schmidt, and then I'll turn it over to you, the audience, for the next round of questions. So, Dr.
Schmidt, continuing on your points about Ukraine and based on your visits there, how have you seen drones changing the battlefield already, and how do you expect AI to change that in the future, in terms of things like swarming or operating in mass with agency?

So again, let's be really precise about where they are right now. The folks here in the Army, I didn't know this, but apparently the first thing that you learn in the Army is that when you have a defender who is locked in in trenches and well dug in, it takes three to five times more offense to displace the defense. And the military strategy, as I understand it, is to get them running backward, because it's very hard to shoot when you're running backward. So the core question is: you've got a line, and everybody is locked in. Everyone has dug in. They've been digging in for eight years, or at least they've been doing a lot of digging. How do you solve that problem? As I understand the US doctrine, and the Air Force folks here can talk about this, it starts with air power. What you do is you basically have to clear the path, if you will, of the opponent in order to get through, and then you carefully get through it using techniques that you all understand. The Ukrainians don't have an effective air force. They have something like 10 or 20 jets, which is why they always want all these F-16s, but they just don't have them. So how do you solve that problem? You use drones. After lots of discussion, and I should say about the Ukrainians that they were not prepared for this war, and they should have been, so everything I'm talking about is stuff that they've invented in the last 18 months, and I can talk at some length about how they got here, the fact of the matter is that their equivalent of the Pentagon has announced what they want, and they want four different kinds of drones. These are precisely what they've said. First, they want surveillance drones.
These are long-range, high-altitude drones with high-quality 4K cameras that loiter and observe. You need that because you need eyes in the sky. They don't have the satellites that the US has that we could use, and we may be helping them, but certainly on the battlefield they need their own eyes, and they need these. The second is that they need what are called FPV kamikaze drones. I had no idea what FPV, first-person view, stood for, and those of you who are my age have probably not heard of it, but the people who serve you, the officers and the enlisted people who work for you, all understand that this is a sport. What you do is you put goggles on and you fly these things around. So the Ukrainians have built, and hear these numbers, $500 drones from Chinese-made parts that they assemble themselves, kamikaze drones which carry a 1.1 to 1.6 kilogram payload, which I'm told is enough to take out anything but a tank. And they put goggles on and they fly these things, and they move so fast that you can't see them. There's no way to stop them except to jam them. The Russians have extremely good jamming systems. I was told, for example, that our HIMARS are being jammed: they fly, and then when they get near their targets, not even they are capable of resisting some of the jamming, but you will know if that's true or not. So in a completely denied communications environment, how do you solve it? You need frequency-hopping radios and so forth, which my team and others are working on. But anyway, think of these things as goggles with kamikaze drones that are one-time use with a one-kilogram payload. What can you do with a kilogram? A lot, it turns out. The third of the four is what they call bomber drones, which carry four to six tubes with payloads larger than one kilogram, and they're largely to take out tanks. You basically get the drone over the tank and you take it out. You have to hit the tank in the right place, which is complicated, but you get the idea.
And then the fourth, which I thought was unusually interesting, is slingshot-launched, I'm gonna call them cruise missiles. There's a company that I'm working with that, for a list price of $26,000, and that's not with the military discount, can build a drone that goes 400 kilometers and carries 43 kilograms. It has little itty-bitty wings, because you don't launch it conventionally, you basically rocket it out, and it doesn't land, it just crashes. And it uses essentially an INS, an inertial navigation system, to tell it where it is; it doesn't even use GPS. You use those for things like artillery dumps and bridges, maybe the Kremlin for all I know; I don't know what they're doing operationally. Those four are their answer to how they're gonna win the war. You can debate which ones, but that is what they have announced. They have access to about 50,000 drones, and they've said that for this year they want 250,000 such drones, so that gives you a sense of the scale of what they're doing. When I looked at it, and I'll summarize lots of meetings, it's really primitive stuff, and that's where its beauty is. They have an ARM chip, they don't have a GPU, they have a cheap FLIR camera and a cheap 4K camera, which they can get through this Chinese supply chain, and they put them together. The radios are inexpensive, and off they go. So what will happen is that at some point they're going to figure out that you should swarm these things together; they haven't figured this out yet. In other words, they're still busy with person, drone, target. What will happen is that in the next year they'll figure out how to do mesh networks where the drones can talk to each other, and they can do terrain following to follow a moving tank or a target. Let me give you an example of how clever they are.
In America, the way you would build such a drone, and it's indeed the way it's being done, is you'd put in a GPU, you'd have a very sophisticated image-analysis group, and it would be able to do 3D analysis: I can see it, I'm sure it's a T-72, which by the way are really loud, I'm going to go for it, I'm authenticated, and so forth. The Ukrainians didn't do any of that. They took a CPU and a little bit of imaging software, which they rewrote to use a tiny amount of compute. All it can do is see a moving box. But in a war, the only moving thing that's a box is something that you don't like, and boom, off they go. So this focus on cost and speed is antithetical to the way the US government procurement system works, and that is what I think is the most important thing for you all to know: the number of devices that you have is off by a factor of a thousand from what the Ukrainians would have in the same situation, because our prices are too high. That's what I'm impressed by: the cleverness.

Dr. Schmidt, you mentioned some of the challenges we have in government. During your time on the Defense Innovation Board working with Deputy Secretary of Defense Bob Work, you had some exposure to the Defense Department's challenges, and recently you sponsored an event honoring the late Secretary of Defense Ash Carter, called the Ash Carter Exchange on Innovation and National Security. Why was it so important to host such an event, and what do you think the impact on innovation and national security was?

Well, Ash Carter was the person who got me interested in this, and he was very persuasive. He said, look, you have to do this, and I said, like, why? And he said, well, do it for a year, because I need your help, and you'll enjoy it, and you have to do your national service anyway. And 10 years later, here I am.
So he was transformational in my world, and I don't know about you all, but every once in a while there's a person who changes your life. He tragically died of some sort of heart problem a year ago, so we wanted to honor him. He was a physicist, and he understood that all of the things that you all have been taught are not permanent, right? Technology and innovation change, and therefore your tactics have to change. So we tried. I've since learned that for the last 50 years there's been a long list of people, including myself, who've attempted to change procurement, and every attempt has failed. There are all sorts of reasons for that, but the thought experiment I would offer you is this: let's imagine that I had my own private army, and I'm not doing this, so please don't get upset. A private army is apparently legal in Russia but not in the US. So I have my own private army, and I'm aligned with the US military, but I'm gonna run it using the Google principles. Let's just think about what I would do. The first thing is, I'd say, I don't have $800 billion, right? So I have to do things less expensively. I need things on the water, so what I'm gonna do is take an old ship and turn it into my drone carrier, and I'm going to use that to project force into contested areas. I'm going to build naval drones that are on the surface or slightly below the surface. And for connectivity, because I don't have the military networks, I'm just gonna use Starlink, right? I'll just put the little dish on top of the thing as it floats along. You sit there and you go, whose idea was this? Well, this is how the Ukrainians attacked at least two of the ships that they did. And then I would say, well, I wanna have my own army too. So what I'm gonna do is use drone machine guns, and I'm going to do air defense with drone anti-aircraft work, and so forth.
So once you realize that the priority is the inverse, in other words it's people last, not first, in harm's way, right? Because I can't afford all the people, and I certainly can't afford anyone to get killed, I'm gonna put them at the very end, the very back of the system, controlling an autonomous defensive system. And that's gonna be the philosophy that I'm going to put in place. Now, again, that's a thought experiment; I'm not actually doing that.

Understood. So during your time on the Defense Innovation Board, and you alluded to some of this already, what surprised you about the challenges facing DOD when it comes to innovation and maintaining the competitive advantage, and how do you think that will affect national security in the near and medium term?

I guess I was surprised by how tough the bureaucracy was to fight. Those of you in the audience are familiar with the POM and the whole planning process. And the joke is: we have an urgent need, so it's inserted into the POM process. The money shows up in two years. It'll take two years to plan it, and two more years to design the first one, which won't work. Once we then run the competition, we will actually make the awards to the contractors, but of course the loser will sue, which delays everything another two years. So by then we'll start building them, but by then the costs will have grown to the point where we can only build 10, as opposed to the 200 that we needed. This is how the major weapons systems work. And I think that if you look at the Air Force, the Air Force did a particularly good job with the new bomber, because they used a different authority. If you look at SOF, they have different authorities. I'm strongly in favor of giving our military leaders more autonomy with respect to making decisions. And what I've said, and I'll just be blunt, is: you take these people, you train them to the hilt, they're four or five or six star generals, who knows?
And you won't give them control over anything. You won't trust our military leaders to make these decisions in times of peace and war. It's crazy. So this is the problem of a bureaucracy that prevents progress. What I would specifically do is set out some areas and try to do them differently. I've looked at a couple; I think the one that is the most interesting, and it's gonna sound strange, is probably missiles. We have a problem with missiles and missile defense. Once every year we take some missile, we launch it, and then we show the missile defense that hits it. That's not how you innovate. The way you do it is missile, missile, missile, failure, failure, failure, and then eventually you figure out how to deal with all the failure modes. So pick that one, or pick something in drones; that's what I would focus on. I think that the only way to change the system is to show people that there's a completely different model that works.

In March you published an article in The Atlantic, and in it you talked about the challenges of innovating during peacetime versus innovating during war. Could you describe some of those challenges from your point of view?

Well, I've talked to a number of our military leaders, and I complained to them in my usual obnoxious way, and they've all said, Eric, you don't understand. We're at peacetime. If we're at wartime, it will be very different. And I said, how? They said, we'll have infinite money. Okay, and that's literally what they said. And I hope that that's true, that if we're actually in a real war, the government will fully support our military in a real conflict. But I'm not sure that infinite money is the right answer. I think it's better to learn how to make hard and important choices, which is very difficult in a democracy, especially an elected one where some of these things are jobs programs and so forth, but simply take a tough line.
So it seems to me that in peacetime you have the time to get your formulation correct, and in peacetime the doctrines are typically wrong. What I'm trying to say, in the clearest possible way, is that the wars of the future will involve extremely sophisticated military professionals, and they'll be fought largely through autonomous means. That is because we care an awful lot about the lives of our soldiers and civilians, and also because autonomy with precision allows for zero collateral damage. Because I work in tech, I've been heavily criticized by my peers as pro-military. They said, how can you possibly be in favor of these new systems and weapons? And I said, because if you're gonna have a military, you might as well have it be accurate, right? This goes back to my earlier comment about the Russians. Their doctrine for the 152s appears to be to take a box and go boom, boom, boom, boom, adjusting the little azimuth wheel on the artillery piece until they find something of interest, and then they just kill it all. You destroy a city to save it, right? I just think that is a horrific military doctrine on their part. So I would like us to be surgically precise. I'll give you an example: the sniper systems that you all have invented. You have three snipers who have little green lights, and when the laser locks and the little green light comes on, or whatever, I guess it's a green light, they all press the button at the same time, and it's a guaranteed hit on the target that they're looking at, right? That's what we should aspire to, because it's surgically precise and it's consistent with the law of war. And why can't we do the same thing with autonomy? We can. I could go on.

Well, I'd like to transition the conversation towards AI. You've alluded to some of the usefulness that you see for AI in the future. In particular, where do you expect we'll first see AI on the battlefield?
Well, today, because of Maven, there are secret projects, which I'm obviously not gonna discuss here, which use this technology for, I'm just gonna say, vision. And the first and best use of AI is replacing a bored enlisted person who's watching a screen. My favorite example: I was in Bahrain, where we have a very large naval fleet, and we have, I think, three wooden minesweepers. They're wooden because I guess wood is much better than steel for mine detection work. And the young enlisted man, sailor, excuse me, was sitting in front of a screen, and I said, well, tell me about your screen. He said, well, we've just upgraded it. And I said, to what? And he goes, Windows XP. And I said, well, that's 1998. That was a good one. And as you know, Windows XP has been completely penetrated by our Chinese friends, but I didn't want to wound our sailor's pride. And I said, well, what do you do? And he said, I watch the screen looking for mines. And I said, okay, and how often do you do that? And he said, eight hours a day. And I said, okay. Now, what does it cost to train an officer or an enlisted person in the Navy or in the military? It must be hundreds of thousands of dollars, maybe a million dollars, I don't know. I mean, these are exquisitely trained people, and the poor fellow is just watching for mines. So I asked his commander, what was his accuracy? And he says, he gets it wrong a third of the time. And I said, what? So a third of the time you hit the mine? And he said, yeah. And I said, well, I don't think that's very good. And he goes, well, that's the best we can do, sir. You know, good military answer. So that shows you the craziness. And I'm not even gonna talk about the LCS and its minesweeping component, which is another disaster. But the point here is that the whole doctrine is just wrong, right? Humans should not be watching things that are boring. Computers should be watching, and they should alert you on an exception.
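The "computers watch, humans handle exceptions" doctrine above can be sketched in a few lines of Python. This is a toy anomaly filter over made-up numbers, an assumption-laden illustration and not a description of any real minesweeping or Navy system.

```python
def watch(frames, baseline, tolerance):
    """Scan a stream of sensor readings and return only the frames a
    human needs to review; the routine frames never reach a person.
    """
    alerts = []
    for i, frame in enumerate(frames):
        # Flag any frame that deviates from the quiet-water baseline
        # by more than the tolerance; everything else is filtered out.
        if abs(frame - baseline) > tolerance:
            alerts.append((i, frame))
    return alerts

# Eight hours of screen-watching reduced to the two frames that matter.
stream = [10, 11, 10, 42, 10, 9, 57, 10]
alerts = watch(stream, baseline=10, tolerance=5)
```

The machine does the boring part of the loop continuously and never gets bored; the human only sees the two flagged frames and applies judgment there, which is the inversion of duties the example argues for.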
Maven started on the fifth floor of the Pentagon, in a group that I'd never heard of. They managed to get themselves an NVIDIA GPU cluster, and they used a piece of software called YOLO, You Only Look Once, and they trained it on open-source stuff like cars and trucks. And then they retrained it on the secret side, on SIPRNet, on tanks and other kinds of appropriate military targets. That's a much better model. So the first use that any of you will see of AI will be in automatic vision-monitoring systems. When you talk to admirals and generals, they've all been to the school where they learned to say, here's what I want: I want a battlefield management system. I want the system to see all my sensors and all my shooters, I want it to calculate what to do, and I want it to give me recommendations. We're not building that, right? We don't have enough data; the systems in the military are just not ready for that. And they're always disappointed when I tell them that, and then they get mad at me. But the fact of the matter is, they've been saying the same thing for 10 years and it hasn't shown up. It's interesting that in Ukraine, I was embedded with the Third Assault Brigade in Bakhmut, which was very interesting because we were underground in the command centers and the bombs were going off left and right. And they were showing me their battlefield management system. The one generally known is called Delta, and Delta can be understood as Google Maps with red and blue symbols showing where everything is; they mapped all of the Russian assets and all of the Ukrainian assets. And I said, this Delta is impressive. And they go, that's not Delta. And I said, why are you not using Delta? They said, we didn't trust it. We became convinced that there were Russian moles on the other side of the network, so we built our own for our own brigade. Can you imagine that in the US military? And what they did is they built this incredibly subtle system which predicts where everything's going.
And then the commander watches the little vectors and says, boom, boom, boom. It's exactly what you want from a battlefield management system. And I said, how many people did it take to build this? And he said, 10. How big is the brigade? 7,000 people, right? It's insane that the US doesn't have this flexibility, that our services, and technically speaking the combatant commands, don't have the ability to do this. And that's how you're gonna get the sort of integrated battlefield management system: you're gonna do it from the bottom up, by people who are in war and in conflict doing it. And again, the scenario that I described would, in the US, be seen as immoral and illegal, and somebody would sue you over it. It's crazy.

So the audience is very familiar with some of the tactical uses for AI, as you explained. Could you describe some of the more strategic uses where we may see AI, specifically with things like large language models, and how might those challenge the future of American security, our ability to govern, and our ability to maintain a sense of community?

You know, a very good question. For those of you who have time, I would read a paper by DeepMind, which is a part of Google, on extreme risks. If you just type "DeepMind extreme risks" into Google, you'll see the paper. It goes through an assessment of what the real risks are. One of the things that it talks about is: how do you feel if an AI system of the kind that we're talking about gets control of a military weapon system? It also talks about adversarial attacks, where your adversary could easily change the weights in these models and then cause them to fail or do the wrong thing, and so forth. There are people working on this. But as a general statement, the first way in which you'll see AI will be in misinformation. I'm reasonably convinced that 2024 is gonna be a disaster for democracies, simply because of social media. There are huge elections going on in India, obviously in the United States, and so forth.
And the online world is not ready for the deluge of fake videos and so forth. When I was running YouTube, I learned that a video has enormous power compared to text. In other words, if you produce a video that is false, and I say to each and every one of you, this video is false, and you watch the video, you'll still in your mind believe some of it is true, or you'll decide I was lying to you. It's an extremely powerful form of misinformation, and we, America, do not have a good answer to it. I've made various proposals, and frankly, I think this is a case where the social media companies are not doing the right thing, and if they don't do the right thing fairly quickly, I'm quite sure the US government will regulate them because of the dangers involved. That's a near-term risk. It is solvable, but it requires collective action, which is not occurring. The second one is active cyber war. The example I would use there is a hypothetical war: North Korea attacks the United States, the United States attacks back, China decides it's a bad time to have a war, so it shuts down the North Korean attack, and the US stops. The entire war takes one millisecond, because that's how fast these decisions get made. And I think there are a lot of issues around zero-day exploits using these things. I'll give you an example with an LLM. It's called stepwise, a stepwise progression. I go to the large language model and say, attack the country of France and tell me what you discover. And so you create a thousand bots, and each of them attacks the country of France using what it knows, and one in a thousand gets in. So then you say to it, assuming that you got in, what do I do next? This stepwise refinement of tasks can enable something very systematic. And by the way, I like France, so please don't take this out of context.
It could actually create a genuine threat to a country because of the iterative nature of these things. And the way you solve that is you increase our cyber defense, which people are working on very hard right now. And the third one that is of great concern right now is biodefense. And the example would be, with a large language model, you say: I want to create ricin, and I want it to work this time. What are the components I need to create ricin? And it answers that. And then you say to it: okay, now that I've purchased those components, how do I mix them? And it tells you how to do that. And then eventually you say: well, how do I deliver it? And it describes the Japan subway attack. And then I say to it, as a smart-ass, excuse me, why did it not work? And then it explains that the dosage was wrong and tells me the right dosage. This is a chilling, completely chilling scenario. Now, the large language models can do that, but the large language models have what are called guardrails, which don't allow you to ask that question. So it's super important that everyone understand that these large language models have that information in them. An example: I run an AI safety group with, again, all the various AI groups; we do it on Sundays. And this is OpenAI, DeepMind, all the inventors of this stuff. It's non-policy, just the technical people. We had a group come in and show us how much you could do if you don't have a PhD in biology, because the systems can educate you, as a non-biologist, in how to build the biology; they can show you how to mix stuff, they can tell you what titration looks like, and so forth. So it's an accelerant for evil, and that's a problem. So I think misinformation, cyber, and bio are the ones that we should really be thinking about right now. And they're definitely coming. So that's a fascinating discussion.
And a particular concern is when AI advises us on warfare, because warfare demands that we disregard broadly held human values. So what types of guardrails, what types of constraints can help with AI when we as humans knowingly disregard those human values, and AI is left to figure out how to act or react once those guardrails have been taken down? Well, this is an unsolved problem today. There's a company called Anthropic which invented something called constitutional AI. And their idea was that, as you did the training, you would teach it a constitution that would guarantee that it would behave according to the constitution. So the first is do no harm to humans, the usual kinds of things. The problem that we have is that it is now understood that you can train with the constitution and then, if you're sufficiently evil, which I would certainly assume the North Koreans and the Chinese could be, you can take all of those constitutional things out and allow it to become evil. So we do not today have a solution to this problem that I'm comfortable with. And I think this is the biggest proliferation question that we have. And there's a further point, not to scare everybody, but there has been a belief in my industry for this year that the frontier models, which are the big four, which are OpenAI, Microsoft, Inflection, and, sorry, Google and Anthropic, they will be heavily regulated. They're not gonna do open source. They all have very sophisticated groups working on these guardrails. So I figured we'd be okay. But the open-source movement, exemplified by Llama 2, which just came out from Facebook, is moving so quickly, and the hardware is getting so cheap, that maybe we have a severe proliferation problem with these evil tools quicker than I had thought. And I say that having come to this view only in the last month, and I don't think the industry's quite figured out how to deal with it.
Well, I'd like to transition to something that you said at the beginning of your talk, and I'd like to get into soft power. Dr. Schmidt, I am a military veteran of 24 years. I and many of the people in this audience have dedicated and devoted our adult lives to generating hard power for the United States. I firmly believe that America's strength resides in its soft power. How can innovative technology help the US generate and project soft power? Well, I am a good example of success with soft power, because Google is a global brand with American values, which drives the other countries crazy, because we built Google with American values, American tolerance, American liberalism, our treatment of women and of gay rights, and all of these kinds of things that are offensive in other countries. Well, America is proud of our liberalism and our democracy, and other countries view that as hegemony. They believe that we are invading their country with our online stuff, and I would of course say, well, if you don't like it, you can block us. Unfortunately, China actually did that; that's a separate discussion. So I think the truth is you're always gonna have hard power and soft power in your framing, but what happens is, when everybody has the kind of hard power that America has, we're gonna have to change it again, and that's what I mean by innovation power. Well, thank you. So at this point, I'd like to turn it over to our audience. I'll keep the current format, so I'll go quadrant by quadrant, and again, please stand up, use the microphone, press and hold the button, and we're ready for your questions. Yes, in the blue shirt. Good morning, I'm Trevor, I'm in the Navy. Could you speak a little bit about machine learning in a defense environment where the truth data isn't readily available?
So a general rule about AI is that the quality is a function of the quality of the data and the length of training, and the way the models that you're seeing are emerging is that people spend many months assembling all the data and curating it, and there are all sorts of open-source databases and so forth in that vein. The military doesn't have any of that. So I think it's unlikely the military will do a very good job of the more strategic stuff, you know, the sort of complicated reasoning questions that humans do today, without more data. One way to understand it is, when you approach a hard problem, the first thing you would do is have the large language model read all of the doctrine. It can actually just read it; it's called fine-tuning, and you would teach it military doctrine. But if you wanna ask it any interesting question, it needs facts, like how many people are in the Pacific theater? How many ships do we have? Where are they? And we don't have that data organized in a single place so that the system can consume it and use it for reasoning. And that's why the battlefield management strategic conversation, which everyone wants, is gonna be so difficult to get. Okay, more questions from the audience. Yes, here in the front. Hello, good morning. I'm Rick Becker, I'm from the US Army. When we talk about combined joint all-domain command and control, and trying to get commanders to have as much information available to make decisions, a lot of that is dependent on having the right talent. And so one of our challenges that I'd ask your opinion on is: how can the Department of Defense compete for that talent when one data scientist commands 350K a year on the outside, and we probably need like 100 of them, right? But that's a huge part of the budget. Thank you.
So I used to agree with the premise of your question, but then I learned that there are an awful lot of Americans who want to serve our nation, and they wanna work on interesting things, and they're willing to take low salaries to do it. So here's the problem. We bring those people into the service, or as civilians, and then we give them boring things to do. So I'll give an example. I was at the NSA, which stands for Never Say Anything, and we ran a secret meeting on Russia, and you have this brilliant young man, I don't know what his rank was, but he was doing cyber analysis for a Russian attack, which I don't remember the details of. And so I of course said, well, how are you? I'm so impressed with you. And I said, what are you gonna do next? And he said, well, I'm gonna leave. And I said, why are you gonna leave? Because, he said, my next assignment is the equivalent of guard duty over at this other post. So here you've got a person who is extraordinarily valuable, who because of the HR system in the military has to get promoted by going and doing something that he doesn't wanna do. Now we don't do that with the doctors. You don't take the doctors and the nurses in the military who are in uniform and serving us and say, oh, you have to go command a tank for a while; we understand that. So there's this weird problem in the bureaucracy where it doesn't value technical skills as a career path, and that's why the people are leaving. And by the way, every one of those people who leaves, I hire, because they're so good, right? They're that good, and it's a tragedy that the military is losing them, and it's your own fault, in the sense that your HR system and the way you get promoted, you promote generalists. Well, innovation requires specialists. I, for example, was the CEO of a company and I have a PhD in computer science, right? It's very rare that in the military you'd have somebody who has a PhD who's also in a command position, right?
Maybe I'm a bad commander, but the important point is the system has produced generalists. But let's say you're the commander, you're the admiral, or in your case the general: you want to have specialists working for you. You don't want generalists; you want the specialist: you understand this, go solve this problem, right? So it's a lack of understanding of the need for specialization. All right, how about here, from the left side of the auditorium? Yes, in the back. Good morning. My name is Taylor Haggerty, United States Navy. I found it particularly interesting in your article how you discuss China using public-private partnerships as well as civil-military fusion. And obviously that's been of great benefit to them. So what are some of the hurdles in the United States to us modeling that for our benefit, and what does that practically look like for us? A very good question, thank you. I think it's unlikely we can do it in the U.S. for all sorts of cultural reasons. The way they work is they have brutal commercial competition. And when I say brutal, I mean they work much harder than we do, and we think we work much harder than anyone else. And that brutal competition produces national champions, and then they pour money into the national champions. That would be roughly the equivalent of handing billions of dollars to Apple and Google and Microsoft, which is just not gonna happen in the U.S., for all sorts of good reasons. So their model produces national champions which then have military parts. I went to the group that does surveillance, before COVID, and they showed me all of their tracking systems and surveillance of citizens, how they track cars, very impressive technology. The leader in surveillance, which I'm not sure we want to be, but they were. But they didn't show me the building next door, which must be where they're building all of the systems that oppress the Uyghurs, right?
So in other words, you have to assume there are two buildings, and I got to see one, and there's another one right next door full of military stuff. They don't have the boundaries, they don't have the primes that we have in America. And I think that allows them to move quicker. One of the things that I hope, and it's gonna sound terrible and I apologize, is I hope that one of the positive consequences of the terrible war in Ukraine is that the tech sector will understand that there is evil in the world and that we in the tech industry have to support you in innovation. And I've set my own agenda around that. We'll see if it actually works, or if the tech people are just sufficiently stupid that they just don't see it. I think we have time for two more questions. Yes, here in the front. Good morning, sir. Lieutenant Commander Harris, from the Navy. I was curious, and first, thank you for the work you do and your patriotism. I'm not sure if you're familiar with the book The Kill Chain, but the author talks about the history of the relationship between the DOD and the Silicon Valley tech sector, how that contributed to a lot of success in the early 20th century, and how we saw a degradation of that relationship in the late 20th century and into the early 21st. I think you said a couple of things that intrigued me that hinted towards it a little bit, but I was curious if that strained relationship is something you experienced in your career, if that's something you see is on the mend, and how that plays into our future success in innovation. I think it's getting better. Strangely, when the Maven decision was made at Google, I was dual-hatted, military and Google; I was the chairman of Google, and I was not allowed to participate, for good legal reasons, in a decision for Google to cancel a contract with the military, which in my view was a terrible mistake. I've said this publicly; I'm not saying anything that people don't already know.
I think we should be supporting the national security of the country that we're citizens of and we work in. I mean, thank God for America. So what I think is true now is that even Google has realized that it made a terrible mistake, which I would have told them had I been allowed to speak to them. And even now Google is trying to work with the US military in its enterprise products and so forth and so on. So I think it's getting better. The core problem is the following, and Ash Carter tried to fix this. I've got 50 startups that I know, because everybody talks to me, and they're really smart, and they're all stuck in the valley of death. They've all built a product. It's really interesting. It would be really helpful to you, and there's no one to call in the Pentagon who's their friend. They are required to go through procurements, while they don't have enough money to do that. They have to adhere to all of these strange rules which are unique to the military, which they either don't understand or don't have time to get into, and yet they're building the equivalent of swarming drones or very specialized military targeting, things that you really want. And so various people have tried to bridge that problem. Ellen Lord, when she was running acquisition under Trump, tried really hard to do this. The current leadership is different, still trying to do the same thing. It remains an unsolved problem. So I think at this point the answer is: there is willingness, but the path is still of poor quality. Okay. Yes, in the center there. David Jordan, Army. Sir, you talked about a battle of systems and innovation power; specifically, what would you change about our systems and processes so we can get those pragmatic solutions to the warfighter today, while allowing continuous iteration and experimentation to eventually achieve an exquisite capability? So I think there are multiple paths.
The one I like the most is: let the people commanding the warfighters do what they want. When I got to Ukraine, there were 120 brigades, and I realized that they're all run slightly differently. But of course the war is new, so they didn't have time to put in a bureaucracy. I'm sure they're planning on it. Plus there's a lot of history of corruption in Ukraine anyway, so God knows, maybe that'll happen too. But at the moment it looks like a whole bunch of entrepreneurial groups trying to figure out how to win. Why can't we do that in the US? Why can't we empower our military leadership to make these decisions? If I were a military leader, which I don't think I have the courage to be based on what I've seen in war, I would immediately say: I want a hundred engineers, and I want them to do whatever I want them to do. And I want them to fix this, fix this, fix this, fix this. And I want there to be the quickest possible, straightest path: you've got a problem, it gets fixed by the engineer. That would so improve your daily life. Whether it's the HR system, the procurement system, the military billet system, the housing, the household-move system, there are all these sorts of things. The targeting systems and so forth. That's the first thing I would do. The second thing that I would do, and again, these are ideas, and Chris, when he wrote the book that you're describing, talks a little bit about this: his idea was to have a competition, where we actually had a real competition between two systems. So I'll give you an example, going back to my fake example of my own military. Say we want to build an aircraft carrier. Aircraft carriers, as you know, are highly, highly vulnerable to hypersonic missiles. So what's an alternative way to get the force power of an aircraft carrier that is more resilient to hypersonic systems? Let's do a trial and compete idea A against idea B. We don't do that. We do this sort of centralized planning.
It's sort of Soviet-style, where we sort of decide, and then we sort of wait, and then it takes like 15 years. I'd much rather have competition today, because that's what I'm used to, and let the best ideas win. The people are good. They're prevented from doing things which would allow them to compete and learn whether things work or not. Well, that's all the time we have for questions. Dr. Schmidt, any final comments? Well, I want to begin by saying that part of the reason you all are here is that you are literally the best and the brightest in our military. And it's an honor to work with you. I admire your service. As I said, I don't think I have the courage to do what you do, but I want to help. And I would encourage you to question the doctrine of innovation and the ways in which you've been brought up, because I think there is a better way. And I think your future career will be determined to some degree by how you can seize the power, the structure if you will, that you have control over, to innovate in the space that you can control. Innovation is how you're going to win. It's how you're going to get promoted. It's how you're going to end up becoming the big cheese that everybody wants you to be. So with that, thank you so much for listening to me pontificate on this. And thank you very much. Ladies and gentlemen, thank you, Dr. Schmidt.