So, thank you first for that very kind introduction and welcome. I also wanted to thank the organizers for inviting me to come back to this very special place because of, as the Admiral referenced, this cross between the history and the future. This was the place where the key concepts that helped the United States win, whether in World War II or the Cold War, were developed by the combination of students and faculty in the years before, and the same sort of enterprise is going on here right now, and you'll be part of it. So as someone who is a history buff, it's an added honor to be able to come speak to you. Second, I wanted to add personal thanks, in addition to the organizers, to Professor John Jackson. If you don't know him yet, and I know we've got new folks here, he stands out for two different reasons. The first is that, with Tim Schultz, they teach the unmanned systems elective. It's a great course that connects to many of the topics we'll be talking about over the next couple of days and is obviously crucial to the future of thinking about war at sea and beyond. The second thing is that you will definitely recognize him at some point: he's the guy driving around campus in the bright orange Corvette, so he'll stand out for that. I wanted to thank him both for everything he's done for the Navy and its reading program and the like, and in addition for his friendship, which extended all the way up to picking me up at Providence Airport at 12:20 a.m. last night because of flight delays. And then I wanted to add thanks for giving me whiplash by gunning his orange Corvette on the wide-open highway here at around 12:45 a.m. He's saying it didn't happen, but that is not fake news, as it is said. So let's jump into the talk. As was very kindly introduced, I'm someone who wrestles with the future, and there's a seeming challenge in that.
There's a belief that the future is inherently unpredictable and that our track record is so bad we just shouldn't even try. In fact, one very senior U.S. military leader described how wrestling with the future, given how bad we are at it, is, quote, "like driving in the dark with your headlights off": this idea that you really ought not to do that. I would argue instead that there are two problems with that kind of thinking. The first is that you don't have a choice. Whatever role you play, whether you work in strategy, training, budgets, acquisition, or human resources, you have to make assumptions about the future and choices about the future. So you will be driving in the dark no matter what. The second comes from a look back rather than forward. Look back at our failures at wrestling with the future, whether they are intelligence failures, be it Pearl Harbor, be it 9/11, or acquisition program failures. And now everyone's going: hold it, what program is he going to describe as a failure? Since you are a Navy audience, we'll use an Army example, Future Combat Systems; when I speak to the Army War College tomorrow, what am I going to say? But the point is, we know these failures, whether intelligence, operational, or acquisition, are almost never the so-called black swan: unimaginable, inconceivable, never wrestled with before. Instead, our failures repeatedly are the gray rhino problem. It was something that wasn't unimaginable. It was just big, ugly, and tough to face, tough to acknowledge, particularly when it's there in the room with you. So if we're trying to wrestle with this topic area they've asked me to come speak to you about, this cross between technology and the future of national security, what are the trends, what are the topics, that are the equivalent of that big, ugly, challenging reality that we haven't done a great job of wrestling with?
I would argue it's artificial intelligence. Now let's pull back on this. We are living through not just an incredible story in technology today; we're living through one of the most important stories in all of human history. We've been talking about this moment where AI becomes real, where robotics become real, for literally thousands of years. You can find discussions of it in ancient Greek mythology. They didn't call it AI; they called it Talos. You can find discussions of it in old Judaic texts, the golem. Maybe you're not a person of faith. We can find discussions of it in science fiction going back to the very first science fiction stories. And it's happening now, in our lifetime. Now many of you would go: but we get it. We're wrestling with it. We're going to talk about it at the conference. We've created all sorts of new programs around it, whether at the NATO level or at the Pentagon level with the JAIC, the Joint AI Center. We get it in terms of our personal concepts. In fact, there was a survey of leaders that asked what they think is the most important game-changing technology out there, and 91% of leaders said AI. So we get it. Thank you very much, Dr. Singer. You can sit down. I would argue, though, that we are the equivalent of the battleship captains, the SWOs of the 1930s. Now, as I'm giving this historic parallel, and this goes back to the course that Professors Jackson and Schultz teach, think about the discourse we've had around unmanned systems over the last 15 years. Coming out of World War I you had a debate, a very fierce professional debate, basically the new air power acolytes versus the old guard, and it was an either-or discourse. At one end: this new technology makes everything else antiquated and needless; we won't need a Navy, we won't even need ground forces. And at the other end: no, air power does not matter; the battleship is still dominant.
That was the discourse immediately after World War I. By the 1930s, though, the tenor of the debate had changed, and the battleship captains would wag their fingers and say: don't you tell me that I am not embracing the future, embracing this new technology. In fact, I have given up not just valuable budget but something that matters even more to me, valuable deck space, to put airplanes on my battleship. And the reason is because I am a believer. I believe, I know, airplanes are going to make my battleship more lethal than it's ever been before. They're going to allow my battleship to sight the enemy's fleet of battleships at greater distance. They're going to allow my armaments, my main gun batteries, to fire more accurately than ever before. And that is why I am embracing this new technology. They were layering the new on top of the old, both cognitively and literally. They were embracing change just enough not to change. Do we have any situations like this today? As you look at this picture here, you can see the consequences of that kind of thinking. That's the battleship Arizona prior to Pearl Harbor. Now, I know it's a bad idea to only use negative Navy examples, so we're going to give an Army example from that period. The machine gun: invented in the 1860s, first deployed into warfare in the 1880s in the so-called small wars of that period, the counterinsurgencies and the like, but it really makes its mark in 1914 and World War I. Think about the effect the machine gun brought into war, everything from small-unit tactics to how it affects the overall politics of war itself, when you think about the huge human loss the machine gun causes. This is from a U.S. Army training exercise in 1931: we get it, machine guns matter, and that is why we're putting machine guns on the back of our horse-drawn wagons. We're embracing change just enough not to change. So let's go from the past to today.
What are some of the trends in technology that are playing out and that I think you're going to hear a lot more of? I categorize them in terms of what you might think of as bucket areas. There's one we've talked about already, which is the software side of AI. There's its corollary, the hardware side of increasingly autonomous robotics, which come in all sorts of different forms, operating in all sorts of different locales from the highway to the battlefield to the grocery store, meaning they're doing all sorts of different roles. We've seen this play out in Ukraine in some fascinating and important ways, from large-scale strike drones like the Turkish-made TB2s used by the Ukrainians (sorry, the Russians would love to have the TB2), to, even more interestingly, the role of smaller loitering drones, again used by both sides. One of the most interesting things, though, is who the users are and where the systems are coming from. There's a story, not from the future of war but technically from the past of war, of six-year-old boys donating their small drones that were then utilized by the Ukrainian military, including one of a type that was able to drop a munition through the open sunroof of a Russian military vehicle. That's not the future of war; that's the past of war. But robotics, again, is booming out into all sorts of different forms and all sorts of different sizes, from huge in scale to microscopic in scale, to numbers in scale, where we can think in a different way about how they are bundled together. On the top left, that is a PLA, Chinese military, truck-mounted UAS system that launches a set of drones that are themselves munitions, so you might think of them as kamikaze drones. It doesn't launch them in ones and twos; as you can see, a large number come out of a single truck. I think about this example in relation to a past incident in warfare.
A little over a year back, the Israeli military proudly announced that they had shot down a swarm of Hamas small drones. The swarm was actually five, and they proudly announced that they'd shot it down using a combination of Iron Dome missiles and an F-16. Five is not a swarm, and that's not a victory. But think about our capability to operate against the same. And again, the numbers are more than five, or 48. Bottom right corner: that's the current world record holder for drone swarms, the Genesis car company bringing together just over 3,000 of them operating in a swarm. But it goes beyond this. The story of robotics is not just the forms and the numbers; it's how they come together into what I would argue, and I think a lot of people acknowledge, is a robotics revolution that is really a new industrial revolution. Researchers at Oxford University did a study of 702 different occupational specialties, job types, and they found that 47% of them are at risk of drastic reduction, redefinition, or complete replacement over the next generation due to AI and robotics. 47% of job types. Every single one of those job types has a military equivalent. So the discourse should not be solely about lethal autonomous weapons systems, killer robots out on the battlefield. It's: how does this affect everything from military medicine to military logistics? Next category area. So we had software, we have hardware, and then there is how it's all brought together into the network. You will be learning the skills of effective military writing, as the Admiral mentioned. What you see at the top is arguably the most influential military memorandum in all of human history. A civilian working inside the Pentagon proposed the idea that we could turn computers from big mathematical calculators, which is what they were back in the early 1960s, into what he called in the memorandum, quote, "communication devices."
Even more, he said that if we could turn computers from calculators into communication devices, we could bring them together in a new way. We could use them differently than a telegraph, a telephone, or radio. Fortunately, that memorandum was listened to, and his project proposal was funded. Unfortunately, they did not keep the original name. They didn't call it the Intergalactic Computer Network. Instead, they called it by the more boring term ARPANET, and that's the story of the creation of the modern Internet. Think about how much has changed out of that, how it's changed warfare: new domains of warfare, new organizations like Cyber Command and 10th Fleet to deal with it. It's also changed those old, so to speak, organizations, redefining what they do. I do a lot of work with Special Operations Command, for example, and they're talking about how their operators have to have cyber skills as well, and I think that's the same for people here. Think about how it's changed our work lives. I've been into my physical office three times in the last year because I can work remote. And of course, it's not just because of the pandemic; we're just not going back. It's changed our social lives. 38% of the people that you lead will have met their significant other on the intergalactic computer network. If they did, they're lucky, because they have a 21% lower rate of divorce than people who meet the old-fashioned way. But that network is fundamentally changing. In 1999, there was another new concept that looked at the communication-device concept of the Internet and said: hold it. In this next generation, we can do something more. And in 1999, the term Internet of Things was conceived. It was the idea of the computer not merely as a communication device across the network, but as something you can use for the operation of objects, of things. We can lash together everything from thermostats to power grids to cars to even individual components.
For example, the modern jet engine is not just networked; if it's a passenger jet, over 30 different parts within it are networked. That concept, the IoT, the Internet of Things, is now becoming real. And it brings massive, wonderful change. It will, for example, create roughly $11 trillion in value. It may be one of the key ways of dealing with the climate crisis, through greater energy efficiency. It's also changing the amount of information in the world around us. Top left corner: that's the concept that Jeremy Bentham, the philosopher but also architect, came up with, called the Panopticon. If you're not familiar with it, it's the idea of a building where your every move is observed. He comes up with this in the 1700s, during the middle of the last industrial age. Originally the idea was for a factory where workers' every move would be observed. And then they said, that's an awesome idea, let's turn it into a prison. Kind of freaky when you think about factory life and prison life crossing. But the point is that it's a philosophy and an architecture all coming into one. We're now seeing the realization of it, where you're tracked in multiple different ways, all the way down to, as you've seen in Ukraine, facial recognition deployed onto the battlefield. What does that mean? Consequences for everything. On the right, you see an image from, and I'll talk about it later, a useful-fiction project that we did for the British military, where they wanted to understand some of the consequences of this. So we looked at it, for example, when you're facing off against an adversary, be it on the battlefield, be it in a negotiation, a political negotiation, a contract negotiation. You will not just be looking into their eye, negotiating with them. You'll be looking into all the wealth of data about them. Patton was mentioned in the introduction.
If you're familiar with the movie Patton from, what is it, 1970, there's a moment in it where Patton yells across the battlefield, "Rommel, you magnificent bastard, I read your book!" It's no longer that you might just have read the book by the adversary; you will know every book the adversary has ever read. You will know what grades they made as a second grader. You will be able to bring that together to figure out their psychology, their risk-taking. And, oh by the way, they're looking at you the very same way. So this was from a story and a visualization to try to portray that. We're not here, though, just to talk about all the good stuff. We're here to talk about some of the scary stuff as well. The IoT also reshapes cybersecurity. The threats are no longer merely about the theft of information, which has been challenging enough, whether it's the theft of credit card information or the theft of intellectual property, like, oh, I don't know, the design of a jet fighter that was supposed to give us a generation-ahead advantage while the adversary is already flying copycat versions of it. Or maybe it's just pure coincidence that the J-31 looks like a clone of the F-35. The IoT, though, alters this, in that you can use digital means to cause physical effect in the world. We did a project called Burn-In that, for example, was able to establish that you could use digital means to recreate the biblical plagues hitting a major U.S. city. Oh, that sounds crazy. How could you turn the river into blood? No, you can't actually turn a river into blood, but you can, for example, hack the water systems and go after the iron oxidation level, which would turn it bright red. How do we know this is possible? A combination of two things. One, interviews with water systems engineers, who revealed how poor cybersecurity is at U.S. systems. And two, the fact that this actually happened in Russia a couple of years ago through a glitch: they accidentally poisoned a river and turned it bright red.
Cyber attacks are often about making glitches turn real. Yet again, we've seen in the Ukraine conflict how these have started to play out in battle. Now, there was a lot of attention around cyber attacks, and Russia actually tried to launch many of them to cause this kind of physical change and was fortunately foiled by a combination of people working on behalf of Cyber Command, NSA, Microsoft, and the Ukrainian defenders. So the first of these came not from what Russia hoped to cause in Ukraine, but instead from the volunteer IT army that lashed back at Russia in a fairly disorganized manner but still created a new thing in war: the first IoT hack in warfare. That's what you see here. Now, it got some coverage from the fact that they hit electric car charging stations. As you see here, they defaced them; they left something not so nice about Putin. That's not what matters in the story of warfare. In addition to the defacing, they actually turned the systems off. So if you had an electric car in Moscow, good luck: you could not recharge. This was done by volunteers, against a small number of targets. Think, though, about the potential of using it in a large-scale manner. Think about going after something other than electric car charging stations. It's sort of like looking at the history of warfare and going: well, the first time they used an airplane in warfare, they just dropped a grenade out the side, so airplanes don't matter. No, it's just a taste of what's to come. This network, though, is changing in other ways because of the history that you see here. On the left, that is the first server for ARPANET, the size of a refrigerator. On the right, that's the next generation: a young Mark Zuckerberg writing the software for Facebook in his Harvard dorm room. None of those computers had what you all would call sensors, the ability to gather information about the world around you.
And then what Zuckerberg does is allow us not just to gather that information but to share it: the idea that you can have a smartphone with over 30 different sensors in it, whether it's a camera gathering visual data about the world around you or geolocation (where is that camera, where is that person?), and then, via social media, broadcast it out. What does that create? Not just, as I mentioned, more information than ever before, but the ability to share it like never before, and a new form of conflict, what we call LikeWar. If you think of cyber war as the hacking of networks, LikeWar is its evil twin: the hacking of people on the networks by driving ideas viral through likes, shares, and often lies. Just like cyber warfare traditionally, it can have real-world effects. Top left corner: the story of ISIS. You cannot tell the story of ISIS without talking about the role of social media and how it enabled this group to recruit over 30,000 people from 90 different countries to travel to Iraq and Syria to join an organization made up of people they'd never met before. It's actually the flip of how Al Qaeda operated, and Al Qaeda is back in the news with the drone strike. Al Qaeda, the term, is the translation of "the base." It was the mountain training camp in Afghanistan that you had to be known and trusted to be invited to. And only if you went through the training camp could you be sent out on some of the most important missions, like 9/11. ISIS reversed everything, from that recruitment side to, in turn, inspiring terrorism everywhere from Paris to Texas. Again, made up of people just connecting online. Right-hand side: its effect on extremism here, threats to democracy. The story of the violent insurrection of January 6. Now, I know this is a challenging topic to talk about. And oddly, in the United States, we don't want to talk about political violence for fear of sounding political.
So I am going to be very careful to distinguish for you what are facts and what is personal opinion. Facts: there were at least 16 different competing theories about how the 2020 election was stolen, circulated and pushed online. The theories ran from China hacked thermostats that in turn hacked voting machines, to an Italian spy satellite was hacked by MI6 and the CIA, which in turn hacked U.S. voting systems. 16 different competing conspiracy theories. Many of them saw official resources utilized. That conspiracy theory that the CIA and Britain's MI6 teamed up to hack Italian satellites to hack the U.S. election was actually investigated by the Defense Department, by service members, at the instruction of the Acting Secretary of Defense at the time. People took these theories seriously. Secondly, those theories, many of them contradictory (you can't have both China and the CIA doing the same thing, right?), coalesced into a belief that led to the events of January 6, which were coordinated largely online, messaged online, and led to violence that left five people dead. Those are facts. We can have an argument now about what they mean, but those are the facts of the matter: social media proved crucial to the story, the events, the beliefs of January 6 and the continued threats to our democracy. Opinion: people who pushed those cockamamie conspiracy theories should not be participating in our national security discourse anymore. It is for professionals; it is for people who support our democracy. And if you enabled and pushed those conspiracy theories, I don't care what you think about China as a competitor, you have shown yourself to be unprofessional. Personal opinion. Let's move on. Then there are the threats to our public health. The infodemic, the term public health professionals use to describe the swirl of misinformation and deliberate disinformation about COVID, has led to what one study found.
Sadly, there are over 300,000 Americans who are not with us today who could be with us if we had followed basic public health practices, but they were taken in by false information they read or heard online. 300,000 Americans. Bottom right corner: mass killings everywhere from Myanmar, as you see here, to India, coordinated and motivated online. But as for the weaponization of social media: it's a technology, it's a tool. It can be used for bad, it can create threats as we talked about, and it can also be used for good. Again, opinion: I take the side of Ukraine in this conflict. Ukraine has masterfully used social media to help win its larger war. Social media helped change the narrative inside Ukraine, where Zelensky was able to shift from internal polling of around 23% popularity to 91% popularity by showing himself out there in the fight. But it also altered the way that we talk about and think about Ukraine. Ukraine became literally the most popular cause, not just political cause, online. And that altered the discourse not just in the United States; we saw nations as far away as Australia, and as beforehand unlikely as Japan and Germany, sending military aid. It also altered the economic side. Over 400 of the 500 biggest corporations in the world decided they would stop doing profitable business in Russia because of fear of what it would do to their brand. Again, new things playing out. We've seen the introduction of the past topic, artificial intelligence, into this space: deepfakes, where it's hard to figure out what is real and what is not. Next area. We've talked about software, hardware, the network. It's also about how it affects us. Yesterday, while I was waiting for American Airlines to finally deliver a plane, I watched a TV show that was recommended to me by an algorithm: you would like this. But it didn't just stop there.
The production company made that TV series because algorithms told it people would like to see this topic and would like to see these actors in it. This morning, I woke up and checked the news. But the news was sent to me via an algorithm deciding what I would find both important and acceptable to my opinion. So what I believe about the world is shaped by algorithms, already, now. Again, move forward. Think about the professional side of this. It's not just about professions ending. It's about how the people in them change: the military doctor, the military logistician following recommended courses of action from an AI, all the way to the military commander themselves. I was at a Marine Corps amphibious landing exercise. This would be now two years back; it was pre-pandemic. They were looking at all the different ways that new technologies would change amphibious landings. The admirals were taken in by the physical robotics. There was a drone, and there was a motorboat that looked like Batman's motorboat because it had a machine gun on it and it was all black and it was all cool. But what mattered to me was actually in the talk where they were utilizing a military version of a Waze map, providing recommendations on which way the platoon should go based off all the data being gathered, and recommending routes. But where Waze recommends you routes based on time savings, this was recommending them based on expected casualties if you go this way or that way. I'm the son of a U.S. Army lawyer. That technology will save lives: the best information brought together into recommendations. That technology is also a court-martial waiting to happen. Did that officer listen to the recommendations? Did they take more or fewer casualties based on following or not following the recommendations of an AI? Then there's how we interact with our technology. In all of human history, we used our little monkey fingertips, whether it was picking up a stone or flying a drone.
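To make the earlier amphibious-exercise example concrete: that Waze-style tool is, at its core, a shortest-path calculation with a different cost function. Here is a minimal sketch of the idea in Python. The graph, place names, and casualty figures are all hypothetical illustrations, not anything from the actual exercise.

```python
import heapq

def safest_route(graph, start, goal):
    """Dijkstra's shortest-path search, except the edge weights are
    modeled expected casualties rather than minutes of travel."""
    # Each frontier entry: (expected casualties so far, node, path taken)
    frontier = [(0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for nbr, edge_cost in graph.get(node, {}).items():
            new_cost = cost + edge_cost
            if new_cost < best.get(nbr, float("inf")):
                best[nbr] = new_cost
                heapq.heappush(frontier, (new_cost, nbr, path + [nbr]))
    return float("inf"), []

# Hypothetical terrain graph: edge weight = expected casualties for that
# leg, e.g. derived from sensor data on enemy positions.
terrain = {
    "beach": {"road": 4.0, "ridge": 1.5},
    "road": {"objective": 1.0},
    "ridge": {"road": 0.5, "objective": 2.0},
}
cost, path = safest_route(terrain, "beach", "objective")
# Lowest expected-casualty route: beach -> ridge -> road -> objective
```

Swap the edge weights for minutes and you have ordinary navigation; swap them for a casualty model and you have the exercise's tool. The routing math is identical, which is why the court-martial question the speaker raises turns on the cost model and its data, not the algorithm.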
Now we can start to use something else. The final category, I don't even know what bucket to put it in: quantum. Is it a story of hardware? Is it a story of the network? It's also kind of a whole new area of physics and science. This is from a project we're doing with NATO on this. So we could go off into a whole array of this, and I think we're going to have some panels talk about it. Okay, let's pull back on this. Man, it's a lot of change. I would argue that we are seeing a massive rethink of not just technology and its possibilities but the battlefield itself. Now, again, it's very bold of me to say that. But why should we think all of this is somehow going to be less consequential than the machine gun in 1914 or the airplane and wireless communication in 1939? Shouldn't it actually maybe be something more? Because this technology is different from every other technology in history in that it is increasingly intelligent, increasingly autonomous, ever improving. So what does that mean? Well, I think there's a series of questions it yields that we all have to wrestle with, from the grand strategic level down to you in your role as this combination of officer and student-slash-researcher. First, these technologies have inherently lower barriers to entry. This is not the story of nuclear power or the aircraft carrier. The users of them range from big states to small states to non-state actors. That means they will be proliferated. What does it mean to operate in a world where some of the most game-changing technologies are proliferated? Second, the qualitative edge. This is not the story of the Cold War, where the United States had a generation-ahead advantage in technology against our adversary in almost every area. Think back to the Cold War. The Soviets were really only peer technology competitors for a limited period of time in a limited set of areas, like the early part of the space race and rocketry.
But overall, they not only didn't develop stealth, they didn't even develop a personal computer or a decent car. That's not the story of China, because, one, it is engaged in the global economic community in a way the Soviet Union never could be. Second, because of a prior discussion point, cybersecurity, they can pull in outside designs and develop them rapidly in a way the Soviets couldn't. Third, because they're doing really cool, game-changing technology in a wide variety of areas, from AI to quantum. And what that means is not only that a potential adversary in a great power conflict will have the same or, in many cases, maybe even better technology, but, and this is a parallel to the Cold War, other actors will as well. So we fortunately never fought that World War III against the Red Army, but we did have to face AK-47s, and Navy pilots going against MiG-21s. Same thing on the future battlefield, because China is displacing Russia as the alternative supplier. Third, these combine to make not just every conflict but arguably every battle multi-domain: contestation not just on land, in the air, and at sea, but also in places we've never fought before, space and cyberspace. Now folks would say: hold it, we've had contestation in the air and at sea again and again. Yes, but we really haven't had battles for control of the air and sea going back to really the 1944-45 period. But as we explored in the book Ghost Fleet, it's not just about multiple domains. The graduate school version of this, the war college version of this, is how they connect, and importantly not just how they connect but how the timelines are connected in different ways. You can have a small tactical cyber action that ten months later has a massive strategic effect, or a tactical effect; it can be why an F-35 gets shot down ten months later. It's the connection points, but also how they're not directly linear in time.
And we used Ghost Fleet for that. If you're not familiar with it, it's a novel. It portrays a U.S.-China war, but it comes with 27 pages of research endnotes to document how every technology in it, every tactic, has already been used somewhere. What does this mean for you all? What does it mean for PME? Well, we're moving from a realm where you need to master your domain to one where you need to be able to engage in the equivalent of three-dimensional chess. And I'm a Star Trek fan; the only person who was actually good at three-dimensional chess was Spock. We need to train an entire new generation of Spock thinkers. Next, doctrine. It's not about how much of the technology you have. It's not about how good the technology you have is. History, again, teaches us that the Germans actually didn't have the most tanks at the start of World War II. They didn't even have the best quality tanks. But they had the best concept for bringing them together. Okay, what about now? What are our concepts of the technology? Whether we're talking about battlefield use or military medicine, they basically break down into a couple of options. One is you envision the technology as an extension of you; it extends your reach. The example, for instance, of a surgeon doing robotic surgery from a distance, or a drone pilot at Creech Air Force Base in Nevada flying over, well, maybe over Kabul in Afghanistan. Next concept: no, it's not an extension of me, it's a teammate. It's a wingman. It's part of our organization. Or: no, it's an agent. It's out there, not an extension of me, but acting independently on my behalf. But within that independence, again, another break point: is it operating as a one, or is it gathered into a swarm that has an agency of its own beyond the singular systems? This vision, which one you choose, matters, whether you are thinking about this for ISR or military medicine or logistics or strike. But it's also not just which one you will choose.
It's which one will the adversary choose, and then how do they interact? What do we know, at least from open source intelligence, of what the adversaries are thinking? We know that, first, Chinese civilian political leadership has told its military, and this is in the 19th Party Congress document, to, quote, accelerate the concept of intelligentization. Now, intelligentization, it's a translation of basically the belief that 20th century conflict was defined by industrialization, and America wins that out. Then from the 1990s to roughly 2010, it's information dominance, the same term that we were using, and they say America wins that out too. But, they say, moving forward, it's going to be something else. It's going to be intelligentization, machine on machine. They go further. What will this look like in execution? What will it look like on the battlefield? There's some interesting writing by PLA officers about what translates as the new concepts, and the new concepts cut across everything from kinetic unmanned systems to cyber to electronic warfare, and basically it's a vision where, and this is the translation, the small but numerous defeat the big but few. I'm going to say that again, because I think it's so crucial, quote, the small but numerous defeating the big but few. Who do you think they think the big but few are that are going to lose the future of war? Conclusions. What can we do about it as we explore these questions, these trends? What are your responsibilities? First, education and leadership are now fundamentally tied together during a time of transition. It's always been true, but it's particularly so during a time of transition. Go back to that example that I gave of 91% of leaders saying, AI, I get it, it's important, while only 17% of leaders say, I understand AI, how it works, what its applications and dilemmas are. Now, you're advanced enough in your careers to know that senior leaders are pretty confident people, so 17% saying they get it.
It's probably not actually 17% that get it, but even if it is, that is a massive delta between 91% saying this is the most important thing for me and my organization and 17% saying, I understand the fundamentals of this most important thing. We need to close that. And I'm not referring just to AI. I'm saying we need to have the modesty to look at ourselves and our organizations, those that we lead, and ask not just what we think is important, but how well do we grok it? How well do we get it? And if we don't, how can we expect to make decent decisions on it? Second, talent management. Think about all those changes going on, technology, geopolitics, etc. How much have we changed the human part of that, whether it's recruiting, whether it's, as I mentioned, training, whether it's how we figure out promotion and assignment? If we are seeing drastic change in one area and marginal change in another, why would we think that we're going to be able to master and keep pace? And this institution is a great part of that history. If you go back and look, Trent Hone's book is very good on this, the battles of the 1940s were in many ways won by the Navy personnel reforms from the late 1890s to the 1930s. Next, the centrality of trust, in both its meanings. Trust actually has two meanings. One is that emotional thing: I trust you. The second is the way an engineer or a roboticist looks at trust: does the world, does my model, my expectations, match reality? Let me give a different way of illustrating that. Imagine there's someone who's just an inveterate liar. They lie about everything. They lie about their golf score. They lie about what they've done professionally. You would not trust them, but if you were interacting with them in a golf game or in business, you could trust that they would lie, and then operate effectively. So trust, these two meanings, they're the key to, one, integrating technology successfully.
You've got to deal with both how it works, but also how people feel about it, their confidence in it. But secondly, it's what every threat actor is increasingly going after. They're either going after the human side of the trust or the mechanical side of the trust. And it's also a little bit of the differentiation between Chinese versus Russian cyber attack focus. Next, we need to change the way that we do much of our training and war gaming and exercises. Too much, I would argue, is about validation right now. Validation of existing concepts: let's get good at what we need to be good at. Or, and this is controversial, validation of our alliances, like what we saw at RIMPAC. They're great, but they're validations of: I like you, you like me, you're good at what you do, we're great allies. Those are valuable. I'm not knocking them, but we need a lot more like the Louisiana Maneuvers or the old fleet problem exercises, open-ended. And the goal is not just to learn about a solution to a certain problem. For those of you not familiar with the Army history, the Louisiana Maneuvers are where they figure out the horse-to-mechanization transition. The key, though, was that it was really about the human side. It was everything from, hold it, what do we need to do differently, not just in tactics but in logistics, to, most important, who is thriving in this new approach and who's not? General Marshall is going around taking notes on which officers are doing well. He's talent-scouting at these exercises, and over half of the U.S. division commanders during World War II were actually ones identified as thriving in those maneuvers. Are we using our exercises not just for open-ended questions, but for talent-scouting? A good example, I think, of an approach to this we're actually seeing out in the CENTCOM area in Task Force 59, where they're doing actual testing with unmanned systems, problem-solving, but secondly, and here I'm going to sound like, oh, Singer, you don't like allies.
No, they're actually doing it with our allies, so it's not about validation, it's about joint problem-solving. Next area: don't think that lack of budget is an excuse. This is the fleet Marine exercises from the 1930s. They figure out all the key concepts of amphibious warfare that will be used during the island-hopping in the Pacific, et cetera. You'll notice they've got none of the technology. You also want to use your approach not just to find solutions, but to find out what doesn't work before you make major commitments. Think about, I don't know, a certain service that decides to pre-commit to major combat systems before they have them and envisions using them for multiple decades into the future. Just imagine there's some service like that. Compare that to what the US Navy did in the 1920s with the concept of aircraft carriers. On the left, you see the USS Patoka, and on the right, the USS Langley. Figuring out aircraft carriers: is it best with airships or with airplanes? You want to figure out not only what works, but what doesn't work, before you make major commitments. Learn from other people's wars. I think it explains itself. In the Spanish Civil War leading up to World War II, all the ideas of Blitzkrieg were out in the open. It shouldn't have surprised people in 1939. What are we learning from other people's conflicts today? Next, change the way we not just envision, but communicate. We have found that a challenge is taking your really smart ideas, your war game AARs, your strategy papers, your doctrines, and actually getting people to read and retain them, whether it is young junior lieutenants all the way up to four-stars, to members of Congress. What we do is what I do to my kids every morning: I sneak fruit and veggies into their smoothies. Science fiction, I love it. I referenced Star Trek. Techno-thrillers, I love them, from Clancy to Stavridis's new 2034. Those are milkshakes. They are designed for entertainment.
Now, they may have some good stuff in there, the way a strawberry milkshake has some strawberries in it, but at the end of the day, they're designed for entertainment, and they can break rules as they build their worlds. They can wave their hands: the enemy hacked everything. They just hacked everything. Don't make me explain why. At the other end of the spectrum, you've got the white paper. You've got the doctrinal paper. You've got the war game AAR. That is kale. It is good for you, but it is hard to get people to consume it. What useful fiction is, is deliberately taking the vitamins, the kale, but blending it into a narrative, so the parallel continues, a smoothie. Not only is the target audience more likely to consume it, the research shows they're more likely to absorb it. Let me give you an example of how this works. The Australian military had a 21-page report on defense education enterprise reform. I see everybody's like, you know, the first question is going to be, where can I get a copy of that report? The admiral is like, oh my God, 21 pages of education enterprise reform, hooray. Obviously, a huge topic connected to what we've talked about here, right? But it just didn't strike with the effect that they wanted. So we took their concept, we identified the three key themes of their report, the 37 nonfiction nuggets that they wanted to convey. What are the vitamins? But we recast it into a story that follows an officer as they go back and forth from war college to an exciting mission in the field. It's an embassy evacuation in Jakarta in the wake of a tsunami. Why do I use this example? One, if you can do it on defense education enterprise reform, you can do it on any topic. But two, the metrics show the value of it. We've gotten 15,000 readers of An Eye for a Storm. It was republished in Task & Purpose, one of the most popular online military mags.
But most important, among those 15,000 were not just millennials and Gen Z, but the chief of the entire Australian military and six US four-stars, not your normal readership for reports on defense education enterprise reform, but we got them this way. You can utilize this in lots of different manners. You can use visuals. These are examples related to maritime expeditionary combat command and to Australian forces looking at small drone threats. It's a tool. It's an approach that we need to use more of, I believe. I'm obviously biased on that last issue, and I'll end here. We need to kill our sacred cows. What's the equivalent of that horse-drawn wagon? What's the equivalent of that battleship Arizona? If we're honest, it's not ready for the future of war. It's probably not ready for the present of war. But I'm not just talking about technology or systems or platforms. I'm talking, maybe most difficult, org charts. And what do I mean by sacred cow? What is it that you're thinking right now that some part of you is saying, yeah, but I shouldn't say that out loud in front of this audience, I shouldn't write about it in Proceedings, because it might be bad for my career? Those sacred cows, we need to kill them, because if we don't, they're going to end up getting people killed. So I know I've thrown a lot at you. End point, one key lesson. Given all of this change, individuals and organizations that look at that and say, I'm not going to change, they're making a choice through their inaction; they'll be making a choice to lose the future. And I hope none of us do that. Thank you. Thank you. So we've got time for Q&A. As was said, I'll call upon you. So raise your hand and then use the mic so that people can hear you online, and introduce yourself. What I'm going to ask for is not just name, rank, serial number, but where you're coming from, what you were doing before you arrived here. And we've got someone raising a hand right there.
Our cameraman gets the first question. Thank you, sir. Christopher Burris, photographer, coming from Atlanta. With AI advancing and much of the military becoming smarter but smaller, the enlisted force makes up 80% of the military. What direction are we looking at for military officers and enlisted personnel in terms of the future of job security? Oh, what a great question. I think what you're really after is sort of a double part. There's one, will we see alterations in our ratio between enlisted and officer corps? And then a second part of that question is, what are the skills and responsibilities that you might want out of enlisted versus officer corps? Great question in a lot of different ways, because you could add on to that, how will different service cultures wrestle with that? So as an example, and again, take this for what it's worth, you could envision that one of the challenges the Air Force has had with unmanned aerial systems is that it is an officer-centric culture, but an officer-centric culture lashed up to a specific platform. Pilots have to be officers, and then pilots introduce themselves by the system that they trained on and spent their favorite times in. So go to the example of unmanned systems. There was a guy I knew who had flown MQ-1s and then MQ-9s, Predators then Reapers, out at Creech, I think for something like seven years. That's where he had launched scores of airstrikes, saved lives, you name it. And yet he still introduced himself as an F-16 Viper pilot, because that was his identity, right? That's different than Army culture. And I'll make a parallel with Navy culture, where it's more kind of, you know, I'm a SWO or I'm a submariner, right? It's still somewhat about technology, but it's not so much the officer at the middle of the technology, more about the team. And so what you've put, it's a great question.
I do think, and I'll put it a different way: if we don't alter our ratios of enlisted to officer, if we don't alter what we think are the skills that are needed in those different roles, given all that change, then of course we're going to fail. We're going to be the equivalent of the Prussian army, to get historical with you. The Prussian army had been the dominant military in 1700s Europe, and everyone's like, they're the most professional, they're the best. And then you've got Napoleon changing the concept of the nation at arms, and even though they're the most professional, they get their clocks cleaned in the early part of the Napoleonic Wars. And now we get to an interesting question; it goes back to wire diagrams. The Prussians, from roughly 1807 to 1812, do a huge series of professional military reforms, altering everything from the enlisted-officer mix to the skills of an officer to their org charts; they come up with the staff officer model, etc. And then it helps them end up on the winning side. Our modern system is basically an extension of those Prussian reforms from the 1800s. Is what Clausewitz and Scharnhorst and the others came up with in the 1800s actually the best model for the 21st century? Ooh, that got uncomfortable. Should we think about maybe altering our concepts or our numbers of staff officers or the org charts or, oh my God, the J1, 2, 3, 4, 5, 7, etc.? Maybe that's the kind of reform that we're going to have to go through. What will it take to get that kind of reform? Will it have to be a major loss? All right, let's do another question. So the way I'm fair is I go by quadrant. I already did this quadrant, so that quadrant there. Commander Tembush, United States Navy, Strike Fighter community. I was really glad to hear you talk at the end of your piece there about trust, also the sacred cow concept, and then just now the service culture piece.
And I really liked also how you started out with the concept of battleship commanders adopting technology just enough to not really induce change. I received a lot of pushback. I co-authored a piece on War on the Rocks, Wing Luddites, about mature technologies that have shown proven characteristics but weren't really being adopted, specifically within the U.S. Navy. What are the top cognitive barriers, or maybe one or two that you've identified, that you think are the biggest imposing factor on senior leaders' risk appetite and, more importantly, their trust in employing and adopting emerging technologies going forward, and that are going to be the most consequential, or things that we, via professional education and writing, can help to change? It's a great question. It actually touches on something that Professor Jackson and the Admiral and Tim Schultz and I were talking about beforehand, and it links to that Chinese article, the small but many defeating the big but few. So go back to where I said these are the different concepts of technology, whether it's your operating an unmanned system or something else. You know, it's an extension of me. No, it's a teammate; it's a robotic wingman, as the phraseology that's been used goes. No, it's a swarm. There's one question, which is: which of these is best? There's another question, which is: which of these is best in which setting? There's a tougher question, which is not which of these is best, but which ones will we actually adopt? And again, military history shows us that militaries don't always, you would think they would always choose the best, but they don't always do that, for other reasons. So that example of the tank: the British come up with the concept of the tank. They're the first to use the tank. They actually follow a lot of what I said we ought to do during the interwar years.
They run some really great war gaming, training, open-ended testing, in the 1929-32 period on Salisbury Plain. The British actually come up with the concept of the Blitzkrieg, but they choose not to adopt it because of military culture. In particular, they're not willing to, because it means some challenges to the regimental system. And if you've worked with Brits, the regiment is, it's your identity, and it was even more so back in the 1920s, because it goes back hundreds of years, and it's who you go to drinks with, and your grandfather was in this regiment, and they're unwilling to change that because of it. It's not a monetary issue. The Maginot Line. Again, the French had some really great writing on this. The French choose the Maginot Line not merely because they think it's the best; it's because of military culture crossed with political issues. They are unwilling to go to a fully armored force because of what it would mean for domestic politics. They'd rather have a large conscript force than a more professional mobile force, and given that, the Maginot Line is the best option. Okay, so what does that mean for us? I think it is a lot easier for us to contemplate certain types, the extension concept, and maybe the robotic wingman concept, than, for example, the idea of the swarm of the small. With the robotic wingman concept or the extension concept, you can still have the human at the middle. It is not as threatening to the pilot community. Okay, you'll still be there, but you'll have a wingman with you, right? You're still in charge. You're still the quarterback. Versus, no, we're going to push out; it's not going to be one manned F-35 and four equivalent robot F-35s. It's going to be a hundred small Ukrainian-style UAS going around the battlefield. But there's another part of trust, which is that it's also about the defense economy. Does the defense economy trust you?
Even if you say, yeah, yeah, okay, but we really want attritables, or we really want swarming. If you are a defense contractor, do you trust the DoD to actually buy enough of them for you to turn a profit? Or do you say, you know what, I don't trust you, given your track record, DoD, and now you have sort of the self-licking ice cream cone: I'm going to push to you and Congress a physically large, expensive system that I can load up with bells and whistles, so my profit margin is better on selling you these in onesies and twosies rather than hundreds and thousands. That, I believe, is playing out right now in UUVs. We are willing to contemplate, we were having a discussion about this, the Orca system: it's roughly $60 million empty; loaded up with sensor packages, it's probably another $100 to $200 million. We're willing to contemplate robotic subs that are big and expensive, but that means we can only buy a couple of them, versus thousands of cheap small ones. Air power: we're willing to contemplate, okay, maybe just maybe we'll put some of them on an aircraft carrier, but they'll be F-35-equivalent types, versus no, it's going to be a world where I dump a thousand of them out of the back of a B-1 or whatever. I think the trust factor, these combined issues, are going to lead us to potentially not choose an approach that might be best for certain domains. Right there. Good morning, Lieutenant Commander Nitin, India. Thank you. First of all, I would like to compliment you on a well-written book, Ghost Fleet. Thank you. It was particularly interesting because of the footnotes. My question is, we are seeing a spurt in sci-fi novels and movies coming out of China, all of them picturing China to be at the center of it. So is this an attempt at propaganda? And what are going to be the implications of this? And the second question is, by useful fiction, are we not opening up a can of worms for our adversaries? Are they not likely to exploit it?
So, one, thank you for the very kind words about the book. And the footnotes are actually what make it different from science fiction, the fact that you actually have to have the footnotes. And footnotes do two things: one, they validate, hey, this is real, not made up. Two, like you will do in your own reports, it's not just, I'm not making this up; it's also, if someone wants to learn more, this is where you ought to go. So that's why we've got them in there. You have to list the vitamins, so to speak. To the China part: I think it's a combination of both China's power and confidence in the global market and a little bit of a deliberate aspect. So China is a huge market for movies; like in every other industry area, it has taken off, producing more and more on its own with greater and greater budgets, and looking not just toward its own audience, but to exporting them. And so there's sort of that natural market at play. And then there's a change in the tenor of them. You're probably familiar with Wolf Warrior. It used to be that Chinese movies would be a retelling of something from ancient Chinese history, and it would have a big battle scene. Now, increasingly, there are ones that are contemporary and in-your-face and almost, you know, like an inverse of our Rambo movies from the 1980s. And they're very cliched, but they're also expressing a confidence: we are on the world stage and we're the heroes of the story. And the bad guys of the story are those greedy Americans. That's most of the Wolf Warrior-type movies. Obviously, the government is happy to have them out there. I've not seen data that says they've funded them and the like, but at the very least, it's the same way that in the 80s our government was excited by, and in turn, and this is where it's really important, inspired by, that kind of machismo.
And you see that the new guard of Chinese diplomats, for example, talk like characters out of the Wolf Warrior movies compared to the Chinese diplomats before. The interesting question for great power competition is, as you have that playing out, not just how does it affect the way China operates, but how does it affect audiences in other states that are seeing more and more of those? And then the flip side: we're not making those kinds of movies anymore, for a variety of reasons, most importantly because of the power of the Chinese market and Chinese censorship models, which is why you see movies about superheroes and not more contemporary or realistically set stories. Useful fiction: do we give anything away to the enemy? No, because the enemy is researching too. They've got access to all the same footnotes that we do. So, you know, when we talked in Ghost Fleet about, oh, there might be supply chain security risks: one, it was from an open-source DARPA report; two, you know, China didn't need us to come up with the concept of trying to hack in through the microchip. So I don't think we give anything away. It's actually quite valuable in that we can explore using open source intelligence and not give away classified insight. You can also set it in different scenarios or locales. We've found it useful for sharing war game AARs. You all run war games, and you'll get your AAR with the insights from it that are often classified, but there are certain thematics or findings that you want to share with a wider audience. By turning them into a narrative, you can share them, avoid the classification issues, and reach audiences that wouldn't normally read a war game report. An example of that that we did for the Norwegian military: they ran a war game on high north Arctic scenarios. Their war game findings reached the level of a two-star general.
They commissioned us to turn it into a story, told from the point of view of an imagined soldier in that scenario. The defense minister read our story, and it was published in the Norwegian version of Task & Purpose, so a bunch of young Norwegian lieutenants got the insights as well. So no, I don't think we give anything away. Another question people sometimes ask is, is there an equivalent in China or Russia yet? I'm not aware of any equivalent to it, I think in part because it requires independence; it's a combination of nonfiction, but you need some creativity. What we have seen out of China, and it's funny relative to this notion of creativity, is that literally their leader, President Xi, ordered Chinese universities, not just the military, but Chinese universities, to train more science fiction writers. He said, because science fiction is inspirational to the future, therefore you are ordered to create more science fiction writers. Yeah, let's see, this side. So I'm going to, yes, right there, because I was going to say it's been all dudes so far. No, no, no, sit down, sit down. Right there. Hello. Hi, I'm Emily Bjerke and I am an intern with the Ethics and Emerging Military Technologies Program. I'm 20, so I don't have a career, but I am an international relations and English literature double major. So as a humanities major and as somebody who is interning with this program where we discuss so many ethics questions in terms of warfare, I feel a responsibility to just ask and put it out there: what kind of responsibilities and challenges do we face with implementing AI in warfare? And not only that, but is there a certain human aspect that we feel a responsibility to preserve when we are going to war? Yeah, great question.
And really what you've put your finger on is, I believe, the fundamental cross between research on the engineering, technical side and research and exploration in the fields of ethics and philosophy as they relate to the military, but also military law, which is related to but different from the ethics side; and those same questions play out in tactics and doctrine. What we're wrestling with are new technologies that change not only what is possible, what can we do now that we could not do before, but also questions of what is proper. What are our senses of what is right and wrong based on the possibilities of these new technologies? And that right-or-wrong question might be anything from, let's go back to the prior question: it's not just the idea of how you can use your unmanned systems or not. He asked a question, the photographer, about enlisted versus officer. That relates to an issue of everything from recruiting to training within the Air Force, but also the Navy: who can operate an unmanned aerial system? Earlier it was, well, it should be an officer, because you require all of the pilot skills. But as the technology advanced and became more and more autonomous, the question of the right and wrong person to assign to that role shifted to, no, I want an officer because, and they would say things like, they're better trained to take a strike decision, and it got to be like a military law issue. It wasn't, you lack the skill. And actually, one of the interesting things is they did gather data on the original pilot part of it: a program that had NCOs flying drones had a lower crash rate than officers flying drones, because the NCOs trusted, to use that terminology, trusted the technology more than the officers, who kept trying to take it back over. So then it became, okay, no, no, no, we want officers in that role because they know and understand military law and ethics questions more.
We can argue whether that's true or not, but what I'm getting at is that this area continually crosses from what's possible technologically to tactics, to assignments, etc. One thing that I will throw out there in terms of a history parallel to think about is not only what we believe to be right and wrong now, but how our sense of right and wrong might evolve or change over time, what aspects are fixed and what are not. So we explored in Ghost Fleet and then in a new book called Burn-In the areas that people are not comfortable right now utilizing AI and unmanned systems in that we felt we would be in the future, not just because of change in time, but change in context. Why did we think that? Because we pulled from history. Take the story of the submarine. The submarine was once a science fiction technology, the horrors of Twenty Thousand Leagues Under the Sea: the villain uses it, and he attacks civilian shipping. It then becomes deployed into navies, but prior to World War I, there's a short story by Arthur Conan Doyle, where he talks about the idea that there might be a war between Britain and an unnamed continental power, and everybody knew he was talking about Germany. And in that war, the British fleet basically loses, in part because of this new technology, the submarine. The British Admiralty goes public to mock Arthur Conan Doyle for this idea that submarines would be used not just in war, but that they would attack civilian shipping. They literally go, you know, stick to writing murder mysteries, dude, stay out of military stuff. And they talk about how not only would a submarine not be used that way, but if a submarine captain attacked civilian shipping, if I remember the exact quote from the British Admiralty, his own navy would line him up against a wall and shoot him for so violating the laws of war. Move forward to 1914. In one of the first real naval actions of 1914, a German submarine sinks three British armored cruisers.
Then they move on into unrestricted submarine warfare against civilian shipping. So not just a game change for the prevailing technologies of war, but also for the ethics of war. The United States is so horrified by the continued unrestricted submarine warfare, the sinking of the Lusitania and the like, that we ultimately enter World War I. One of the reasons we say we're going to war with Germany is, you've so violated the laws of war with this. Okay, that's one change. Move the timeline forward. December 7th, 1941. Pearl Harbor. Five hours after the first bomb is dropped at Pearl Harbor, the order goes out: execute unrestricted submarine warfare against Japan. What changed? Well, one, the submarine had moved from being science fiction, horrifying, to we're using it too. We've got comfort and experience with it. Secondly, we're losing and we're pissed off. And so what we thought about certain ethics, laws, and uses, we changed our beliefs on. World War II also tells the reverse of that story. Think about atomic weaponry. Think about mass area bombing. There was a lot of use of weapons in World War II that we would now put in the framework of potential war crimes. And so what I'm getting at is that this sort of wrestling with these questions is, again, not merely an ethics question; it's going to define everything from who you think ought to be in these roles to what doctrine is proper, et cetera. It's a great question to introduce for us. Basically, you all just got your research assignment from the intern in the room. So, do we have time for another question? Well, someone's got to come yank me off stage; I won't take myself off. So I've just heard that we should end for the break. So again, I want to thank all of you for allowing me to speak to you, and the folks for inviting me. Appreciate it.