Also, just want to contextualize this for all the students in the room: whether you're going to serve in the Department of Defense, some dimension of security, law enforcement, or the civilian sector, this is not something that's outside of what you're going to be doing. It's critically important that you understand the nexus and the intersection between artificial intelligence and robotics. And by the way, if you have a cell phone, you're being influenced by this right now. The majority of you, I hope, understand that, but maybe some of you don't; this is not something that should be of no interest to you. And this is why we're having this conversation here at Norwich University: we pick subjects that matter to 21st-century security challenges, and this is one of the most pressing. The things that you'll hear over the next couple of days, some of us are trying to imagine what that looks like, but you're going to be leading through this. You're going to be the ones who have to make the decisions. You're going to be the ones who will have to fight across multiple dimensions of a battlefield. So some of the things under discussion over the next couple of days are going to be your reality. And this is why we're here: because we want you to excel and we want you to exceed. Now, artificial intelligence and robotics is not just a U.S.-centric conversation. This is a global conversation. We're having this conversation right now in English, but it's being held in Arabic. It's being held in Hebrew, German, Chinese, and Korean. Governments and Department of Defense organizations around the globe are having similar conversations. And this is why we are having this at Norwich: whether you come from a computer science background or not, criminal justice, biology, or chemistry, this subject matters to you.
So in the spirit of that, we've asked some students to come and share some of their reflections, and we've asked some of them to read some of the things that they've written in their native language. And so I've asked Rodian: he's going to share something that he wrote, and we've asked students to also share what they wrote in their native tongue. So if you would, just permit Rodian to share, and then we're going to turn it over to our moderator and our very distinguished guest, August Cole. Rodian, the floor is yours.
[Rodian first delivers his remarks in Ukrainian; his English delivery follows.]
I believe that all of you have already heard about Ukraine and the brave Ukrainians who are fighting for their freedom and their homeland right now. Ukraine is my homeland too. With the great help of critical international partners, one of which is the United States, Ukraine gets more and more weaponry every week and month: rocket systems like HIMARS, long-distance drones like the Switchblade, howitzers, and air defense systems. All these innovative means of combat significantly protect Ukrainian land from the Russian occupants and counter the enemy's moves. Yet there's another vital thing to say. All this fight-back progress you can see on the military maps wouldn't be possible without the key component: people. The Ukrainians are motivated because they know what we're fighting for: our closest ones, our homes, and our way of life. Otherwise, we die, or live under Russian occupation just like during the Soviet Union and the Empire of Russia.
Talking about motivation. Where can you find ordinary hard-working people fundraising over 600 million dollars in just a span of a few days? This case resulted in a military tech company, overwhelmed by the Ukrainians' deed, producing the desired military drones for free as a gift to the nation. You would ask, what happened to the money? Ukrainians didn't stop, and purchased a national satellite that helps to collect high-resolution intelligence from the battlefield. As you know, active information warfare also unfolds on the Internet. If a public figure tweets something inappropriate about Russian aggression or the annexed territories, Ukrainians will be there. Recent cases can prove that there would be so many of them in the comment section that people might assume it was a bot attack. But it would be a wrong assumption. All the comments are just angry Ukrainians giving their opinions and evaluating the situation. Sure, the people of Ukraine are very different and each has their own mind. But in the face of evil and dishonesty, they mobilize and unite to fight together. I believe this also motivates other nations to help us and to provide even more support, which is crucial during dark times. Of course, one can say that wars are won by robots and drones. But the truth is that the actual story is always about the people behind those technologies. It is about the soldiers and operators, the volunteers who dedicate their lives to help those in need. People working day to day for a stable economy. And people who fight for this flag. This flag is signed by those who are on the front lines. Some of them might not be alive. Yet they die fighting for home, to break the cycle of Russian colonialism on our land and to protect our future as a nation once and for all. Those people create victories and write history.
When people have a reason to live and to protect their homeland, they unite and all together create astonishing results that can surprise even the most skeptical people on other continents. The main factor in achieving and winning anything is dictated by our human nature and the motivation that ignites others. A country that people gave three days before capitulation is now standing at 230 days. This is not a fairytale. This, ladies and gentlemen, is Ukraine and its people. Thank you so much. Thank you very much. So it is my pleasure to introduce Andrew Liptak, who's gonna serve as our moderator for today's session. So students, please, if you have questions, he's gonna open it up for questions later on, so have your questions prepared and ready to go. So Andrew, over to you. Good morning, can you hear me okay? Seeing some nods, okay, good. Welcome. We're gonna be talking about science fiction, artificial intelligence, and the future of warfare. My name's Andrew Liptak. I am a graduate of Norwich University. I graduated in 2007 and again in 2009 with my master's in military history. And I've worked as a journalist, historian, and science fiction writer, particularly focusing on the future of warfare and military science fiction, so that's why I'm here today. And that's how I came to know our speaker, August Cole, who is an author who has explored the future of conflict through fiction and other forms of what he calls FicInt, fictional intelligence, as a form of storytelling. His talks, short stories, workshops, and more have taken him from speaking at the Nobel Institute in Oslo to presenting on the future of warfare at the South by Southwest Interactive Festival to lecturing at West Point. And with Peter W. Singer, he is the co-author of the bestseller Ghost Fleet: A Novel of the Next World War, which was published in 2015, and Burn-In: A Novel of the Real Robotic Revolution, which is this book right here. I highly recommend both of them. They're excellent reads.
And he's also written a number of short stories and presentations through a project that he and Singer call Useful Fiction. So I guess what we wanted to talk about today is just the idea that warfare is advancing in many, many different ways. And in some cases, it's been influenced by science fiction. It's certainly been imagined by science fiction authors and creators over the years. And I just sort of wanted to go back a little bit to the earlier days of science fiction, to talk a little bit about how science fiction authors imagined warfare. So what was your introduction to science fiction, and when did you sort of realize that these were authors imagining what future warfare might look like? So thank you for the opportunity to connect back here again at Norwich for a second time. The history of my love for science fiction goes back, more or less, to when I started out. The Cold War era that I grew up in was heavily influencing science fiction at the time, not just books, but also films. My parents took me to Terminator, in 1984 I think, in the theater, which is a little bit unusual, I suppose, but that I think speaks to the enthusiasm I had for that kind of a subject. And that's actually a film that may seem anachronistic today, but it's cast a long shadow over the thinking about robotics, about the machine uprising, about science fiction. That was one form. The other was novels that I read that really integrated this idea of a robot as a soldier, but in a more complex way. You had David Drake, for example, in the 1980s heyday of military sci-fi. But what really got my attention, and what became a kind of source of inspiration, was the sci-fi writer William Gibson. His work more or less helped establish how we visualize what the internet is: cyberspace, as it was called back in the day.
And some of his most recent writing delves into this question of the nature of robotics, not in the conventional Terminator sense, but the notion of telepresence, how we can relate to one another through, for example, robotic systems. So The Peripheral, an adaptation of his novel from about four years ago, is coming out now; I think it's on Amazon Prime. So, you know, from that kind of arc, from Gibson's books to Jim Cameron on the Terminator, I'm always constantly searching for a North Star, trying to understand the problems. One of the interesting things I've always found, just delving into the history of science fiction, is how much the image of a robot has changed over time. If you go all the way back to the 1920s, the pulp era of sci-fi, you have authors imagining these mindless automatons, just raging and killing, and there's all these different depictions. You go forward a little bit, and you jump over to Isaac Asimov, who wrote stories for books like I, Robot, or later The Caves of Steel and Foundation. He was sort of imagining them as useful components of society, or tools that people would use that weren't necessarily threatening. And then you advance a little further down the line, and you have William Gibson in Neuromancer, imagining artificial intelligence as a slightly more menacing, nebulous thing, rather than something that is in the body of a robot. And it's been interesting to see how those depictions have progressed as time goes on and how they are influenced by the technology of the day. Certainly when Asimov was writing his first robot stories, computers didn't really exist, and by the time they started to exist, they were the size of rooms in the 1950s and '60s. So that was certainly a leap forward.
So what science fiction are you seeing today, not including your book, that you see as being influenced by the state of robotics today? Or including your book, I guess, if you'd like. You know, in many ways Burn-In was kind of a pushback against the Skynet annihilation-of-humanity model, but the thing that kept and keeps me and my co-writer, Peter Singer, up at night is less the robot uprising in the Terminator concept and more: what are we gonna do to one another with robotics? We pose enough of a threat that I worry about that more than a generalized intelligence. The point you made about the interwar years is really important for understanding robotics, because in many ways right now we might be in that kind of interwar period. You know, one of the really foundational creative pieces of content is this Czech play called R.U.R., which is Rossum's Universal Robots, and it's about a robot uprising in a factory, where the machines rise up and wipe out humanity. In this Karel Čapek story, he introduced the notion of the robot itself. And so, you know, we're at a kind of centennial point for that play. Exactly a centennial. Yeah, so it's a reminder of how new much of this is, and yet we've been wrestling with the same questions for so long, and we haven't quite answered them, so that makes me think we're asking the right ones. You know, the really interesting and intriguing aspect of the future of robotics is actually going to be less about the anthropomorphization, right? Less about the thing with the legs, the robot dog, and more about how the software that underlies robotic systems shapes how we relate to them on an emotional level. I think it's really crucial. What is the essence of trust?
This is a theme that Pete and I spent a lot of time trying to figure out: how do you know when you can trust a robotic system in your life? You know, when I'm in my 4Runner and I have my radar cruise control, I'm trusting, right, that that system is gonna keep me at a safe distance from the car in front of me. That's a variation on a theme that we're gonna see more and more of: in industrial applications, in the medical world, as we heard earlier, but also, of course, in the future of conflict. You know, the robotic wingman programs the US Air Force is investing in are certainly going to put people into that position of having to trust systems with their lives, not only just in the sense of, is it safe to fly next to this thing, but also, can I count on it to defend me? When you're asking, well, what do I then go read, right, what do I watch? I think actually the books and the films that get at this relationship with machines are the most important ones. It's almost 10 years now since the film Her came out from Spike Jonze, where Joaquin Phoenix falls in love with his phone's operating system, essentially, which is a really very important film for understanding the way we begin to really know ourselves through our relationships with machines. Similarly, there's a novel by Kazuo Ishiguro called Klara and the Sun, which came out, I think, about a year and a half ago, that I would highly recommend too, because it's really about the role that a robot, an android essentially, can play in a family, and the notion of what is human, and what is love and attention, and what do we want from our machines, right? And how does that change society? Because as we consider conflict, as we consider this kind of arc, going back again to, say, 1921 to 2021 to 2022, and then 10 years out from now, 10 years past that, are we gonna be in a position to keep asking the same question again, or are we gonna kind of evolve that thinking?
I think the more we understand software, particularly as it relates to more capable systems like artificial intelligence, the closer we're gonna get to making decisions today that help us avoid the kind of outcomes that we want to avoid. I spoke at a conference along similar lines at West Point a couple of years ago, and one of the takeaways that I had, and I still have it saved somewhere, is that one of the speakers said: I'm not afraid of the humanoid Terminator bots, I'm afraid of the server farms. Not necessarily the things falling over on you, but just the amount of information that they can process and the influence that they can wield as a tool. And that has stuck with me for years now. I guess let's talk a little bit about your book, because you draw on the last, I don't know, 10, 15 years of robotics advances, and your book is very much a different take from the Terminator. Actually, let's back up a little bit further. How many of you all have seen Terminator? Okay, good. It's an '80s film. You can never quite be sure if everyone's seen it. That movie has really cemented the idea of killer robots, I think, in the minds of people, whether they're in the military or artificial intelligence folks. And I'm guessing that over the years you've seen a lot of people assume that this is what the future of warfare is gonna look like: skeletal robots killing people. How have you seen that image get dropped into the conversation? It seems like something you have to work around in order to talk about artificial intelligence, because the idea of AI that's really aimed at killing off humanity isn't necessarily accurate, right? You've seen this in the debate around the ethics of using robotic systems and AI-type systems in warfare.
The Campaign to Stop Killer Robots had a film called Slaughterbots, which came out, I don't know, four years ago or so. That was a very sensationalized look at a pretty gory attack using swarms to target civilians. And it is clearly the kind of narrative and messaging that resonates with a lot of people as they try to understand how these technologies are gonna shape conflict. And what's difficult in those kinds of contexts is that, much like when we see debate on social media using extremes to capture eyeballs and attention, you often actually get further and further away from reality. And many of the bigger challenges, especially those that have to do with ethics and due care in modern conflict, have continuity from the era we're in now to the era we're going into; you know, the law of armed conflict is gonna evolve just as much for online activities as it is for robots, for example. So the challenge then is: how do you come up with a kind of credible vision of the future that you can then start building policy or rules or doctrine and tactics towards? And that's, I think, to some extent a response to this question: you really have to work with the facts, the technological facts, but also understand how people have related to technologies in the past, and how big organizations deal with disruption. So many of these systems, if they are actually implemented, will be incredibly disruptive. You know, one of the bigger themes in Burn-In is the economic and labor-market disruption that's coming not just from robotic machines that handle everyday kinds of tasks, how my food is prepared, for example, how I receive something in the mail. Software is fundamentally changing fields like medicine, of course, as we know AI is already doing, and the law.
All of the communities in American society that have thought themselves immune from disruption, that thought their jobs were safe, so to speak, are just as vulnerable as people in other parts of society too. And that gets back to this question of what is conflict itself in this era that we're entering? You know, is it a van full of swarming robots that are gonna find you in a crowded room like this? Or is it a much more cognitively oriented campaign that induces people to essentially fight one another within a society, so an adversary doesn't have to do that? We have these big meta questions about the nature of conflict, but riven through them all is, of course, this question of not just the hardware itself, but more so the software. You talk about authors sticking to facts rather than sensationalism. So let's talk about your book a little bit, because you take a very different tack towards the idea of a robot. So tell us a little bit about where this came from, and a little bit about TAMS, the robot that's central to the plot here. Yeah, the way I write fiction is not necessarily how everybody else does it. We use a lot of endnotes and footnotes, and this was something we tried experimentally with Ghost Fleet, our first book, because it was about a fairly big idea that we thought was potentially going to be laughed at: China essentially going to war with the US by taking Hawaii. So we said, all right, let's use the same sort of non-fiction research that myself and my co-writer had done in our careers and apply that to fiction.
And so every time there was a technological capability, a robot, for example, that we were imagining China using in occupied Hawaii, or some piece of PLA doctrine, we footnoted that; a cyber vulnerability in the US supply chain that was tactically relevant, we footnoted it. When we took on Burn-In, this was a different kind of book, because it's a counter-terrorism story, and it's really about the nature of American society in the AI era. The FBI agent Lara Keegan is partnered with this robot, which is a biped, a kind of conventional-looking robot in some ways, but we tried to then think, well, how do we make this not another Terminator story, right? So in the non-fiction research that we did, we talked to roboticists, we talked to ethicists, and we worked on really trying to understand not how we might imagine a robot partner would work for a federal law enforcement agent, but actually how it would work in the mechanical sense: things like charging, things like mobility. And then of course, as we relate to these kinds of systems, at a certain point you begin to develop that trust or emotional attachment. And that was a really important and really challenging part of this book in terms of the character arc, right? Because you want your characters to start and end as different people, right? They need to become more interesting, overcome bigger challenges. So how do you do that with a machine? Well, and then not only that, thinking about how the human relates to the machine. And so we really tried to bake that in. And one of the ways we played with that, which reflects the way that many of you are going to start experiencing these kinds of technologies, is the capability that gets added and added and added with access to more and more data, right?
Again, if software is driving the importance and relevance of a robotic system, it's not the thing itself: what data it has access to, and how it's able to process that, is going to be incredibly important in changing the nature of your relationship with it. Now, TAMS itself we actually made, in the book, a pretty small robot, kind of diminutive, almost childlike. And that was a very deliberate choice too, to push back against this sort of Terminator, Arnold Schwarzenegger paradigm: let's really think about how you might want a robot to operate in the real world. And in the research conversations we had, you'd want it to be small so it could fit in the trunk when you weren't using it, or crawl into spaces that a human couldn't necessarily go. It should have limbs that are articulated in ways that would function like our skeletal systems do: a torso that can spin around, full articulation on a shoulder, for example. And the way that we sense and perceive with our five senses, those would be kind of analogous senses present in this robot, but there would be other capabilities too, right? To be able to see behind you, to be able to look into the Wi-Fi spectrum and sense what's around you, to see through walls in that way. And so that, to us, began to really answer the question in a very practical sense of what is a robot actually gonna do for us, right? Versus what we imagine it to do, right? One of the things I'm always frustrated by with science fiction is when the author or creator has decided to use a robot as sort of a stand-in for just a combatant. And they miss their shots, they don't see things.
And it's always interesting to see how much we imagine robots or robotic systems as having human-like abilities, and human-like limitations. Like you just said, we can't see behind us, we can't see into the Wi-Fi spectrum. So I'm curious: how do we balance the idea of making them appealing so that they're not creepy, if everyone's seen the robot dogs and started to feel unsettled by them, versus something that is fully functional on the battlefield or operating alongside people? With TAMS, which is an acronym for Tactical Autonomous Mobility System (we spent a lot of time, by the way, thinking of the right acronym for this robot; originally it was called August, of all things; not a robot), one of the tests we wanted to apply was the kind of kindergarten test, right? Would a child be comfortable holding its hand without being fearful? So that's a really interesting design question, because these sorts of systems, and this is true in the military sense, have to exist in a civilian context as much as they do in a pure military application. It's really easy to lose sight of that in a very narrow view. But if you do step back and think about what is the problem I'm trying to solve with this system, which is very much the challenge that we undertake when we do these fiction projects (what is the thing I'm trying to address?), you get, I think, closer to the actual reality. Similarly, the way we perceive military robotics means understanding how challenging some of the engineering thresholds are around size, weight, power, heat, et cetera.
That may mean that we are going to be in an era where robotics are more and more disposable, and they're actually quite small, for the reason that bigger things present easier targets. We're already in an era where, if you can see something, you can destroy it. So think about the hiding and seeking aspects of the modern battlefield. And then as well, these are systems that are gonna have to be acquired, right? There's a bureaucracy behind that. And that defense industrial, that acquisition side, is probably really hard to write a novel about, although Liu Cixin, the Chinese science fiction writer, wrote Ball Lightning, which is actually about defense acquisitions in China. It's a very good novel in this regard, about a superweapon, not a robot per se. But the point is, you have to also understand the context from which these things come, right? And being able to crack that almost boring part of it is, I think, as a sci-fi writer, as a military professional, in your case as students and beyond, gonna be really crucial, because all these things exist in an ecosystem. And it's quite easy, in the cinematic sense, or the video game sense, or even in science fiction that isn't really rooted in this kind of usefulness, to just kind of elide over that. And I think that often isn't as helpful as we need it to be, because we really do need new ways to think about this. And it is okay to play video games to think about the future of combat. It is okay to read graphic novels, to write them for that matter. I would encourage you to be doing those sorts of things. Yeah, Terminator can be good homework. Well, sort of good homework.
One of the things that I've been endlessly fascinated by, especially because presumably at least a good portion of you are going to be going into the military at some point: there's a very good chance that you will be operating alongside some sort of robotic system, whether it is a drone or, maybe not one of the Boston Dynamics dogs, but a knockoff or a similar version. There was a really interesting story that I read about a year ago. It was published in Slate, part of their Future Tense series, a series where they partner with Arizona State University to publish science fiction about something that's extremely relevant. This story was by an author named Justina Ireland, called Collateral Damage. It is about a squad of soldiers who are basically given a robotic system, and they're testing it out because the military has begun to vet whether or not these are worth acquiring. And it just does not go well, not because the robot malfunctions or anything. It actually performs perfectly, but the soldiers alongside it just do not trust it. They think that it's watching them, that it's reporting on their every move, and, spoiler alert, they end up basically destroying it. And one of the things that I found endlessly fascinating is that there is a level of trust that is needed in order to operate alongside these robots. And it's not really something that science fiction has ever really talked about. Usually our AI is an antagonist, something that you're fighting against. In your book, and I think in a couple of other stories, people are actively trying to work alongside robots. This seems like a really good source of human drama. How do you see these types of stories existing or coming into existence in the future?
Or have you read any others that are along those lines? Yeah, this is a great tension. And these kinds of contradictions or inherent tensions make for great narrative, right? And in Burn-In, one of the challenges is not only do I trust this machine to save my life or to help me with this investigation, but am I training it to become so good that it takes my job, right? That's also a really interesting factor in the defense realm too. When you're trying to consider the trade space that anybody has in their 24 hours in a day, many of these systems are not intuitive. Many of these systems are incredibly complicated. They break a lot. How do commanders, if you're a company commander or even a platoon commander, assign and assess the amount of time, for example, that you should be allocating to learning a new system versus working on marksmanship or fitness or just recovery or whatever else? This is an ongoing tension at this moment right now, right? With some of the small robots that are being used in urban operations, for example, there is a lot of skepticism, to your point. And many of the practices that we've seen with increasingly intelligent software from the civilian world and the work world, particularly during COVID, about persistent surveillance of remote workers, et cetera, are really gonna be problematic on the battlefield and really problematic for military culture. And so I think it's actually incumbent on us to start thinking about those things now, as that short story Collateral Damage did. Where are our left and right limits? What are the guardrails we want? What is the essence of the experience of service, what do we need to complete the mission, and how do we keep the machine from getting in the way of that? 
I don't mean in the literal sense, like it ran out of batteries and wasn't useful, but how does it change our command culture? How does it affect the human dimension, as we just heard, right? We can write novels and make movies and video games that are grand in scope, but if we focus too much on the technology and don't spend enough care exploring the human experience with it, then we're not really doing anything that's gonna be much help in trying to figure out some of these hardest questions. Yeah, and science fiction can do a lot more than just inspire readers. It is a good way of, as I've always put it, prototyping the future. It costs millions of dollars to actually make a robot and field test it and everything, but it costs a couple hundred bucks to hire a writer to write a story that will work out some of the implications. What do writers need to be doing today if they wanna write convincingly about robotics and not go down the Terminator route of evil, you know, it becomes sentient and decides humanity is worth destroying, because we've seen that story in movies many times? How do you tell something that's relevant, and what do you need to draw on to make sure that you're not just going down old tropes? Yeah, trying to be original in writing about robotics is really challenging, and it's therefore, I think, really interesting to take on. The challenge for a writer today who's, let's say, writing science fiction or fiction about the future is that we're living in the middle of a moment where science fiction feels like everyday technology, right? So I think you actually have to embrace that. 
One of the ways, though, that I like to approach some of these sorts of tech trends, whether it's, again, radicalization through social media, or this kind of employment question of everyday robotics and AI, is to crank the dial up to 11, right? I think that is a really effective way to take something to an extreme and then see what that vision looks like and whether it feels credible or not. Secondly, I do think there's a fundamental level of, and I'm a history major, right? I'm not an engineer, but doing a fundamental level of research to be familiar enough, and then, if you're not able to get a grasp on something, finding somebody who is and being able to reach out to them. One of the things we constantly rely on in our books and in our short fiction, Peter and I, is close readers: people we trust who can give us essentially a thumbs up or thumbs down on whether something passes the professional giggle test. And sometimes we'll write something that doesn't, and we have to go back and start over again. And that's what we want to have happen when we're designing, still building the story, even after having written it. Similarly, I think, too, when you're trying to think about the things in your life: what are your pain points, right? What are the things that are hardest for you, emotionally, physically, et cetera? And then try to understand, where does technology alleviate that, or how does it make it worse, right? Think about some of those everyday aspects of your life, the persistence of your phone, and sometimes when you wonder, is it listening to me or not, right? Again, to the point of the Collateral Damage story or others like it. What if you took that and turned it up to 11, and it was? 
What if you took some of the paradigms about things like valor, looked at past military records of service, and began to look at those historical examples, because I do believe history is one of the best tools that a sci-fi writer, a FICINT writer, has. And began to inject robotics and things like that in there. So you can really ask these bigger questions: in the era of an infantry formation that is, let's say, more than half machine, what is valor, right? What if the most charismatic person in your unit is an AI, to use the Spike Jonze film as a reference? I did a project like that for the British Army. It was turned into a short story to explore that question. And I thought it was really important to begin to understand, because that's a question that, if you don't wrestle with it today, you're gonna get caught out by it sooner than you think. Last question, then we'll turn it over to you folks to ask your own. What books or stories have you read recently that are well worth these folks reading when it comes to robotics and AI? Yeah, I would recommend Klara and the Sun, which I mentioned earlier, by Kazuo Ishiguro. Again, it's not a military story, but it's very much important for understanding the human dimension of what it's like to live with machines and software in your day-to-day life. Similarly, my antidote to the kind of Terminator trope is, I keep saying, software, software. Earlier, there was a book that was put on the screen, The Chaos Machine by Max Fisher, about how algorithm-driven companies like Facebook, like Google, YouTube particularly, are shaping our cognitive realm. I think that's actually incredibly important, because that same design impulse, that same engineering pipeline that's radically reforming our civilian world, is gonna do the same thing to military robotics. 
And many of those things we must not see repeated in the realm of future forces that are using these systems. All right. Who's got questions? Come on up. Hello, my name is Gabriel Williams. I'm a senior here in political science, and I really enjoyed your talk, and I had a question. So you talk about this dynamic moving forward where human-machine teaming is... speak up just a little bit. You talk about moving forward, this dynamic where human-machine teaming is gonna be really important in the military and defense industries and also in the civilian sector as well. And you touched on the skepticism component, right? And my question is really, kind of on the defense side, but also from a holistic perspective: how do we codify the planning process of integrating some of these AI components, these cognitive systems, into these teaming environments? And how do we overcome that gap of skepticism by projecting this persona of having AI that's governable, that's responsible, that's ethical? How do we bridge that gap between the humans and the machines to create that human-machine teaming environment? That's an awesome question. And I think, fundamentally, being able to experiment as much as possible in an applied sense, whether it's at the National Training Center or taking things into operational deployments, is essential. But within that, though, you have to wrestle with failure, right? How do you, as a military institution, how does Congress particularly, address these kinds of phases like the one we're in, where we live in a zero-defect world, right, in the political sense? And so being able to create space to make mistakes that aren't catastrophic for careers, for programs, for budgets, I think, is essential. Because through those failures, you begin to understand what works and what doesn't, in a believable, credible way. 
At the same time, I do think a level of skepticism is always healthy. It's inherent to our system, I think, and can be really smartly applied. But fundamentally, it's a culture question about how we're allowing people to experiment with these systems. Can you imagine, for example, and I've joked about this before in the Marine context, but imagine at Norwich, on arriving, you're given a terrestrial drone or a small, like, palm-copter type thing. And your job, for your first year, is to modify it, to fight other students with theirs, so that you're understanding and living with that robotics in almost a Tamagotchi kind of sense. I think that's actually kind of a profound cultural step that would be really interesting to see, because you begin to have the kind of hands-on practice that lets you understand what these technologies can actually do and what they can't. In bridging the gap, there are two things that come to mind for me. The first one is seeing, or living alongside, these sorts of technological systems in your everyday life. Look at, you know, there's a big experiment going on right now with Tesla. I mean, you have the cars driving all over the place, learning as they go to help better their self-driving systems. So the more people who drive those cars begin to understand intimately what those systems are able to do and what their capabilities are, because they are using them every single day, whether it's being able to take their hands off the wheel for a couple minutes, or, hopefully not, falling asleep and not crashing. The other thing that comes to mind is canine units. 
Like, you are working alongside another system, and every day you are training with it and working alongside it, and you become very familiar with how the animal works and what it is capable of doing. So. I really appreciate it. Thank you so much. I think we've got time for just a couple more, so we'll try to run through them quickly. Yeah, hi. I'm Lyle Goldstein. I'm with Defense Priorities in Washington and also work at Brown University. But it's a great honor to listen to August Cole here. I'm a huge fan of Ghost Fleet, and I would recommend it to everybody in the audience; this is a must-read. I focus on China, and if you're interested in what a US-China war would look like, which is something hard to imagine, I think Ghost Fleet is probably the best possible examination of that topic. So I really strongly recommend it. A couple of questions. You said your new book talks more about the labor market, how disruptive this is. But of course, for military organizations it's also disruptive, and you were hinting at that. So I wondered if you could speak to that; in fact, in the latest Top Gun film, I think that's a major theme: is there a future for manned carrier aviation? So how should military leaders cope with this inherent problem that pilots are always gonna be uncomfortable with drone operators and dislike them, this idea that the machines are putting a whole lot of jobs within the military at risk? And this huge transformation is probably one that will involve a lot of bureaucratic tumult or chaos. So could you speak to that issue? And then also... Let's just do the one, because we have a couple students. Yeah, I think the way the Top Gun sequel slightly played at that point is a really apt, very public exposition of this tension. 
And there is no alternative: that kind of autonomous and unmanned capability, particularly in aviation, is coming, and especially, I think, in the maritime, naval context. Will you see a transition that is slow-rolled, perhaps for cultural or parochial reasons? There's a good chance. The risk that you face, though, is that an adversary is not encumbered by that same tradition, if you wanna call it that. I think, similarly, in the kind of ground and infantry realm, you're gonna see a lot of tension around that too. This notion of how we look at base realignment and closure processes is probably gonna have another wave in the late 2020s or early 2030s as we begin to wrestle with this question of, again, the economy's changing, you still have to pay for entitlements, how big of a force can you support? And there's a good chance that that trade space is gonna come from essentially a smaller formation, a smaller end strength, in favor of robotics. But as for whether we're ready for that reality, I don't see a real desire and appetite for it, because it's so politically and culturally challenging. But that doesn't mean that's not what the future is, right? In fact, that's often an indicator that that may actually be what's coming. Great question. All right, next up, this side, and we'll do you over there. The rest of you who are in line, meet us up at the top, just outside the auditorium; we'll have a one-on-one chat. Hi, Joseph Bornes. I'm a senior here at Norwich in political science. You touched earlier on bridging the gap between humans and AI. What do you think that gap is gonna look like in the coming years, when kids who have grown up with AI systems in the home, such as Alexas, are asking them more questions than their own parents? 
What's that gonna look like as those systems go from something like an Alexa and maybe develop into more of, like, the fictional Jarvis system from Iron Man? What is that gap gonna look like as those kids come into our position and then into the adult world in the military and civilian sectors? I think there is a real kind of cultural chasm around that. And being able to design in a way that allows a generation to rise into roles of responsibility, so those technologies are waiting for them, is really important. One of the challenges is that a lot of that tech, though, is gonna come from the civilian world, or it's gonna be on the leading edge in the civilian world first, probably not within the government space. So one of the challenges, and this, especially when you get to AI and data, is incredibly thorny, but that may be part of what you end up seeing, right? If the Army, for example, is gonna have a tactical or even a kind of TOC-level assistant that is akin to something like a super Alexa, a Jarvis, whatever, are they gonna have to go to Amazon, because Amazon is the only company that has the data set and the kind of natural language processing capabilities to do so? And the people. And the people. Great point. So I think you're gonna see that gap persist, because the demand signals, when they come from below, typically aren't heard, right? But it's sort of incumbent on the generation coming up, people like you, to communicate enough or to go make it happen yourselves, so you're not waiting for people who don't wanna take a pilot out of a plane, for example. That, again, is a healthy tension, but given the nature of the threat environment, given the nature of what countries like China are doing, you can't fall in love with that tension and dichotomy too much. You have to actually think about the battlefield effect first. Thank you. All right, next, and then come chat with us up top. 
Good morning, my name is Sean Bassey. I'm a senior in political science here at Norwich University. In the discussion here, we talked a lot about bridging the gap, about negative connotations of AI, particularly stemming from earlier iterations of AI being terminators, killers, and so on and so forth. But as my peer pointed out, younger generations are now seeing AI, or the concept of AI, in a more positive light, particularly talking to Alexa for all your answer needs, your Siri on your phone, the algorithm helping you pick up exactly what you needed at that exact time. What do you think is the risk of over, what's the word? Over-optimism about the integration of AI rather than the inherent concern over it? So, I grew up with it, Siri's my best friend, she never did me wrong, so why don't we put as much AI as possible into as much military technology as possible, and there's no way this could possibly go wrong. What are your thoughts about that opposite-effect risk? As someone who thinks about what could go wrong all the time, that's a great question. One of the biggest risks is that we lose our sense of agency with technology, and the nature of many of the most powerful AI systems in our everyday lives is that they're effectively invisible. The systems that recommend your next video or keep spam out of your inbox, those kinds of capabilities exist throughout our experience with the network, the cloud, whatever you wanna call it, and we no longer, especially in an app-driven world, understand computing in a way that is directed by us; we're kind of choosing from menus instead. That's a different era of computing, at a kind of cultural level. 
So I worry about people losing the ability to understand that there is a choice about a given capability, let's say a super Siri, and the risks that go with it; that we can say we don't wanna do that, that just because that's how it works in this part of our lives doesn't mean it should carry over into the defense realm. I think having that sense of agency is incredibly important with technology. In the specific sense, what are the concerns about being over-reliant on AI? A good one is looking at the problems we have getting good data sets: data that's accurate, clean data that's not biased. This is an incredibly challenging problem that, no matter how much money some of the biggest technology companies are spending on their most profitable products, they still run headlong into. And I don't think that we're gonna be any different when we look at the defense realm. So I think being very aware, and again, having that sense of agency about what we do or don't do with technology, is an attitudinal or cultural thing that's so important. And I think it actually is a competitive advantage. It's a little bit of skepticism, but a lot more about, again, this notion of agency. That'll do it for us. We've run out of time. Thank you very much for coming, and enjoy the rest of the chat.