So, good afternoon, everyone. Welcome to the 28th Military Writers Symposium. My name is Dr. Travis Morris, and I have the privilege and honor of being the director of Norwich University's Peace and War Center as well as the executive director of the Military Writers Symposium. We're glad that you're here. And if this is your first session of the day, permit me to recap what's transpired and what's going to happen over the next couple of days. This morning we started off with our keynote: we had the Israeli Ambassador on campus speaking about the intersection of artificial intelligence, robotics, and security, particularly between nation-states. We also had John Abele, co-founder of Boston Scientific. And I will say, if you're not familiar with Boston Scientific, you should Google it; we're thrilled and honored to have him here. We've had sessions that look at the intersection between science fiction and robotics and what that means. And we had a session from August Cole, one of the globe's leading futurists; he and his co-author Peter Singer have written books that have been read around the world in multiple languages.

The Military Writers Symposium is the only event of its kind in the United States, and our charge, what we've been doing for 28 years, is to select topics that are a relevant security concern for the 21st century. Last year we focused on the Arctic. Before that, the weaponization of water. This year we're focused on the intersection of artificial intelligence and robotics. And for the students in the room, whether or not you're here because a class is here: AI and robotics impact your life, whether you recognize it or not. If you have a cell phone, the world in which you live is centered on AI and what that means, both positive and negative. That's why we're here having this discussion over the next couple of days.

This is a team effort. What does that mean? It means that in order to adequately discuss this topic, we brought in subject matter experts to help us think through this security issue from multiple angles. For the students in the room, particularly those who are going to serve in some capacity (security, defense, law enforcement, policy), you need to recognize that this is the world in which you're going to be living, in which we already are living. The topics that some of us here are imagining describe the world in which you are going to operate. You're going to be making decisions, and you're going to be leading. That's important. That's why you're here, why the symposium exists, and why we're having this discussion over the next couple of days.

Outside in the lobby, there's a program; make sure you take a look at it. You'll also see three reports out there. Those were done by your peers: research by student fellows associated with the Military Writers Symposium. It wasn't done by staff, it wasn't done by faculty. It was done by your peers. It's important to know what your peers are thinking about and doing, and what's available for you on campus. And one of the things that Norwich has been doing for 200-plus years is thinking about what's next, what's ahead, and how to prepare you to lead through it.
And this is what's on the docket for you. If you're thinking that cyber or AI is not something you should be thinking about, whatever your major, it certainly is. That's why we're here today, and we're thrilled that you're here. But we're also thrilled that our guests are here. One of the things we do at Norwich is try to include students in all that we do. This goes back to our beginning; it's what Partridge did as he marched students around the Green Mountains and the White Mountains. Now we do it digitally and overseas, and we want student voices to be heard. The other thing I want us all to realize is that even though we're having this discussion today in English, this discussion is being held across the globe in many languages, from Spanish to Japanese to Korean to Arabic to Swahili to German. It's a global conversation. We've had students read some of their essays in another language; we've had Ukrainian, and we're going to have others over the next couple of days. We also have students who have written pieces sharing their reflections on security.

One thing to recognize is that talking about AI and robotics involves cross-cultural competencies, so that is not irrelevant here; we should be thinking cross-culturally. But we should also talk about the psychology of AI. How does it interact with your emotions? What happens when you have robots and artificial intelligence that can read your emotions better than you can, and portray emotions as well? And part of those emotions, as we know from our veterans, has to do with trauma, and trauma has to do with PTSD. So when we're talking about these variables, we want you to know that AI is not just about programming. It's not just about science fiction. It's not just about the DOD. It involves a wide spectrum of disciplines and conversations. So we're going to ask our student reader to come on up and share his reflections, and after that I'll turn it over to the moderator for our panel today. The floor is yours.

Good afternoon to everybody who is here in person and online. First and foremost, I want to thank my parents, who have always supported my writing. And without further ado, here's my piece, entitled "Coming Back."

His hands couldn't stop shaking. The ring of battle had long since ceased to thrum in his ears, leaving him calm but not at peace. He was fine. More than fine, he was alive. That was more than half a squad could say. If only his hands would stop shaking. He thought back to all the seminars, the words of wisdom, the briefs meant to prepare him for the vital moments his life would rest in his hands, so like the rifle that took more of his life with each shot fired. And with every hand clapped on his shoulder, he'd believed more and more that it was possible to get through this unscathed. And he had, right? The words of every combat veteran who had warned him flooded the panic that could no longer threaten him. In a span of minutes he had come to understand that combat was the easy part. Coming home, pretending that everything was okay, smiling at the loving wife and the neighbors who encouraged him to give the enemy what was coming to him as if they had a clue: that was the hard part. How was he supposed to go back to watching ball games with his friends after firing rounds into flesh?
How was he supposed to watch his daughter in her Christmas play when he saw dead friends waving from the wings? Plenty of people came back from this. He knew that. He could do it. And didn't he owe it to the friends who couldn't make it back from this, the ones who would never get the chance? He would, he promised himself. If only his hand would stop shaking. Thank you.

So, a question to you about the future: is it possible that that could come from some sort of robotic perspective? We don't know, right? 100 years from now, 200 years, we don't know. But these are some of the things we're thinking about. So it's my honor and privilege to turn the floor over to Dr. Brian Bradkey of Spotlight Labs. Welcome back to Norwich. Thank you so much for taking the time to be here, and the floor is yours.

Hey, thanks, Travis. I appreciate that. Good afternoon, everyone. My name is Dr. Brian Bradkey. As Travis alluded to, welcome back: I was faculty here in mechanical engineering for six years, and I truly miss it. It's nice to see some of my colleagues; good to see you again. And if any of my students are still sitting in the audience, I'm sorry you couldn't get out in four years. Hopefully there aren't any; I haven't seen them yet. It is my privilege to be moderating this panel. My background, apart from Norwich: I left Norwich to join Spotlight Labs, a company that is building biomedical sensors, systems, and algorithms to bridge the man-machine gap. That is: how do we bring the physical machine and, in this case, the soldier together to be a more effective fighting force? As part of that work, we're sponsored by DARPA, and that leads us into the topic for today's panel, called Secret Science: DARPA and Unimagined Technologies.

I have the distinct privilege of introducing Sharon Weinberger, who's joining us from the Wall Street Journal, where she is the national security and foreign policy editor. Previously, she was the Washington, D.C. bureau chief for Yahoo News. Prior to that, she was an executive editor at Foreign Policy magazine and, earlier, the national security editor at The Intercept. Her third book, published in 2017, is called The Imagineers of War: The Untold Story of DARPA, the Pentagon Agency That Changed the World.

So today, originally Sharon was going to open with some comments, but I think we'll just jump right into discussion. I have a few canned questions here, but I'd like to invite our guests here in the audience, if you've got questions, to please come down to the microphone and participate in the discussion, because that's a lot more engaging, a lot more interesting, and it certainly keeps us on our toes. Ultimately, we want to discuss the things that you're interested in as well as the things that we're interested in. So I will get it started. I read a bit of your book, The Imagineers of War. DARPA, which actually started as ARPA, originally got its start after Sputnik, in the space race. And then after it handed off those responsibilities to the newly formed NASA, it almost went extinct, but it found a way to maintain relevance. I was wondering if you could talk about what it was like in the early days at DARPA. How did they get their start?
ARPA's founding charter was essentially one little sentence that said this agency will do those projects as directed by the Secretary of Defense, which initially meant the space race. Within a year, that was successful. We actually weren't that far behind the Soviet Union; we launched the U.S.'s first satellite into space. And then very quickly, and this is one of the reasons I call it an untold history, this new agency lost its mission. The military space programs were taken back by the military services, and the civilian space programs went to the newly formed NASA. So you suddenly have this agency without a clear mission. It had some leftover programs, and it had a very ambitious official who looked at how they could use this new agency and saw a brewing conflict in Vietnam. That's not a part of the history we hear about today, when we think of DARPA as the Pentagon agency that invented the Internet, stealth aircraft, and drones. But in fact, from the early 1960s to the early 1970s, basically the biggest and highest-profile mission area DARPA was involved in was the war in Vietnam. At one point it had several hundred personnel located in both Vietnam and Thailand, trying to come up with counterinsurgency technologies. So it was involved in that fight as the deployed counterinsurgency agency for the better part of ten years. And in fact, getting to the theme of today's conference, some of the work that we most associate with DARPA today, like drones, came out of its work in Vietnam. To some extent, even computer networking, the Internet, came out of the early work that DARPA did in Vietnam, working with computers to help with targeting along the Ho Chi Minh Trail. So a lot of what we associate with DARPA today actually came from counterinsurgency.

And it's kind of an interesting inflection point here, where as a nation we're coming out of 20 years of insurgency and counterinsurgency work and pivoting back towards great power competition. When the US came out of Vietnam in the early 1970s, DARPA had to rethink its strategy: how do we take these technologies that we developed for counterinsurgency and make them relevant for the battle against the Soviet Union? I think one of the challenges facing scientists and technologists who work in and for the Pentagon today is how to take that 20 years of work on counterinsurgency and make it relevant for this pivot back to great power competition.

When it comes to robotics and artificial intelligence, one of the reasons I was so interested in DARPA was to ask a question: we often consider it one of the most successful research agencies around, certainly the most successful military research agency, so what makes it successful? What I came out of the interviews for the book thinking was that there are really two questions to ask. First, what is the national security problem you're trying to solve, and is that an important problem? There are lots of problems you can solve that are small problems. And second, is the solution you're proposing actually going to solve that problem? Those were the biggest lessons I took away from interviewing DARPA researchers.

Thanks for the historical context. Fast forward now 50 years to today.
Has there been a great evolution or change in the way that DARPA operates? Do they still have the same mission and strategic role?

They have the same fundamental mission. I think there's a lot of mythology around the agency. To give you a thumbnail sketch of what DARPA is today: it's an approximately four-billion-dollar agency. It has 160-some technical personnel, usually program managers who come in to run programs for limited periods of time, usually anywhere between two and five years; there are always exceptions to the rule. And it is project-based. They will start a project, take it to completion or failure, and then it is up to a military service, usually, or some other place to do something with it. What was so interesting about the research is that if you look back in the archives, in 1958 the entire personnel roster of DARPA fit on something like an index card, like the one you have there. Today there is, or at least there was (I don't think they actually print it anymore), a phone book of DARPA personnel, because it's not just those 160 technical personnel; it's also the support workers around them. So it is still a much more nimble agency than other parts of the Pentagon. What they often like to say is, we have the freedom to fail; you don't learn unless you can fail. They certainly have more ability to move money around, but they aren't quite as small and agile as they like to believe they once were. And I think one of the biggest problems they're facing today is: how big should the agency be? How much money should they spend?

I think the other major change we've seen, and this is one where I sometimes get a lot of criticism from current people at DARPA, and I'm happy to hear that criticism, but the research I did was the research I did: they have a lot less access to the senior leadership of the Pentagon than they once did. Back in the early days of 1958, DARPA was located in the E-Ring of the Pentagon, on the same corridor as the Secretary of Defense. They were getting taskings directly from the President, directly from the Secretary of Defense. So for instance, speaking to robotics and artificial intelligence: ARPANET, which was the predecessor to the Internet, came in some ways out of direct taskings from the Pentagon's senior leadership. Now DARPA sits in a very nice building in Northern Virginia. Certainly the director has communications with senior defense officials, but I don't think they have quite that connection. So what happens when you're somewhat separate from the Pentagon leadership? On the one hand, you have more freedom; you can do interesting projects. On the other hand, if you don't know the major national security problems that senior leaders are facing, whether in the Pentagon or the White House, you're defining them on your own. And I would argue, and I've had pushback to this argument, that the problems they're trying to solve today are often less important. I know that sounds terrible, and I'm a big fan of the work the agency has done. But take it back to the early days: they were trying to get us into space; that was the number one thing in 1958. They were trying to win the Vietnam War.
That effort was what I think one DARPA director called a glorious failure. And what he meant was: yes, it was a failure, but it was glorious in terms of its ambitions. I think DARPA works on very good problems now. It does very good work. It is perhaps less ambitious than it once was. Those are the changes.

Yeah, thank you. So, as a small business owner bringing products to market, I've noticed a stark difference in how private markets do innovation and funding, with seed capital and venture capital. They seem to have an attitude of making a lot of small bets, knowing that 90 percent of them are going to fail, and letting the few really good ideas flourish and turn into products, companies, et cetera. Over the last few years, we've seen the Department of Defense try to do the same thing with the Defense Innovation Unit, AFWERX, and other organizations of that kind. Has DARPA been successful at that same kind of model? Are they trying to adopt more of a private equity model, or are they sticking to the more bureaucratic government model?

So what DARPA has long argued is exactly that. I forget the exact percentages they use, but the idea is: if 90 percent of our projects fail, and that 10 percent, or even that 1 percent, is ARPANET, which led to the Internet, or stealth aircraft, then that big success makes up for all the failures. And I think that's a good philosophy. DARPA leadership often doesn't like to be compared to these other organizations; they look at them as encroaching on their territory. I think one DARPA director told me, we've always worked with Silicon Valley; we don't need all these other new agencies. That's an interesting argument, and they may have some points there. But here's the thing I always point out. The other thing DARPA often likes to say is, we don't remember the failures. And that's very dangerous. What they mean is that they want to be able to keep trying; in hypersonics, say, you've failed 10 times, but the 11th time will be a success. My counterargument is that it's often really important to know why you failed before, so you don't make the same mistake. It's important to know why something fails.

And this goes a little bit into the area you work in. Back in the early 1970s, the CIA was funding psychics and parapsychology: the idea that you had people with ESP powers who could sit in a room, imagine a Soviet sub base, and draw it. The CIA was very excited about this work, so they invited the DARPA director over to brief him on it: would DARPA take a look and maybe get involved in funding it? And so DARPA sent out a program manager, George Lawrence, who had been one of the originators of U.S. government research into biofeedback, the idea that you have sensors that give information that can be fed back to the person; I'll actually let you talk about that a little bit. In the 1970s, this was considered kind of a hippie-dippy area. So they were like, who better to send out than George, who works in these hippie-dippy areas?
And he can go meet the psychics. So he went out to view this CIA-funded work, actually out in Silicon Valley at SRI, the old Stanford Research Institute, where a couple of scientists were doing work with psychics, including Uri Geller, whom people of a certain age will remember bending spoons on the Johnny Carson show. Anyway, George went out, took a look at this work, and pretty quickly decided that Uri was a showman and a magician. Uri still says he helped the CIA, but I'll let that debate just be out there. And George came back and reported: yes, it would be very nice to be able to imagine Soviet bases, but this isn't the way you can do it. What George actually took out of that was another program: okay, suppose you could move things with your brain; how do we actually do that? And from biofeedback, he then originated what became the US government research program in brain-computer interface, which some of you may be familiar with. I mean, this was five decades ago. We're still not there yet, but you have people like Elon Musk with Neuralink trying to link the human brain with machines. Can you have sensors, whether on the head or implanted, that will help you move a computer or a device? I think the way they imagined it back in the early 1970s at DARPA was that you would be able to operate an aircraft with your mind. That's going to be a ways away. But the point being: if you want brain-computer interface, computers are probably going to be a more successful route than psychics. Knowing why something works, and making it scientific, is how you choose the right programs if you want that 1% to succeed.

Yeah, absolutely. And that is a challenge, right? How do you have the foresight to look at a fledgling program and decide whether or not it could be successful, and then whether or not to fund it and continue to support it, so that it ultimately does become successful and transition? That's a struggle, too. So you're absolutely right. And in terms of emerging technology: I know the title of this panel was Secret Science, and I know everybody would love to know about all the secrets we have going on. I'm sure there's at least one UFO pilot in the audience; I don't see him here, but there is one, and you can find him afterward and ask him about that. But when it comes to DARPA, everybody has this idea that they're working on some incredible technology that's way out in the future, that we haven't even thought of yet. What do you see as coming next? Where do you think the state of defense innovation and technology is ultimately going to go? Is it propulsion? Is it aircraft? Is it just AI robots that can go fight our wars for us? From your experience, what do you see coming next?

You know, a big focus of DARPA, and now the Defense Department writ large, has been hypersonics. I've interviewed a lot of skeptics over the years about how useful hypersonics can be, but I've also seen some convincing arguments in recent years. I think that is where you're seeing a lot of progress. You're seeing a lot of funding go in.
It is actually one of those areas that has become very classified in recent years, which makes it very difficult, at least as a journalist on the outside, to say whether these programs are successful or not. I don't think we will know for some years. When I finished the book on DARPA back in 2017 (I think the writing and research were done in 2016), I thought their Biological Technologies Office, which was, again, reviving this work on brain-computer interface, was going to be the big area, at least of investment. Hard to say now. Where goes the Department of Defense, there goes DARPA, so I don't know how much they're focusing on that now; I think less and less. But this question, and this goes to the artificial intelligence and robotics themes: what problem are you trying to solve, and is that technology capable of solving it? Back in the early 1960s, the DARPA program manager who started the program that became ARPANET and the Internet, J.C.R. Licklider, wrote a very famous essay on man-computer symbiosis. Licklider, the godfather of the Internet, basically said: I think we're going to get to artificial intelligence someday. Maybe he thought it was pretty far away; I still think true artificial intelligence may be pretty far away. But he said, you're going to have this in-between period of what he called man-computer symbiosis, of working seamlessly with machines. That's where his idea of computer networking came from. So is that brain-computer interface? I don't know. I still think there is a lot we don't understand about the human brain. But could we see usable devices? Some people argue they're coming; I don't think there are devices yet that are particularly useful. But that's the question I would ask when I look to the future. If you want to know where things are going: what is the national security problem we're trying to solve with artificial intelligence, with brain-computer interface? If you can answer that, you have a better idea of where we're going with the technology.

Absolutely. I mean, one of the biggest challenges we face right now is recruitment: manning levels across the services and shortfalls in that regard. So I certainly think if you asked anybody at the Pentagon what's keeping them up at night, it's China, Ukraine, and probably recruitment and retention, which then leads into: how do I build myself an autonomous fighting robot? Now, I'll let the next panel discuss that in the next hour, when we get to Arming Artificial Intelligence. But the question I wanted to ask is, again, about this kind of secret science. Assuming that DARPA is working on technologies that are presumably secret, because we don't know about them yet: as a scientist, secrecy hurts your innovation, right? Not being able to freely communicate, to listen to other people communicate their ideas and poke holes in your hypotheses, and then work really hard to advance and defend your ideas. Do you think that operating in secret as a defense agency has held us back at all in the defense industry?

You know, what I hear from the DARPA directors and program officers I've spoken with is that you want to keep something secret if it provides you a capability that's an advantage over an enemy.
That would be the justification. I can only speak to what I've seen as a journalist looking at an array of science and technology programs. It's possible we're keeping technology programs classified for that reason, and whatever I personally think, that's a coherent argument I can buy into: we have a hypersonic technology that gives us a strategic or tactical advantage, whatever sort of advantage, and we want to keep that secret from our competitors, from our enemies. Where I've seen a lot of damage done is when science is kept secret, because that is so much at odds with the way science is conducted, with peer review and publication. And this is where it gets into brain-computer interface. There was a program at DARPA in the early 2000s called Augmented Cognition, which you may be familiar with, and which was exactly that: you're going to put sensors on people's heads and give feedback to them. The science, from what I heard from people who worked on it, was just a mess. So you need to get that out there. One of the most disastrous DARPA projects of all time, which we talked a little about over lunch, was back in the 1960s, when the U.S. government realized that Moscow was irradiating the U.S. Embassy there. This may be familiar to those of you who have followed the Havana syndrome controversy. The CIA asked: why are they doing this? Why are they irradiating the U.S. Embassy? One of the theories at the time, in the 1960s, was that they were doing it to somehow affect the brains of CIA personnel and embassy workers in the building. So the CIA asked, and eventually that translated down to DARPA, that DARPA start a top secret project to look at the scientific effects of microwaves on the human brain. This program continued for five or six years, all in secret, and it was basically a hodgepodge of bad science, of dirty data. There was no baseline set, but no one knew it, because it was all conducted in secret. So that's where I see the problems: where the science is kept secret, not the technology.

Interesting insight, thanks. I'd never heard of the Moscow Signal before, but certainly Havana syndrome has been in the news over the last six years. It sounds like a very parallel thing: concerns about the effects of microwave radiation on humans.

Actually, the big difference was that in the 1960s, and I think it continued through the early 1970s, we could detect the microwave radiation; the question was what effect it was having on personnel. With Havana syndrome, where US personnel reported health symptoms out of Havana and then around the world, microwave radiation was never detected. It was theorized that it would explain the effects on the personnel.

And is that still unsolved?

It's unsolved, and it suffers from some of the same problems with the data, because the patients themselves, the initial group, were reportedly CIA personnel, so there has been a lot of secrecy around that. I would say it's a similar problem.

So again, in keeping with secrets: when conflict does arise, things obviously transpire that are unplanned, and new technologies are often divulged in an unplanned manner. The best example from recent history would be the stealth helicopter during the Bin Laden raid.
Hey, we didn't know we had those, and there it is on the front page of the paper, right? Of the Wall Street Journal, as it were. So, you're obviously watching what's happening in Ukraine, and I'm wondering, in this sort of near-peer conflict, have we seen any new technologies, techniques, or tactics emerge that we hadn't thought of or planned for before?

So yes, the Ukraine war is, of course, what we think of as a traditional, conventional war. But you do see aspects of technology that we actually tried in other places like Afghanistan. One example, and again, this was a DARPA program as well: back at the height of the Afghanistan conflict, DARPA had this idea that they would hand out cell phones to Afghans to do open source, crowdsourced intelligence. Afghans would record where they saw IEDs being placed, all of that information would go back and be mapped, and it would help us create a blanket of sensors across Afghanistan. I think many of you can imagine what the challenges were. There was not a unified population of support. You were giving military cell phones to people who were, in some cases, put in danger by that. There were a whole host of problems, including people giving bad data. It was certainly an interesting idea, but it did not work, by anybody's account; I think the Defense Intelligence Agency wrote in a report that it was a pretty dramatic failure. But what you're seeing in Ukraine is that the Ukrainian government has done just that. They have an app where Ukrainians can report where they've seen Russian troop movements, all sorts of things. That, from what I've heard, has been very, very successful. One of the differences is, of course, that you have much better cell phone penetration in Ukraine. You also have a highly motivated population wanting to work with the government and report this data in. But still, that's an example of something new, and it goes back to the idea that we have this network of sensors out there which are humans; almost all of us carry cell phones. So that would be one of the examples I've seen. Another, of course, is drones. It's interesting: we're seeing drones employed in such different ways than how the US employed them in places like Iraq and Afghanistan and elsewhere. I think that's been a defining feature of the Ukraine war. And they're being used, frankly, much more like DARPA was trying to use small armed drones back in Vietnam. In that case it's how you use them, perhaps, more than the technology evolution.

Interesting, and drone use is on both sides of the conflict in this case, right? So a thing we really haven't had to deal with in the Iraq and Afghanistan scenario is defending against drones. Have there been any big advances that have shown up on the battlefield in your reporting?

You know, certainly the US has been thinking about counter-drone technology for a while, using everything from directed energy to kinetic solutions. I know the US has promised to provide something called VAMPIRE, which I think is an L3Harris-produced counter-drone system.
One of the people on my team was going to the big Army convention, AUSA, in DC, and one thing I told them is that when I go to one of the big military shows, what I always look for is: what do I see a lot of? I haven't had a chance to talk to them yet, but I'd be really interested to know whether a lot of companies are showing new counter-drone solutions. Based on previous history, I would expect that's where you'll start to see a lot of innovation. What do countries do? They study the conflicts that are going on, and they're seeing how small, often cheap drones can be used. So I'm sure they're thinking about how to defeat them. There are things out there already, of course, like anti-jamming technology, but I'm sure a lot of other things are being thought about.

Sure, absolutely. As a fighter pilot, unmanned aircraft are my sworn nemesis, right? I would love to jump into that conversation, but I really want to leave that challenge to the next panel, which is Arming Artificial Intelligence. Because I think it's plain to see that if you take a drone, make it truly autonomous, and put it in that contested environment to seek and destroy, you have a very capable and very lethal weapon, but you also have some serious ethical questions, and I hope they get into that discussion. If not, I'll ask the question for sure. But going back to Ukraine: have there been any examples of new technologies that were ineffective? Something that we developed, or that someone else developed, deployed, and brought to this kind of near-peer conflict, and that didn't work as advertised?

That is a great question. I remember there were initially reports that some types of drones provided to Ukraine by the West were defeated early on by jamming technology. But as we've seen, drones as a whole have been successful. My opinion, and this probably doesn't bode well for a DARPA pitch, is that what we're seeing in Ukraine shows, first of all, that a motivated population really, really makes a difference. It's not always about advanced technology. When you look at something like HIMARS, intelligence plus precision targeting can really, really do a lot of damage. But part of what we're seeing are traditional defense issues. Do your supply lines work? Can you produce enough bullets? Do you have the ability to re-equip forces? That's certainly a problem Russia is facing right now. It's a problem for Ukraine as well, and a problem, frankly, for the West, which is supplying them. We don't have enough air defense systems to provide Ukraine. That's an old-fashioned problem, not a technology problem: can you produce enough? It's something the United States probably needs to look at for its own capabilities.

One more question before I shift away from the conflict in Ukraine, which is an interesting case study because it is a near-peer conflict, first-world major militaries competing against each other. What about the more non-kinetic, soft impacts? I want to come at this from the standpoint of innovation, specifically DARPA and other industries.
When we look at the effects of social media, or the media in general, controlling the flow of information between sides, amongst their peer groups, and across the front line: what effects of technology have you seen, or can you comment on, in terms of employing AI to, say, filter messages, push certain information, or control what's being shared, and the importance of that?

I don't know. Maybe it's because it's not my area of expertise, but I don't know that AI has played such a role in it. I could be wrong. It's more what we've seen in the evolution of conflicts going back to Russia's war in Georgia, which is this battle for information, with disinformation, frankly, on both sides, and with its popularization on social media. When you look at what Ukraine, and to some extent its government, has been able to do on social media, they understand how important it is to wage the battle there. Actually, turning back to your last question about something we haven't seen work out, I thought of something, and I'll pose it more as a question. The one thing that those of us who watch Ukraine have all been asking since the beginning is: why have we not seen the devastating cyber war that many people thought we would see in this conflict? The US has invested in this area. Russia has invested in it. Many countries have invested in it. So at the beginning of the war, we all expected, and 'all' is a little bit hyperbolic, but many of us expected, that the war would start with a series of cyber attacks that would take out Ukrainian infrastructure. We didn't see that. We've certainly seen cyber attacks. I don't have the best answer; that history is yet to be written, and it's a really, really interesting question to delve into. Is it because these capabilities aren't as useful or as developed as we thought they were? Was there some strategic reason that Russia didn't employ them? We do know and suspect that the US has been helping Ukraine in a number of areas, certainly including the cyber area; did the US come in and protect them in some fashion? We don't know. But I would say that's been an area that, at face value, hasn't been as important or dramatic as expected. Maybe it has, and it's all in the shadows. Either way, that is a great question for people to delve into, now or in the future.

Yeah, fantastic. I think, do we have a question? Please.

Yeah, hi, great discussion. I'm Larry Goldstein from Brown University and also Defense Priorities. My question is broader than DARPA, if you'll permit, but I think related. I worked for the Navy for 20 years, and one clear trend, which I think might be true in the other services too, is this tendency in military industry toward gold plating: putting every bell and whistle on a system so that it's exquisite, but so complex that it's difficult to operate. The Navy has really struggled, I don't know if you're aware, with the littoral combat ship, which among specialists is basically viewed as a total failure, and also with the Zumwalt destroyer, which gets a shout-out in the Ghost Fleet book. Unfortunately, most naval analysts consider the entire surface fleet to be in huge trouble. Maybe it's not as bad in the other services, but in some ways drones appear to be kind of a solution.
I mean, they're relatively cheap, and we could produce them in mass numbers and get ourselves out of this bad situation. So I wonder what your impression of that is: has this problem been alleviated to some degree? And by the way, not to drag out the point, but on Navy force structure: I think they're constantly shooting for something like 400 ships, dreaming of the 1980s when we had 600. Now we're trying to square the circle by saying, well, we'll get to 400, but a quarter of them will probably be unmanned. So I wonder, do you have any thoughts for military leadership? That's an extraordinary decision to make, saying a quarter of your force structure is going to be unmanned without any of them having been tested in combat. Your thoughts?

You know, I always say I only know as much as the last person I interviewed. Your question reminds me of when I first started covering defense industry issues for a defense trade publication back in the early 2000s. This was when unmanned aerial vehicles, drones, were first entering warfare. Now we almost take them for granted, but they were very new. And Congress mandated, in one of the defense authorization bills, I think of 2000 or 2003, that by 2020 one third of Army ground vehicles were going to be unmanned. Anybody know what percentage that is today? I'm guessing 0.1. So in that case, it was more of a technology problem: it turns out it's much easier to get around in the air with a drone than on the ground, where you have all sorts of objects. In this case, I think what the Navy is facing is a procurement challenge and a leadership challenge. They have to make that decision. They have to decide what their fleet should look like, what the mix should be. It's an area I haven't followed that much over the past few years, but I don't hear those hard decisions being made. There are probably scholars of the Navy out there who could answer better than I can what is going wrong there, why those decisions aren't being made. That's the part I don't know. But it would take a major decision, and those are hard in the Pentagon.

I think the easiest way to put that back in the form of a question is: would you get on an airplane to fly home that didn't have any pilots sitting in the front? I'm sure there are some people here who would, and there are probably a lot of people who would say, I don't know if we're quite there yet. And if you're not willing, understandably, to sit in an airplane with no pilots up front to take you home, then putting our entire national security posture in that same context, of armed vehicles roving the perimeter or flying as your wingmen in combat, is a big leap of faith. I'm not sure the technology is there yet, in terms of relying wholly on drones to fill that need. But I think we're moving towards that. I think they'd like to get there.

What's interesting for me is that when this question of AI and ethics came to the forefront a few years ago, I was always a bit mystified by it, because I look at it from the opposite standpoint, like that congressional mandate back in 2002. So many of the things I saw being predicted back in the early 2000s have not happened. We haven't taken pilots out of the cockpit. It's actually the small drones that are being used mostly on the battlefield.
It's surprising; things don't always go where you imagine them. I think science fiction is wonderful, and it can provide ideas for the future, but it's rarely a blueprint, in my view.

Sorry to go back to Ukraine, but it's about drones. A lot of the drones Ukraine has have been used mainly for surveillance, because they don't have big drones like the Predators the US has, but those have been used for things like spotting for artillery. Do you think drones are going to have more of a kinetic future or more of a surveillance future? Because a human can't have thermal vision, essentially, but drones can. And what do you think about the future of surveillance and battle damage reporting after missile strikes, as with HIMARS? You need battle damage reports after attacks, and one of Russia's problems is that it has sometimes taken them two to three days to get a battle damage report and mount another strike on a target after a failed attack. So do you see drones as that future, or do you see them moving more towards the kinetic side, being the weapon at the front?

Are you talking about Ukraine, or more broadly in the future?

Just what we've seen in Ukraine, and then how it's going to affect the future.

So I think it's going to depend on the conflict. That's what's been so interesting to watch: how drones are being employed. In a lot of cases, one of the things I've seen reported is that the Ukrainians have gotten frustrated; they can buy Chinese drones cheaper, and they have good engineers there who can equip them with cameras for surveillance and do a lot of things that don't necessarily require advanced technology. Sometimes it's cheapness. So I think it's going to depend on where and how they're going to be used. Back in the 2000s, going up to 2009, '10, and '11, as a reporter I thought, okay, there's going to be the Predator and then there's going to be the replacement for it. And there was a program to replace the Predator, and we were going to be moving towards more drones that are armed, and that's not necessarily how the whole world has gone. You had the Houthis in Yemen using drones very effectively. It's going to depend on the conflict, and I think it'll evolve in ways we didn't expect. Human ingenuity is amazing, if nothing else. I don't know if that answers your question.

Yeah. Another question?

Do you see the future of personnel warfare moving more towards urban and underground environments because of the prevalence of drones in the air?

I don't think so. I just haven't seen any movement towards that.

Thank you. And we have time for one more question. Please, sir.

Thank you. I really appreciate hearing some of your comments about what I would call a technological blind spot, or a tendency to over-bet on a particular tech. Because with AI, if you go back to the 50s, everybody has been saying, oh, in another 20 years, this is what's going to be happening. And it's taken 70 years and considerable advancement to get to the point where we've got mass computing and networked availability of data to have some of this actually come to the fore. So the question I've got for you is: what's the other technological blind spot, the sexy tech we're oversold on, that we think is going to change the future when in fact there are other solutions? As you said, what's the problem you're trying to solve?
What tech do you have that's going to help you solve that? Or is the tech you're working with going to solve it? Are we over-betting on a particular technology and hoping something will spill out of it, like it did from the space program or from DARPA's investigations into other tech? Are we oversold on something? Do you know of, or see, a particular blind spot for us in the future that we should be bringing into this space we're thinking about, AI on the battlefield? We're oversold on AI and robots for this particular seminar, but is there something else we're not thinking about?

We are oversold on AI and undersold on the importance of humans. I remember, back around 2009 or 2010, when there was a lot of investment, before the current wave of hype over AI, a lot of DOD investment was being made in computational social science: trying to predict where the next IED attack in Afghanistan would be by mixing quantitative data on IED attacks with social data on Taliban influence, the price of oranges, all sorts of unusual stuff. And I remember interviewing, back at what was the organization called, JIEDDO, the Joint IED Defeat Organization (I'm mixing up the full name, but it was the bomb-fighting agency), the chief scientist there, who was in charge of running this investment in what you might call nascent AI: can we come up with programs that will predict the next IED attack and save lives? And towards the end of the interview, he just starts ad-libbing, in a thoughtful way. And he said: you know what? The Taliban are beating us, and they're beating us without the computers, without the technology, without the science. I think we oversold ourselves on technology and undersold ourselves on the ability to deal with human beings. Russia might be learning that same lesson in Ukraine; it's the same thing there.

Well put. Well, with that, we are out of time. So I'd like to extend a very special thank you to Sharon. Please join me in giving her a round of applause.

Thank you. It's a real pleasure to be here.