The U.S. Naval War College is the Navy's home of thought. Established in 1884, NWC has become the strategic and intellectual center of naval sea power. The following Issues in National Security lecture is designed to offer scholarly lectures to all participants. We hope you enjoy this upcoming discussion and future lectures. Well, good afternoon, and welcome to our seventh Issues in National Security lecture, being held here in the virtual world. I'm John Jackson, and it's my pleasure to be the host for today's event. To kick us off, I'd like to call on Admiral Chatfield to offer her greetings. Admiral? Hello, good afternoon. Thank you for joining us. This is part of a community of scholarship that really inspires my husband, David, and me. And don't be surprised if you see him before the end of the night; he's on his way over. And so we are looking forward to the topic tonight and to hearing Professor Schultz. And so thank you so much for being here with us. And over to you, John Jackson and Professor Schultz. Thank you very much, Admiral. This series was originally established as a way to share a portion of the Naval War College's academic experience with the spouses and significant others of our student body. Over the past four years, it has been restructured to include participation by the entire Naval War College extended family, to include members of the Naval War College Foundation, international sponsors, civilian employees, colleagues throughout Naval Station Newport, and participants from around the nation. We will be offering 11 additional lectures between now and May of 2021. An announcement detailing the dates, topics, and speakers of each lecture will be posted by our public affairs office. Looking ahead, on Tuesday, 8 December, I will deliver a lively discussion on drones that fly, swim, and crawl. Please note that my lecture will take place one week from today and that it will be the last lecture until 12 January 2021.
Okay, on to the main event. Please feel free to ask questions using the chat feature of Zoom, and we will get to them at the conclusion of the presentation. I'm very pleased to introduce our speaker, Tim Schultz, the U.S. Naval War College Associate Dean of Academics. Prior to joining the Newport faculty in 2012, he served as the Dean of the U.S. Air Force's School of Advanced Air and Space Studies. Tim earned his PhD in the history of technology from Duke University, and his research interests include the interaction between technology, strategy, and the transformative role of automation in warfare. He is the author of The Problem with Pilots: How Physicians, Engineers, and Airpower Enthusiasts Redefined Flight, from Johns Hopkins University Press, and he is co-editor of Airpower in the Age of Primacy: Air Warfare Since the Cold War, which will be published next year by Cambridge University Press. Tim is a graduate of the U.S. Air Force Academy, Colorado State University, the Air Command and Staff College, and the School of Advanced Air and Space Studies. Formerly a U.S. Air Force colonel, he spent much of his aviation career as a U-2 pilot, enjoying the view over interesting regions of the globe. I am pleased to pass the digital baton to one of the college's brightest and best scholars, a true friend and colleague, Professor Tim Schultz. Thank you, John Jackson. Thank you, Admiral Chatfield, for setting this whole enterprise up and hosting it. And it's such a pleasure for me to play a role this early evening with you, this first day of December. So welcome, everybody. Today's topic is gonna be both backward-looking and forward-looking. I'll use some historical perspective and also try to look ahead to our shared future. My overall position right now is that Naval War College graduates, they think and they lead and they fight at the frontiers.
And they always have: the frontiers of sea and air, the frontier of new knowledge, the frontier between science and fiction, the frontier between technological possibility and ethical constraints and political realities, the increasingly contested frontier between machine control and human control. And in some ways, this is terra incognita, unknown territory. So for this session, I'm gonna examine this frontier of humans versus machines. And I'm gonna look at it through a number of different angles. Let me start sharing some slides and images here with you to do that. If you'll just give me a moment. Okay, so I'm gonna look at this humans-versus-machines question, or the human-machine relationship, in several ways this evening, and we'll use these four kinds of ideas as frameworks to approach the issue: various examples of the fusion of human and machine activity (sometimes it's a literal fusion), some of the fears associated with machines out of control, and some of the opportunities and the challenges associated with these emerging frontiers. But first, what's going on with this picture? As you can see, this is not your normal, typical, everyday fist bump. It's something a little bit different. It's a little special for a lot of reasons. Here's whose fists are being bumped. It's the president of the United States, back in 2016. And he's fist bumping with a gentleman named Nathan. Nathan, you can see him there on the right, sitting in his wheelchair. He's paralyzed from the neck down. But he's controlling this prosthetic robotic arm with an interface that's implanted in the top of his skull. He's controlling it with his actual mind. This is thought-controlled robotics. The goal, years later, is to make this a wireless brain-machine interface, to make it better and more reactive and more capable. And I'll show you images later of how that is becoming possible. But wireless or not, this allows Nathan to move this robotic arm just by thinking about it.
And he can sense what it is touching as well. So he is embodying himself in this machine. Just imagine what he might be able to do in the future. He could possibly become a surgeon, and a little more on how that could feasibly take place a little bit later. So there are a lot of wonderful things going on in this image. This new relationship with technology, with the machine, is expanding Nathan's universe of possibilities. But before we spend a little more time on Nathan and new technologies of thought-controlled machines, I wanna provide some frameworks to consider this changing relationship between human and machine. So let me transition to that now. The first rule about the future is: we've been there before. We can recognize ourselves in the past. History tells us that human nature doesn't seem to change that much over time. As a matter of fact, I argue that people change throughout history only in their costume, only in what they wear. And there is plenty of evidence of that. Let me give you some of that evidence. This is an image from the Wayback Machine, taken in 1839, shortly after photography was developed. This gentleman is an amateur photographer, a chemist. And what did he do with his brand-new invented camera? He took a selfie. We've been taking selfies since 1839. Peter Singer says that the average millennial will take 26,000 selfies in his or her lifetime. This is the first selfie. So we can't resist. Not long after, photography became increasingly more capable and the movie camera was developed, and what did we make the first movie of? We made a cat video. This is from 1894. It's called The Boxing Cats. It was filmed by Thomas Edison himself. We haven't changed in terms of our nature and our personalities. Here's an image from 1906. It's a cartoon. It's one of my favorite examples of the constancy of human nature. And you may not be able to read the caption at the bottom, so I'll read it for you.
It says, these two figures are not communicating with one another. The lady is receiving an amatory message, and the gentleman some racing results, with these devices that are sitting in their laps. You see this now, a century later, at your dinner table every night. Human nature doesn't change very much. And this also applies, I would add, to the profession of arms. Here's what some military leaders thought about the advent of steam power. They feared it. They clung to tradition. That would clearly turn out to be the wrong approach. You can see what the admirals were thinking. They were afraid of steam. They considered the introduction of steam as "calculated to strike a fatal blow at the naval supremacy of the empire." So we have a tradition of causing change but also fearing and resisting change. And here's another naval example, from the British Navy. This is from 1901, shortly after the advent of submarines. One admiral declared that they are "underhand, unfair, and damned un-English." So again, this fear and resistance to change is somewhat of a constant. But let me provide some evidence also of how we've gone through periods of rapid change in the past. We feel like we're in one now. We are in one now, but we're not strangers to it. We've experienced this before. Just consider the 10-year period between 1947 and 1957. We had the Bell X-1 breaking the sound barrier, something that people didn't think could be done. We had the development of thermonuclear weapons. Nothing says the status quo has changed like sitting poolside at a hotel in Las Vegas, looking at the horizon, and seeing a mushroom cloud boiling upward in the distance. That's a pretty good symbol of threatening technology. And here are the means to deliver it: the development of ICBMs in this timeframe, and the development of the first nuclear submarines, in this case the Nautilus. People were practicing duck-and-cover drills in the schools.
The transistor was invented during this time. They've gotten a lot smaller since then. Sputnik appeared and jolted the status quo. Watson and Crick figured out the structure of DNA, which ushered in the genomics revolution. All of this in a very short amount of time. So we are no strangers to rapid change. We're used to it. But I would argue to you that something is different now. Something is going on. We are in a period of nonlinear growth. And Thomas Friedman, the New York Times columnist, notes that it's caused by a combination of computation and interconnection and innovation, all clashing together to create these new things and these new opportunities. Like this young man and his imagination, we all recognize ourselves in him. We still have those same creative impulses. But now our technology captures our imagination differently and lets us see and manipulate the world differently. So let me talk about that a little bit more. Here's just a basic rendition of technological change. And I'm suggesting here that technological capability rapidly increases over time, particularly in the last few decades, from, say, the 1950s to the modern era. And it's increasing at an exponential rate. This curve describes things like the number of drones in the sky, the number of things and people connected to the internet, the number of people connected wirelessly, the colonization of the population by smart devices. There are some of you sitting out there right now wearing a Fitbit. You have a smartphone in your pocket. You're obviously looking at a laptop or a desktop. You're virtually bristling with computational power, and it changes your experience of the world. So this curve is important because it also highlights a challenge for our Naval War College graduates, because there's a super-empowerment of not just the state, but the marketplace and the individual. And this is a strategic problem.
But there's something else going on here. How do we humans keep up with this radical technological change? How do we follow this curve? Thomas Friedman, in his book Thank You for Being Late, points out that this curve looks like a hockey stick, and it reminds us of something that the great Gretzky said: "I skate to where the puck is going to be, not where it has been." How do we skate to where the puck is going? How do we do that as individuals and as an institution? We want our graduates of this institution to be able to skate to where the puck is going. And part of this means dealing with human capability. I've indicated here a slightly upward trend in human capability over time, but that might be wrong. It may be just flat. It may actually be declining; some of you with a Twitter feed or a teenager may argue that it is in fact declining a little bit. But Naval War College graduates need to figure out how to bend this curve of human capability. And that's a large part of what I'll talk about this afternoon. One way to bend the curve is figuring out how to team up humans and machines, to create and cultivate this human-machine teaming ability. And this involves various methods of fusing humans and machines together, sometimes cooperatively, sometimes literally. So let me turn to that now: the fusions of humans and machines in this dynamic relationship. There are many different ways to team up humans and machines, and I'll consider a few of them now. Here's one that doesn't work very well. It's a poor example of human-machine integration. This was from the late 1940s, when the Air Force was trying to figure out if pilots could fly airplanes lying down. Not because pilots are lazy, but because you can withstand much higher G-forces in a horizontal position than you can when you're sitting up in the cockpit. It turned out this was not a good fusion of human and machine. It was too awkward, too complicated.
But there is a very effective way to fuse or pair up humans and machines. And that involves how they share information. And this brings me to this notion of cybernetic theory. Here's an image of the guy who invented it. It's a mathematician named Norbert Wiener. He coined the term cybernetics in 1947. It involves the manipulation of information. And Wiener was one of the three titans of the information age in the 1950s, along with Alan Turing and John von Neumann. They ushered in an entirely different perspective on how to use information. And Wiener opined, correctly, that if computing capability is good enough, you can create a system with almost any degree of elaborateness of performance. And we certainly see that now with that exponential curve. So cybernetics helps bend that curve, and we see it in a lot of different technologies. The example I'll use is the rapid evolution of human-machine teaming in aviation. And I like to talk about this because it's part of my research interest. It gives us a historical perspective that we can apply to modern times as well. So I'm asking you here: who are the best pilots? It is certainly not based on image. It's not the macho ones who look good in pictures, but instead it's the ones who are able to reimagine their roles and adapt those roles to new technology. The best pilots are the ones who are able to subordinate themselves to superior forms of machine control so they can better take command of the air. This is from the B-17 flight manual in World War II. It said: below 10,000 feet, you're a flyer, you're controlling the airplane with stick-and-rudder skills. Above 10,000 feet, though, the mission gets much more complicated, the environment much more dangerous. And you have to integrate yourself in a machine-like way with the overall larger system and machine. You have to become like a machine in order to survive. I'll give you some more examples of that. This is a simple technology here.
It's gyroscopically driven. It's an artificial horizon, or what aviators call an attitude indicator. Humans cannot fly at night or in bad weather without this device. It prevents them from becoming disoriented and spiraling into the ground. This technology was developed in the late 1920s, and it ushered in a whole new regime of instrument flight. So now pilots could fly blind; they could fly at night or in the weather, even with these rudimentary, crude instruments, because they provided machine information that let them interpret the world differently, not with their own senses anymore, but with information from their machines. So they had to insert themselves into this cybernetic information feedback loop in order to survive and really exploit aviation's potential. So it's hugely important to aviation. This is an image of a device invented in 1933. It's the guts of an autopilot, the electronic guts of an autopilot. The New York Times referred to this as the robo-pilot. In 1933, it helped a test pilot named Wiley Post fly around the world in seven days, which was impressive in 1933. And he was able to do that because most of the time this robot pilot was in control, and that expanded his horizons. It helped him do new and different things, kind of like Nathan with his prosthetic arm. It opened up all new types of capabilities. And we see something similar to this with the Norden bombsight in the late 1930s and World War II. Pilots learned that during the bombing run, they needed to turn control over to the autopilot and over to this high-tech bombsight, because it controlled the aircraft much better than humans could. So the human-machine relationship was changing. Here we see an example of the roboticization of bomber aircraft in World War II. This is a remote control hookup in a B-17 bomber so it could be flown unmanned into precision targets in Europe, sort of kamikaze-style, except without the inconvenient suicide associated with it.
And it was used to some modest effect in 1944. Here's a comment from before the war from the leading Air Force general. He recognized that, hey, we need to relegate the human flyer and elevate the mechanical pilot, elevate the machine. And after the war, he observed: one year ago, we were guiding bombs by TV, controlled by a man remotely in a plane 15 miles away. I think the time is coming when we won't have any men in a bomber. And boy, did that ever turn out to be true, as we'll see. This image, from just after the Second World War, shows how pilots are fusing into something different. They're becoming electronic. They're becoming increasingly reliant on electronic forms of control. In 1947, a robotically piloted aircraft, which still had a crew of about eight people in it, flew from Canada to Britain without humans touching the controls at all. A pretty sophisticated feat of engineering to show the rise of the machine. And we see that in modern times with the very sophisticated, very elaborate cybernetic control system that this Predator pilot is now operating. And you can see how this human-machine interface has changed and what it now looks like in the modern era. But it's based on our previous experiences. This gentleman is part of that bending of the curve. A few years ago, the chief of staff of the Air Force said that the old way of doing things, of one pilot flying just one aircraft, was, quote, "a Neanderthal way of thinking." I think he's right. Now one pilot can control not just his or her own aircraft, but an unmanned wingman, or a number of unmanned wingmen, like the unmanned F-16 in the image before you. I talked earlier about selfies. This is what it looks like when a robot takes a selfie. As you can see, there is no pilot occupying that seat anymore. And a few years ago, the Secretary of the Navy said that the F-35 will almost certainly be the last manned strike fighter the Navy will buy or fly. We'll see if that's true.
Elon Musk similarly said, hey, the fighter jet era has passed. At least the manned fighter jet era has passed. Now it's drones. And you'll hear more about this from John Jackson during the next lecture later this month, I'm sure. But there are some other fusions that I want to talk about, beyond the history of aviation, where human-machine teaming is important. This is a headline from just a few days ago about how the Army plans for robots to be in its platoons, where every soldier has a drone, along with robotic mules. And these soldiers aren't going to necessarily control these robots with keyboard commands or spoken commands. These robots are going to rely on emotional cues from the soldiers, like facial expressions and signs of stress and body language, to aid cooperation out in the field. These robots will read their emotions. This is the next stage in that cybernetic feedback mechanism, in this human-machine relationship. We may see this in military medicine at some point: robotic surgery. Here's an image of a surgeon. He doesn't have his hands inside the patient; the robot has its instruments inside the patient. This is called the da Vinci surgical system, aptly named, I think. But we can do even one better now. And this happened in 2001, two decades ago: surgeons in New York removed the gallbladder of a patient in France, utilizing tele-surgery, this remote-controlled robotic surgery. It was called the Lindbergh operation because it went from New York to France. So we can see this changing relationship. And in November of 2019, Chinese doctors did the first 5G-enabled remote brain surgery. So we see this relationship changing. And we see it with this notion of cyborgs. We talked about cybernetic theory earlier. Well, a cyborg is just a cybernetic organism. It's an organism that is somehow fused with a machine. And in the modern era, humans are becoming a type of emerging technology.
We are both designers now and the objects of design. We are engineers, and products that can themselves be engineered. And this takes us back to Nathan with his robotic, mind-controlled prosthetic, this merging of man and machine in a dynamic relationship between humans and machines. And here's Nathan reaching out, where he can sense things that he has not been able to sense organically. And he can tell apart these different objects; even with his eyes closed, he can tell what he's touching. There's a convergence here, an emergent ability that Nathan is developing as he embodies himself in this new machinery. But here's something that might be next for Nathan and people like him. Instead of that clunky device that's drilled literally into the top of his head, here's something from one of Elon Musk's companies, called Neuralink. It's a device that can be implanted in the brain through the top of the skull. And it can help people control machinery just with their thoughts, with their minds. Ironically, the Neuralink device is surgically implanted not by a human surgeon but by a robotic surgeon, which seems appropriate. And the humans are monitoring the process for safety. And there's even an app for that. These people who get these implants, they're gonna have an app on their iPhones, and through the power of their own thoughts they'll be able to control their iPhones and a keyboard and mouse and whatever else those might be connected to as well. So think what that might mean for people like Nathan. He could perhaps become a surgeon who does tele-surgery robotically and from a great distance, a whole new universe of opportunities. We also see this with cyborgs being genetically engineered to integrate with machines. Here's an example where we did it not with a human, but with an insect. This dragonfly's neurons were genetically engineered so they would become light-sensitive.
And then a little backpack device was put on this insect so its direction of flight can be controlled by pulses of light. And it has a little solar panel to power this, and it carries little sensors. It is a true cyborg. It's this true merging of a living organism with technology. And you might ask, is this ethical, or is it some sort of a perversion of the natural world? What you and I think is important, but what your children and your grandchildren think will be increasingly important. And here's another example of a literal fusion of animal and machine: the injection of special nanoparticles into the eyeball of a mouse so it could see at night, so it has infrared vision. Perhaps coming to some special operations forces near you. So let's kind of transition from these examples of different fusions of humans and machines to some of the fears that are associated with this phenomenon. Hollywood does a great job of monetizing these fears. You know, there's this idea out there about our robot overlords. Are they going to unemploy us? Are they going to enslave us? Are they going to eradicate us? You have the HAL 9000 computer from Stanley Kubrick's film 2001: A Space Odyssey. And of course you have the Terminator here. We don't know if they're going to eradicate us. We do know at this point that they can and they will beat us in chess and in other games. Garry Kasparov, the great human chess master, learned this the hard way when he lost to an IBM computer in 1997. When that happened, some people predicted that, oh, this is near the end. We're in danger as a species. Machines have surpassed humans. 23 years later, we're still here. We're still doing okay. We've gotten better at playing chess. I'm going to return to Garry Kasparov later and his views on the benefits of artificial intelligence.
In terms of the fears, though, I'm going to break it down into these topics: this notion of Big Brother and Big Other, loss of cognitive control, loss of physical control, and this concept of the singularity. Let me address those briefly here. So the Big Brother notion has been around since Orwell wrote his book 1984 in the year 1948. We hear the term Orwellian applied to a lot of today's technological advances. One of those is rooted in an old concept that's called the Panopticon. Panopticon is just a fancy word for seeing everything: pan-optic, to see all, to see everything. It's an old idea. They used it in prisons. If you have a guard tower with tinted windows, the guard can see the inmates, but the inmates can't see the guard. So they always have to behave like they're being watched. Big Brother shapes your behavior through this Panopticon visual type of effect. Here is a modern Panopticon. It shapes behavior. This one is in a Western city we're all familiar with. Here is a Panopticon set up in Tiananmen Square. China likes to use these to a significant extent, but they're used elsewhere as well. Here's the Capitol in DC. And here's the New York Police Department's portable Panopticon, if you will; note the tinted windows. It shapes people's behavior. This is Big Brother, but now we can make it more effective with our computation and our technology. And we can also use it to help recognize what's going on. Some Panopticons can be worn. Here are two Chinese police officers sporting the latest in Panopticon accessories, if you will, but it lets them do facial recognition in real time. That's a pretty powerful policing effect. And in China and just about everywhere else in the world, we are carrying these little Panopticons in our pockets called smartphones. They potentially give government an idea of who you are, where you are, and what you're doing.
And here Chinese citizens are boasting about their social credit scores, reflected on their personal Panopticons, their iPhones. So let me now move from Big Brother, which is a governmental form of power, to Big Other. And this is a term used by Shoshana Zuboff in her book The Age of Surveillance Capitalism to describe the pervasiveness and the invasiveness of the modern machine age in the marketplace. And this includes the Internet of Things. All of us are now part of the Internet of Things: our smartphones, our doorbells, our thermostats, our ovens, everything electronic in the home now. Increasingly, we have TVs that hear us, homes that know us, books that read us; you get the picture. Some of you here this evening may have a Nest thermostat in your home. Well, it can observe your pattern of life. It can learn things about you and alter its behavior based on that. But it's also gaining information about you. This is something a Predator drone does over a village. It determines patterns of life, but now the things in your home do that as well. And personally, I think they should redesign the Nest thermostat a little bit to make it look more like what it really is. A little more like HAL 9000, the murderous supercomputer from Stanley Kubrick's film, because HAL 9000 is smarter than us and HAL knows what is best. And your Panopticon can also now stare outside your home, and various police forces in a growing number of cities are asking for permission to access this imagery from your doorbell to make the neighborhood more secure. That makes sense. Who wouldn't want to be more secure? But this is part of that Internet of Things. Here's something that the CEO of Google, Eric Schmidt, said, and I think it's important to note that he said it 10 years ago. So it's even more relevant now. It's even truer now. You give us more information about you, about your friends, and we can improve the quality of our searches. We don't even need you to type.
We know where you are. We know where you've been. We can know more or less what you're thinking about. Think about it this way: you are being digitally stopped and frisked constantly, even in your own home. And Big Other is also trying to get you to think about what it wants you to think about and to shape your perceptions for the marketplace. So this is one of the downsides of the human-machine relationship. Another downside, or another fear associated with it, is this loss of cognitive control. We might be safer and more efficient in general, but the types of mistakes that we make as we fuse ourselves into these technological systems are harder to predict. The famous airline pilot who landed his aircraft on the Hudson, Sully Sullenberger, said that new technology changes the nature of the errors that are made. We saw this with the USS McCain in its tragic accident a few years ago. There was confusion over how to operate the steering technology on the McCain. A series of errors was connected to that confusion about how to use and interpret some of the ship's technology. So there was some legitimate fear and concern about that loss of cognitive control. We saw a loss of cognitive control, and a subsequent loss of physical control, with the Boeing 737 MAX aircraft, where pilots didn't understand what the automated flight control system was doing and couldn't figure out how to fix it, and it resulted in two terrible crashes. So those are some of the downsides of this loss of cognitive control and physical control. We also see a loss of physical control, or a concern about it, in terms of humans being outside of the loop. And this is one of the main arguments of a propaganda campaign, the Campaign to Stop Killer Robots, and this notion that the drones are out there and they're not subject to human control and that they're taking over the skies. That's very hyperbolic.
The drones that are used by the US military are exquisitely and closely controlled by human will and human decision, but this recognizes the potential for this type of development. And an interesting connection: just last week, there was an assassination in Iran of a top Iranian nuclear scientist, and the conjecture now is that he was assassinated with the use of a remote-controlled device, a remote-controlled machine gun. More, I'm sure, will develop on that. Another fear associated with the human-machine relationship, or maybe not a fear but a hope for some people, is this convergence of human and machine into what some call the singularity. And it says here, in the small print on the cover of Time magazine: if you believe humans and machines will become one, welcome to the singularity movement. This is the notion that machines will soon surpass and usurp human capabilities to the point where humans will have to upload their consciousness, their neural network, into an immortalized digital form. At least that is the hope, that they will be able to do this. Some people think, given the rate of technological change, that this will absolutely happen by around 2045; they think it's inevitable. I'm always wary of arguments about inevitability. I think this is more of a myth of the future, a myth of a possible future. It sounds to me more like a techno-mystical ideation of the future, but it is something that a lot of people express concern about. And this brings us to the idea, the notion, of frontiers. This is the fourth major segment I'll talk about this evening in terms of this human-machine relationship and the frontiers that are involved with it. We've already talked about some of them, but let me be a little more explicit. Here's an example from the Navy. We're in a stage of cognitive computing and decision assistance, cognitive assistance, machine assistance, to help us all make sense of this immense amount of data.
And this is the stage of the Sea Hunter, an unmanned naval vessel developed recently. It brings me to this frontier of cognification. Think about it this way; I'll compare cognification to electrification. Electrification gave us a new form of power. It let people heat their food and cool their drinks and operate their machinery and watch Game of Thrones and all of these good things, just by plugging something into a power outlet. But now, instead of plugging into an outlet, we can plug into the cloud and open up all sorts of different possibilities. We have access to these forms of cognification, these forms of artificial intelligence that do some of our thinking for us. They interpret the world for us. They let us focus on different, more creative things. When you first drove to Newport, when you were assigned to the Naval War College, none of us looked at a physical paper map. We looked at our smartphones; we used Waze to help us drive here. That's cognification. It's this outsourcing, this help from the modern world of computing. And we see that with Amazon and Fitbit and Facebook and Uber and all the rest of it. So instead of worrying about artificial intelligence enslaving or eradicating us, we need to think more about what we will do with artificial intelligence and what it might do for us. It also involves cognifying warfare. So cognification applies to war as well. How might future warfare be cognified by intelligent machines? And what does this mean for the role of humans? What does this mean for the role of leaders and Naval War College graduates? It brings up the question: is it okay to be killed by a machine process in which a human is separate from the decision, or didn't make the decision? Is that an affront against human rights and human dignity? That's a question that many people are starting to consider. Here's a more immediate example of the cognification of warfare.
And it's this notion of algorithmic warfare. A recent Deputy Secretary of Defense, Bob Work, argued that the future of warfare relies on actionable intelligence and insights at speed. And when he says that, he means machine speed, computer speed, not the slow speed of human thinking, but the much faster speed of machine intelligence. And this applies not just to war, but perhaps to law enforcement and other walks of life as well. Let me go back to aviation here for a second. There's an old-time pilot there on the left and a newer one on the right. We've gone from pilots relying on their basic senses and sensibilities to pilots who now rely on computer-generated imagery of the world, this cognified view of the world. And this F-35 test pilot says you can look through the jet's eyeballs to see the world as the jet sees the world, because the jet's view of the world is put on his visor. So he interprets the world through what the jet sees. He says it's like wearing a laptop on your head. This is cognification. This is a fusion of human and machine. And it's evidence that human organic vision is becoming, you know, a 20th-century thing. Now, in warfare at least, we need machine vision. So this pilot is still an important part of the system, but not because of his physical skill. He's important because he's learning how to become a manager of systems, and he's freeing himself or herself up to see things more holistically and creatively and to explore new frontiers in the command of the air. So we've fought robots in the air before. Here's an image from 1944. This is a British Royal Air Force Spitfire taking out a German robotic drone, a V-1. It basically comes up next to the drone and flips its wing, which tumbles the drone's gyro, and the drone spirals into the English Channel. But this was a fight against an unthinking drone, an unthinking robot. What if the robot thinks and acts faster than its human adversary?
Well, we have that now. Just recently, a couple of months ago, artificial intelligence easily dominated a human fighter pilot in a trial put on by the Defense Advanced Research Projects Agency, done in concert with the Johns Hopkins Applied Physics Laboratory. And in this experiment, the artificial pilot won five to nothing. It beat the human pilot every time. If this were combat, the artificial pilot would have been decreed an ace. So in the future, would you even know if you were fighting a human or an artificial intelligence? This brings us to Alan Turing's notion of the imitation game, something he coined way back in 1950. Can artificial intelligence convincingly mimic human behavior? Here's a brief paragraph on this topic, and it's pretty insightful. It says artificial intelligence programs lack consciousness and self-awareness. They will never be able to have a sense of humor. They'll never be able to appreciate art or beauty or love. Never feel lonely. Never have empathy for other animals or for the environment. They'll never enjoy music, never fall in love, or cry at the drop of a hat. It makes you wonder who wrote this, because it seems pretty insightful. Well, as it turns out, this wasn't written by a human. It was written by an artificial intelligence program called GPT-3. The GPT-3 program was fed a prompt that said, in effect, write a few words expressing skepticism about AI. And here's what the AI came up with. It mimicked human intelligence to an astonishing degree. Here are some images of humans, another form of mimicry, because none of these are real people. They're all AI-generated images. And you're seeing this used more and more in commerce, because you don't have to pay these models and actors; you can just create them with AI. We'll see where this might go in the future.
In terms of connecting to the past, we've all heard the term sea change before, and it comes from a line in Shakespeare's play The Tempest. You can see it here in the last two lines, where he talks about suffering a sea change into something rich and strange. Is the human-machine relationship undergoing a sea change? Are machines becoming something rich and strange? And are they making us humans become something rich and strange? We see this with these implanted devices in people who have lost organic physical control of their limbs. And we see how that's advanced into these wireless devices there in the bottom right, that new Neuralink device. Well, where does this lead? Will we become more like cyborgs? Will we be changing ourselves significantly in the future? Will this be happening to our children or our grandchildren? Will they be becoming something rich and strange? Might these enhancements, though, just feel like amputations? Would they make us something less than human? Or will they make us more creative and thus make us feel even more human? We'll see. One of the key questions here for Naval War College graduates in this new frontier is, will ethics keep pace? There's a pacing problem at play here, where technological change outpaces changes in laws and social values and ethics. That's always the case. It's a fundamental leadership challenge. And in the classic Naval War College tradition of answering a hard question by posing additional questions, let me offer these. These are three classic questions; I didn't make them up. They're from Immanuel Kant in the 18th century. What can we know? What should we do? What may we hope? These are good questions for us to ask and for our graduates to ask. I would suggest to you that as machines learn, humans must unlearn. Humans must think differently, challenge assumptions, challenge the status quo.
And where machines operate, humans must orchestrate. While machines imitate, humans must create. And while machines think in artificial ways, humans must think and act in ethical ways. That is the challenge for us. That is what we must hope. Going back to Garry Kasparov briefly here in the last couple of minutes: as it turns out, being beaten by a computer was a good thing for him. It made him more creative. It made him a better thinker, not just a better chess player, because it made him focus on what makes us human, our minds. So that is something to consider as we walk backward into this future. And I always like to share this Lincoln quote with new groups of incoming students, this notion that he expressed in 1862 that we must think anew and act anew. That still applies to us here at the Naval War College, most certainly. A few decades ago, a novelist and scientist named Charles Snow wrote that scientists must have the future in their bones. Well, I would argue to you that Naval War College graduates must also have the future in their bones. And they must have the future in their minds, because the frontier I've tried to describe, the frontier they're going to face, is wild, unpredictable, dangerous, exhilarating, and promising. It is there for the creating and for the leading. So what can we hope? Well, the real problem is not whether machines think but whether men do. So we can hope that we can all learn to think adaptively and think differently. Machines are learning how to think differently, but will we? We know that Naval War College graduates must be skilled at thinking differently. So this brings us back to the beginning: the story of technology, the story of conflict, the story of peace, the story of the future, all of these in the end are not stories of machines but stories of humans.
This is a human story, and it is a story that our graduates will write. Okay, thank you for your attention. And now I'll turn it back over to you. I think we have a few minutes here for some Q&A. So over to you. Thank you, Tim. Robby the Robot and I are gonna pass along a few questions that we've gotten from our listeners. So Robby, you stand by and let the human do it for a little while. As always, we've got a number of very interesting questions. I'll jump to one of them, and it's basically this: based on what you've said, might we need to put limits on what an artificial intelligence can do, and is it possible that the AI will prevent us from doing that? So it comes back to the who's-gonna-be-in-control, man-or-machine question. Yeah, that fundamentally important question. There is a lot of talk about limits on how we develop and train AI for ethical reasons. We don't want increasingly intelligent machines to reflect our flaws, our human flaws, and those are many. So how do we get them to reflect values and ethics that are universally accepted? Can you put guardrails on that? I think it's very difficult. I think lip service will be given to it, but will it actually happen? I am not so sure. I'm not overly confident about that, because machine intelligence is developing at such a fast and iterative pace. We lag behind it. It's hard to recognize what's going on, even with the way these neural networks in AI work. They come up with solutions, and we don't know how they arrived at those answers. A friend of mine and I were just sharing some texts and articles on a new AI development, the ability to solve this super-difficult problem in biology of protein folding. An AI was recently able to do that, but it's very difficult to figure out how it did it. So keeping up with AI and corralling it and focusing it, that is replete with many new challenges.
So I think the important thing to remember is that we humans are still the ones who create, and we fund and we steer research, but we also need to embed in those institutions, government institutions, marketplace institutions, the importance of our ethical values and ethical behaviors. We need to be talking about that much more as a society. And John, I've got you on mute there. Well, that's technology for you. The Chinese have said they intend to be the world leader in artificial intelligence within the next several decades. Do you believe the United States is doing enough from a national perspective to look into what AI needs to be, how we need to control it, and how we need to partner with these systems in the future? I think China right now, by some measures, is eating our lunch in terms of the development of AI and the amount of government funding going toward it. But I don't want to dismiss the inherent advantage we have in a free market here in the US and with our Western allies: our free-market business apparatus, our industries, our universities, and our government, this troika, this relationship, which is really important. I think more government funding and attention needs to be put toward the development of AI, but the leading edge of it is still occurring at universities in the United States and in Western Europe. And we see companies developing these capabilities that mostly belong to the US and the Western countries, but that could be a fleeting advantage. China is very serious about this. The Russians are serious about it too, but not as well equipped as the Chinese, and certainly not as well equipped as the West. So I'm of two minds, really, on China. Yes, it's a growing problem, for that and other reasons, but we also have inherent advantages in the United States and in the Western free world that we can take advantage of, advantages that really accelerate human creativity in a free marketplace, and I think that is a powerful advantage.
Question: many of these innovations seem to be focused on offensive capabilities. Are we seeing similar increases in the pace of defensive capabilities using such systems? Yeah, I think those two will proceed apace in a complementary fashion, just as they do in traditional arms races. You're gonna see a shifting focus between offense and defense. Something that comes to mind is Israel's Iron Dome system. That is heavily computerized, and it's designed to perform a defensive function against incoming rockets, and it has had a significant degree of success. Offensive capabilities are gonna be pursued. The thing with the defense, though, is you have to be right all of the time, or almost all of the time, for it to be effective, and if you're wrong just a small amount and a significant weapon gets through, those stakes are pretty high. So it is difficult to get everything right on the defense. I think on the defense the important thing is to be resilient and to be adaptive. That is an inherent part of the defense. So if damage does occur and something does get through, can we adapt to it? Can we be resilient enough to repair and overcome? I think that's something we need to focus more on. And I know in the cyber world, from an unclassified perspective, there are of course, among different nations, offensive efforts and significant defensive efforts as well. And that relationship between the two would apply in these other forms of emerging technologies. For this next question, you may wanna reach behind you and put on your flight jacket, because the question is: how do you believe that aviators and pilots are going to transition into a world in which perhaps the machines are the primary pilots? Well, I think to a degree they have, but we'll have traditional pilots around for a long time.
I don't wanna get on an airliner that doesn't have a human at the controls, who at least is monitoring what's going on and has the ability to override it. I think that's the same for everybody here as well. There is still a significant role for humans in the machine. However, what they do in the machine is different. I'll go back to the F-35. That was designed for air-to-air engagements that occur BVR, beyond visual range, dozens, scores of miles away, far beyond what the pilot could physically see herself or himself. So there's a huge reliance on technology. If you're in a close-up dogfight in an F-35 with an opposing aircraft, a MiG or whatever, something's gone wrong. The plan has not gone correctly, because that adversary should have been shot down before the pilot ever could lay their Mark I organic eyeballs on it. But there still is room for a human in the machine, though their roles will be changing. And sometimes it will be much better to have aircraft that behave like fighters and bombers without pilots in them, as long as there is a human, if not in the machine, at least on the loop that controls that machine. That will open up different military possibilities and different opportunities to project power. And we've seen that over the last nearly 20 years in terms of drone strikes in any number of nations; that unmanned capability provides a whole new range of military possibilities in terms of surgically taking out bad guys, such as Soleimani back early in the winter, taken out by what we think was a drone strike, and a long list of other unseemly characters who posed various threats. So humans will play an important role, but increasingly there will be aircraft that don't rely on them directly and maybe don't have a human in them. And that makes those aircraft more capable. I don't wanna completely dismiss my brothers and sisters out there who wear wings.
We are still valued to that degree, and I think we will be for the indefinite future, but we're gonna be required to be more creative and think differently and evolve ourselves along the way. And we're about out of time, Tim. Maybe one final question, and you've touched on it a little bit: what's the proper balance between technical education and the humanities in the professional military education environment? Well, we're almost exclusively a humanities program here. The STEM aspect is fundamentally important; we need men and women in the profession of arms, in the national security apparatus of our country and our allies, who are steeped in science and technology and engineering and mathematics, especially in the modern age. But we also must have those same people imbued with a sense of the humanities. As a friend of mine, Tom Hughes, emphasizes, strategy is a humanity. War is a humanity; it's not humane necessarily, but it is a humanity. It involves politics and economics and culture and anthropology and language and all of these things outside of the STEM world that our graduates must be well versed in, because we're teaching them not specific ways to think about specific technologies, but how to think, how to adapt technologies to the evolving security environment and to the human condition, and how to lead in that environment. And that ultimately is a humanity, and that shows why it's so important to integrate the two together. That's what we're about here at the Naval War College. Nicely said, thank you, Tim. Any last comments before we switch back to Admiral Chatfield for her closing remarks? Any last thoughts, Tim?
Well, I just wanna thank everybody for their attention, and you, John, for introducing this and scheduling it, and Admiral Chatfield for orchestrating all of it and making it possible for us to come together and talk about ideas and try to make each other think better in this dangerous world, yes, but also one with wonderful opportunities and a future that we can make, and we're gonna make it together, and I think we can make a good one. Thank you, sir, very much. Normally we have a family discussion group meeting, but considering that we're all very, very busy this time of year, we're going to conclude with Admiral Chatfield's remarks. So Admiral, over to you, ma'am. Well, I wanna say thank you so much to Professor Schultz for just a really thought-provoking lecture this evening. We're bombarded by these increases in the pace of technological improvements and innovations in all aspects of our life, and it's hard to manage. And I love that Friedman book, because it allowed us to be a little bit gentle with ourselves as each of these technological changes impacts us. As I think toward the future and think how much has changed already since I entered the Navy 32 years ago, I think the biggest change is the pace, how quickly things change for our young officers, our mid-grade officers, and our senior officers. And your summary really highlighted how important it is for leaders to be educated about not just technology but the ethics of technology, not just about uncertainty but decision-making amidst uncertainty, and not just about leading in times of peace and war but leading people in this realm of uncertainty. The things that we knew are constantly being reexamined in this uncertainty, and throughout it all we have to innovate and integrate across the force, across the joint force, with allies and partners, and it's just a lot for a single person and for a single organization.
And that's where we need support, and being open to getting support in a way that maybe we hadn't in the past, through technology, will be key in how we all move forward together. So I'm just really thrilled that you brought this lecture to this forum, and I hope that everybody enjoyed it as much as David and I did. So thank you very much again. And as always, John, what a tremendous job in moderating, and thank you again for being here and a consistent part of this program, and all of our folks in support in the background, Gary Ross and all of our events and technical personnel. Thank you so much. Have a great night. Thank you very much, Admiral. Our normal battle rhythm is to hold these lectures about every two weeks, but again, we're changing. So next Tuesday, same place, same time, we'll be talking about drones that fly, swim, and crawl. So thank you very much. We'll see you next week.