The U.S. Naval War College is the Navy's home of thought. Established in 1884, NWC has become the center of naval sea power, both strategically and intellectually. The following Issues in National Security lecture is designed to offer scholarly lectures to all participants. We hope you enjoy this upcoming discussion and future lectures. I'd like to turn the microphone over to Acting Provost Dr. J. Hickey for his welcoming remarks. Dr. Hickey.

Good evening. Thank you, John. I bring you greetings from Admiral Chatfield. She is a big fan of this lecture series, and every year she and David attend virtually every session. Unfortunately, they're away today, so they asked me to step in; I will be a distant second for this quick introduction. I'd like to welcome all of you here today, and, at last count, the 50 to 70 (and rising) people we have on Zoom. For the last couple of years, obviously, we've done this on Zoom. Every year we bring in about 600 students in residence, in both the intermediate-level and the senior-level program. And we give them a gift from their nation: a year, or 10 months, depending on when they arrive, of time. Time to listen, to reflect, to read, to think big thoughts about what's going on around the world and how that will impact their future careers. And we have lots of great faculty, both military and civilian, of many flavors, pure scholars, hybrids, practitioners, to help them in that journey. And they're supported by a lot of people. They're supported by spouses, significant others, friends in the community, sponsors, who don't get to enjoy all of that. They get to hear about it from their student. And when he or she comes home and says, wow, I heard a great lecture on blah, blah, blah today, the person at home goes, well, that was probably cool, but I didn't get to see it. Maybe I get to see snippets of it sometimes, but in general, not so much. So we developed this program. 
So the INS lecture series was built for spouses, partners, and friends of the Naval War College, to give you a flavor of what your students, your sponsors, or however you're tied in, do. So this is a pretty impressive series of 16 lectures. When John went out and asked who wants to give a lecture for this series, I put my hand up and thought, I'm the provost. This is pretty good. I'm going to get in, right? And John wrote me back and said, no, not this year. And I did one last year, and I said, I don't get it. Why am I not allowed to speak to the group? And he said, well, you know, you didn't make the cut. And I said, John, 5,000 mollusks of Narragansett Bay, that sounds like a killer lecture. And he says, what do you mean? I said, Incredible New England Seafood, the INS lecture series. He said, no, it's the Issues in National Security lecture series. And I said, oh, so I am not in the lecture series this year. And I have one lame joke, which I've now given you, but you're going to get 16 great lectures between now and May. There are some ground rules, and you might get a certificate if you come to enough of them. And if you're here for the first time and you've got friends in quarters or nearby, or for the folks online, invite your crowd. We've seen upwards of 200 people here sometimes, and it's for everybody who wants to come. You'll hear about regional expertise, and you'll hear thematic lectures. Today you're going to hear about humans and machines. You're going to hear lots of neat stuff, and you'll get to talk to each other about it, but you'll also get to talk about it with that person who ties you into the college here. So if you're a spouse and your husband comes home and says, hey, John Maurer, you can say, I heard that lecture; why don't we talk about Churchill and whatever. So with that, I think I'll stop. Thank you all for coming, and I hope that as the year goes on, you will find this worthwhile and enjoyable. 
So thank you. Okay, don't spend too much time getting to know me, because I'm probably going to lose my job after turning him down as a speaker. But anyway, over the next academic year, the 2021-2022 academic year, we'll be offering 16 lectures from some of the best scholars in the world, drawn from our resident faculty. We're intending to offer this, as indicated, to anyone who'd like to come. We started out in Spruance Auditorium, which is our biggest auditorium, primarily because we wanted to have a reception in the lobby afterwards. Many of these events will probably be down in Pringle Auditorium, which is in the older section of the building. We'll send out an email before each event so you know where we're going to be and what the topic is going to be. And again, we're very pleased to have the Naval War College Foundation, international sponsors, civilian employees, members of the military, and spouses of Newport. Everyone is welcome to participate. So let's take a brief look at our schedule for the next nine months. It's kind of an eye chart, but see what you can do there. The next one will be on Tuesday, the 12th of October: Admiral Peg Klein, who runs our ethics and leadership program, will talk about ethical leadership. Then on the 26th, we'll talk about climate change. On 9 November, space, certainly in the news a lot. November 22nd, China and Zombies. If that title doesn't entice you to come and hear what Jim Holmes has to say, nothing will. On December 7th, the 80th anniversary of the attack on Pearl Harbor, Dr. John Maurer will be speaking about President Roosevelt and what happened in the Pacific in significant detail. So you don't want to miss that one. We'll take a break, get through the holidays, and come back on 11 January to talk about the Olympics. The Winter Olympics will be coming up shortly thereafter. 
There's talk about whether or not some countries will choose not to participate and step aside, so it'll be an interesting discussion. 25 January, women in combat. Then we'll talk about North Korea. Who lost the Vietnam War? Arctic affairs, which is of increasing interest. Just war theory. And some guy named John Jackson is going to do his drones-and-robots pitch, if I'm still employed. We'll talk about humanitarian assistance. We're not sure about number 15; we're leaving a little room for growth there. And the 16th one should be particularly interesting. There's a retired Navy captain planning to come out from Hawaii. His family has been in Hawaii for many generations, and they used to offer a refuge to Admiral Nimitz during the war. He would go over to the other side of the island to get away from the war and spend time with their family. And also, hopefully, being there that night will be Admiral Nimitz's grandson, Chester Nimitz Lay, who will talk about his grandfather, to the degree he knew him, and all the research that he's done. So that's the rundown, and I encourage you to participate. As the provost mentioned, we give certificates of completion, certificates of participation, if you attend about 60 percent, or roughly 10 out of the 16 events. No tests, no quizzes; you just need to participate. And again, on 12 October, our next speaker will be Admiral Klein. For today's lecture, we invite you to join us in the lobby to share some light refreshments after the presentation. We've got some partners out there with information: the Fleet and Family Support Center, the MWR Department, and the Naval War College Foundation, which is generously funding our refreshments. Okay, on to the main event. During the presentation that follows, our virtual participants should feel free to ask questions using the chat feature on Zoom, and we will welcome questions from our audience at the conclusion of the prepared remarks. 
Please use the microphones attached to each seat back so that the virtual participants can hear all the questions. And we will be off and running. Today's presentation will provide some perspective on the evolving relationship between human and machine. Dr. Schultz will use historical and modern concepts to examine how human-machine fusion advances our creativity, how we can deal with fears associated with robotic overlords and Orwellian consequences, and he will touch upon what it means to be human in the age of technology. Tim Schultz is the college's associate dean of academics. Prior to joining the Newport faculty in 2012, he served as the dean of the U.S. Air Force's School of Advanced Air and Space Studies at Maxwell Air Force Base in Alabama. Tim earned his Ph.D. in the history of science and technology from Duke University in 2007. He's a graduate of the U.S. Air Force Academy, Colorado State University, the Air Command and Staff College, and the School of Advanced Air and Space Studies. Formerly a U.S. Air Force colonel, he spent much of his aviation career as a U-2 pilot, enjoying the view over very interesting regions of the globe. I'm pleased to pass the microphone to one of the smartest guys I know, Professor Tim Schultz.

Thank you, John Jackson, for that very generous introduction. You all should look forward to his lecture on unmanned systems, where you will learn why we refer to Professor John Jackson as the Duke of Drones here at the Naval War College. It will be an exciting time with John Jackson, no doubt. So I am delighted to play a role in this wonderful series and to start off the year, and I'd like to thank John for organizing it and Chris for running everything in the back; nothing would happen without our experts making all of the levers and buttons work. Thanks also to the president of the Naval War College and the provost for setting this up and providing the space for it, and to the Foundation for supporting it as well. 
I'm excited about this topic tonight, Machines vs. Humans and the Evolving Relationship Between Humans and Technology. That's a research interest for me, and we're going to go through various topics tonight and some different ways of thinking about it. Something we do here at the Naval War College is teach our students to step back and look at things from a different angle, to take a non-standard view and practice thinking differently. So in that spirit, to start off a discussion of machines vs. humans, I'm going to talk about something a little bit different. We're going to talk basketball, and I'm going to start with a question. Who is the best professional basketball player ever? Let me hear some names. Michael Jordan, okay. LeBron James, alright. Other names. Bill Russell, okay. Larry Bird. Wilt Chamberlain. Who said Larry Bird? I love Larry Bird, but sir, that is not the correct answer. The correct answer, and I'm not saying this just because my boss, the dean of academics, said it, the correct answer is in fact Wilt Chamberlain. Wilt Chamberlain was a big guy, 7'1", 245 to 275 pounds, and he dominated the basketball court. He, I argue to you, is the greatest player ever because he had the single greatest game ever. And here he is in action back in 1962. That season, the 1961-62 season, he averaged a little over 50 points a game. Nobody's ever come close to that. That night, he's playing for the Philadelphia Warriors against the New York Knicks, and Wilt Chamberlain scores, how many points? Famously, 100 points. Kobe Bryant scored 81 points, and Kobe Bryant is number two on that list. So Kobe wasn't really that close. Wilt Chamberlain, 100 points in regulation, no overtime. He set another record that night with the number of free throws he made. He went to the free throw line 16 times, shot 32 free throws, and made 28 out of 32. That's 87.5%, which is excellent, especially for a big guy. Big players like Wilt and Shaquille O'Neal, 
they are notoriously poor free throw shooters. The season before, Wilt Chamberlain shot about 50%, which is exactly what Shaquille O'Neal shot, and which is exactly terrible; not a good free throw shooter. But that year, that season, Wilt Chamberlain was trying something different. And you may have seen pictures of this. He was trying a different technique. That's Wilt Chamberlain, a different game, same season. He's shooting granny style, underhand. This is a much better way to shoot a free throw, because your arms hang symmetrically and you just lob the ball into the basket. And when you have a wingspan like Wilt Chamberlain's, by the time you release the ball it's already so close to the basket that you're almost plopping it in. So that season, when he was shooting underhand, he shot over 60%, a notable improvement in his free throw percentage. That night, he shot nearly 90%. He was on fire, and that got him to the century mark of 100 points. But what's really interesting, and Malcolm Gladwell tells this story in his Revisionist History podcast, which is just excellent: he was very successful shooting this way, especially that night, but the following season, and for the rest of his career, he gave it up. He went back to shooting overhand like every other player, back to about 50%. A couple of years later, he shot 38% from the free throw line, shooting overhand. 38%. Any one of us in here could shoot 38%. If we were blindfolded, spun around, and chased by zombies all at the same time, that's probably still a 40% shot. He gave it up. Why did he give it up? Why do you think? It looks stupid. It's embarrassing. It's not manly or something. It has a ridiculous air about it. Shaquille O'Neal, when asked, why don't you shoot underhand, Shaq? You'd be a much better free throw shooter. He said he would rather shoot 0% the regular way than ever try to shoot like that. Not going to happen. Later on, in his autobiography, here's what Wilt said. 
I felt silly, like a sissy, shooting underhanded. I know I was wrong. I just couldn't do it. So he had adopted a superior technique, and one that disrupted the opponent's strategy, because at the end of a close game, what do you do? You foul the bad free throw shooter. If Wilt Chamberlain's in the game, he's dominating it, but all you have to do is foul him: he'll maybe make one free throw, the opponent gets the ball, and they can score two or three points. So this technique thwarts, undermines, disrupts the opponent's strategy. But still he gave it up, because he preferred vanity over victory. He was more concerned with how he looked, with fitting in and looking like everybody else. So he gave up a winning strategy. Does this happen in your professional life? Do you see this in your organization or institution, where you stop doing something, or never try something, just because you're worried about how it looks, about how you might fit in? So you might be wondering, what does this have to do with technology? Well, Wilt is going to make an appearance a couple more times this evening when we talk about the human-machine relationship, because that lesson has something to tell us. So tonight I'm going to talk a little bit about some frameworks for looking at technological change and the human role in it, and also about some of the fusions of humans and machines, some cooperative fusions, some actual physical fusions. We'll get into some of the fears that predominate about machines dominating us. And then we'll get into some of the frontiers that await Naval War College graduates. And Naval War College graduates are very familiar with fighting at frontiers. That's what they've done for decades. They fight at the frontier of sea, air, and land. And now in the modern era, and even in past decades, they fight at the frontier of technological change, that frontier where science meets fiction. 
So they're used to it. First, I want to talk a little bit about this image here, this fist bump. And you can see it's a bit of an unusual fist bump. Those fists belong to some interesting people. The gentleman on the right, his name is Nathan Copeland. He's paralyzed from the neck down. And I think most of you know the gentleman on the left, President Barack Obama. This was back in 2016. Nathan is controlling this robotic fist with his mind. He has an implant in the top of his head, and what he thinks is what that fist does. It's almost telepathic control of a robotic arm. You can see an image of it here. This expands Nathan's universe of possibilities. He can do things that so-called normal people can do, and he could potentially do some things that so-called normal humans wouldn't be able to do. So let's keep that in mind as we get into some frameworks, some ways to look at technology and technological change. The first rule about the future: we've been there before. We recognize ourselves in the past. We recognize ourselves in this enthusiastic, imaginative young man. It also suggests to us that over time, human nature doesn't change. One of my mentors says that humans change throughout history only in their costume, only in what they wear. By nature they remain constant. So we recognize ourselves in the past, and I'll offer you some evidence of that. This is the first known selfie, taken in about 1839. Almost immediately after photography was invented, we turned the camera on ourselves, just like a modern-day millennial, who, it's predicted, will take on average about 26,000 selfies in a lifetime. This was selfie number one. Human nature hasn't changed. What did we do after we invented the movie camera in the late 1800s? We made cat videos. This was one of the first motion picture films, filmed by Thomas Edison himself in 1894. Cat videos. Human nature doesn't change. 
This is a cartoon from 1906. It shows a couple sitting in a park, and they have what was referred to as a wireless telegraph in their laps, what we would basically call a radio nowadays. New technology back then. And the caption down here, I'll read it for you, says: these two figures are not communicating with one another. The lady is receiving an amatory message, and the gentleman some racing results. You see this almost every night at your own dinner table or in restaurants. Human nature doesn't change. More evidence of how what we've thought in the past predicts how we think in the future. This is the British Admiralty's reaction to the advent of steam power. It made them nervous. They didn't like it. It frightened them. You can see what they wrote in 1826: this may strike a fatal blow at the naval supremacy of the Empire. They were used to the old way of doing things. Fast forward several decades to the advent of submarines, and here's a reaction from one senior admiral: submarines are underhand, unfair, and damned un-English. You see what I did there with the underhand? They didn't want to accept a new way of doing things, because it didn't comport with their tradition and their identity. But we see now, especially in the news the last couple of days, that submarines do in fact matter in modern warfare. So we've been to the future before. One other way of looking at this: we're right now experiencing a period of rapid change. We can sense that, but we've experienced periods of rapid change in the past. I'll use one example, a 10-year span, 1947 to 1957. Consider what went on in just that fast decade. We broke the sound barrier. Thermonuclear weapons were made and tested. Nothing says the status quo has changed more than standing at your hotel in Las Vegas, poolside, and seeing a mushroom cloud boiling upward in the distance. The status quo has changed. That unsettles a society. The ICBM was invented and deployed. 
The first nuclear-powered submarine, the Nautilus, went to sea. Children were practicing duck-and-cover drills at school. The transistor was invented, and it has gotten a lot smaller since then. Sputnik, in October of 1957, jolted the status quo. Watson and Crick in 1953 worked out the structure of DNA, helping usher in the genomics revolution. 1957, the pill, a technology that vastly changed society. So we see in just a 10-year span a lot of technoscientific and sociological change. That's okay. We've been there before. But now something is a little bit different. Thomas Friedman, in his book Thank You for Being Late, attributes this to the combination of computation, interconnection, and innovation all coming together and resulting in a really fast pace of change, where it seems hard to keep up. So we had this young man with his imagination then, and here's a young man now, human nature doesn't change, but his imagination is being captured a little bit differently. He's viewing the world a little bit differently. He might be able to manipulate the world a little bit differently. So with this in mind, I want to portray technological change here. You can see capability over time, and with some forms of technology, there's this exponential increase. This might represent the number of drones in the sky over the last 10 years, the number of deepfake images out there, the number of things connected within the Internet of Things, the colonization of the population by smartphones. Things like that are on this steep rising curve. This poses a strategic question and a strategic conundrum for Naval War College graduates. How do they predict this curve? How do they stay ahead of this curve? Thomas Friedman, in his book, also talked about how this curve has a hockey stick shape, and so he likened it to a quote from Wayne Gretzky, the great hockey player, who, asked what makes him great, said: 
I skate to where the puck is going to be, not to where the puck has been. How do we skate to where the puck is going to be? How do we predict that? How do humans keep up with this rapid technological change? I've put human capability over time here, and I very generously gave it a slight upward slope, but in reality, it's probably flat. Those of you with Twitter or TikTok or a teenager would argue to me that it's actually declining steeply over time. But how do humans keep up? How do we bend this curve so humans can maintain their pace with this technology? One form is human-machine teaming, where we integrate humans with machine capabilities in novel and experimental ways. And this gets me into the notion of fusions, or how we might fuse together humans and machines in ever-changing and interesting ways. What might human-machine teaming look like? Here's an example from 1949, and this was a failed attempt at human-machine fusion. This is a pilot, and they were wondering, can pilots fly an aircraft lying down instead of in the seated position? If they're lying down, they can withstand many more Gs, up to maybe 18 Gs. Sitting down, they can only withstand about 9 Gs or so. So this is a much higher G tolerance, but pilots didn't like it. Aircraft engineers didn't like it. It was too complicated. Not a good human-machine fusion. So some of them don't work out. The ones that do tend to be fusions where humans are managing information differently and playing a different role in a complex system. And the guy who came up with this notion is depicted here. His name is Norbert Wiener, a mathematician. He was one of the three titans of the information age in the middle of the 20th century, right up there with John von Neumann and Alan Turing. We'll hear more about Alan Turing in a few minutes. 
But he came up with this theory of cybernetics, a theory of information feedback and information control, and he figured that if you have enough computational power and good enough sensors, you could create a system that could do just about anything, with a degree of elaborateness of performance. So this gets me into aviation, where the elaborateness of performance has increased. The question: who are the best pilots? These are significant pilots: Lindbergh; an airmail pilot named William Hopson; and Jacqueline Cochran, a test pilot in the 30s and 40s and the first woman to break the sound barrier. Outstanding pilots. But the best pilots, as it turns out, are the ones who are able to adapt, learn new skills, and integrate themselves into increasingly complex machines. Here's the flight manual from a B-17 in the Second World War. It says, below 10,000 feet, you're a flyer. We're relying on your stick-and-rudder skills, those typical, traditional pilot skills. But above 10,000 feet, your role changes. You and your nine other crew members are integrated into a complex system, and there's a very machine-like interaction within that system. An example of that involves the Norden bombsight on that B-17. This is a very precise machine that is connected to the autopilot, and during the bomb run, the bombardier looking through the Norden bombsight connects it to the autopilot, and the pilot and co-pilot, the human ones up front, go hands-off. The autopilot flies the airplane, because it can do so with much greater precision than human hands. The automation is better. The most successful pilots are the ones who can shift their role and let the automation perform when its performance can far exceed human performance. 
There are examples from the Second World War where we completely roboticized B-17s, made them fly via remote control, took out all of the crew, and controlled them by a pilot sitting in a mothership airplane miles away. So we were trying different things with this cybernetic apparatus. The soon-to-be five-star general in charge of the U.S. Army Air Forces predicted in 1937, obviously before the war, that we need to relegate the human flyer and elevate the mechanical pilot. He saw what was coming, and at the end of the war he said: one year ago we were guiding bombs by television, controlled by a man in a plane 15 miles away. That's the image I just showed you. I think the time is coming when we won't have any men in a bomber. Well, that time has come, not necessarily for bomber aircraft, but certainly for strike aircraft. We also see in the late 1940s this understanding that the pilot's role was changing. He, and it was mostly he in this case, was becoming more electronified; he had an appointment in electronia. The pilot's role was changing. The relationship with the machine was changing. In 1947, an aircraft flew from Canada to the United Kingdom with no crew member ever touching the controls. They were in the plane, but they just sat there and observed, from standing still on the runway before takeoff to standing still on the runway after landing. They didn't touch anything. It was all automated. So we can see how aggressively this automation has been taking place, and we see it now in the remotely piloted aircraft operator and what he is able to do in the modern combat environment. Cybernetic theory, this use of information and the changing roles of humans within an information system, has profoundly changed our relationship with machines. The chief of staff of the Air Force said a few years ago that one pilot flying one aircraft is a Neanderthal way of doing business. 
Nowadays, we can have one pilot controlling several aircraft. They could be unmanned wingmen, like this unmanned F-16. This is a reality now. And by the way, I like this image, because this is what it looks like when a robot takes a selfie. Maybe this is the first robot selfie. Here's what the Secretary of the Navy said not that long ago: the F-35 will be the last manned strike fighter the Department of the Navy will ever buy or fly. Elon Musk: it's going to be drones. But there are other fusions going on; it's not all just aviation. We see it with Army technology and unmanned systems, and John Jackson will get into this a little bit. The Army is looking at systems that you don't need a keyboard or a joystick to control. These robots read the soldiers' emotions and gestures, and they can respond to them in real time and in subtle ways. That's a fusion of humans and machines that we might expect. And what about cyborgs? Cyborg is just short for the term cybernetic organism. What happens when this actual fusion happens? When humans are no longer the designers, but the designed. No longer the engineers, but the engineered. We see that to some degree with Nathan and his robotic arm, this merging of man and machine that's opening up whole new possibilities for him and people like him. And note here, he's got those boxes drilled into his head, literally drilled into his head. It's a little bit cumbersome, but technology is continuing to advance. This is an implant that can go beneath the skull, integrate into the brain, and do the same thing. You can kind of get a vision of it there. And when that is implanted in a brain, it's not a human surgeon who does it. Ironically, it's a robotic surgeon, because the robotic surgeon can do it with much greater precision and safety. And there's even an app for that. You can control your phone by thinking about it, and then control other things in the environment. Another form of fusion is the notion of the singularity. 
When machine intelligence reaches a level where it is superior to humans, hopefully by then humans will have developed the ability to download their own consciousness, their own connectome, their own neural network, into a digital form so they can live forever. Or that is the so-called vision of many. The fine print says: if you believe humans and machines will become one, welcome to the singularity movement. It's a myth of the future, a techno-mystical ideation based on some people's understanding of where that technological trajectory might be going. Is it inevitable? No. I'm always wary of people who make claims about something being inevitable. There are fears associated with this human-machine relationship, and I'm going to get into some of those now. Hollywood does a great job monetizing those fears. We're all familiar with the Terminator. I think most of us are familiar with the image on the right. That's the HAL 9000, the psychotic computer from Stanley Kubrick's and Arthur C. Clarke's 2001: A Space Odyssey. HAL is actually a play on words, because if you shift each letter one place forward in the alphabet, HAL becomes IBM. So they were making a subtle commentary about IBM. And by the way, an IBM computer defeated Garry Kasparov at chess in 1997. And when that happened, people took notice. A purely human game, and we've been beaten by a machine. That means something. And then we see an IBM machine making the game show circuit after that. It feeds into these other fears, perhaps, of Big Brother and Big Other and loss of human control, the rise of the machines, if you will. The Big Brother notion has been around a long time, even before Orwell wrote this book in 1948. But it's the notion that an authoritarian system can use technology to control your lives. And we see this in the technology of the panopticon. Panopticon just means to see all; pan means all. This is what a guard tower is, especially one with tinted windows. 
The guard can see everybody in the prison, and if the windows in the guard tower are tinted, there may not even be a guard in it, but the prisoners have to assume there is, right? And it shapes their behavior. The technology shapes behavior. We see forms of the panopticon all throughout different parts of the world. Here they are deployed in Tiananmen Square in China. The chairman knows all. He sees all. He is watching you, or he might be watching you, and that's good enough. We also see them in Western countries and elsewhere. Here is a panopticon helping keep honest people honest in London, and in Washington, D.C., and in New York City. This is a portable panopticon that the NYPD has. Notice the windows are tinted. You can't tell if there's a police officer in there or not. Therefore, it's going to shape your behavior, whether it is manned or not. Here are two Chinese police officers sporting the latest in panopticon fashion, these cool sunglasses that have a camera built in, and they have on their handheld device the ability to do instantaneous facial recognition. So they are, in the digital sense, conducting a stop-and-frisk operation on people walking by. They are looking for certain people who trigger the facial recognition algorithms, so they can identify who you are and whether or not you are of increased interest to the police. Almost all of us are carrying a portable panopticon in our pocket every day. It can tell the government things about you. It can tell private enterprise things about you. In the case of these young Chinese women, it can help them keep track of their social credit score. Everybody wants a high social credit score in an authoritarian regime, so you can have privileges to do things. I wonder about this picture, because this young woman over here has a much higher score than everybody else, 753. So I wonder if she's somehow undermining her friends and boosting her score relative to theirs. 
Who knows what human dynamics go on in that scenario? So in addition to Big Brother watching you, there's this notion of Big Other, and an author named Shoshana Zuboff talks about this in a really interesting book, The Age of Surveillance Capitalism: how technology pervades day-to-day human life. We see this in the Internet of Things. You are a thing in the Internet of Things. Congratulations. And the number of things in the Internet of Things, in your life and in broader society, just continues to grow. It's on the steep part of the exponential curve. So you have the TV that hears you, the house that knows you, the book that reads you. Anybody here have a Nest thermostat? A couple of hands go up. I've been thinking about getting one. My wife is here; we need to discuss that, perhaps. You know, it's pretty cool. It can monitor your activity. It can establish your pattern of life, kind of like a Predator drone over a village somewhere establishing a pattern of life. Your Nest thermostat can do that on a smaller scale in your home. It can determine what temperature you like, when you want it, what your active periods are, et cetera. I think, though, they should redesign the Nest thermostat so it looks a little bit like this, upgraded to what it really is: a murderous, psychotic HAL 9000 computer hanging on the wall in your own home. So 11 years ago, the CEO of Google, Eric Schmidt, made this observation, which is probably even more true now, that all of us give away information about ourselves to Google and others. They don't need us to type at all. We know where you are. We know where you've been. We can more or less know what you're thinking about. That maybe gives you pause about the relationship between humans and machines and this notion of Big Other. Another part of the fear of machine dominance is loss of control. And we see this at sea. We see this in the air.
We see this with unmanned systems on the ground. Sully Sullenberger, the pilot who landed his Airbus on the Hudson River, a highly robotic airplane of which he nevertheless had full control, says that improvements in machine control technology change the nature of the errors that are made. And if the nature of the errors is changing, then the nature of our relationship with the technology is changing. Oftentimes human controllers, pilots, sailors, don't understand what the technology is doing. They lose cognitive control, and after you lose cognitive control, you lose physical control. That was part of the chain of events in the deadly collision involving the USS John S. McCain. The crew had difficulty understanding what the ship's fancy new electronic steering system was doing, or how to control it and have it do something different, and that was a contributing factor in the disaster. The two 737 MAX airplanes that crashed in late 2018 and early 2019: that was an automation system that took over under certain circumstances. The pilots didn't know what it was doing, why it was taking control of the aircraft, or why it was pitching the aircraft's nose down. They didn't know how to disconnect it. So they lost cognitive control, and then they lost physical control. That was primarily due to very poor design of the automation system, and secondarily due to inadequate training of the pilots, but they could no longer interface, or fuse correctly, with that technological system. The cybernetic system broke down. We also see fears manifested in this propaganda campaign, Stop Killer Robots, this notion of unmanned, fully autonomous machines out there hunting and killing humans. These types of remotely piloted aircraft are exquisitely controlled by humans, but the argument is that the potential is there, and we don't know whether drones will slip out of control and become something we can no longer understand. One of the fears.
And this brings us to the idea of frontiers, the frontiers that Naval War College graduates will face. I'll identify just a couple of elements of those frontiers now. Big picture, and important here, is this concept of cognification, and I'm going to compare it to electrification; actually, Kevin Kelly, the former editor of Wired magazine, makes this comparison. He talks about how electrification changed society: you could plug into a network of power and do things that you couldn't have imagined doing before. Well, now, with cognification, you're plugged into a different network of power, a network of cognifying power. You're plugging into the cloud, and it helps you do things that you could not have imagined doing. How many people here, when they first drove to Newport, Rhode Island, used a fold-out paper map? Well, I see one or two hands, okay, some old-school folks. Most of us relied on something electronic like Waze, which not only tells you how to get to Newport, it gives you the best route based on the actual traffic, things you could never know based on your own senses. Now we're having that outsourced and cognified for us. And we can see where else that occurs in the marketplace. But the issue is: can we cognify war, or warfare? Should we let warfare be cognified to the point where we may not understand what the machines are thinking, or how to control or correct their behavior? Another way to look at this is: is it a violation of human rights to be killed by a machine that has made the decision to kill you? Is it an affront against human dignity? Do human rights require that if you're going to be killed in war, it must be another human making that decision rather than an algorithm? Perhaps. This is from the CNO just a few years ago. This is an unmanned vessel, one of a few like it out there being tested.
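The traffic-aware routing mentioned above boils down to a shortest-path search over a road graph whose edge weights are current travel times. Here is a minimal sketch using Dijkstra's algorithm; the place names and drive times are invented purely for illustration:

```python
import heapq

def fastest_route(graph, start, goal):
    """Dijkstra's algorithm: graph maps node -> {neighbor: minutes}."""
    queue = [(0, start, [start])]  # (elapsed minutes, node, path so far)
    seen = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, cost in graph.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (minutes + cost, neighbor, path + [neighbor]))
    return None  # goal unreachable

# Hypothetical road graph; weights are current drive times in minutes.
roads = {
    "Providence": {"Fall River": 25, "Bristol": 30},
    "Fall River": {"Newport": 35},
    "Bristol": {"Newport": 20},
    "Newport": {},
}
print(fastest_route(roads, "Providence", "Newport"))
# -> (50, ['Providence', 'Bristol', 'Newport'])
```

When the traffic estimates (the edge weights) change, the same search simply returns a different route, which is exactly the "cognified" service the app performs on your behalf.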
We're entering a new age of cognitive computing, of cognitive assistance and machine assistance, to help us make sense of all of the data, because we can't make sense of it ourselves. Just recently, the Navy concluded this Project Overmatch exercise, an integration of manned fleet capabilities with unmanned or minimally manned capabilities. You see the USS Fitzgerald here at the top, and here at the bottom you see an unmanned, or in this case maybe minimally manned, vessel, which at some point could be armed with various weapons systems. The Navy is trying to learn how to integrate those together in this new frontier of naval warfare. And here's what the CNO said just two weeks ago. In terms of manned-unmanned teaming, he's onto it; we've already been talking about manned-unmanned teaming. The man in the loop is going to be an important piece of this for a while, before we get to a point where, you know, it's hands-off, so to speak, with a high degree of autonomy. This is a frontier that our War College graduates are perhaps going to be fighting in. And not to neglect the other domains, on the land, on the surface: the Russians have recently been experimenting with some unmanned technology. This particular device has a cannon, it has a machine gun, and it has a flamethrower. So we have an unmanned flamethrower. What could go wrong in that type of scenario? But the Russians are working on integrating that, just like we're working on integrating unmanned systems at sea. Cognification involves this notion of algorithmic warfare, and Deputy Secretary of Defense Bob Work talked a few years ago about algorithmic warfare and Project Maven, where we're gaining actionable intelligence and insights at speed, and we need to make decisions at speed, and by "at speed" he means at machine speed. Human speed, human cognition, human decision-making is not fast enough on the modern battlefield.
We need to outsource that, cognify it with machines, so we can keep up with the enemy and have an advantage over the enemy in this modern, cognified warfare. Let's go back to the air in terms of cognifying combat, and I'm going to use some historical images to draw a contrast with a modern approach. So here we have Colonel Benjamin Davis, Tuskegee Airman of the Second World War. He has sharp vision, fast reflexes, killer instincts, and a pretty cool mustache. Here is Colonel Robin Olds from the Vietnam War. He has sharp vision, fast reflexes, killer instincts, and a way cooler mustache than just about anybody. These two flew their aircraft much differently than modern pilots do now. And this one, well, he's an actor, so he's not relevant. But just think about how these two dominated the battle space. They relied essentially on their own physical and cognitive capabilities to take command of the air. What would they think about a modern F-35 pilot? This F-35 test pilot is wearing a helmet that costs $300,000, much more expensive than what Colonel Olds and Colonel Davis wore back in the day. But this helmet completely transforms what the fighter pilot can do. The machine he is flying, the F-35, senses the environment far beyond the visual range of its human pilot, and it takes all of this data, sorts it, analyzes it, and displays it on his visor so he can make some overarching, big-picture decisions. And the way he fights, he'll probably never see the aircraft that he's trying to shoot down. That will happen beyond visual range, unlike what these two gentlemen on the left would have experienced. This F-35 test pilot says the helmet is like wearing a laptop computer on your head: you have to assume a different role as an aviator. What would these two gentlemen on the left think about that new role? Would they think he's cheating, because he doesn't do it the "right" way?
It might seem a kind of lame way of doing things, but actually it confers a tremendous advantage, even though it may not comport with the traditional identity of that community. So here's what that F-35 pilot says: you can look through the jet's eyeballs to see the world as the jet sees the world. You're not relying on human eyeballs nearly as much as on the human ability to interpret the information presented to him or her, and to dominate that battle space using a cybernetic system with an immense elaborateness of performance. This is the frontier of combat. Here's another frontier of combat, one that happened in December of 1944. The smaller aircraft on the right is a German V-1, a jet-powered cruise missile, basically, flying towards London. The larger aircraft, flown by a British pilot, pulls up right next to it. It's hard to do, because they're going about 400 knots. He puts his wingtip under the V-1's wingtip and flips it. It's called V-1 tipping. It tips the V-1 over, tumbles the V-1's gyro, and the missile goes spiraling harmlessly into the English Channel. That's how you fought a robot in December of 1944. But that robot didn't fight back. It didn't think. It didn't react. What happens in the modern era, if robots fight and think and react? Well, we had that last year, in a simulation with an AI flying a fighter aircraft. This was all done via sophisticated simulators, against a very experienced pilot. It was put on by DARPA and the Johns Hopkins Applied Physics Laboratory. And guess what? They did five engagements, and the AI pilot won handily every time. Five to nothing. Had this been combat, the AI, the artificial pilot, would have been an ace. It vanquished the experienced human fighter pilot with relative ease. If you're that pilot, you may not know whether the aircraft you're fighting is controlled by a human or an artificial intelligence. You might just realize that you're losing. AI has achieved, in this case, parity with human capability, if not superiority to it.
And this gets us to what Alan Turing talked about in 1950 with the imitation game, the Turing test: how do we assess whether a machine has become as intelligent as a human? He devised a test for that, and we see that test being passed now quite a bit. I love his quote at the bottom, and, by the way, he has now been honored on the currency in Britain, on the 50-pound note. It says: "This is only a foretaste of what is to come, and only the shadow of what is going to be." We're looking ahead, peering into that frontier that we're concerned about. Here's another example of this frontier. An essay assignment was given: write one paragraph that expresses skepticism about AI. And here's what was written: AI programs lack consciousness and self-awareness. They will never be able to have a sense of humor. They will never be able to appreciate art, or beauty, or love. They will never feel lonely, never have empathy for other people, for animals, for the environment. They'll never enjoy music or fall in love or cry at the drop of a hat. That's a nice paragraph. I'm feeling generous tonight; I would probably give that a B-plus in terms of how it answered the question. But it turns out I couldn't assign that grade, because a human didn't write it. This was an artificial intelligence program called GPT-3 that was given the question, and this is what it wrote, independently. Indistinguishable from a human author. To me, that passes the Turing test. These images here, they're all mimicries. They aren't real humans. They're fakes. They are sometimes used in commerce: if you want human models, you don't have to pay models who don't exist if you just have an AI create them. You can't tell that those aren't real people. We hear this term "sea change" a lot, and here's where it originated, as so many other things originated: from Shakespeare. He's talking about a sea change into something rich and strange.
Well, are machines becoming something rich and strange to us? And are we humans going to become something rich and strange, with neural implants or other capabilities that might be available in a generation or two? Your children or grandchildren, will they be able to have augmentations like this? Will these augmentations make them become something rich and strange? Will the augmentations make them better, or will they feel more like amputations as people seek some sort of advantage over their peers? And another question that is important to Naval War College graduates and faculty alike: will ethics keep pace? Ethics, the law, and cultural norms typically lag far behind technological change. That is a significant problem for the profession of arms. I asked whether ethics will keep pace, and I'll follow that up with three other questions, because that's what we do at the Naval War College: we never answer a question, we just ask more questions. Here are some famous questions. What can we know? What should we do? What may we hope? They aren't my questions; they were famously posed by the 18th-century philosopher Immanuel Kant. He thought these were important questions we can ask ourselves. So, for machines versus humans, what can we know and what should we do? Well, as machines learn, humans need to unlearn. We need to break away from the status quo and think about things differently. As machines operate, humans need to orchestrate, to do a higher level of orchestration. As machines imitate, humans need to create. And while machines are artificial, perhaps humans must be ethical. We must play to our individual strengths as humans. This is something Garry Kasparov learned after he was defeated by that IBM computer. It made him a better chess player. It made him more creative. He said machines have allowed us to focus more on what makes us human: our minds. He has a point. I always share this quote with the incoming students.
It's from Lincoln in 1862. He talks about how we must think anew and act anew. But this is a very hard thing to do: easy to say, difficult to do. Wilt Chamberlain, I told you he'd be back a couple of times. He was thinking anew and he was acting anew, but he gave up on it. He could have been, literally, a game-changing leader, but he decided not to continue. He no longer wanted to act anew, because he just wanted to fit in and be comfortable. Will we be able to do that at the frontiers of new knowledge? Will we value vanity more than victory? So how do we deal with the future? What may we hope? We can hope that machines won't out-think humans. But whether machines are thinking is not what matters; what matters is whether humans think. Whether they think and lead and act ethically. Whether they continue to value basic principles, or just machine expediencies. These are questions the Naval War College must wrestle with, because this story of the future is not going to be written by machines. It's going to be written by humans, and it's going to be written by Naval War College graduates, and we need them to write the best possible story and lead us into it. Thank you, folks, for your attention. I think we have just a minute or two for any questions, John. Do we have any questions, here or online? Yeah, I should have probed. The basketball thing, but the upward-tossing style? Upward tossing, exactly, yes. It's all about framing. I was just saying, we are at the 5:30 mark, and I want to honor everybody's time. Absolutely. One question I'd like to put out there: the tragedy of the drone strike in Afghanistan. Would you attribute that to human error, a chain of errors? Where do you think that's going to land?
Yes, this killing of, I think, 11 people or so by a remotely piloted aircraft, probably a Hellfire missile strike, in Afghanistan a couple of weeks ago. Who's to blame? Where's the liability? That's an ongoing question. In terms of the remotely piloted aircraft operator, the decision-maker who pressed that button, he or she is acting on the intelligence, the imagery, the sensors, the data that's being fed to the human decision-maker. How were they misinterpreting that data? Was the machine system, the cybernetic system, indicating something that wasn't there? Was it mistaking somebody carrying water for somebody carrying explosives? Ultimately, humans were in the loop, humans were in control. Those innocents died due to human error, but certainly the machine system aggravated it; maybe it had a confirmation-bias sort of effect. So there is untangling that needs to be done in terms of what was being sensed and what was then actually perceived by the humans as reality. And sometimes, when people appear on a screen as fuzzy, pixelated images, as ones and zeros, maybe there is a disconnect between what that remotely piloted aircraft operator is seeing on the screen and what is actually happening on the ground. Clearly the system broke down. Where does the liability go? Ultimately, it has to go to a human. Thank you. John? Nice to hear it. Okay, well, we have cookies and cake and drinks for about 100 people. Unfortunately, 79 of them are on Zoom, and I don't intend to deliver to any of you guys up there on Zoom. But I do encourage you all to go out into the lobby, take a little time to get to know each other, and ask any questions that you might have. We look forward to seeing you back here on the 12th of October. Thank you.