Lewis M. Duncan currently serves as Provost of the U.S. Naval War College. Additionally, he continues as a founding member and present co-chair of the U.S. National Laboratory of the International Space Station. Dr. Duncan is a research scientist in experimental space physics and radio physics. He holds a PhD from Rice University in space physics. Please join me in welcoming our Provost, Dr. Duncan. Well done. Good morning. Well, I hope you enjoyed Peter Singer's talk on LikeWar. This is nothing like that talk. I'll be talking about emerging technologies projected into the future. There are a number of talks where we take rational extensions of technology that we see today and discuss what we think the world is going to be like in the future. This is not that talk. This is actually looking decades ahead, obviously unable to predict what the breakthroughs will be that will actually define the future, but just taking what we know today and extrapolating it forward: what that world will look like and the kinds of ethical challenges that world will present to us. I'd like to begin by putting this in the context of time. A brief history of history. We live in a universe that's about 15 billion years old. We live in a solar system that's about four and a half billion years old. Earth is roughly that same age. Life first appeared on Earth about 3.2 billion years ago, as undifferentiated cells. The combination of cells working together began about 800 million years ago. Differentiated brain function appeared about 200 million years ago. The dinosaurs occupied the top of the food chain until about 65 million years ago. And humans, well, what we think of as our history is actually just a blink of the eye of the recent past, beginning about five million years ago, when our proto-humans, Homo erectus, began walking out of Africa. Then another physiological change happened only about 150,000 years ago.
We became Homo sapiens. We doubled our brain capacity. We developed our larynx, our ability for speech. Because of that, we became much more efficient at passing down information, not just genetically, but information that we could communicate to the following generations. And then probably the biggest sociological change came 13,000 years ago, just yesterday: the end of the last ice age. When we finished our final journey out of Africa, we began to settle into agrarian societies. We changed from being hunter-gatherers to agrarians over the course of about a thousand years. And it was the agrarian societies, with their surpluses of food and their ability to successfully reproduce in larger numbers, that allowed specializations: soldiers, politicians, scientists, teachers, doctors, businessmen, commerce. All of those specializations were enabled by the success of the agrarian society. And that's only in the last 12,000 to 13,000 years. Everything that we think of as human history is just the most recent blink of an eye in terms of actual history. And it's occurred within the geographic boundaries that define either avenues for commerce or barriers to cooperation and conquest, and all of the events that we tend to think about myopically, within our sense of time, as human history. So, some lessons about technology that we can draw from history. One is that technology is cumulative. There have only been a couple of examples where we've actually forgotten how to do something. It's catalytic: it doesn't use itself up, it just continues to build. It moves exponentially, and we'll talk about that. And it's also imperative, which is a challenge for the ethics of technology, because history shows, with only a couple of small exceptions, that once there is the scientific capability to do something and a competitive advantage in doing it, it happens with amoral certainty. There are a couple of examples in history where it was suppressed briefly.
In Japan, the introduction of firearms was suppressed for about 150 years, but the rest of the world moved on, and eventually so did Japan. The other is actually an experiment that's still going on right now. Almost 600 years ago, China had the world's greatest maritime trading fleet. It traded across the Indian Ocean and the Pacific, with some evidence of trade even into the Atlantic. But the emperor died, a child was installed as emperor, and the religious leaders did not like the world that was being discovered and explored outside their realm of influence. The entire Chinese maritime fleet was burned in the harbors, and all of the records were erased. We know about it, though, because the places they traded with mostly still have records of that time, and China is only now coming out of that self-imposed isolationism, 600 years later. And finally, technology is decisive. History shows that when two societies at very different levels of technological advancement come into contact, the less technologically advanced society is either assimilated or exterminated. There's no peaceful coexistence when there are great differences in technology. So I want to talk about timing. Because of our life experience, we have what's often referred to as a linear myopia: we see the world up close, and, as the human mind likes to do, we see things progressing in a linear fashion. But if you step back, that's not really the way it works; history shows that technology tends to be exponential in its growth. We've talked about Moore's law a couple of times already today, which applies to computers. Exponential acceleration of change can be intuitively misleading for us; it just isn't how we intuitively experience the world.
As an example of how to think of it, or how we experience it, I use a sort of social riddle. Suppose you had a goldfish bowl with one marble in it, and after one minute it doubled to two marbles, and after the next minute it doubled to four, and after an hour the goldfish bowl was full of marbles. When was it only half full? The 59th minute, after which it doubled one more time. In fact, if you were to stand back from the goldfish bowl and watch it, you would say for a very long time, nothing's happening, nothing's happening, nothing's happening, and then in the last couple of minutes it would suddenly seem to fill up with marbles. And that's the way technology sneaks up on us. Even though in this example the doubling rate was constant, the experience we have, because of the way we perceive the world, is that very little happens and then all of a sudden the goldfish bowl fills, and that's the way many technologies burst upon us. Because of our linear intuition, our myopic way of seeing the world, we've often believed that these technologies have some asymptotic limit, and throughout history there have been these bold projections that everything's been discovered. They wanted to close the patent office in the Lincoln administration because there was nothing left to invent. John von Neumann, the father of computer science, famously said in 1949 that he believed computing had reached its limits. Bill Gates supposedly thought you'd never need more than 640K of memory. Thomas J. Watson of IBM said he saw in the world a need for five, perhaps six computers. Because of our linear, intuitive perspective, we just don't understand that these technologies keep advancing. So there are four stages of technological development I like to talk about. The first is that a technology simply responds to need: necessity is the mother of invention. Then it adapts to other new applications. Then it can become disruptive.
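The marble riddle can be checked with a few lines of code. A minimal sketch, assuming a bowl that holds 2^60 marbles, a figure chosen (not stated in the talk) so the bowl fills in exactly 60 minutes:

```python
def minutes_until_half_full(capacity, start=1):
    """Double the marble count once per minute and return the first
    minute at which the bowl is at least half full."""
    marbles, minute = start, 0
    while marbles < capacity / 2:
        marbles *= 2
        minute += 1
    return minute

# A bowl holding 2**60 marbles is full at minute 60...
print(minutes_until_half_full(2**60))  # → 59: only half full a minute earlier
```

Standing at minute 55, the bowl is only about 3% full, which is why an exponential looks like nothing is happening until the very end.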
That is, it becomes something that we depend upon; the invention becomes the mother of our necessities. And in the final stage it's transformative: it fundamentally changes the way that society functions. We heard a good example of that in the LikeWar talk we just had earlier this morning. So, a couple of examples. We talked about the printing press in a way. Probably the greatest invention of the last millennium, because it allowed the dissemination of information to move beyond the control of the clergy and the highest levels of the aristocracy and distribute it, over time, to all of those who were literate: not the whole society, but at least those who were able to read. And once we started doing that, it rapidly followed an exponential leading to where we are today. And we'll continue on that exponential. Often the first manifestation of a new technology looks like the thing it's trying to replace, although not quite as good. The first published books were made to look like they were handwritten. They're beautiful. But that wasn't actually the advantage of the printing press. Eventually it moved to different forms that accelerated the dissemination of information to different audiences. And if we were to put a timeline against these different variations of the printed word, we would see that it follows the same exponential curve that almost all technologies do. And it's not just the capacity but the speed with which these changes occur, and the convergence of different technologies. Peter Singer talked this morning about the convergence of two different approaches to information dissemination and how they have shown up in social media. In this case, it's just different forms of communication taking advantage of wireless technologies. They're often poor business investments. You probably want to be a fast follower more than an inventor in these things.
I love the quote on Marconi's wireless technology: never have so many lost so many millions in so little time on an idea of so little inherent value. And even visionaries of the time got it wrong. H.G. Wells in 1925 said it looked like it had run its course: I see no future for radio waves. My generation thinks this one is funny: in 1939, when the television was introduced, which fundamentally changed the way that we see the world and experience information and entertainment, the New York Times said it's a fascinating device, but it will never catch on, because the average American family has no time for it. It became one of those inventions that became the mother of our necessity. I'm sure you can see parallels to the social media world today. A transformative technology like information transmission changes the way the world works, and I think Dr. Singer did a good job this morning talking about some of those examples. We are seeing today the acceleration of the convergence of different kinds of technologies, and I'm going to talk about that not just in the information world but in biology, computing, nanotechnology, and fabrication. And of course, just because it's a new idea doesn't mean it's a good idea, although I think this one actually looks pretty cool. But we have to ask: do we have a future? Do we have a future, and what might it look like? For a long time we've had discussions about how technology uncontrolled by ethics may rise up against us. We've always had to ask the question of how we survive our own creations, and that's what the ethics of emerging technology tries to get at. Even Albert Einstein talked about his concern over what the future of war would look like, because of the destructive powers we were creating, and whether there would be a world left after the next world war. So we have the weapons of mass destruction.
I spent ten years at Los Alamos National Laboratory working in the nuclear weapons area. We've managed to survive, so far, the creation and then some proliferation of nuclear weapons, but there are so many other weapons of mass destruction now that are available and much more accessible to so many. We talked a little bit about the weapons of mass disruption in cyberspace. And here at the War College we do a good job of talking about attacks on critical infrastructures, not just attacks on military systems. Then there are the advanced technologies I'm going to spend a little more time on: robotics and autonomous systems, which you'll hear about this afternoon, artificial intelligence, and synthetic biology. And then there's the fact that these technologies proliferate and are accessible to more than just big nation states. It doesn't take gaseous diffusion plants and uranium enrichment plants and missile programs to deliver some of these destructive technologies. And even if we didn't do it on purpose, we are human, and we've been known to make mistakes. So, the three concurrent revolutions I want to talk with you about this morning: nanotechnology, biotechnology, and information technology, and a little bit about how future conventional weapons may change as they become smaller, faster, lighter, more lethal, distributed, accessible, commercial, all of these things. I'm going to start off with materials and manufacturing. Materials seem like a fairly boring topic, except that they're how we've actually defined our societies: the Stone Age, the ages of copper and bronze and iron and steel, the silicon age, and so forth.
And we've changed these over time. The biggest one, I won't use the pointer, the one that launched it all about 12,000 years ago, is when we took clay and put it in fire and changed its properties, and those pots held the water and the seeds and the food, and gave us the clay tablets that we wrote on. Then we discovered metals. Copper was easily mined, and then we alloyed it with tin to develop bronze, and then we discovered how to smelt iron. Before that, you could only recover iron from meteors. Once we discovered how to extract iron from the earth, we started the Iron Age, alloying iron with carbon to make steel, and then came the silicon age, the age of polymers, and all that we have today. And if we look at that, I put the timeline there, you can see that it's on an exponential. The lifetime of a defining material for our society keeps getting shorter. And my prediction is that within the next 50 years the applications of these kinds of materials will fundamentally change how we live our lives, and most of the problems we think of today as the most crucial, whether it's the availability of fresh water or climate change or the energy cycle that we're in, all of those will be easily solved problems with the kinds of technologies that come out of this world of new materials. Because for the first time in human history, up until this era, we were restricted to the materials we find in nature: we'd heat them up, mix them together, cool them down, and see what properties they had. It's been discovered science and technology. For the first time today, we can build materials one atom at a time, one molecule at a time. And it's been estimated that there are at least 100-fold more materials possible: conductors, semiconductors, insulators, light-emitting, fluorescent, thinner than a spider's web, stronger than diamond.
A whole range of material properties that we've never had available to us until today, just waiting for some of you to figure out what to do with them. Because we can now build these materials at the molecular level, and manufacture at the nano-scale as well. 3D printing is something you've heard a lot about, and we've only scratched the surface of what it can do. I'm on the board for the space station. There are three 3D printers on the space station, because it's so much easier to send up a computer program than it is to send up a rocket with a replacement part. And we're also now into bioprinting, on the space station and other places. In the nano-scale fabrication I mentioned, we've been on this curve of actually being able to produce machines, working machines, intelligent machines, mobile machines, that are smaller than human blood cells, and they will very quickly become, if we get comfortable with the concept, the internal housekeepers for even our biological selves. And it's not just talk; they're actually here, and even DARPA is holding competitions for these kinds of tiny machines. So, while machines have nanobots, biological cells are in many ways just organic nanobots. They have a physical structure. They have a computer program that determines how they operate. They have a set of rules. The biocomputer runs our genetic code; the genome is the operating system. And we have an algorithm that's been given to us whose basic instruction set is: reproduce to survive. All of life's evolution has been guided by that same operating system. We're not nearly as complicated as we'd like to believe. This will not be on the test, but the human genome holds only 20,000 to 25,000 genes. The human brain, for example, is defined by just about 3,000 genes, and only a few dozen are active at any given time.
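Since the talk frames the genome as a computer program, it's worth sizing that program. A quick back-of-the-envelope sketch, assuming the commonly cited figure of roughly 3.2 billion bases (the talk says three to four billion) in a 4-letter alphabet:

```python
import math

BASES = 3.2e9                  # approximate length of the human genome, in letters
BITS_PER_BASE = math.log2(4)   # a 4-letter alphabet (A, C, G, T) needs 2 bits

# Raw information content of the genome, expressed as file size
total_megabytes = BASES * BITS_PER_BASE / 8 / 1e6
print(f"{total_megabytes:.0f} MB")  # → 800 MB: a long program, but a finite one
```

That is, the entire "operating system" of a human fits on a thumb drive, which is part of why reading and rewriting it has become tractable.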
We're learning a lot about this from the space station, too, as we put people in a different environment, whether it's radiation or microgravity, and see how they change genetically. We're able to understand and perhaps end diseases: we can genetically type pathogens and actually attack them at the genetic level. We can reprogram bacteria to do housekeeping chores, and actually health-keeping chores. We're at the point where we're beginning to be able to regrow failed organs, either through wholesale replacement or by actually growing them in situ, and we're learning how to control undifferentiated stem cells to regrow those organs. And we're learning the genetic code. Actually, we know the genetic code. Within a few years, each of you will be able to have your own personal genetic code decoded. Many of you may have already used the ancestry-type services, 23andMe and Ancestry.com, different ways of finding out about your genetics at a fairly crude level. But we'll be able to get down to the individual genetic codes for individuals very quickly. And it's a computer code. Now, it's a complicated one, about three to four billion letters long. It's a long computer code, but it's finite. And for those of us who write computer code, the first thing you do when you write a code is you debug it, because it doesn't work quite right the first time, although we always hope it does. We learn how to debug the code and fix it. And then we also learn how to copy it, which raises some serious ethical questions. And then another ethical question: who gets to decide what's a bug? Who gets to decide what are flaws? And what are the limits on that? Do you just fix people back to the human average? Do you want to tweak yourself a little better? Your children a little taller, a little smarter? We are actually learning the genetic code in a way that allows us to do this even now. And then the final question: why do we die?
We understand cellular aging, cellular senescence. And the question of why we grow old and die? We grow old and die because it was important, in an evolutionary sense, for natural selection that we be able to change as the environment around us changed beyond our control; we had to adapt. And so we developed a mechanism for biological natural selection: dying, and having generations follow after us. But today we control the environment ourselves. We have much less survival need to adapt to changing environments, because we change the world to meet our needs. So why do we grow old and die? We understand the aging process, and we're beginning to talk about how to stop it or reverse it. It has to do with things called telomeres. The best analogy is that they're like the hard caps on the ends of shoelaces, except they're hard caps on the ends of chromosomes. Every time the chromosome divides, the cap erodes away a little, and after 50 or 100 cell divisions the shoelace unravels, because the cap is gone. But we know how to use the enzyme telomerase to stop that erosion from happening, and even how to anneal the caps back to full strength. So in principle, we can stop cells from getting old. Now, you're made up of 100 trillion cells, but really only about 300 different kinds of cells, and we're studying how to stop the aging in each of those kinds. You could still die from two things, though: accident or violence. So you could live forever without dying biologically, without getting old, but you could still die from accident or violence. That's not good news for the military, perhaps, but if those were the only ways you could die today, your life expectancy would be, on average, about 650 years. But it would be a flat average. It wouldn't be that most people get very close to 650 and then die around that age. The odds of dying are the same tomorrow as they are a thousand years from now.
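That flat-average, memoryless behavior can be checked with a small simulation. A sketch, assuming an annual accident-and-violence death rate of about 1 in 650 (an assumption chosen to match the 650-year figure above):

```python
import math
import random

def sample_lifetime(rng, annual_hazard):
    """Sample one lifetime (in years) under a constant per-year risk of
    accidental or violent death, i.e. a geometric distribution."""
    return int(math.log(rng.random()) / math.log(1.0 - annual_hazard))

rng = random.Random(0)
hazard = 1 / 650   # assumed yearly chance of a fatal accident
lives = [sample_lifetime(rng, hazard) for _ in range(100_000)]

# Mean lifetime comes out near 650 years...
print(round(sum(lives) / len(lives)))
# ...and someone who has already survived 500 years still expects ~650 more:
survivors = [y - 500 for y in lives if y >= 500]
print(round(sum(survivors) / len(survivors)))
```

The second number is the memorylessness the talk describes: having already survived five centuries does not change your remaining expectancy at all.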
And it might change the way we lead our lives, although it would take some neurological changes as well. Would we set the speed limits at five miles per hour and put big bumpers on the cars? Or would we modify our lives some other way? Living a really long time is a procrastinator's nightmare: you can always do it later. And it has all sorts of enormous consequences for society today. I promise you, the social security system does not count on you working for a few years and then retiring for a few hundred years. You would probably never get to retire. Maybe at best you could have a sabbatical every seven years or so, like professors, although it's very hard even to take that much time away from a career path you're strongly on. You may not get to have children. Not your generation, but the ones following you, because the earth can only support a certain population. And when I give this talk to college-age kids, I ask a question, so I'll ask it of you. How many of you, and many of you may have already decided this question, believe that it's such a fundamental human value to have your own children, to have your own family, that even if biologically you did not have to, you would agree at some time in the future to die, to make room on the planet for them, for the right to have children yourselves? How many would do that? Looks like about half. And that's the easy question, actually. The hardball question is the way generations work. How many of you believe it's such a fundamental human value to have your own children that you'd give up your parents? There's always one hand that goes up. It's going to be a very complicated world if we start living for very long times, with fundamental changes to the way society works. And by the way, this is coming. Estimates are that maybe we'll be at this point in 40 or 50 years, but even if I'm off by a factor of two, it's coming in the next 100 years, almost for sure.
And maybe it will be your children deciding whether they want to have children or keep their parents. So be really good to your kids. And it's not just science fiction. I mean, there are actual journal articles being written about the biological science of immortality, or maybe of self-extinction, because the very same scientific capabilities that we can use to extend life can also be weaponized to shorten it. We're beginning to get to the point where we are actually able to develop pathogens that could eliminate biological life on Earth, which is where we reach the dawn of synthetic biology. We have learned over the last couple of decades how to read genetic code. There are now graduate students and postdocs who are extremely fluent at writing genetic code. And the most ambitious of them haven't restricted themselves to what nature has provided us. The genetic code, I mentioned, is very complicated, three to four billion letters long, but it's written in an alphabet of only four letters. Well, some of the young people today have said, why only four? They created some more letters, something that does not exist in evolutionary nature. And they're fluent at writing code and inventing different kinds of life forms, putting these codes together, just playing with genetic codes. And then there's the question of whether it can be weaponized. I've hosted, over a period of about 15 years, most recently last summer, a discussion of what happens when we weaponize synthetic biology, when we create pathogens for which there is no human immunity. Our immune system is evolutionarily based upon the diseases we grew up with. What happens when we weaponize biological pathogens so that they could kill everyone? I asked that question of biologists, and they said, oh yeah, that's actually easy. In fact, a number of years ago one of them said, I've trained postdocs from 24 countries who could do this.
But who would be so crazy as to actually release a pathogen that kills everyone? We did an experiment when I was dean of engineering at Dartmouth, where we had them take a relative of smallpox, something called monkeypox, make an almost trivial genetic change so that it's infectious for humans but we have no immunity to it, and then we did a tabletop exercise of what would happen if we released a vial on the Dartmouth Green. We brought in the local, state, and federal first responders, and in six months, in the U.S. alone, 200 million dead. The only people to survive were those who ran away. The first to die were the first responders; in fact, they became the transfer agents for the disease. So, who would be so crazy as to do that? It would kill everyone. The biologist, the geneticist, raised his hand again and said, it doesn't have to. Give me about another six months, and I'll start identifying cellular receptor sites for the pathogen that are tied to genetic identities. So I'll only kill the Jews, or the Asians, or the blondes; anything that has a genetic signature can, over time, be used to design a pathogen that targets that genetic community. And, by the way, we're now reaching the point where it could be tailored to individuals: you could actually design a lethal or harmful pathogen that's tailored just to your genome, to attack just you. Well, that would only be a problem if there were things like available databases that have your genome, like 23andMe and Ancestry.com and others. And then this last summer, when we had some of these creative young postdocs and grad students, one of them said, by the way, you're from the Navy. I can keep the U.S. Navy fleet in San Diego Harbor. They will never leave. And my warhead will be a mason jar of 100 mosquitoes. And I'm not talking about killing them.
He said, I'll just develop an aggressive neurovirus and give it about 12 days' latency, which means you're infectious for 12 days before you start feeling sick, and release it into the population, and those ships will never leave harbor. And another postdoc said, yes, but I can decode that pathogen's genetics and, within a couple of weeks, come up with an antidote and a treatment. And I can 3D print the vaccine with biological ink, so we're not waiting months and months; I can get it out to the fleet within a week or two. But then the first grad student said, yeah, well, then I'll just make another pathogen. So, I mean, it's just a horrifying world, this environment of synthetic biology. And frankly, I don't know what to do about it, and currently there are no controls on it. And then there's the information revolution that we're in, which we talked about more this morning. So, you might be immortal, but it may not be as a biological being at all; we may transition into hybrids that are more and more machine. And we're already partway there. I ask my students if they know someone with an artificial hip or an artificial knee, and I ask, are they fully human? What does it mean to be human when you start having biomechanical parts that replace some of your existing parts? What part of us makes us human? And if you answer, the mind, well, the mind is also transitioning into this world. You've seen Moore's law curves like this, where it's not that far into the future that we'll begin to have machines that rival human intelligence. We talk a lot today about artificial intelligence. And for when we reach the limits of traditional computing, quantum computing and quantum technologies are just around the corner as well. There's the quote that most of you have heard from Vladimir Putin, who said that AI is the future.
He said it's the future, not just for Russia but for all of mankind, and that the country that becomes the leader in this sphere will rule the world. Well, I think he's half right. It is the future, but I'm not sure why AI needs nations at all. These are machines that right now are learning from us. Increasingly, they're learning with us. But in the future they will learn without us. And what does it mean for us when that happens? We may be looking at the singularity, when these machines actually surpass us in their thinking ability, thinking like us, using neural networks based upon the human brain's architecture, but much faster. And there's even an alternative approach to intelligence, swarm intelligence, distributed intelligence, the ant colony kind of phenomenon, where a single ant has almost no intelligence, but collectively they share it. And there have been those, I'm not going to go into this slide, those who think about what warfare in 2045 will look like. This is so wrong. This is what warfare in maybe 2025 will look like, because the evolutionary pace of change is accelerating so much faster. We're coming up on tele-everything, maybe even tele-war, as Peter Singer talked about this morning, where the ability to transmit information becomes much more important than actual physical presence. But there's a problem for us, us biological humans. Evolution never designed us for that world. The kink in the bandwidth, the interface in all of this, and I think Peter even mentioned it, is us. We're not designed for that world. Even things that we think of as fundamental, our languages, written or spoken, are just information compression algorithms: taking thoughts, turning them into symbols that we can transmit, which then get decoded by a receiver into something like the original thoughts.
But machines? There was a Facebook example a couple of years ago that got a little press, where they had computers running an AI-level analysis of traffic on the Facebook network, and the researchers were watching what the computers were discovering, and then suddenly they couldn't understand it, because the computers had invented their own language and were no longer talking to them. So what did they do? They unplugged them. But over time, intelligent machines may not be bound by the same limits of language that we are. We're a lousy computer. I mean, the human brain runs at about 8 hertz. It's why we don't see the lights flickering on and off 60 times a second. An 8 hertz computer is not one you would put on your desktop. We also need chemical fuel. We require oxygen. We are injury prone. My gosh, we need 8 hours of maintenance downtime out of every 24, unless you're a student here. There are a few things that we do pretty well, but we're really not a very good computer. So where is evolution going to take us in a world increasingly defined by digital intelligences? Maybe we're obsolete. Maybe we're approaching biological obsolescence. I would like to think that rather than being replaced by the machines, the Robo sapiens that we talk about, we'll find a way of actually converging, that the boundaries between humans and machines will begin to disappear. Because that's our choice, in many ways, as the machines become technologically more advanced than us: we will either assimilate or be exterminated. And that's happening in the next few decades. We already see the seeds of how this convergence, this blurring of the human-machine boundary, might work, because we're developing neural interfaces and neurorobotics, primarily used with prosthetic devices today.
And just as we've seen with other prosthetics, they can be used to get you back to near-human capability, but they don't have to stop there. They can keep getting better. Take the International Space Station: one of the cool companies commercializing microgravity is developing artificial retinas, because in microgravity they can be manufactured with much more purity. The retina is actually placed on the back of the human eye, and it gives you better pixel resolution than the human eye. And, by the way, they haven't gone there yet, but it doesn't have to be restricted to visible light; you could extend it from infrared to ultraviolet. So you can improve on the human condition. These neurorobotic devices, the interfaces, already exist, and there's lots of both popular and scientific literature talking about the coming neurorobotic revolution. So before we get too freaked out by it, maybe this is just supposed to happen: that we've reached this point in technological advancement, that this is evolution, that we're stepping across a threshold, outside the constraints of our biological selves. And Arthur C. Clarke noted that even artificial intelligences that are not biological, as they develop a sentient sense of self, are entitled to the same rights that we are. So the question becomes: what does it mean to be human? There are lots of futurists who get a little carried away, but they talk about either a trans-human future, in which we merge with the machines, or a post-human future, in which we become entirely non-biological. It's even beginning to call into question our sense of what is reality, because we're creating the possibility of virtual worlds that are almost indistinguishable from the one we're in. We've talked about telepresence, enhanced realities, and purely virtual realities.
Right now, for those still awake, the human brain is processing about 10 to the 18th bits per second of sensory input, just instantaneously. If we were to add a population of a million people that you might interact with randomly, that's another 10 to the 9th bits per second we would need, and about another 10 to the 14th or 15th bits per second to put in environmental randomness, just how the world changes environmentally around you. So we would need about 10 to the 44th or 10 to the 45th bits per second of processing power to completely recreate reality as you're experiencing it right now. Now, the gaming world is nowhere close to that, but if you take Moore's Law and project it, we'll be there by 2040 or 2045. We'll be able to give you virtual realities that are indistinguishable from the one you're experiencing right now. And while many of us kind of like the world we're in, if you watch young people gaming, you see the emotion that goes into what are actually very, very poor virtual realities. Think of how addictive, how attractive virtual realities can become where you get to play God, where you have reset buttons and multiple variations of the life you might lead, and never actually enter the world that, at least we hope, we're interpreting as actual reality today. So these virtual realities are coming; they're not very far away, and how we deal with them ethically is another question. Now, all of this may seem far-fetched, so I like to go into the scientific literature and read some of the headlines from recently published papers, just to give you a sense. This will not be on the test either, but let me quickly go through them. In nanotechnology and new materials: 3D carbon components, wearable devices, quantum materials, 3D printers for increasingly small devices, superconductors, quantum light sources, geoengineering to actually address climate change, and power generation by self-assembling biofuel cells.
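As an aside, the Moore's Law projection mentioned above is just doubling arithmetic: count how many doublings separate today's capacity from the target, and multiply by the doubling time. A minimal sketch, using placeholder figures rather than the talk's estimates:

```python
import math

def years_to_reach(current_bps: float, target_bps: float,
                   doubling_time: float = 2.0) -> float:
    """Years until `current_bps` capacity grows to `target_bps`,
    assuming capacity doubles every `doubling_time` years
    (the classic Moore's Law assumption)."""
    doublings = math.log2(target_bps / current_bps)
    return doublings * doubling_time

# Illustrative only: a capability gap of a factor of 1,000
# (about 10 doublings) closes in roughly 20 years.
print(round(years_to_reach(1e15, 1e18)))  # → 20
```

The projected arrival date is extremely sensitive to the assumed starting capacity and doubling time, which is why such forecasts are usually quoted as a range like "2040 to 2045."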
Self-assembling biofuel cells mean that you can put devices inside your body and they don't need batteries; they power themselves. Talking lasers: that's actually bouncing lasers off of walls and having the walls talk to you, which kind of freaks me out. I am aware of listening devices that work that way, but so far the wall hasn't talked back. Implantable devices that produce energy, similar to the previous story. Strong artificial composites, even little robots that are essentially unsquashable, which I don't want. In biotech and synthetic biology: new causes of cell aging have been discovered as we go into the telomere world. We have successfully 3D printed a working human heart. We're learning about limb regeneration, with an article on discoveries based on fish studies. Stem cell therapies are in the works. We're beginning to put organs on a chip; in this case an eye on a chip, but lots of human organs can be reduced down to chip size for experimentation and modeling. Nano-vaccines that can be disseminated in a more continuous way. Health monitors with electronics far less kludgy than the watch many of you are wearing. Artificial catalysts, artificial proteins, artificial chromosomes. Biocircuits that give unprecedented capabilities to artificial cells that sense and respond to their environment. In robotics and AI: quantum particles, untethered soft robots, techniques using magnets to control soft robots, robotic jellyfish; you'll hear more about those sorts of things this afternoon. Micro-robots, the nanobots we talked about. Prosthetic arms that are becoming much more able to be manipulated by thought and to provide sensory input. Efficient artificial neurons using carbon nanotubes. Different ways AI is being applied to healthcare. And the next step in AI: mimicking a baby's brain.
We're teaching machines to think and learn the same way the human brain thinks and learns. In fact, a group of neuroscientists I interact with are all complaining that their top graduate students in neuroscience are being hired away by AI companies. One that I'm still curious about: teaching AI to overcome the human bias that we just don't trust them. Then there are the ethical challenges. A story that's been in the news for the last few months: a Chinese CRISPR scientist has genetically engineered, so far, two babies, with a third about to be born. A news article pointed out that he actually had plans to start a whole company so you could order your designer baby. Facebook is funding brain research that reads your mind; if you think they're drawing back, no, they're moving forward. Google is also leaning in, building systems for facial recognition and even gesture controls. Scientists are manipulating brain cells using your smartphone; I hope it's your smartphone that's manipulating your brain cells. Digital games that are actually getting better at teaching stress reduction as well as other kinds of learning outcomes. Virtual reality being used to help seniors. An article on reality, the greatest illusion of all. A question about where we transition from biological death to cyborg existence. AI-led inventions calling into question the future of patent law: the patent office is now struggling with what to do with patents submitted by computers, for things invented entirely by computers. Who owns that? An interesting article arguing that if we get AI ethics wrong, we could annihilate all technical progress; actually, I think it's more like annihilate us. I mentioned that I love reading the scientific literature and pulling some recent stories, those headlines about nanotechnology and micromanufacturing and biotechnology and computer technology and robotics and AI and so forth.
All of those scientific articles, the headlines I just pulled, were published in the last three days. This world is moving really fast. It's happening mostly beyond our comprehension, but the idea that this is coming is not debatable. It is coming, and it is coming fast. We're on that exponential curve, and the types of questions that come from this are profound. Things like: what about the ethics of genetic refinement, with a CRISPR kit you can run in your garage, ordered from Amazon? Who decides what goes into genetically designing improvements to yourself, or your children, or maybe other people's children? How do we handle cognitive enhancements, increased abilities, even the control of thoughts and actions, in a neuroscience world where, as Peter described, you can be hacked? In this case, not just manipulated, but actually biologically hacked. How do we deal with artificial intelligence? We talk about always wanting a human in the loop, but in many scenarios there's not time for a human to be in the loop. Remember, we're the 8-hertz kink in the pipeline. So how do we deal with military action where speed is decisive, and how do we build into it the ethical boundaries, the moral principles, that we would like machines to make those decisions by? Not calling time out and having them phone a friend, but actually making the decisions based upon our programming. What is the ethics of immortality? How do we value life when you never die? In religion, the big prize is that you live in heaven forever after you die; well, if you never die, you're never getting there. Think about institutions like marriage. When I said "till death do us part," wow, I didn't think about it that way. I don't know. But it's coming. Or will the sense of "we," how we interact as individual biological human units, even exist in the future?
And so there are the deeper questions. How many of our moral principles are we willing to compromise in order to preserve what we think of as our human morality? We talk about this all the time now: how many of our civic freedoms are we willing to surrender in order to preserve our civic freedom? And how much of our humanity, or what we think of as our humanity, are we willing to sacrifice in the name of preserving humanity? Those are deep questions, and the answer is not in the back of the book. In 1929, J.D. Bernal wrote something I enjoyed, a kind of scientific theology of its time. He said that science is approaching the point where science and technology will be able to divorce us from the limits of evolution, of responding to our environment. When that happens, we will at first take actions merely to improve upon ourselves, but eventually science and technology will allow us to abandon these biological forms bequeathed to us by the chaotic evolution of natural selection, and instead choose designs that are more efficient and of our own choosing. And when that moment comes, it will be more important than the moment when biological life first formed on Earth. That moment, very likely, is going to happen in your lifetime. And you are the navigators. You have to decide what moral principles, what ethics, guide us as we move into this world, and how we work as a world, understanding that technological advancement happens with amoral certainty when there is scientific capability and competitive advantage. We are very fast approaching, in the next 50 years, or if I'm wrong, the next 100, the point at which we are no longer bound by the limits of our own biology, and deep questions about what it means to be human need to be asked. And they need to be asked by all of you.
So this doesn't have anything to do with your next assignment after you're detailed out of here. It does have to do with what we think we do here at the War College, which is to ask you to think in different ways, to step back and think deeply about the future of humanity. We're not so concerned with the answers to the types of questions we ask. We are very concerned about the questions you ask as you help guide us through this future, which is very much a destiny of our own design. So thank you very much. There's not a lot of time for questions, but I would ask that maybe, if you have questions, you hold them and we'll reconvene here in about 500 years, if that's okay. Class reunion. Thank you very much. We've got about five minutes, I think, though. Any questions? Shock? Yes. Sir, Lieutenant Colonel Finnell, Marine. My question is: I can see how this is happening. It seems like, maybe in silos, you have STEM developing, businesses commercializing, and government maybe regulating. I'm one that believes in body, mind, and spirit, right? So where do you see theology fitting in, not just reacting to the development, but actually participating in the development? That's a very good question. Theology is, for many of us, the source of our moral principles, the values we assign to what it means to be human. And I think it is those theologically based principles that need to guide us into this world. I don't believe we can simply say, "I don't like that world, so I'm going to pretend it isn't coming." It is coming. And as history has shown, religions have a way of adapting to the world as it changes, sometimes grudgingly, and sometimes leading the way. I think the opportunity here is to use the fundamental values that are part of our different religious persuasions, the theological and spiritual approach, to talk about what it means to be human and how we try to preserve that for the future.
A lot of what we're going to have to do in the future will not be the way we've always done it. For the moral foundations, the principles we want to hold on to, we have to design a way of getting there, and decide what that would look like. AI, for example: we've got to teach AI the same sense of moral responsibility that I think most of us share. And if it's going to be done, it can't be done in competition; it needs to be done globally. The concept of the nation-state begins to lose value. That's a very good question, though. Yes, sir. Mike Erwin, US Navy. The previous talk discussed basically the effects of networking, and we've heard a lot here about changes to humans in general. I'm just curious whether there's any discussion in the scientific literature about the concept of networking humans together, the idea that our brains could somehow be tied into the internet, the idea of shared consciousness. Yeah, that's a good point. We've been looking at the internet in two different ways. One is that it is a system, like the virtual reality I described, that's an anarchist dream: you can have your own universe. So the internet in some ways gives us capabilities that allow us to be completely separate, that distance us. One humanity, but many individuals. But it also has a way of uniting us, of allowing us to develop new values, that we're many individuals, but one species. And I can tell you, from the space station you don't see the national boundaries. We are one human species, and this is pushing us to embrace that concept. As for human intelligence, we haven't really mastered it yet, but we are a not-very-good, though still improving, distributed-intelligence entity. Things like the internet allow us to do that. Things like the classes we have here do too: you will learn more from your classmates as you discuss, and so we are sharing our wisdom with you, and yours with us as well. So I think there are possibilities there.
I'm not sure how to overcome the national differences that separate us. Elon Musk has predicted, for example, that the quest for AI superiority will be the cause of World War III, and I hope he's wrong. Maybe one last question, very quickly; I'll try to talk less. Yes, yes sir. Hi, Lieutenant Marquez, US Navy. You mentioned Elon Musk and Tesla. Some of their cars are exploding and things like that; so are the rockets, by the way. People are getting their heads chopped off because the automation, you know, doesn't work correctly. I guess my question has to do with a contrarian idea: that perhaps we're in a technology bubble, that our eyes are hungrier than what our stomach can really digest. Is anyone talking about that? What are your thoughts? Well, not in that way. I'm a pathological optimist, so I don't think of it as progress nearing its end. Elon Musk, for example, I know more through the SpaceX rockets and the space station, but I remember when the North Koreans were flying missiles and they kept falling into the ocean. We took some satisfaction that they were falling into the ocean, but every one fell into the ocean farther away. I think failures are a part of learning, not a sign that we're about to hit some asymptote that history shows isn't really there. As long as we approach technology as an experiment, where we're allowed to have, even celebrate, learning failures, I don't think it's a problem. As an example of how creative we can be, though: the rocket engine on the Falcon rockets is 3D printed with titanium ink. And Elon explained to us that the world's greatest master machinist could not build their rocket engine starting with a block of titanium, because of the internal structures; it can only be built from the inside out, not from the outside in.
And that's just one example of whole new ways of doing things we hadn't really thought about before. Yes, things will explode and fail, but as long as they're learning failures, they just continue us on this exponential. So I don't think of it as a bubble where we're going to hit some wall and come crashing back, unless, as humans, we have a sort of revolution against knowledge. And there's a little bit of that showing right now. I was a liberal arts college president for 10 years, and I was struck by how many young people graduate from college today clueless about how the world technologically works. I don't think technology is the be-all and end-all of everything, but it's one of the dimensions, along with history and politics and religion and economics and culture, that define how the world works. It used to be called the liberal arts and sciences; now it's just the liberal arts, and the sciences is a class you take, physics for poets, and that doesn't really tell you how the world works. In this technological age, I think that's a travesty, because we're graduating generations that are, I think, prisoners to those who actually make it work and decide how it's designed. So it's incumbent upon us to stay technologically aware. I'm not saying you have to read all these papers, just be aware that the world is on this exponential growth curve, and that the moral challenges should not be solved by the engineers alone, although they're very good and they have a code of ethics. We need philosophers and theologians and economists and historians and military officers, the users, the practitioners, to collaboratively come up with how we're going to guide ourselves into the future. Not just for our sake, but for the sake of all of our children and grandchildren, who I hope keep their grandparents around. So thank you all very much.