Good afternoon. I realize it's Friday afternoon, so you all are quite eager for this to be a brief presentation. And I wanted to begin with an apology: typically, the concluding talk at the last symposium of a series, at the end of a year in which ethics has been infused across the curriculum, is one that wraps it all up, ties up the loose ends, connects the dots, brings it all into context so that you can digest a comprehensible whole of what's come of this year-long exercise and this symposium in particular. And it's been just fascinating to hear the talks that we've heard today. But this is not that talk. This is the other talk. It's one in which we just turn it all upside down. It's not even thinking outside the box; it's questioning whether we're considering the right boxes at all. It's not about hoping that you will get the right answers, but challenging you to think about what are the right questions. And there's nothing particularly practical or operational about the talk I give today, because I'm looking over the horizon at a very different type of world. I'm a technologist, and so we're going to set the table by talking about emerging technologies. But we're really going to pose some deep ethical questions that are quite different from what you get with the myopia of our day-to-day challenges. It's in tune with that adage that life is what happens to you while you're planning to do something else. And in this case, the coming conflict is what happens to you while you're planning to do another type of war. To call it a world war is really a misnomer. And in order to address it, we've got to go through a quick overview of some of these emerging technologies. It doesn't really get interesting until toward the end, but it's a climb that's worth it once we get to that vantage point. 
So I'm going to talk a little bit about what may be happening in the next 40 or 50 years in terms of these emerging technologies. And as I said, it's a misnomer to think of it as World War III. But it is a very different kind of coming competition that we will find ourselves in. So to put it in context, a brief history of history. The universe we're in is about 14 billion years old. The solar system we're in is about 4 and a half billion years old. Life first appeared on Earth about 3.5 billion years ago. Multicellular life appeared about 800 million years ago. Differentiated brain function appeared within the animal world about 200 million years ago. Dinosaurs ruled the Earth till about 65 million years ago. And our first upright ancestors stood up and started the beginning of our migration out of Africa about four, perhaps five million years ago, on this time scale not very long ago. Our direct ancestors, Homo sapiens sapiens, are only about 125, maybe 150,000 years old, when our brain size doubled and we developed the voice box, so we began to have meaningful speech. And with language, with speech, we were able to begin to actually store up and transfer to succeeding generations much more of our intellectual DNA, not our biological DNA, but our learned lessons. And then the biggest recent event was just 13,000 years ago, the end of the last ice age, when our true colonization really took root around the world. I think I've got a slide on this, but basically everything that we think of as human history, our civilization, has transpired just in these last 13,000 years. Barely a blink of the eye on the scale of even life on Earth. It took about 1,000 years after the end of the last ice age, but we transitioned to an agrarian society, moving away from hunter-gatherers. 
And it was the surpluses of the agrarian society that allowed us to have specialties, which included businessmen, politicians, doctors, educators, soldiers, all of the things that really led to what we think of as our civilization today. And everything that passes for human history has played out in these last 13,000 years or so against a fairly restrictive landscape of boundaries and barriers to trade, commerce, competition, and conquest. There was another event, if we're talking ethics, that's worth noting. In the West, the Enlightenment, only a few hundred years ago: the development of the age of reason, the sense that nature existed and could be learned and was exploitable, controllable. As we began to move away from the idea that there were impetuous gods who caused the sun to rise and fall because they rode flaming chariots across the sky, we began to realize that nature had its own set of rules, that we could learn them, and that we could use them to do things. And it's in the use of those rules, the technologies that came out of that, where ethics really hits the crux of what I'm trying to talk about today: the responsible, ethical use of our knowledge about how nature works. And all of that has really taken root just in the last couple of hundred years. It wasn't an immediate success. The industrial revolution was the aftermath of the Enlightenment and the age of reason, fueled in part by the printing press. And the printing press is a great example I'll mention in just a moment, but it disseminated knowledge from the ruling elite to the next class, not to everyone, but to those who could read, the literate class. But the peasant class, those who worked in the factories, were still viewed in this, I love this pseudo-speciation idea, they were still viewed almost like interchangeable cogs in a machine. So the technology that came out of this period was very dehumanizing. 
And it caused a backlash in education and in our sense of how we apply knowledge. Oops. It shows up in literature: there were a number of literary masterpieces that talked about our own technology rising up to destroy us. And so it was the cautionary tale of the ethical use of these technologies that we now were beginning to have available to us. So if I look at the lessons of history with regard to technology, there's a few things that, whether we like it or not, I think are absolutely guaranteed. I'll get that, Jamie, it's okay. Actually, hand me the Diet Coke down there. I'm an addict. Thanks. Technology accumulates; it's catalytic. It doesn't really destroy itself in building. It actually converges. And it's an imperative: whatever uncertainty there may be about whether we want something to happen or not, if there's a competitive advantage and the scientific capability, history shows it does happen. There's only been a couple of examples in history in which technology has even been momentarily suppressed within different cultures. The Japanese suppressed the introduction of firearms for about 150 years because it found itself in conflict with their samurai culture. Now, despite what the movies may show, a peasant with a gun was more than a match for a samurai, which is why it was so resisted. But it didn't stop the introduction of firearms around the world and their evolution, and it didn't suppress it for very long. The other case, which is much more interesting because its fingerprints are still in the world today, is that a little less than 600 years ago, the Chinese had the world's greatest maritime trading fleet. They traded throughout the Indian and Pacific Oceans. There's some evidence they may have been into the Atlantic. 
But after the emperor died and a child emperor came into power, the religious prefects really didn't like the world that was being encountered outside their range of influence. And so in the 1420s, the entire fleet was burned in harbor and all of their records were erased. We know about them because every place that they traded with still had records of their presence. But the Chinese are just beginning to come out of that very xenophobic and isolationist view. And it's important that we understand that, because it actually has a lot to say about their world view even today as they're moving beyond that attempt to suppress a technology. Final thing, though. Well, final two things. One, technology is decisive: when two civilizations that are technologically very different come into contact, the less technologically proficient civilization is either assimilated or exterminated. There have really been no historical examples of just wonderful, peaceful coexistence. And then it's exponential, which is something I'm gonna talk about a bit. It's sometimes hard to understand the exponential of technology because we see the world up so close. We have what's called a linear intuition, a myopia: within our own personal life experience, it is human nature to connect the dots in a straight line. And yet, when you step back, that's not the way that you see a lot of technologies progressing. And I like to use the example of a fishbowl. There's a party puzzle. If you had a fishbowl in which there was one marble, and a minute later there were two, and a minute after that four, and a minute after that eight, doubling every minute, then within an hour the fishbowl was filled. When would the fishbowl be only half full? At the 59th minute, when it had one more doubling left. And in fact, if you were standing back and watching the fishbowl during this hour, you would say for most of the time, nothing's happening. 
I don't actually see anything happening. And then in the last few minutes, it would look like an explosion of things happening, even though it was on an exponential the entire time. Well, technology in lots of areas is on that exponential. Sometimes we can see the explosion because it's happened within our life experience. In other cases, it's still bubbling along, and we know the exponential and where it's headed, even though we haven't quite reached the point where we can see that fishbowl really filling up quickly yet. All through history, people have said, well, it just can't continue. For several thousand years, we've looked at technology and said, it's been wonderful up until now, but we've got it all. There was a proposal in Congress during the Lincoln administration to close the patent office; everything had been invented. It's just like Bill Gates saying we'll never need more than about 640K of memory in a computer, or Thomas J. Watson, the head of IBM, saying the world needs five, maybe six computers. I mean, we're always coming up short when we look at the effects of these technologies. There are four stages of technology that generally are talked about. One is simply responsive: that is the old adage that necessity is the mother of invention. Then we often find other applications of those inventions; they become adaptive. Then they become disruptive, where we think of really whole new ways that we can do things. That is, in those cases, it's the invention that becomes the mother of our necessity. And then a very few become truly transformative to our society and the way that we lead our lives. So a few examples. I'm a radio physicist, so you've got to put up with my examples coming mostly from that. Rutherford B. Hayes, after his first long-distance phone call with Alexander Graham Bell: a fascinating little device, but who would ever want to use one? The printing press, a perfect example. 
Usually when a disruptive technology is first introduced, it's trying to imitate something that was done better the old way. If you think of the very first printed books, they were made to look like they were handwritten: wonderful, beautiful script, colored and everything. It's only when you got away from that that you realized you didn't have to do that. And today the printing press, except for those of us who are just tactile, very addicted, is being replaced by digital forms of printing. But it caused the dissemination of knowledge to ever broader portions of the population. The telephone, another great example. Radio. Often these technologies come out and they're bad business investments. The quote from the San Francisco Observer about the Marconi Company's wireless telegraph: never have so many lost so much money so quickly on an idea of so inherently little value. Even visionaries: H.G. Wells in 1925 said radio has really run its course, the wireless world is over, it's going to completely disappear soon. This one, if you've had kids in the last few decades, I like. At the 1939 New York World's Fair, when the television was introduced, the review of it: nice little invention, but it'll never catch on, because the average American family simply has no time for it. It became a disruptive technology; it changed the way society works. And we have a number of those, not quite transformative but certainly disruptive. Today we have a transformative technology upon us: the internet. It is fundamentally changing the way that we communicate, the way that we stay connected. We can begin to say, with some trepidation, that we're beginning to see the end of nation-state boundaries, that we're beginning to work across those boundaries in ways that really have no geographic restrictions. 
A young Chinese teenager with an iPhone has more in common with a young American teenager than either of them has with their parents. They're growing up in a very different world, and it's changing the connectivity. They may not be great buddy friends; they may not, as we heard earlier, smell and exchange pheromones, although that might be coming. But they are a different class. In fact, neuroscience has shown their brains are even wired differently because of this access to a different kind of technology. Not every idea is a great one. This comes from, I'm sure this was a volunteer from one of the military services, but. So before we talk about some of the possibilities of technology and the ethical challenges that come from that, it's important to discuss some of the downside, the dark side. Do we have a future? Many in this room have grown up in a time in which we were faced with the possibility of nuclear extermination. We've been on the nuclear threshold for so long that the rising generation, many of you all, have nuclear amnesia. You don't remember that we were faced with what we thought was a credible threat to the extinction of our species. And when I show my classes the old 1950s and early 60s civil defense films, the duck-and-cover types of films, they think it's a Saturday Night Live skit. I mean, it is that far removed from their sense of what the world could have been like. But the technologies that give us some amazing possibilities also give us the technological capability of extinguishing ourselves. And nuclear is just the first, but not the only, example. There's a whole range of weapons of mass destruction. The one which disturbs me the most right now is biological. It's possible today to genetically engineer biotoxins for which there is no inbred immunity within the human population. 
And if the prospect of killing everyone bothers you, you can also genetically engineer those to have cellular receptor sites tied to specific DNA traits, genetic traits. So the idea of ethnic cleansing could take on a whole new meaning using genetically engineered biotoxins. And it doesn't take large uranium enrichment plants or other things; basement laboratories have this capability. Not quite yet, but not very far away. We've talked a lot recently about our vulnerability in cyberspace and the weapons of mass disruption that could undo at least society as we know it. There's lots of interest in new forms of advanced technology for war fighting: robotics, autonomous systems, space operations, electronic warfare, and so forth. And then there's the empowerment that technology gives to either small rogue states like North Korea or even to non-state actors, of which there's a rising number, that may have access to very lethal technologies. And even if we didn't do it on purpose, we've been known to have some fairly serious little accidents. Having been at Los Alamos for 10 years, I assure you we do a lot to make sure that the bombs don't detonate when you drop them. But anything that humans have designed will ultimately find a way to fail. It's just part of our DNA. So the conclusion I come to from this is simply that the world is no longer safe for conflict. We can't look at military solutions at the global scale for resolving this, because technologies are rising that will soon enable even small actors to have devastating consequences for large numbers of the Earth's population. That puts a great deal of pressure on the political process and all of the sanctions and so forth. But the ethics for the military, I think, become particularly acute when we realize that we're dealing with technologies that can have very leveraged consequences, and it's no longer the normal dynamic that we've gotten used to. 
So I would argue that because of the connectivity of the world, the whole sense of war fighting within the next 30 or 40 years will fundamentally change. And in terms of how it will change, we're not just looking at how we fight and where we fight, but at fundamental questions of who we're fighting and even, as I'll get to in just a moment, why we fight, because the actual adversary may not be whom we think. I'm gonna talk about three technologies just briefly to get us to a vantage point that I think is worth the climb: nanotechnology, which is both micro-manufacturing and new kinds of materials; the biotechnologies; and computing, robotics, and information technologies. Excuse me. And materials. Materials seem like a pretty dull topic for trying to look at the future. But if you think about it, we've used materials to classify our societies for all of human history, from the stone age on up. We've talked about the importance of clay; the pots and the tablets for writing and so forth were the substrate engine for the agrarian societies that were successful. The extraction of metals, beginning with copper and then learning how to alloy that with tin to make bronze; the iron age, the steel age, the silicon age, and so forth; plastics, composites. What's interesting, and you can see the exponential in this, is that the dominant material stays dominant for a shorter and shorter period of time. Today, for the first time in human history, we're no longer bound to using materials that we find in nature. Everything that we've used thus far is stuff we find in nature; sometimes we heat it up, mix it together, cool it down and see what we've got. But it's stuff that is naturally provided. For the first time in human history, today we can build materials one molecule at a time. 
We know how to use very sophisticated techniques, and it's been estimated that there are more than a hundred-fold more materials available to us today than we've ever had before. With properties that are insulators, conductors, semiconductors, luminescent, stronger than diamond, thinner than spider webs, a whole range of possibilities. Just waiting for bright minds to come along, figure out what they can do and then what to do with it, the applied side of that. Nanotechnology gets a lot of press. We're seeing it beginning to show up in materials. It will fundamentally change electronics. The ability to manufacture at smaller and smaller scales is fascinating. Here's powers of 10, an old slide, starting with a human hand and then magnifying down and down. The one that I think is fascinating, whoops, where's the pointer on this? Well, that slide, the white blood cell: in the next one, we actually build working devices that fit easily inside of that. We're now able to build little machines, some of them movable, robotic, even slightly programmable, although control is a little bit of an issue. But we can build machines that are smaller than the machines that nature evolved to take care of our bodies. The housekeepers of the future human body may well be these little nanobots, these small machines that we know how to manufacture today. And we can build them smaller by a factor of two about every 12 months. That's the exponential that they seem to be on. And at some point we reach the science fiction of housekeepers, shrunk down, that we actually manufacture. We're very close to that point: in 2011, we developed the first pills that you could take that actually had little nanobots that would do some functions inside the human body. They actually exist. We have manufactured some and used them inside of humans. 
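That factor-of-two shrinkage every 12 months is the same fishbowl exponential from a moment ago, and it's easy to put numbers on. A minimal sketch, assuming the 12-month halving rate quoted here and a 1,000-nanometer starting size, which is just an illustrative round number at roughly the scale of a small bacterium:

```python
# Steady miniaturization: feature size halves every `halving_years`.
# The 12-month halving rate is the figure from the talk; the starting
# size is an illustrative round number, not a measured one.
def size_after(years, start_nm=1000.0, halving_years=1.0):
    """Feature size in nanometers after `years` of steady halving."""
    return start_nm / 2 ** (years / halving_years)

for years in (0, 5, 10):
    print(years, size_after(years))  # 1000.0, then 31.25, then ~0.98 nm
```

Ten doublings, a single decade, takes you from the scale of a bacterium down to the scale of a single molecule, which is why the long "nothing is happening" phase of the fishbowl ends so abruptly.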
Of course, not all of us are comfortable with the idea of little nanobots running around inside of us, so it's not particularly attractive yet, but it's coming. Also, just think about biotechnology. Biotechnology is just a biological form of nanobots. They're just little robots made of biological molecules. They run on a computer code called our genetic code, and they perform functions; the operating system is the genome. And they've been given basically one operating task, which is to reproduce. And that's who we are. Everything that's built up in our biology has come out of that sort of simple operating system and task. So if you look at the computer code, we're not really as complicated as we would hope. I mean, there are 25,000, plus or minus a few thousand, genes in the genome. The human brain, only about 3,000 genes seem to make it up. The human genome is a complicated computer program. It's about three to four billion letters long, really big. But finite, and we've decoded it. And within another decade or so, every one of us will have our own genome decoded. I mean, it's a many, many billion letter computer code, but it's written in an alphabet of only four letters that combine together to code for about twenty amino acids that string together to form families of proteins. It's a large but finite engineering problem that is almost solved. We've learned how to read the genetic code. We're now beginning to learn how to write the genetic code. The first thing a computer programmer does, as most of you are aware, when you write code is debug it. You take the flaws out. We're learning how to debug the human genetic code. Of course, there's a little bit of a debate about who gets to decide what counts as a bug. What's fascinating is we're moving beyond even just writing this code to something called synthetic biology. We're also creating our own letters, so we're not restricted even to the four that nature gave us. 
We've begun to create synthetic biologies with our own alphabets. And the possibilities are endless once you start doing that. As I said, we're not very complicated. My kids get a kick out of the fact that we're almost half worm. We're a Rube Goldberg machine, built out of stuff that has evolved over the last several hundred million years, and for humans really just the last few million years. And the differences among us as human beings are much smaller than we often, within our sense of the world, attribute. It's possible, we've talked about genetic codes, to be able to end disease. We can genetically code bacteria and viruses. We can actually write our own code. It's not inconceivable that all of the work that's going into disease prevention and treatment today will be a solved problem within another 30 or 40 years. We're already at work within the military on the technology it takes to regrow injured or missing limbs. I mean, we can actually do that in some cases today. And we're now talking about not just debugging the computer code, but improving on it. All sorts of ethical questions come about when you start being able to go into your genetic code, or your children's genetic codes, and check boxes in terms of skill sets or IQ or other things that you would like to improve. Our understanding of all this, it's been pointed out, is limited only by the finiteness of our lifespan. It's beginning to take a significant part of our lives to learn all this, and so we don't have much time left to do something with it. So the questions come up: why do we die? Why do we get old and die? That is a solution that evolution came up with to a problem that we don't have anymore. We needed cellular senescence so that there could be succeeding generations that could mutate and adapt to changing environmental conditions, in order to ensure the diversity that would ensure the survival of our species. 
But today we shape the environment to meet our needs, not the other way around. And there's less, and maybe no, environmental reason at the moment for us to have this constant cycle of cellular senescence and renewal. So could you live forever? It turns out the answer is yes. We know why cells age and die. It's actually a clever but not very complicated technique. There's something called telomeres: little protective caps on the ends of the chromosomes. The analogy often used is that they're like the hard caps on the ends of shoelaces. And when the cell divides, the hard cap erodes away a little bit. After 50 to 100 cell divisions, the cap is gone, the shoelace unravels, and that cell line dies. It's a fuse, a time fuse, built into us by nature to ensure that we go through this aging process. But we've learned how to stop that from happening. We know how to stop the telomere from eroding away, and even how to restore it back to full length. We can build cells, we have built cells, that are essentially immortal. In fact, nature did that before us. It's called cancer. The average person is made up of about 100 trillion cells, but only about 300 different kinds of cells. And we're learning how to stop each of those kinds of cells from getting old. It's completely conceivable that our children, even the young people in college today, may never experience biological death, that this will be a solved problem within the next 50 years. Now, you could still die from violence or accident. And if those were the only ways you could die today, your life expectancy would be about 650 years. I've asked students, does that mean that you would drive cars with really big bumpers and really low speed limits? They said, of course not. That's not built into our DNA at the moment. It would, of course, also be a procrastinator's nightmare. Or dream, yeah. 
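The 650-year figure follows from a one-line actuarial model: if aging is off the table, lifespan is governed only by a roughly constant annual chance of dying from accident or violence, and the expected wait for a constant-hazard event is one over the hazard. A sketch, with the caveat that the hazard rate below is back-solved from the 650-year figure rather than taken from actuarial tables:

```python
# With biological aging solved, expected lifespan is set by the constant
# annual probability of dying from accident or violence: for a
# geometric process with per-year hazard p, the expected wait is 1/p.
def expected_lifespan(annual_hazard):
    """Expected years of life under a constant annual hazard of death."""
    return 1.0 / annual_hazard

# A 1-in-650 hazard per year (about 0.15%) reproduces the talk's figure;
# that rate is inferred from the figure, not a cited statistic.
print(expected_lifespan(1 / 650))
```

Halve the hazard with big bumpers and low speed limits, and the expectancy doubles, which is exactly the point of the question to the students.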
So we're learning what it means biologically to have cells, to have whole organisms, that are essentially immortal. But what's the ethics of living forever? I mean, all sorts of societal changes come. The earth can't keep supporting more and more generations of people. I ask my students, how many of you believe that it is such a fundamental human value to have a family that you would willingly give up your life at some point in the future, even though you biologically did not have to, in order to have your own family? About half the students will say that. Then I say, that was the softball question. Given the way generations work, how many of you believe it's such a fundamental right to have your own family that you'd give up your parents? And there's always one who raises their hand. And if this timetable is wrong, it's not wrong by a lot. It might not be your children; it might be your grandchildren. But this is technologically a problem that we're well along the way to solving. It's showing up in the scientific literature, and even in the popular literature now, openly talking about immortality research. And then we move on to computing. This is a curve that shows something called Moore's Law. This year is the 50th anniversary of Moore's Law. Gordon Moore worked at the time at Fairchild Semiconductor and later co-founded Intel, the computer chip company. He was asked 50 years ago to make a prediction about computer chips for the next decade. And he made an observation: to put it simplistically, the price fell by a factor of two and the computing capability went up by a factor of two in computer chips about every 18 months. That was the exponential curve, and it shows up here on a semi-log plot. If you go back even further, there was a little break in the 40s when we went from vacuum tubes to transistors. 
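The compounding behind that curve is worth pausing on: at one doubling every 18 months, the total improvement after t years is 2^(t/1.5). A quick sketch, assuming the 18-month doubling period as stated, with the 50-year horizon being just the anniversary mentioned here:

```python
# Moore's Law compounding: one doubling every `period` years.
def moore_factor(years, period=1.5):
    """Total improvement factor after `years` of steady doubling."""
    return 2 ** (years / period)

# Fifty years at an 18-month doubling works out to roughly a
# ten-billion-fold improvement.
print(f"{moore_factor(50):.3g}")
```

That ten-billion-fold factor is what makes the car comparison that follows so extreme.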
And in fact, after we went to transistors, the curve picked up speed a bit, so it's now about a 12-month doubling time. This has happened within our lifetime, and yet we don't really notice how astounding it is. To use a different example, there was some math in an article in the New York Times about the 50th anniversary of Moore's Law. If you bought a car in 1965 and assigned Moore's Law's type of growth to its efficiency, cost, and operating speed, then that car today would get 20 million miles per gallon. It would have a top speed of 2.4 million miles per hour. And the best part: the cost of that car, one half cent. That's what's happened within our lifetime in computing. It's just astounding that that's happened, and it continues. We stay on Moore's Law, and it will continue for at least the foreseeable future. This is happening kind of below the surface of what most of us notice, but it's a phenomenal change happening within the technology world because of these exponentials. And in computing, we're beginning to develop machines that now rival human intelligence. Right now, the really smart machines are big supercomputers. There was Watson, which crushed people at Jeopardy a year or two ago. In the late 90s, there was a computer that played Garry Kasparov, the world chess champion. The first time they played, Garry Kasparov won, and pundits said computers will never be able to actually analyze the aesthetic value of different positions. But they just programmed that in. The next year, the computer crushed Garry Kasparov. Absolutely just annihilated him. Machines are beginning to develop artificial intelligence. I co-hosted a neuroscience symposium in London about 12 years ago. 
All of the top neuroscientists from around the world were there, and they all complained that their very top students were getting hired by computer programming companies. Because these artificially intelligent machines are using the same neural network patterns that the human brain uses, and improving on them in some areas. So these are machines that think like you and I do. One of the father figures in predicting in this field, Ray Kurzweil, talks about the singularity, the moment at which artificially intelligent machines surpass human intelligence. In his predictions, by the year 2020, not very far away, we'll have machines that can pass the Turing test. Alan Turing was a famous computer scientist who in 1950 posed a challenge: in blind interrogation, so that you cannot see who's answering your questions, could a computer be indistinguishable from a human? And they actually have that competition every year. There's a really good book that I found enjoyable, published a couple of years ago now, called The Most Human Human, because in that competition there's also a prize for the person the judges find most convincingly human, the most human of the humans. But machines will pass the Turing test very soon. They're on this exponential curve. The problem is that once they become able to pass the Turing test, they don't stop. They double in capacity every 12 to 18 months. By 2030, Kurzweil predicts that we'll be exporting a lot of our routine thinking to computers embedded in the world around us. Most of us could use that right now. I mean, remembering phone numbers, interfacing with the internet, it's where we keep all of our photographs and other things, all of that to be just embedded around us. And the human brain will be more like a central processor. But, Kurzweil says, the computers keep getting better faster. By 2040, we'll actually have exported most of the central processing to the computer world. 
And the human brain will be more like a peripheral device. There are a few things that we do pretty well. Pattern recognition. We do a pretty good job of decision support when there's limited, faulty information and we need a quick decision; it may not be the most optimal, but it's a pretty good one. Sort of the fight, flight, or freeze issue that was brought up earlier. So the human brain has some things that it's fairly good at. But Kurzweil predicts that by 2050, everything will be exported: that human intelligence, whatever it means to be human, will be completely transportable over into the machine world, machines thinking the same way we do, in some cases more efficiently. And the biological brain will be obsolete. I'm not really comfortable with that, but there's plenty of evidence to suggest that we're already on that curve. And we don't want to get into a competition. We're moving into a world in which there's wireless tele-everything. When you talk about warfighting, this is certainly true; it's come up several times today. But there's a problem: evolution didn't anticipate that. It was pointed out in a conversation I had a couple of days ago that the computer on your desktop doesn't look that different from the computer that was on a desktop 50 years ago. Screen, keyboard; there was no mouse, but otherwise much the same. The reason for that is not limitations in the computer. It's limitations in us. We're the kink in the bandwidth. Our brain only runs at about eight hertz. Eight hertz; that's why we don't see the lights flashing on and off 60 times a second. Our brain is really not a fast machine. So let me talk about this a little bit more. We don't want to get into a competition. We'll lose.
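The exponential arithmetic behind claims like these, the 12-to-18-month doubling time and the New York Times car analogy from earlier, is just compound doubling. Here is a minimal sketch; the 1965 baseline figures and the 2.5-year doubling time are my own assumptions for illustration, not numbers from the talk:

```python
# Compound doubling under a Moore's-Law-style growth curve.
# Assumed figures (not from the talk): a 1965 car at 20 mpg costing
# $2,500, and one doubling in capability every 2.5 years.
years = 50
doubling_time = 2.5                 # years per doubling (assumed)
doublings = years / doubling_time   # 20 doublings over 50 years
factor = 2 ** doublings             # about a millionfold improvement

mpg_1965 = 20
price_1965 = 2500.0
print(f"improvement factor: {factor:,.0f}x")                # 1,048,576x
print(f"fuel economy today: {mpg_1965 * factor:,.0f} mpg")  # ~21 million mpg
print(f"price today: ${price_1965 / factor:.4f}")           # a fraction of a cent
```

With these assumed baselines, 20 doublings land within a factor of two of the talk's "20 million miles per gallon" and "half a cent" figures; the point is that the conclusion is driven almost entirely by the exponent, not by the baseline you start from.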
The human brain is slow, requires chemical energy, requires oxygen, and has eight hours of maintenance downtime out of every 24, unless you're a student. We're really not competitive in that world, and the real weaknesses in that combined system are ours, not the machine's. If you look at Wall Street today, as of a couple of years ago the number was that 84% of all trades on Wall Street are done by computers, high-frequency trading computers, because you have to be that fast. And there's an article that came out relatively recently asking: these computer systems that cause some of the flash crashes and so forth, are they now too fast to fail? If you think about technology and its place in warfighting, we are the slowest part of the decision tree when it comes to warfighting decisions. So how do you feel about the ethics of a machine that's not just our extended tool, but one to which, in a world where speed can make the difference between success and failure, we turn over the decision tree? And what is the military equivalent of a flash crash on Wall Street, when a computer makes a mistake in the few milliseconds of decision-making that it has? Or do we want to keep humans in the loop? If we keep humans in the loop, then we become vulnerable, because we're moving from a world in which the big eat the small to a world in which the fast eat the slow. In the warfighting world, that raises a real ethical question about how engaged we allow the machines to be in decision-making. As I said, evolution never designed us for the interface that we need to work with this transformative technological world. If we don't find a way of having a good relationship with the technical world that's coming, then, as history has shown, we will either be assimilated or exterminated. I vote for assimilation. What we don't get to vote on is saying, no, we just don't want it. It's coming.
We're already there. We already have lots of mechanical adjuncts: we wear glasses, we have watches, some of us have pacemakers or artificial knees. I ask my students: is someone with an artificial hip only 98% human? Well, no; what we think of as being human isn't necessarily tied to the biological structure that surrounds us. So we're getting to a place where the boundaries between these machines and humans are likely, I think, to disappear. People joke with names like Robo sapiens, but we actually are at one of these speciation moments, practically, of seeing the whole essence of what it means to be human transform into something else. We're seeing the convergence of the three revolutions in technology that I mentioned before. One of the examples I pulled out of the literature: artificial eyes, artificial retinas, augmented reality, which shows up in warfighting in lots of interesting examples today. And in the future, and we're already doing this in laboratories, direct neurorobotics: direct interfaces between our biological brain and the outside world. The research right now is motivated by helping those with brain stem injuries, but in the very near future the advantage these devices confer will be so great that we'll all want one. And I ask students: suppose, at least in the beginning, that because of neuroplasticity you get the best advantage if you install one of these neurorobotic connectors, these transducers, when you're very young, before your brain has quite developed its superhighway of networking. Would you put one of these in your child? Usually the room of students will say, no, I'll wait. But there's always one or two who will say, well, sure. And then we start talking about how your kid is going to be in the same classroom with that kid who has access to the full internet, for whom the multiplication tables are not something you have to memorize.
The advantages are so great that by the end of the conversation, usually 90% of the students in the class say: I don't like it, but yes, I would have to let my child have that same advantage. And these transducers already exist. As I said, we're doing this work already. Right now the human brain is the machine that does the interface; we provide electrical signals and it figures out what to do with them. But we're coming to understand how that works, and it's not science fiction; it's appearing across the scientific literature today. It's in the laboratories at this moment. Here's the artificial eye I mentioned, actually a couple of years old now: an artificial retina looking at a hand. You can tell it's a hand. Still not great, except that it's doubling in resolution every 12 months. Within about 10 years or less, the artificial retina will be far better than the human eye, and it's not restricted to visible light; you can do infrared, ultraviolet. You can do the same thing in audio. In fact, all of our sensory inputs can be extended with these electromechanical devices. So the future warfighter may be a very different-looking human, not constrained by the limitations of our evolutionarily developed biology. Even the sense of reality is changing, which is where it gets really far out. It's possible now to go into virtual worlds. Everyone has seen kids get virtually addicted; that wasn't meant to be a pun. They get addicted to virtual worlds. The warfighting games are very popular, but all of the gaming has tapped into the reward structure of our own biology. Take the human brain right now: you're processing about 10 to the 18th bits per second of information coming into the brain through your sensory inputs.
If you were to add a million people that you might interact with within a finite lifetime, that would add another 10 to the 8th to 10 to the 10th bits per second of processing requirement. Throw in the environment and all of its randomness, another 10 to the 16th bits per second or so. With roughly 10 to the 40th to 10 to the 44th bits per second of processing power, you could create a virtual reality that was indistinguishable from what your brain is processing right now. We're nowhere close to that capability, nowhere close, except that if you take Moore's Law and project it out, we get there in another 30 or 40 years. We'll be able to have virtual worlds that are indistinguishable from the world we're in today, except that you get to make up the rules, and you get to have a reset button. In fact, if you go into that world, there gets to be more than one of you; you can have copies of yourself. Nothing in our evolution has prepared us for what I think is the addictive attraction of that type of world, and it becomes the anarchist's dream: you can have your own universe and never have to interact with anyone. Or, if you look at the way gaming has worked, you can interact with lots of people on lots of different levels, and it becomes a mechanism for a different kind of community. So in that virtual reality, as I used the analogy before, it's not just thinking outside the box; it's recognizing that the boxes we normally think of aren't really there. And the future of this Robo sapiens may be the types of cyborgs that have been hypothesized in science fiction, except that they don't actually have to look like us; that's just done, I think, for our own immediate comfort. It makes me really uncomfortable, because I'm an old-timer, but it may not be that this is something really perverse.
It may be that any intelligent civilization simply reaches a point where it takes control of its own evolution, and that this is just what nature intended: that at this point in the development of intelligence, we reach a transcendent moment when we can move beyond our own biology. This actually comes from Ray Kurzweil, who has a Singularity University to talk about it. A number of folks, though, have issued dire warnings that once a computer gets to the point that it's sentient, that it thinks for itself, has its own consciousness, is smarter than us, and is doubling in capacity very rapidly, it may decide it doesn't need us. That once we create these kinds of sentient artificial intelligences, and they stay on Moore's Law beyond where we are, because we don't double in capacity every 12 to 18 months, we become obsolete, and ultimately there will be no need to keep us around. I don't know whether that's true or not, but as an ethical question it probably is worth thinking about. Arthur C. Clarke talked about what happens once these machines become sentient: if you develop warfighting autonomous systems that have a sentient sense of self, are they not entitled to the same rights that we attach to biological humans? We don't think as much about sacrificing a machine, but I don't know. So the future conflict, this World War III that, as I say, is a misnomer: one of the things I worry about is not that it's coming, but that it's already begun, and we haven't noticed. Nobody's told us about it yet. In fact, to extend the discussion from this morning: if the end of war is when you have convinced your enemy to give up, when they lose the will to resist, how many of us are resisting? How many of us are in there fighting the ethics of technology, and how many have surrendered, or begun to surrender, because it's just so hard? We still have our kids program the VCR.
It's no longer the case that technology is our tool. Increasingly, if you look at how people spend their lives, we are the tool of these technologies. So this war, this competition, is in many ways already underway, and we have to think about what our appropriate place in it is. And to show that it's relevant and timely, this is this week's Economist, and there are two things in it that bothered me. One, the big headline, artificial intelligence, promise or peril, which is this issue of: once artificially intelligent machines pass us, we become increasingly obsolete. The little headline up at the top, a special report on financial technology, is just as meaningful, I think: this computer world, this internet with its connectivity and intelligence, this information domain, is already out there. And we talk about information dominance. We may have lost that one already. Whoever they are, they're going to win that war, because that's their turf. So finally, just the Pogoism: we have met the enemy, and he is us. This is a future conflict of our own design. J.D. Bernal talked about this. Actually, let me back up one. Remember the canaries in the mines that alerted miners when there were unseen but lethally dangerous levels of gases present, when the canaries stopped chirping? Well, every time I see popular or scientific magazines and literature like this, these are the canaries of the mind, and they're chirping really loudly, trying to tell us that there is an unseen but present danger and that we need to get off the sidelines and start thinking about it. I'm a former dean of engineering and a technologist, so I don't know, should I be cheering for the home team, the robots? But these are the canaries of the mind. Every time you see an article like this, let the bell go off and remind you: this is not science fiction, this is real.
And then finally, I like to end with a quote from J.D. Bernal, from 1929. I'll paraphrase it. He said that scientific capability was approaching the point at which we would no longer be guided by environmental constraints, and we would have the capability of improving upon ourselves; that soon we would have the opportunity, with science and technology, not simply to work with the biological forms bequeathed to us by evolution, but to have a destiny of our own design. And he further pointed out that when that moment occurs, it will be at least as important as the moment at which biological life first appeared on the Earth. And all of the indications, when we look at these exponentials, are that that moment, that transcendent moment, is going to happen in your lifetimes. I look at my generation and think: wow, we're the lucky ones, we get to die. But that transcendent moment that Bernal talks about, that destiny of our own design, is coming within the next 50 years, and if I'm wrong, the next 100 years; but absolutely the signposts are there. The goldfish bowl is filling up with marbles. We haven't noticed it in many of these technologies yet, but we're well along that curve. We're seeing the convergence of technologies, things happening in the laboratories, and it's beginning to show up, as the canaries in the popular literature, as a very different kind of ethic. So once again, I want to apologize. This is not a wrap-it-up, make-it-all-make-sense, tie-it-into-a-neat-package kind of talk. This is the other one, the over-the-horizon talk. And I'm not crazy; well, not all the time. But I am a technologist, and I see this happening in lots of laboratories in lots of fields of science and engineering.
And we get so consumed with the myopia of our day-to-day lives that it's pretty hard to step back and look at what's going to happen 25, 30, 50 years from now. But when I do, I get a very scary picture, and I think it carries an enormous call for ethical thinking to help navigate a future that's coming whether we like it or not. It affects warfighting, it affects society as a whole, and it changes the whole dynamic of who our adversaries really are. So once again, I want to thank you for the opportunity to come and scare you, and I hope you have a great weekend. Thank you.