Good afternoon and welcome to the Mercatus Center at George Mason University. My name is Stacey Rumman-App and I'm the Director of Corporate Relations for the Mercatus Center. Hopefully all of you know me; if there's anyone in the room who hasn't met me, please let's try to connect before you head out the door today. I want to thank all of you for joining us today, and I want to thank Microsoft for hosting us in their innovation space. I couldn't think of a better backdrop for today's discussion on permissionless innovation. I know all of you are familiar with the Mercatus Center, but for anyone who needs just a quick refresher, we are an organization that advances knowledge about how markets work. And what does that mean? It really means that we bridge the gap: we want to serve as a bridge, taking academic ideas, coming up with solutions to real-world policy problems, and providing that dialogue and discussion. And today we will certainly be addressing the topic of permissionless innovation. I want to turn the microphone over to Bill Beach. Many of you probably know him from the Heritage Foundation. Some of you may know him from his most recent stint on Capitol Hill, where he served as the Chief Economist for the Senate Budget Committee. I'm pleased today to be able to introduce him as a colleague. He joined the Mercatus Center just this year as our Vice President for Policy Research, and he will be moderating today's discussion. So Bill, I'll turn it over to you. Thank you. Thank you very much, Stacey. It's a real pleasure for me to be here and an honor to moderate what I think will be an important discussion of technology and the challenges that new technology poses for a lot of people. It may not be for anyone in this audience, but if you are challenged you needn't identify yourself; just absorb. You know, we actually live in a period of really exceptional technological achievement. We should remember that. Indeed, it often defines us.
When I was in college we were all reading Leslie White and listening to, or absorbing, his view of how technology defines our culture. It seems like those discussions are coming back, and yet we are often apprehensive, even fearful, of our creations. We worry that the worst in us may be reflected in the things we make. You know, you can just think about the myth of Frankenstein, and there's a continuum in that notion that human nature and its creations reflect what's not very good to see. Yet our experience belies that, doesn't it? I mean, we appear to use most of what we create in an entirely peaceful way. If we step back and look at the record of our creation, it appears to many to reflect what is best in us, not what is worst in us. Maybe technology is truly neutral, without objective ethical content, neither bad nor good, and we're really debating about our natures and propensities as we talk about the threats of technology. If so, the advocates of letting a thousand technology flowers bloom may be too naive, given the recent history of the human race. That said, the record of the past 100 years is one of the most unprecedented records of great progress in the history of human life. I mean, every quality-of-life measure has gone up. Doesn't this record indict the technology skeptics as guilty of excessive gloom and pessimism? Well, that's kind of what we're going to be weighing today in its various forms, those two possible realities. And we've got just tremendous people here to speak on that. We have two speakers who have published two books just recently; you can't get any better than that. Wendell Wallach, who's here to my immediate right, has published A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control. And Adam Thierer, my colleague at Mercatus, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.
Wendell is a consultant, an ethicist (I practiced that word), and a scholar at Yale University's Interdisciplinary Center for Bioethics. He is also a senior advisor to the Hastings Center and a fellow at the Center for Law, Science & Innovation at the Sandra Day O'Connor College of Law at Arizona State University. Adam is a senior research fellow with the Technology Policy Program at the Mercatus Center at George Mason University. He has spent 25 years, and I've known him for almost all of that period of time, covering technology policy for five different research institutions, and has authored or edited eight books on these topics. And the way we're going to start this is that Adam and Wendell will open with brief statements, I think in the neighborhood of five minutes each, summarizing why they wrote their respective books and what the key takeaways from those books are. After these opening statements, I will ask a leading question and moderate a dialogue between them about the issues raised in these books, where they part ways and, more importantly, where they find common ground. So without any further ado, Adam, would you like to lead? Thank you, Bill. And thank all of you for coming out here today, and I want to thank my friends at Microsoft for hosting us in this beautiful space. And I want to thank Wendell for coming to join me in this conversation, because when I read his book, A Dangerous Master, I was very impressed by it, even though I knew there was some tension between our worldviews, between his book and mine. I thought this was probably the most thoughtful and thought-provoking book of technological concern, or criticism of modern technology, that I've ever read, and I said, well, I need to get Wendell to Washington. We need to discuss these things, because they're all the same things that I discuss in my book on permissionless innovation.
Now, as Bill said, we're going to spend a few minutes just saying why Wendell and I wrote our respective books, and then we'll have a dialogue about these issues, both what we share in common and where we part ways. Why did I write this book, Permissionless Innovation? For 25 years, I've noticed that in each and every technology space that I jump into, and I've been in quite a few of them, there seems to be a sort of tension, a conflict of visions, if you will, between two different worldviews about the way technology policy should work, how technology should be governed, or technology ethics. The way I've finally described it and put it in the book, you can think of this as a clash between the permissionless innovation worldview and the precautionary principle worldview. The permissionless innovation worldview, which, as the title of my book implies, I'm obviously enamored with, is basically the belief that, generally speaking, new technologies and innovation should be allowed to evolve freely without prior restraint, and that to the extent problems develop, they should be addressed in an ex post fashion with sort of bottom-up tools and remedies. The precautionary principle approach is instead based upon more of a top-down approach that says we need to sort of preempt new forms of innovation and ensure that, before these innovators are allowed to enter the market or release their technologies into the world, they exercise some constraint and receive some sort of a blessing from somebody, a regulatory agency, and so on and so forth. So in a sense you can think of the precautionary principle camp as more of a "Mother, may I?" kind of approach, and the permissionless innovation approach as a nothing-ventured, nothing-gained approach.
And so what I have tried to do in the book, and in all my work at the Mercatus Center at George Mason University, is apply this to all the various technologies and sectors and issues that have come about in recent years: everything from drones and driverless cars to 3D printing and virtual reality, the Internet of Things and wearable technology, advanced medical device innovation in all of its forms and functions, and so on and so forth. The sharing economy, Bitcoin, you name it. In every one of these debates we see this sort of tension between these two worldviews. And it governs the debate that happens not only here in Washington but in state capitals, as well as in international capitals, and even in the academic space, which leads me to Wendell's book. I spent a lot of time focusing on his book after I wrote the first edition of my Permissionless Innovation book, and after I read it I went back and revised my own opening chapter, my preface, to say that what Wendell does, I think, so well is force those of us who align ourselves with the permissionless innovation camp to think hard about the very difficult questions raised by new forms of technological innovation and disruption. I generally define five fault lines in technology policy debates where we see these tensions or concerns rise up: privacy, safety, security, economic disruption and intellectual property. There are certainly other fault lines and other issues, but generally speaking I think you can use those five buckets to sort of itemize the concerns that technological skeptics or critics have about various types of emerging technologies. The question is what you do about those different types of concerns: privacy, safety, security, economic disruption, intellectual property. Again, do you treat them as something to be addressed preemptively and solved before innovation is allowed to run wild?
Or do you allow the innovations to go out there, see what happens, see how the public adapts perhaps, and then adjust accordingly after the fact, including potentially with some legal or regulatory remedies? So again, I think that question really lies at the heart of the policy debates we hear in this town about these technologies, as well as in the academy. And I think what's interesting, however, is that at the end of the day, when we do start talking about solutions to some of those difficult concerns, you'll often find, and I think this is where we'll find a lot of common ground when we have our discussion here in a moment, that many roads converge on common solutions, and that even if you start in different places regarding how you think about technology or innovation, you often end up in about the same spot in terms of how it maybe should be governed going forward. More on that in a moment, but I think now it's best to turn it over to Wendell so he can talk about his wonderful book. Well, thank you very much, Adam. I was really pleased to have been invited to join you today and to honor the publication of the second edition of your book. If you looked at just the titles of our books, and also after Adam's introduction, you might think that I'm going to take the full precautionary stand, which is not at all the case, and it's not why I wrote the book. I wrote A Dangerous Master largely to be a primer to introduce people to the emerging technologies: what the sciences are, what some of the history is, what the benefits are, why these various fields are being developed. But I do focus particularly on what can go wrong as we adopt new technologies, and how we should address those harms, or those potential societal impacts that we don't want, through ethics, through engineering and through public policy.
So there's some, yes, some public policy in the book, and I'm sure that's what we're going to finally focus on in this particular discussion, but that only occupies a part of the book. And I actually wrote this because I thought there were so many intriguing philosophical and ethical and societal issues at stake here that it would be nice if everyone could join in the discussion and join in the reflection and join in the debate. And so I considered how, in a hundred thousand words or so, I could give everyone a textbook (it's not written as a textbook, it's written to be entertaining), a book that would make them feel comfortable joining in the conversations, that would give them a framework, so that they would know what the key positions are, not just around whether or not we should be regulating technological innovation, but also what you think about whether machines can be as smart as human beings, or whether human enhancement is something we really want to further or something we would like to slow down. So while I'm not trying to be overly precautionary, I do want us to get engaged in shaping the development of emerging technologies, not just adapting to them. I often use the metaphor of the self-driving car. I think with the self-driving car, in a sense, technology is moving into the driver's seat, and it's moving into the driver's seat as the primary determinant of humanity's destiny. And the question is, should we just allow that to take place unfettered, or do we want to engage with it in a way where we shape what we would like the future to be? Now, I'm not naive about that. Shaping or arresting or even slowing technological development is certainly not going to be easy, particularly since most of us perceive it as a source of both promise and productivity. Of course, if you just listen to the techno-optimists, we are on a highway to heaven on earth and the buses are speeding up at an exponential pace.
On the other hand, if you listen to the techno-pessimists, we're clearly headed to hell in a handbasket. But I think most of us do see technology as a source of great optimism, of great productivity, and yet there's considerable anxiety, considerable disquiet, about certain trajectories in technological development, and perhaps about this overall development of technology sort of taking over human destiny, and perhaps even designing the human species as we've known it out of existence; that is certainly one of the more science-fiction-type scenarios out there. So my concern was, well, what's developing? How can we maximize the benefits and minimize the risks? How can we engage and at least mold the development of technology to some degree, or not? There's a well-known principle in technology policy known as the Collingridge dilemma, from David Collingridge in 1980, and he basically said that it's easiest to shape technological development very early on, but often we don't know what the problems will be early on, and by the time we know what the problems will be, the technology has become so entrenched that we can't do anything.
So that's been kind of folklore for 36 years, and it keeps being brought up over and over again. But those of us who do believe in anticipatory governance and anticipatory engagement think that the duality of that is a little too simplistic, and oftentimes you can fully perceive what the challenges will be long before the technology is fully entrenched. I talk about those moments as inflection points: windows of opportunity, which may stay open for a long time or sometimes open and close very quickly, where you can redirect the development of a technology very early on, and sometimes just a subtle nudge will take a technology toward a very different destination. So that's a lot of what A Dangerous Master is about: ways we can do that through reinforcing certain ethical principles. I talk about that in terms of how we can bring values into engineering and engineering design, which relates to an earlier work I co-authored called Moral Machines: Teaching Robots Right from Wrong, which was really about how you could implement sensitivity to moral decision making, to moral considerations, into the decision making of artificial intelligence, suddenly a subject that's gotten a great deal of energy in more recent years, particularly in the past year. And the third area is how, through creative public policy, we might find new ways to nurture the development of emerging technologies but also engage them in ways in which we could slow down things that are problematic. So Gary Marchant and I have written a series of papers. Gary is the director of the Center for Law, Science & Innovation at the Sandra Day O'Connor College of Law, and he is a strong advocate for what's often referred to as soft governance, which is something I think both Adam and I agree on: largely getting the industry, using various mechanisms (insurance policies, professional codes of conduct, industry initiatives), to be the preferred method of shaping technology policy, and only turning to regulatory regimes and laws when you have to, when there's no
other way of enforcing something that is quite serious. So I think we're going to get more into that; we'll talk much more about governance coordinating committees as we move along. I believe I have recently committed myself to start a governance coordinating committee, an oversight and governance body for artificial intelligence and robotics. And as we originally wrote about this, we were writing largely in the American context, but I think the reality is we're going to need something like this in the international context, so I'm thinking more about an international oversight and governance body for AI and robotics. Thank you both; this is great. I thought coming into this I understood where you both were coming from, but I'm now a little bit awkwardly confused, so let me ask you this question and frame it around the pacing problem. Adam, your book is Permissionless Innovation, and yet I have heard you say, or read, that there are some things which are not going to be permissionless, that there is some governance. And Wendell, here you say that you want to slow things down a little bit, but yet you kind of agree with Adam. So let me ask this: as technology develops and flows, Adam, are you in favor of just letting that go, and let me use biohacking as the hard example here; and Wendell, where do you want to slow it down, and how do you want to slow it down, or are you closer to Adam's viewpoint than not? Adam, could you start?
Sure. So let's go back to something that Wendell mentioned that's popular in the study of the philosophy of technology, which is the so-called pacing problem and the Collingridge problem, or the Collingridge dilemma as it's sometimes called. For some of us, with the Collingridge dilemma, we look at the flip side: it may be the Collingridge benefit, that if technology moves fast enough it actually can improve society in surprising ways that we couldn't originally envision or foresee. And that's really something that's missed by, I think, a lot of the critics of technology. I'll give an example of Uber and Lyft. I often speak to a lot of law school audiences and philosophy programs and public policy programs, and I often start, to be deliberately provocative, by saying: what are Uber and Lyft if not the two biggest lawbreakers in America today, and God bless them for it. And they're like, whoa, you're for lawbreaking? I said, well, you know what, what they've done is essentially come in and said the old rules don't seem to work very well, not just for us but for consumers, and we're going to give people a taste of something different; we're going to give them some choice; we're going to give them some competition. And this is the way it's played out not just for the sharing economy but for things like 3D printing, where there are things going on of questionable legality, regarding the 3D printing of, say, prosthetic limbs for people with limb deficiencies, or the 3D printing of certain types of weapons, which certainly probably butts up against some laws. This happens in the world of immersive technology; it happens in the world of the Internet of Things and wearable tech: is this a smartphone or is this a mobile medical device? The FDA struggles with that question right now. And drones: people are supposed to register them all. My son and I didn't register ours when I got him one for Christmas. But, you know, are we breaking the law? Probably. But we're still flying that thing in our backyard, and a lot of
other people use them more impressively, to do things like search and rescue missions with drones, and so on and so forth. So the question is, in each of these cases, the so-called Collingridge dilemma, if spun differently, might look more like the Collingridge benefit: we're giving people new choices and options they never had before. And the bottom line of my book is really that we should embrace a certain amount of that permissionless innovation even when it butts up against traditional regulatory standards or social norms. Because if there's one thing I want you to take away from my book that maybe differentiates me from Wendell a little bit, it is my general argument that if we spend all of our time worrying about, or obsessing about, hypothetical worst-case scenarios, and then basing public policy upon worst-case scenarios, then best-case scenarios can never come about. So generally speaking, even though I think at the margins there does need to be some governance of some of these technologies and services and applications, I think there's a great benefit to allowing us to see what happens, to giving innovation a chance to prove itself, to make it innocent until proven guilty, if you will, and then, if problems develop, we address them. So maybe I'll leave it at that. So where do you draw the line? Where is that line? That's the really hard question: where are the hard cases, or where is the harm that requires, necessitates if you will, a preemptive or precautionary approach to these new technologies? So in the second edition of my book I spent a lot more time thinking about this, and I'm trying to write a longer article about essentially a theory of precaution, about when it is warranted. People of my persuasion tend to think that innovation is generally good, and we make a good case, I think, I hope, against the precautionary principle as a policy prescription, but we don't do nearly as good a job at saying, well, what are the cases where you would draw the line that Bill just asked about, where
would you have some precaution? After all, most of us would agree we shouldn't be allowed to roll tanks down Main Street, or have surface-to-air missiles or bazookas on our shoulders, or possess uranium, right? In each of those cases we would almost all agree there should be some preemptive precautionary restrictions. What I try to do in the book, in a short section (again, it's going to be a longer paper), is begin developing a more robust test for when the potential harm in question regarding a new technology or innovation is highly probable, tangible, immediate, irreversible and catastrophic. This isn't a perfect five-part test, but it begins to get us to a sort of framing for when precaution might make some sense. I would argue, however, that the vast majority of technologies that I cover in the book, and many of which Wendell discusses in his, would not be candidates for a precautionary-principle-based approach. There are going to be some, and we can get into that in a moment, but for most of them I think we would do well to allow innovation to go forward, see what problems develop, because they may not be the ones we anticipate, and then address them after that. Let me get Wendell on that. Where do you believe the line should be drawn? Is there a bright line, or is there a set of rules? How did you come in on that? Well, I think I would agree with Adam on the five principles that he talked about in terms of where there are serious harms that need to be attended to. I also agree, I mean, we could have somebody who sits, you know, to my left who says that the precautionary principle should be applied in all cases, which is sometimes how the European Union is read: that it lies with those who want to innovate to prove that there will not be any harm created by the technology. I think that goes too far, and I don't think it's always served the European Union well either. But I also think that a kind of permissionless innovation
that does not force manufacturers to be responsible for what they are putting in place, that is something that causes great concern to me too. So the question is, again, where do you draw the line? I don't always know. I think it's nice to say that we can draw some line that is going to cut across all technologies; I don't think that's the case. I think sometimes it's very individual, and sometimes you do have to let a technology develop a little bit before you can see what the problematics would be. So we've been talking about designer babies now for 20 years; it's not a new subject area. If you had wanted to implement a line restricting that 20 years ago, it probably would have interfered with a great deal of genomics. I grew up in the 1950s; the big fears were robot takeovers and giant locusts that were genetically created. Who among us would have wanted to arrest all the innovation of our lifetime on those 1950s fears? And even now, when we have concerns being raised again about whether superintelligent machines will be friendly to humans, that's still pretty far in the distant future. Unless you listen to Ray Kurzweil, everybody else thinks that, if that's possible, it's at least 50 to 100 years off, and it's so far from what we have in AI technology that arresting innovation based on that would be absurd. On the other hand, there are people who I have a great deal of respect for, like Stuart Russell, who are saying, yes, but let's start attacking the control problem now. He's not talking about regulation, of course, so much as he's talking about investing in research now, so that if we come to some juncture, some breakthroughs, in the next decade or two, we are at least prepared to know whether we can manage them. So that's not regulation, but that's also precaution in the sense that we are investing in the innovation which will mitigate possible harms, not just investing in the technology that's going to make somebody a quick buck. Let me press on that point, press you both, but
Wendell, I'd like to start with you. Arguendo here: technology has been changing human beings for centuries and centuries. It's a continuous change, and most of it is happening in our minds, a rewiring of our minds, a changing of our minds. Microsoft is involved in the mind-rewiring business, and it's going on and on and on. Where is the distinction between doing that and building healthier bodies through athletics and weightlifting and better nutrition, and putting mechanics in our bodies? Is there a hard stop at biohacking? Is there a point which we don't cross, because then we are not human anymore? Where is that line? Well, I don't know that there's a hard stop, and we're already putting mechanisms within bodies, largely for therapeutic purposes. So the issue is, when these therapies, these neuroprosthetics, can be shown to enhance capabilities, do we want it or do we not want it? And I think there we have a lot of guidelines in terms of human subject research, because most of these will be research efforts that need to demonstrate that they can be done in ways that clearly won't be harmful for the individuals. I'm not sure that it's just an individual's right to say I should do that, I can do that, or, for example, a parent's right to say that because I'm blind I would like my child to be blind. All those kinds of things are disturbing on so many different levels, and at the least we need to have the public discussion about them. I would not like to see all that kind of innovation going on in a permissionless format, but I don't think it will go on, because we have a lot of mechanisms in place for the oversight of those kinds of technologies. We perhaps don't have the kind of healthy public conversation about what we truly want to embrace and what we don't want to embrace. But I don't think that technology has just been changing us mentally. I mean, consider your relatives a mere 200 years ago: by our standards they were uncouth, they were unsanitary, their
average life expectancy was probably around 35 years. In fact, in 1850 the average life expectancy was 38 years in the United States, and it doubled over the next 150 years, largely because of the industrial revolution, the germ theory revolution in medicine, the sanitation revolution. It changed us in so many ways. On the other hand, I get very concerned when people say, "I'm not so worried about my job disappearing in the next 20, 30, 40 years." I think they're pretty naive about what's taking place, let's say, in artificial intelligence and robotics, or even the fact that people are going to live longer and retire later, and how that's going to affect the job market. So we aren't very good at looking at change over time; we're not very good at preparing for the kinds of changes that we're about to witness. Let me go back to the biohacking case study. It's a wonderful one because it exemplifies a problem I think we see in a lot of these fields, and Wendell discusses it in his book. You might define it? Sure. So biohacking refers to efforts by the general public to essentially take matters into their own hands when it comes to their health and their abilities, in terms of modifying their bodies. There are some examples right now: there are forums online where people come together and collaborate on the different types of things they might want to put into their bodies, and there are forums that include freakish things. The number one thing I've seen on biohack.me is putting magnets in one's fingers; it turns out you can do some pretty cool things with magnets in your fingers. Not particularly safe, but there are videos of people slicing open their own fingers and putting magnets in. There are people who put cellular reception technologies in their skulls and basically can take phone calls by tapping the back of their ear, with bone conduction. There are people who are putting all sorts of other things in their bodies. Of course, this started in the field of sports and athletics; it started with people trying
to enhance their bodies through chemicals or drugs, and then with other devices. But what this gets at is that we now live in a world of more distributed, decentralized, bottom-up innovation, where we've empowered the public to essentially do some of this innovating on their own, and that's a really different world than the one we used to be in, where there were large corporate gatekeepers that controlled drugs and devices and everything else. My colleague Bob Graboyes is in the audience, and he and I were at a conference at Johns Hopkins University Hospital a couple of years ago where volunteers were printing and assembling 3D-printed hands for children with limb deficiencies, and then giving them to the parents and fitting them to the children. And they were doing this at a price point of around, I think somebody estimated, 40 bucks to the parents; that was all funded by foundations. But these are parents that routinely used to have to spend thousands, on the order of $4,000, for a prosthetic hand or arm. Think about that price point differential: $4,000 to 40 bucks. And then think about the fact that everything that was being done in that auditorium at the hospital was being done voluntarily and collaboratively, with open-source documents and general-purpose technologies we call 3D printers. In that sort of a world, I'm not sure exactly how we control innovation. I'm not necessarily sure that we have to in every case, but I do know that it involves risks. And what I try to suggest in my work is that when we get into these hard cases that we're discussing here today about what we're going to do, even if one desires a policy solution or prescription, it's not always even tenable that we'll get one. Just because of the pacing problem, things probably are going to move slower than we'd like, and by the time you get ready to solve one problem, other problems develop, and then by the time you solve those, there's another one to worry about. And so what I think we need to get really serious about, and
this includes our governments, plural, is risk education: talking to the public about the trade-offs associated with the types of risks they may be taking on if they involve themselves in biohacking or 3D printing or any other types of bottom-up technological innovation. To the extent there are companies and other groups, researchers, universities and others involved in the innovation process, it's a very different discussion, and this will help us transition to our next conversation about collaborative soft-law, soft-governance mechanisms. Because when you do have large intermediaries, they may be in a position to actually have some say about the early development of ethics and technological capabilities, and maybe bake in a different set of ethics or rules or norms for that technology so that the worst cases don't develop. Let's go there. My next question was on the FDA, and I'd like to direct it at Wendell, because I think it's a good case study for better understanding your governance boards or councils. We'll not take the pharmaceutical side; we'll take the medical device side. Medical device approval rates are really slow, and there are people who are suffering as a result, and yet the FDA is perhaps the best-known example of precautionary regulation in this town. Are you comfortable with what the FDA does on medical devices? Is there an ethical problem there? Do your governance coordinating committees change those calculations in any way? Can you kind of talk about that as an illustration of how your governance committees might work? Well, that isn't a specific thing that we've been talking about for governance committees, largely because that's an area that's already so entrenched. The governance coordinating committees have more to do with the emerging technologies, before they become overwhelmed by so many parties competing over different pieces of the action, and conflicting guidelines and so forth. So I think in the FDA
we do have one very strong party. That's actually closer to my life than most of you may realize: my wife oversaw the institutional review boards and compliance regimes for a major university until recently, and she oversaw that for a hospital a few years beforehand. I live in the center of the bioethics universe, which is often about research compliance and medical research; the Hastings Center, where I'm a senior scholar, is also very concerned about those issues. The difficulty here is that, on the one hand, we had some real atrocities when medical research was left wide open, and there was great resistance from the medical industry to introducing any reservations until we had the thalidomide scandal. And so regulation spread very broadly, and some people would argue that it spread too broadly, and that we can speed up the approval of some devices without going through these rigorous research regimes. Interestingly, the FDA agrees; they are also searching for ways to change the Common Rule so that more areas of research do not have to go under the same rigor. On the other hand, how much do we want to open the door? Do we really want a situation down the road where we have a major health crisis because something got through a little too easily? And we may see that. In nanotechnology, for example, we have approved many nanomaterials because we aren't pushing them through the same kind of regulatory regime that we do for drugs. We have technologies that people in biohacking will probably adapt in ways that nobody had ever considered when they brought those technologies into being. And the question again is, are we so risk-averse that we don't let our children play, be creative? I hate to see that we are so much more risk-averse than when I was a child. I roamed around northwestern Connecticut for miles, into woods, into all kinds of encounters that we won't let our children deal with today. So something has happened in America where we are overly
risk-averse, and I totally agree with Adam on that score. On the other hand, there are really serious problems out there, and I don't want us to be cavalier about letting permissionless innovation dictate what we do. I'm very clearly against lethal autonomous weaponry; I just got back from talking about that at the UN in Geneva two weeks ago. Those weapons don't really exist yet, I mean, some of them are being developed in research laboratories, but if we don't take a stand now, if the world doesn't take a stand now, we are opening the door to all kinds of futures which will be much worse than any short-term benefits we are getting from these technologies. Let it be known here that I am not in favor of killer robots. That being said, not every issue is a killer-robots, end-of-times kind of scenario, but that is a good example of somewhere we might look toward more precautionary approaches. There's already a major campaign underway, the Campaign to Stop Killer Robots, and there's a lot of really interesting work being done on the ethics of robots and autonomous systems and AI systems in warfare and in other contexts, and I think those are important discussions. I'm not always sure we're going to be able to put that genie back into the bottle. There's also something to be said for whether or not we should be allowing certain types of research into these things, so that we know what the capabilities look like and how to counter them, which is probably the most challenging aspect of this. We've lived through this in the debates about chemical warfare and the like: what should our knowledge look like, how do we gain knowledge without experimenting, while also making sure we keep it in the bottle and don't release it into the wild? That's the real challenge. That is the real challenge, but let's look at the simple fact that we now have new means for editing genes very quickly. So one of the doors that has opened up is what are called gene drives. Gene
drives are basically a method whereby you alter a species to what you believe to be a more beneficial form: say, insects that don't carry the Zika virus or that don't carry malaria, who could be against that, or locusts that don't swarm. All these things that look like they will be very beneficial are already being engineered in the laboratory, and gene drives are really a method of getting them to replace the existing species within a few generations. Now, that looks great on the surface, particularly when you are talking about those applications, but we have no idea what the ecological structure is for many species, and we have no idea, when we are tinkering with a species, how much we have altered the ecology of which that species was a part, and perhaps we will cause the elimination of species that we never intended to eliminate. So there is an example where innovation by itself is not enough: there have to be some procedures put in place to control the deployment of that innovation, and there has to be much more research taking place on what the effect of these new species will be if they are introduced into environments outside of the laboratory. On the other hand, we have seen a reduction of species everywhere, so it may be that this opens the door to the proliferation of some new species that are really beneficial. I believe in diversity, but it is not a simple thing, and it probably requires a great deal of investment on our part and a great deal of precautionary attention, if not regulation. Adam, how does soft governance handle that scenario?
Yeah, that's a great question. Let's first step back and explain what we mean when we talk about soft law, soft governance, soft precaution, because this is what I was getting at in my introductory remarks: even if Wendell and I start in slightly different places, or have slightly different philosophical worldviews or worries, at the end of the day our books tend to converge on the idea that an informal, bottom-up type of governance mechanism is sometimes going to be superior to the more rigid mechanisms of the past. Soft law or soft precaution can take many different forms. Wendell's book goes into some of these details in the last chapter, and I've itemized these things; this is not an exhaustive list, but they can include some of the things Wendell has already mentioned, like codes of conduct or industry best practices, collaborative agreements, or informal sorts of treaties. It can include ongoing multi-stakeholder processes, of which our government right now has many in place; some of you probably follow this at the FTC, the NTIA, the FDA, the FAA, and so on. We now have collaborative multi-stakeholder processes underway for drones, biometrics, the internet of things, mobile medical devices, and big data; there have been multiple workshops and multi-stakeholder processes on that. These are all various types of soft governance mechanisms, and they don't even begin to be an exhaustive list. There are discussions today about whether we might embed within companies and innovative organizations digital ethicists, data ethicists, or sort of chief ethical officers, to mimic the role played by privacy officers, who sit around and help companies make decisions on a day-to-day basis. But do we want to go there? Is that a good idea? How can we bake in some privacy by design, or some security by
design, or some safety by design? So that's just a few of these mechanisms. Now, the problem is, as Wendell already alluded to, every case is a little different, every issue is a little different. Some of these issues have a heightened sensitivity about them; if we're talking about end-of-days type scenarios, as with the case of killer robots, sometimes we want more collaborative governance and formal governance, and maybe even formal precaution. But if you're talking about, say, the internet of things and wearable technologies, clearly that raises some privacy and security concerns when we are all affixing devices to our bodies that track our vitals in real time, but that's a totally different thing than killer robots walking down Main Street, right? So we can allow for a different soft governance or informal governance structure for the internet of things and wearable technologies versus robotics or some of the other issues Wendell's raised. So maybe I'll transition to Wendell, because he's written an entire article about this in addition to the chapter in his book on soft governance. So, our idea for governance coordinating committees, and it's just a term we've coined, I don't really care ultimately what form it takes, is largely a body that tries to coordinate and comprehensively oversee the development of a particular technology, particularly flagging gaps in existing oversight mechanisms, both hard and soft law, and looking for means to address those gaps. And I think Gary and I agree, and I think we agree with Adam on this: let's go to that method first. But going to that method will really require industry to be more responsible, and really require the citizenry to get more engaged in expressing what some of its concerns are. I would love to see, for example, the robotics industry engage in certification processes for the safety of service robots in the home, whether to take care of the homebound and elderly, or to be playthings for children, or even to supervise children to some
degree. I would like to see some certification processes that make it demonstrable that these are safe. I would like to see at least more transparency from the automobile industry, so that before it upgrades the software so cars can be self-driving, we know that it's been through a significant testing regime. None of that goes on right now, and in a certain sense industry would have to be empowered in a new way, perhaps with certain forms of cartels, or at least by giving rise to other bodies, in order to put that kind of soft governance in place. But we do need that. The downside of soft governance is not just that you can't always rely on non-legislative bodies to put in place the appropriate mechanisms, but that you very seldom have any punishment scheme for those who violate those mechanisms. For that reason you probably do need to continue to have hard governance for serious issues, and some degree of regulatory oversight. But I think all of us agree we don't want regulatory oversight to be frivolous; we don't want to get entrapped in putting in place legislation that crystallizes, so that 20 or 30 or 40 years later you have a bureaucracy that is basically regulating a technology that no longer exists, or that is no longer central to the area of development for which the regulation was originally designed. So we have TSCA, which looks at toxic chemicals. Well, that was developed in an era of PCBs, when we were talking about massive quantities of chemicals being dumped into rivers and streams. Today we have nanomaterials, of which tiny amounts could be toxic, but since TSCA was designed in an era when we were talking about massive quantities, we have no way of dealing with the toxicity of nanoparticles, and we still struggle with putting in place appropriate standards for that. I've got to ask you one more question, but I do want to announce we're about to start Q&A. There's a, it's not called a camera, it's called a microphone, right back there; there's also a camera in the back, so don't line up
behind the camera. If you have a question, and I hope you have lots, we'll take those in just a moment. While we're getting the questions framed, I want to ask you a really softball question, but it gets at your views on the major technological developments out there that you're worried about. You've mentioned self-driving cars, Wendell, and all of that. What else is on your mind that you would like to flag for us? And Adam, the same question to you. By the way, I'm not really worried about self-driving cars; I just think we need to put in place social mechanisms to resolve the very difficult issues, because they are going to kill people that humans would not kill in some circumstances, and we need public conversations and the evolution of norms to cover that, presuming we believe that self-driving cars will cause many fewer accidents than human drivers do. So it's not that I'm worried about that so much as I think it's deeply problematic what we're going to have to go through as the next step to get those on the road, or to get people purchasing them, because there are some real dangers there. I am very concerned about gene editing; I think that's a major issue. I think the toxicity of nanoparticles is not a small issue. I think geoengineering is not a big issue when you're talking about painting roofs white or planting trees, but it's a big issue when you talk about tinkering around with the stratosphere. I don't know if you all know this term, geoengineering, but it refers to using technological means to mitigate the effects of global climate change. I know not everybody in this city acknowledges that global climate change exists, but I think it does exist, and we really do need some serious methods to mitigate it. But some of those methods could be more dangerous, or could set off interactions in the atmosphere that would be more dangerous, than the problem we want to solve. And yet I do actually support
low-level experimentation, so that we know better whether or not some of these geoengineering techniques could be really problematic, and I'm not sure that we can solve all that. So these are just a few examples. I agree that the gene editing issue is really interesting; it's one with the most exciting potential but also some really concerning side effects. There was a report published by the Witherspoon Council about the ethics of human cloning, and it also had a discussion of gene editing, and there were questions about what it means for future generations, for the relationship between generations, when parents are able to essentially have designer babies. Of course, the idea has been around for a long time, but now it's becoming more possible, though it may not be coming soon either. Well, that being said, there are already things we can do when a fetus is in the womb that would not have been possible a few decades or even years ago, and who knows what else is being done in labs overseas and so on. So there are open questions there that deserve more serious societal debate, not necessarily formal precaution, but a lot more concern. I think I'm a little bit more concerned about the marriage of autonomous systems and robotics. That's an area I'm also very excited about, but I can clearly see the potential dangers. I'm pleasantly surprised we made it through a whole discussion without referencing the Terminator, and I'm going to be the first person to do it, because usually I'm the one pushing back against Terminator-like scenarios in these debates. But the reality is that there are legitimate concerns, and not just with killer robots. Quick story: I walked into my colleague Eli Dourado's office, Brent Skorup was sitting next to him, the other day, and I said, you know, I've been all for permissionless innovation for the most part with drones, but did you see the video of the guy who just affixed a chainsaw to his drone and flew it through
the woods? And they're like, well, there could be great benefits to drones with chainsaws, in terms of trimming overhead branches. Yeah, but boy, the first guy who runs that through the middle of a stadium, you know. And then there's the drone with the flamethrower on it, the gun on a drone. It's hard to know what you do about these situations, because it's stuff that can be done by most anybody, even my kids, and that's a really big challenge. It gets back to the point I tried to make about how important risk education is: to talk to people and say, hey, just because you can do it doesn't mean you should, and if you do go ahead and do it, you should have these thoughts in mind about how it could affect others around you, your community, your society, and so on. That's not a perfect solution. In technology policy we jump to the presumption that "what are we going to do about it" means "what are we going to do to regulate it," as opposed to having that conversation that Wendell suggests we need to have, but having it at a broader, more organic societal level, from a very early age, because every kid's a coder these days. My two kids can do amazing things with computers that I couldn't have thought about doing when I was growing up, and they're only 14 and 11. I don't know what they're going to be doing in another 10 years, but the world's going to be theirs for the taking, and I educate them, as I do every day: hey, that would not be a good idea to do with that robot you're creating right now, can you see why?
We need to have that sort of educational process, or what some people call digital literacy or digital citizenship efforts, at a more broad-based level from an early age on. Let me say a quick thing about drones, because I think it's a great example of permissionless innovation, and of permissionless innovation going awry. There are so many benefits to having drones in domestic airspace, but it's been handled in a really sloppy manner, and the pressure from the manufacturers to get the drones out took precedence over looking at it in a comprehensive manner. Just think about the simple fact that you had a gyrocopter land on the Capitol lawn, and I think there was a small drone that landed at the White House. Well, that suddenly informed every mayor and every governor that they had a massive unfunded expense that nobody was going to cover for them; they were all going to have to spend millions of dollars on security and privacy issues, because there was pressure from manufacturers to get drones out there. And yes, I think a lot of the applications should have been okayed, but I don't know that we really want to deal with a society of drones everywhere, and it's going to be tiny drones violating our privacy, or kids, I mean, it's just a matter of time for one kid. One point for the record: during my years up there, the first flying object that came to the Capitol and landed on the Capitol lawn was not a drone; it was a disgruntled postal employee in a small little airplane. I think I said that when I was mentioning it. I thought you were going to say a UFO, Bill. Just briefly on this before we go to Q&A, because one of the forgotten things about drones is that, while these are some legitimate dangers Wendell's highlighting here, about intrusion upon our property or intrusion upon public spaces, we do have other remedies for these things. We do have property law, we do have zoning and nuisance kinds of principles, we do have the ability to
actually find and detect whose drone it was, and so on. So I'm not exactly sure how much more precaution would solve that. What about city mayors that have millions of dollars of unfunded mandates now, when we're already dealing with cities that are having real problems? I'm not saying we shouldn't have drones, but perhaps they should have been taxed in the same way we tax vaccines, because we know that some people are going to have adverse reactions, so let's build it into the social structure. That's my concern: we're just not dealing with these things in comprehensive ways. Does anyone have a question before I ask my next question? Would you like to go to the microphone in the back there, so we can get you on? Right. One of the challenges with the soft approach to regulation is always a sort of debate about whether there is a problem or not, and at what point it's arrived; you see this with climate. I mean, my experts say this and your experts say that. Do you both support some sort of a regime, in areas that may be questionable and that people have reservations about, to say we're going to allow it to go forward, but we're going to create some form of organized data collection, where we think through what would be an indicator of the problem we're worried about happening, so that we detect it earlier on, rather than waiting for apocryphal and anecdotal stories, and then a lot of battles of the experts while years go by? I'd need to know more details about exactly how you would structure that, but that's sort of what we already do with some of the multistakeholder processes that are underway in our government today, in terms of the information that's gathered and then shared with the public about the concerns being raised about the technology being focused on: biometrics, drones, whatever else I mentioned in the earlier checklist. After that, the question of what you're doing with that information remains an open one. There's also an open question that
Wendell and I did discuss: what about the innovators of the future, the people who weren't at the table when the multistakeholder table was set and guidelines or best practices were being drawn up? Do they have to abide by the same principles? Because otherwise those who were at the table could be shooting themselves in the foot by being at the table at all. So there are real challenges with soft-law governance approaches, but my argument would be that at the end of the day, whether it's soft law or governance coordinating committees, the different types of bodies and the discussions being had inform a dialogue about ongoing innovation in that space, and hopefully there's more information sharing as part of that process. This is what the governance coordinating committees do: comprehensively monitoring the development of a technology, staying attuned to all the different soft-law initiatives that are out there. Oftentimes they overlap, they conflict with each other, or they don't have any impact at all because there are so many of them, or so many different initiatives in place. So: trying to coordinate that, trying to map out the development of a field in a dynamic way, so that when thresholds get crossed that open up new societal impacts, those at least get debated, if not actually regulated, whether by soft law or by hard law. So that's part of the idea here. The big implementation difficulty is how you do that in a trustworthy and credible manner. How do you get the staff or the leaders of a governance coordinating committee? How would it be funded? Would it be public, would it be private, would it be some kind of consortium? How could you maintain a credible body? What would be its relationship with industry, and could it get funding from industry? What would be its relationship to the media, to the public, to international governing bodies? So there will be implementation challenges, but the idea here is that rather than letting all this happen in a
piecemeal form, we move to a form whereby you try to put in place trustworthy orchestra conductors, if you like; not that they're going to control all the development, but they're basically monitoring it and giving us appropriate feedback on what is and what isn't needed, particularly when there are proposals out there that are perhaps just way too early, or concerns that are really science fiction in nature and do not have much to do with present-day technologies. We're going to run for about another five minutes; does someone else have another question? John. I was very interested in the discussion of the multi-stakeholder collaborative approach, which seems to me kind of an Anglosphere approach; there's something in American or British or Canadian or Australian culture there. But for some of the challenges you've mentioned, like robots caring for the aged, the center of gravity is going to be more in Japan, or for smartphones, Korea. So other cultures are going to be addressing these needs perhaps before we will, and we'll kind of be following them, and I wonder whether a more Confucian approach will take precedence, and if you could describe that. Just to give a little more background: I read an article, I forget who wrote it, that said if the Fukushima reactor had been in the United States, the problems of susceptibility to the tidal wave would have been discovered because of investigative journalism or something like that. So I'm just wondering if there is a cultural component to this issue. I think that's a fair point. I have a paper coming out, which I briefly teased in the book, about the rise of global innovation arbitrage, about how innovation and innovators are increasingly flowing across the globe as easily as capital has for the past several decades, finding whichever jurisdiction is most hospitable as a base. We're seeing this in drones, driverless cars, biotechnology, the sharing economy, and several other fields
we've documented in our work. So you could have governments pushing in two different directions, one being more restrictive, another being more open, embracing or allowing things that aren't allowed elsewhere. This is probably a good transition to what Wendell's trying to do now at an international level; you can discuss what you're doing on that front, because it may be that you have these coordinating committees at an international level. I do have a fear, just to pre-empt what he's going to say, that this gets captured by large global institutions, or maybe, you know, meddling people at the UN who are trying to take it over and use it for some other purpose, but generally speaking I like the idea. I have some fear too; I've taken on this task with great trepidation. I think it's daunting, but I think in areas like robotics, for example, there's an international component, there are a lot of international components, though then again there will be cultural, national components. Just for an example, the Koreans are way ahead of us in selling robotic dolls that are supposed to help take care of your children, and yet there are some real concerns about what those dolls are doing, or whether they may even be used in ways that would interfere with the cognitive development of very young children. But that doesn't seem to be something they're directing attention to, so it would be nice if that was raised at an international level. Then again, any governance coordinating committee is also going to have national bodies which oversee the deployment, or the concerns, within that cultural milieu. So, originally, when Gary Marchant and I proposed this idea, our first pilot projects were for AI, robotics, and synthetic biology in the United States, and I still think it would be nice to have those pilot projects in the United States; whether synthetic biology and gene editing are the same or overlapping, that's another question. But more recently it's become clear to me that, at least in
areas such as robotics and AI, we should be starting on an international front, while also putting in place national bodies to deal with the concerns and norms, and with the very simple fact that some countries are just way ahead in the deployment of different kinds of technologies. Drones in domestic airspace, for example, is largely an American issue right now; for service robots in the home, Japan and Korea are moving ahead, though nowhere near as quickly as some of us imagine. Well, just before I turn the program back over to Stacy: this has been extraordinary, just a wonderful articulation of where you're coming from, your perspectives, and information about these challenges, and I hope that you'll join me in thanking both Adam and Wendell for a tremendous performance.