Thank you all for coming here tonight. This is really a treat. Technology and innovation policy has never been hotter than it is today. When I started in this field 25 years ago, I was lucky to get maybe three or four people in a room to listen to anything I had to say about telecom, media, or technology policy. So this is really encouraging. What I want to do tonight is talk to you about some of the exciting things that have been happening in the field of innovation policy in recent years that I've tried to highlight in this book on permissionless innovation, and talk to you about this sort of conflict of visions that's developed over how the future should be governed, if you will. And this is based not only on the material in this book, but on a whole host of technology policy program publications that we have here at the Mercatus Center, all of which you can feel free to consult on our website. I want to begin by asking some general questions about the technological innovations that we've seen in recent years and decades. Specifically, I want to ask: where did all of this modern innovation that we now take for granted come from? Because sometimes we don't even think about the fact that 10 years ago, none of us had an iPhone or an Android phone in our pockets, or any of the social media services that we have today, or many of the other innovations we've just started taking for granted. Also, how did this area, Silicon Valley, develop so quickly? This is a map of Silicon Valley showing the corporate logos of some of the major players in the Valley. If you had taken that snapshot three or four decades ago, you would have recognized a couple of names, HP and a few others, maybe Intel, but there would have been only a few logos at all. And today, it's a vibrant community with all sorts of wonderful innovation.
Another question: how did the United States become a global leader in the technology space, and information technology in particular? This is a table put out by Booz Allen of the most innovative companies in the world. They do this every single year, and for the past 10 years or so, it's been almost all US companies, almost all of which are tech companies: computing, software, digital technology, networking, and so on. And importantly, for purposes of this slide and the next couple, notice that it's not only mostly United States companies, but that there are no European companies at all. In fact, how many European technology innovators in the internet or information technology space can any of you name? Anybody? Okay, Skype, that's a good one. Usually people give me the names of old state-owned telcos. But it's really hard, isn't it? Why can we name so few? Well, here's an image that gives you a feel for just how imbalanced this is. US companies in the tech space are household names across the globe. But European tech innovators? Yeah, Skype. Of course, Skype is now American-owned, and has been for a long time. Sometimes when I ask this question, like I did recently when I was doing some Senate testimony, somebody says, Rovio, and I say, the maker of Angry Birds, are you kidding me? Okay, I'll give you that one. But it's really, really hard to say why we can't name more European tech firms. How crazy is this imbalance? Well, look at this. These are tech unicorns, companies basically valued at over a billion dollars. And you see here that some of the top names in America, from Apple, to Google, to Facebook, just absolutely dwarf the European competitors. Facebook's market cap alone is larger than every billion-dollar tech company in Europe combined. Airbnb alone is larger than all of Germany's tech unicorns. That shows you just how imbalanced this is.
And then a final question: how is it that, from a consumer perspective, we have access to so many wonderful goods and services at the most wonderful price of all? Free. When I was cutting my teeth in the field of regulatory economics 25 years ago, the only variables that really mattered were the price, quality, and quantity of service. And regulators worked really hard to try to make sure we could get prices down and quality and quantity up. Well, now we've got all that innovation: low prices, high quality, an abundance of services at the best price of all. This is amazing, isn't it? So in thinking about that, we have to ask: what was the secret sauce that made all that happen? Economists and political scientists like to think about a wide variety of variables that lead to economic growth and innovation: things like taxes, education, labor markets, R&D spending, and so on, and certainly all those variables play into this question. But what I'm gonna suggest, and the key thesis of my book, is that it really comes down to a question of values and attitudes towards innovation and risk-taking. It comes down to permissionless innovation. It comes down to the idea of letting people generally be free to experiment with new ways of doing things and new business models without prior constraint, and even being free to fail without being punished for it. The United States made this the basis of our technology policy starting in the mid-1990s for the internet and the digital economy, and we prospered because of it. I'm not saying anything new here. Deirdre McCloskey and many other economists and political scientists have made this clear in their work throughout the years: values matter when it comes to innovation and commerce. If you're more accepting of markets and risk-taking, you're gonna get more of it.
And all my argument in this book is that, by extension, the internet and the digital revolution are the greatest proof ever that attitudes and values towards innovation and risk-taking matter. Now, it wasn't always the case. It used to be that the internet was highly permissioned. This is from an MIT handbook that went out to students in the early 1980s. It told them that the internet, which used to be called the ARPANET, was considered illegal to use for anything that was not in direct support of government business; that it could be both anti-social and illegal to use it for commercial purposes; and that it could get us in serious trouble with government agencies. Well, what changed? What changed is that in the mid-1990s, thanks to the Clinton administration and some members of Congress, we formulated a policy of opening up the internet to commercial and greater social exploration and experimentation. The defining document of that time, one that is often forgotten now, is the Framework for Global Electronic Commerce, released in 1997. When we heard that the Clinton administration was gonna be doing this, and that the report was gonna be authored by Ira Magaziner, who was responsible for Hillarycare, some of us were, needless to say, a little skeptical that we were gonna get a good report. I'm pleased to say that this is the most principled and probably the most pro-liberty document any government agency has published in my lifetime, and I think over the last century. It basically said that the internet should be treated as a market-driven arena, that government should avoid undue restrictions on commerce, and that, to the extent we had rules at all, they should be simple, minimalist, predictable, and consistent in character. Beautiful. And the rest has really been history.
Once we had that sort of vision in place, and it started to govern all of our policy decisions as a nation, it made clear that no one would need a license or permission to innovate, to go out and build the next great gadget or the next great service. Steve Jobs didn't need to go ask anybody in Washington, can I build this thing called the iPhone? Mark Zuckerberg didn't say, can I go ahead and do Facebook? These things just happened. By contrast, Europe took the exact opposite approach and floundered. They had heavy-handed, top-down data directives, all sorts of restrictions on e-commerce, and a host of heavy tax burdens. And this is what political scientists and economists like to call a natural real-world experiment, one we've seen play out over the last two decades on either side of the Atlantic, and we've won. But we're not done. What we argue in our work here at Mercatus in the technology policy program, and what I argue in the book, is that, as Eli said, we should expand this vision. We should basically realize that what's good for cyberspace is good for meatspace: whatever worked for the world of digital bits can also work for the world of atoms. We just need to give innovators that sort of clear green light that says innovation's allowed in this space too. Now, why is this important? Well, it's important because there are so many amazing new technologies that are just starting to emerge, some of which are still in the cradle. This is a map of the things that we cover in the Mercatus Technology Policy Program: the internet of things and wearable technologies, advanced medical innovations, robotics, driverless cars, drones, the sharing economy, Bitcoin, cryptography, 3D printing, virtual reality, augmented reality. We have papers on all of these things at the Mercatus Center, and we're trying to apply this vision of permissionless innovation to them.
And we're trying to convince policymakers that what we did for the internet and e-commerce, we should also do for all of these technologies and new mediums and platforms. Some policymakers get it. Some have already articulated a vision for many of these technologies, saying that, generally speaking, we should embrace the idea of permissionless innovation for the internet of things, drones, driverless cars, and so on. But many don't. And many don't because they're concerned about the risks associated with any new technology, because there certainly are risks. And this is what ultimately leads a lot of people, including a lot of policymakers, to favor so-called precautionary principle thinking. So what's the precautionary principle? Most of you have heard of it in the context of food and drug law or environmental regulation. It's basically the idea of crafting public policy in such a way that innovations don't occur until the creators of the new technologies can prove that there's not going to be any downside or harm associated with them. It's a sort of better-safe-than-sorry mentality that counsels prior restraint, what I call a mother-may-I approach to policy. And it's the antithesis of permissionless innovation. Now, in the fields and technologies that we cover at Mercatus that I just highlighted, there are several key issues that lead to precautionary principle thinking. These are the five main buckets we divide them into. There's privacy, or fear of psychological types of harms. There are concerns about safety, which can be either physical or maybe more of a psychic concern. There can be security issues: hacking, cybersecurity, and so on, or surveillance questions. And of course, economic disruption has always been the primary pushback against forms of technological innovation, starting with the Luddites on down, right?
Concerns about automation and job dislocation, or the destruction of various business models or sectors or professions, still drive a lot of concern today. And then, fifth and finally, there's the very troubling, difficult issue of intellectual property, which is always a challenging issue, but one that can lead to calls for a lot of control. And what happens is that this infuses all of the literature that's coming out of the academy and the popular press about new technologies. There's just one technopanic title after another being published. My desk is stacked with these things. I often take pictures of them and put them on Twitter as, like, the technopanic du jour. Like, here's the latest book. But it's kind of logical, because fear sells, right? Bad news sells. In fact, it infuses popular culture. You look around at popular media and television and movies about technology, and it's just dripping with dystopian dread about the future. We're gonna be enslaved. We're gonna be killed. The robots will take us over. We'll be cloned into becoming just mindless sheep. All sorts of different ideas about how technology is going to destroy us or destroy our humanity. So that sort of frames this distinction, this dichotomy if you will, this conflict of visions between precautionary principle thinking and permissionless innovation. The precautionary principle camp likes to think about innovation as something that needs to be carefully guided. Those of us who believe in permissionless innovation think, no, we should be more freewheeling and allow it to evolve freely. They value stability and equilibrium, whereas the permissionless innovation folks say, no, we should have spontaneity and experimentation. And so on. Basically, whether it's how they view risks or what they regard as the proper solutions to them, they always want some sort of preemptive, precautionary approach to guiding public policy towards new technologies.
Well, what's wrong with that? After all, isn't it good to be better safe than sorry, as they say? Well, there's a variety of problems with precautionary principle thinking. And I've spent my whole life writing about the precautionary principle in various contexts, finally boiling it down to this: the general problem with permissioning comes down to the idea that if we spend all of our time living in constant fear of hypothetical worst-case scenarios, and basing public policy on worst-case scenarios, then by definition, best-case scenarios can never come about. We only get greater progress and prosperity in a society, in a culture, by allowing people to take risks, even risks that involve an element of potential failure or danger. I love this cartoon by Hardin that came out several years ago, which says: we've considered every potential risk except the risks of avoiding all risks. Basically, the idea is that there can be no reward without risk. This is the great lesson of Aaron Wildavsky, my intellectual hero, the political scientist who wrote extensively about the danger of the precautionary principle and the risks of avoiding all risks. But of course, there are very specific problems with precautionary principle thinking. It can discourage new types of entrepreneurialism or entry. It can lead to cronyism and stagnant markets. It can obviously lead to a loss of global competitive advantage. And then, of course, for consumers, it can mean higher prices, fewer choices, and so on. Certainly, a lot of innovators realize this. This is just one of many polls I've seen. This is an Atlantic poll of Silicon Valley insiders and thought leaders in the Valley, who identified the greatest barrier to innovation as government bureaucracy and regulation. And of course, the second and third things on the list, immigration policy and education, are also government policies.
So, not to put too fine a point on it, but why should we care so much if we discourage innovation? Well, economic historians, political scientists, and other economists have made it clear in their work that there's widespread consensus that technological innovation is, as one scholar has said, widely considered to be the main source of economic progress. The Obama administration, in a 2010 report that nicely summarized some of the literature on growth accounting and economic development, said that technological innovation is linked to three quarters of the nation's post-World War II growth, and that it explains 75% of the differences in income across countries. Other studies have suggested that technological progress or innovation accounts for 30 to 34% of growth. We're doing more work on this at the Mercatus Center to show it, but you don't even need to look at the data. You can just think anecdotally. I don't know how many of you have watched the wonderful Don Boudreaux video about this, about the hockey stick of human prosperity. How do you explain the rapid increase in gross domestic product in recent years relative to the past? You think about all the various innovations that came about during the last 100 years; it has to be explained by how technological innovation has taken hold in so many Western cultures. But why should you really care? I mean, these are all good reasons, right? I think, I hope. But I think what a lot of people don't realize is that if you really care about human flourishing, and you really want to make liberty-enhancing changes to our world, the legislative process is a bit of a loser. We don't get a lot of positive change these days through the legislative arena. We get some through the courts, but it takes many, many years and a lot of expense.
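For readers curious where "share of growth attributed to technology" numbers come from, they typically fall out of growth-accounting exercises. Here is a minimal Solow-style decomposition as a rough illustration only; the growth rates and capital share below are hypothetical placeholders, not figures from the studies mentioned in the talk.

```python
# A textbook Solow growth-accounting sketch: attribute output growth to
# capital, labor, and a "technology" residual (total factor productivity).
# All numbers are illustrative, not drawn from any cited study.

def solow_residual(gdp_growth, capital_growth, labor_growth, capital_share=0.3):
    """TFP ('technology') growth as the residual:
    g_A = g_Y - alpha * g_K - (1 - alpha) * g_L."""
    return gdp_growth - capital_share * capital_growth - (1 - capital_share) * labor_growth

# Hypothetical annual growth rates: 3% output, 2% capital, 1% labor.
g_y, g_k, g_l = 0.03, 0.02, 0.01
g_a = solow_residual(g_y, g_k, g_l)
tech_share = g_a / g_y  # fraction of output growth attributed to technology

print(f"TFP growth: {g_a:.4f}")
print(f"Share of growth attributed to technology: {tech_share:.0%}")
```

With these made-up inputs, the residual is 1.7 percentage points, so "technology" accounts for over half of measured growth; the point is only to show the mechanics behind such estimates, not to reproduce them.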
But technological innovation raises all boats, and it basically accounts for the greatest prosperity and the greatest leaps in human freedom that have happened in recent decades and centuries. And that's why we plow so much of our time and attention into defending what we call technologies of freedom, and adding a dose of permissionless innovation to get more progress and prosperity. Okay, but wait a minute, what about those risks? Let me get back to them, because we need to take them seriously. Too many defenders of technological innovation just take a blasé attitude towards the disruptive effects of technological change. And that's a mistake. We need to take seriously the challenges that others raise, whether they involve privacy, safety, security, economic disruption, and so on. There may even be a case at times for a dose of precaution, but it's probably a very limited dose, one based on the fact that certain harms may be highly probable, tangible, immediate, potentially irreversible, and catastrophic. Obviously, we don't let people roll down the road in tanks with bazookas, or carry around uranium. There are some catastrophic harms that we simply have to avoid. But in most of the technology fields that we're talking about here tonight, this is not the case. We don't need that sort of heavy-handed dose of precaution. We can instead allow experimentation and trial and error, see what happens, and deal with problems as they develop. We need to deal with some of those problems, though, with a better toolbox of solutions. What we've done at Mercatus is devise a whole host of what we call bottom-up solutions to the complex challenges posed by new technologies. These solutions can be of an educational variety or an empowerment variety. You can talk about the importance of self-regulation and industry best practices.
Property rights, contract law, and the common law are often forgotten solutions to a lot of problems, such as those raised by drones or driverless cars. And then, if needed, if all else fails, maybe some targeted, limited legal interventions, but only after all of those other options have been exhausted. But what is probably most forgotten in debates about technology policy is the incredibly important role that social adaptation, assimilation, and resiliency play with new technologies. In fact, in my work and in some of my law review articles, I've spent a great deal of time trying to develop a model, a sort of familiar cycle of technological adaptation or assimilation, to explain how, for almost every technology you can look at historically, you find initial resistance, then gradual adaptation, and then eventual assimilation of it into society. That is, we become more familiar with it, we become more comfortable with it, we bring it into our lives. And the lesson there for policy is pretty clear: it counsels patience and a little bit of flexibility. There's no better case study of this, in my mind, than the rise of the camera and public photography. When the camera came into the public consciousness in the late 1800s, there was a real panic about it. All of a sudden, people could walk down the road and take pictures of you, which was never possible before, and they could do it without permission.
This led to such a panic that Louis Brandeis, who of course later went on to be a Supreme Court justice, along with his friend Samuel Warren, a Boston lawyer, wrote the most famous law review article in American history about privacy, called The Right to Privacy, in 1890. In it they proclaimed that, because of photography, "instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life," and "numerous mechanical devices threaten to make good the prediction that what is whispered in the closet shall be proclaimed from the house-tops." They wanted the camera heavily regulated. They wanted, essentially, exemptions from the First Amendment for public photography. They wanted newspapers to be limited, through licenses, in how they could actually use a camera in public. And they and a lot of other intellectuals sat around wringing their hands: what are we gonna do about the camera? And in a few short years or decades, we had our answer. We were all gonna buy one, right? It became an essential part of the American experience. We all wanted to have a camera in our lives, but we had to learn a new set of social norms. We had to develop a new sort of technological etiquette, if you will, about when and where it was and was not appropriate to use a camera, all the way down to this day. If you go to a gym, for example, one of the signs you'll see is "no cell phones in the locker room," because they don't want people holding up cameras while you're in a state of undress. So there's obviously an etiquette that develops around technology, a sort of informal regulation, if you will. But the panic over the camera has played out so many times in so many other ways. I highlight all of these different case studies in the book. There have been panics going all the way back to the telegraph, for God's sakes. The telephone, of course, and all sorts of music. There's been one techno-panic or moral panic after another.
There was a panic about caller ID, about how it invaded our privacy. There was, of course, the one about RFID chips. There were books written about how RFID chips were going to be implanted in our heads and give us the 666 sign of the devil on our skulls. There was a serious book about that. Gmail, people don't realize, when Gmail was launched in the mid-2000s, there was an effort in California to ban it. To ban it! And now 450 million people use it as an essential resource in their lives, free of charge. And you could go on and on. So, my friends over at the Information Technology and Innovation Foundation put together this nice little chart depicting panic cycles, showing how we go from a point of panic, to the height of hysteria, and then to the point of practicality. I think that's a nice way of thinking about how these things often play out, and why it's important to be patient. Because eventually, as they say at the bottom of the chart, we move on from these things. But for each of the technologies and sectors that we cover here at Mercatus and that we care about, we see this same panic cycle play out over time. Here's a map that ITIF did of some of those technologies and where they currently sit on the curve. Clearly, things like drones and wearable technology are starting to move up the panic curve, and that's why Eli's been doing so much on drones lately and I've been doing so much on wearable tech. Others are coming down the curve. There was a panic a few years back about Google Street View: what are we gonna do about that? Well, now we're all relying on it, right? That's a really great resource. But the bottom line is that all of these panics about technology, based again on those five concerns, mean we have to figure out a framework for addressing them constructively.
This is the whiteboard in my office, which you can come and look at any time and which is constantly changing, with all of those technologies on one axis and all of those regulatory threat vectors on the other. What we've done, Eli and I and our team at TPP, is try to put some weights on which of these concerns are going to drive policy, and then figure out which of us is gonna handle it, or whether we're gonna have to find other people to help us out, or come up with a way to apply our policy blueprint to solve the problems that people have identified for each of these technologies. That policy blueprint, obviously, counsels forbearance, humility, and patience as the first order of business from policymakers: being restrained before they actually try to adopt new regulations for any of these technologies. But we have a very specific 10-point checklist, and it was passed out to you today in a shorter paper, I think, when you came in: a 10-point policy blueprint for how to apply the permissionless innovation vision to each and every new technology that comes along. You've already heard me talk about most of these things, but the key is that we need policymakers to first and foremost articulate and defend the vision of permissionless innovation, the way the Clinton administration did in their Framework for Global Electronic Commerce for the internet. We need that for the internet of things. We need that for drones. We need it for driverless cars. We need it for Bitcoin. And we're starting to see some policymakers embrace that and defend it. They also need to get serious about removing barriers to entry that might exist for things like 3D printing, virtual reality, or the sharing economy. And that's what we write a lot of papers on at Mercatus: identifying those barriers and removing them.
Then they have to get serious about defending freedom of speech on these platforms, because all of these platforms, all of these technologies of freedom, involve speech and expression. After that, we talk about the importance of making sure onerous liability is not imposed on intermediaries in these sectors. And then there's all the rest of the bucket of solutions that I've already identified: common law solutions, property rights solutions, insurance solutions, new competitive entry, tort law, and so on. And then, of course, cost-benefit analysis, which is a crucial thing for us here at Mercatus. At the end of the day, I put it all together in this little spectrum, this image that asks: where should public policy start? Should it start with the red light of the precautionary principle, telling innovators you have to come get our blessing, kiss the ring of someone in Washington, and say, mother may I, can I please do this? Or do you start instead with the green light of permissionless innovation, thinking about bottom-up solutions that rely on adaptation and resiliency before we think about more heavy-handed, anticipatory regulation of new technology? This is something we highlight in all of our work at Mercatus, in an extensive amount of research that we've already done, and we're hoping to do a whole heck of a lot more. I hope that you have been interested enough in this to engage more in this topic. Eli and I and the rest of the tech policy team desperately need help on these issues. We're severely outgunned by the other side, who want precautionary principle thinking to pervade all of these sectors, technologies, and issues. I thank you very much for your time, and I'd be happy to take questions. Jeff's got the mic. Okay, why don't we start over here. Hi, great talk, and great book. I think this is a wonderful area of developing economic thought.
I come from a legal background, and I've always felt a little guilty, because I've always thought to myself: if I were legal counsel to, say, Uber management four or five years ago, I would have definitely said, you can't do it. There are laws regarding cabs, et cetera. You'd basically be breaking the law. This isn't quite like Facebook or Google, where there were no preexisting structures. Are you interfacing with law schools on how to teach lawyers to counsel on the broader spectrum you've described? That's a wonderful question, and yes, the answer is yes. I do most of my speaking at law schools or in philosophy departments, not in econ departments. And that's because most of the precautionary principle thinking emanates out of those schools or those programs. But what you're pointing out is exactly right: a lot of the emerging technologies that we've identified here, especially the sharing economy, are butting up against the old economy and old economy rules. Many of you have probably heard the famous quote from venture capitalist Marc Andreessen that software is eating the world, and that basically the ramifications of the digital revolution are coming to spread into each and every sector of our economy, whether they like it or not. That includes medical technology, transportation services, and so many other things. So the legal counsels for these companies obviously are notoriously cautious about these things. But luckily, a lot of the innovators have said, the heck with it; to borrow the phrase from Admiral Grace Hopper, we're basically gonna go ahead and innovate and ask for forgiveness later. And that's exactly what they're doing. They're essentially engaging in forms of technological civil disobedience, if you will.
I've often started lectures at law schools or philosophy departments with a provocative phrase like: what is Uber if not the biggest lawbreaker in America today? And God bless them for it. Because we would not have known what that innovation would look like in those sectors if they hadn't come in and said, kind of like, na na na na boo boo, we're gonna do this. And they did it, and they've taken a lot of heat, but they have opened the door for a whole host of other sharing economy services. In some ways, Airbnb has done the same thing. And it's clear they are not playing by the exact same rules that old taxicab companies or hotels are playing by. If we tried to pigeonhole them into those rules, those innovations would die. So what we argue in our work, and I do a lot of work here with our SAMCAP team, our Center for the Study of American Capitalism, on cronyism issues and barriers to entry, is that there's a right way and a wrong way to level the proverbial playing field in a regulatory sense. The wrong way is to try to impose the old rules, the old regulations, on the new technologies. Instead, we should level the field by deregulating down rather than regulating up: basically, take the sharing economy as the new baseline and say, let's have hotels and taxis regulated more like them, and lighten the load on them. That would be the right way to deal with that regulatory asymmetry. I think Steve had a question over here. Thanks, Adam. Steve DelBianco with NetChoice. You talked about cyberspace and meatspace, but I have to work in the legislative space: state capitals and the Congress, where I'm trying to talk them out of precautionary disasters. And I quoted you liberally just Monday in Jefferson City, Missouri. Did you get my permission? Oh. Actually, it's permissionless quoting. Yeah, okay. And I quoted you liberally because they still don't allow Lyft and Uber to operate in the state of Missouri, right?
So again, it's trying to push back on that, and I have discovered that I'll counsel against the precautionary mistakes, but the opposite of precaution isn't no caution. It's probably post-caution, maybe what you call adapt and react, and maybe we need to explore that even more in your next article or book: how to intelligently adapt and react in a post-cautionary sense to demonstrable harms that aren't gonna already be solved by the market itself. Because I find that when I tell them not to do precaution for all the reasons Adam says, they're very uneasy in the state legislature about taking the no-caution approach. Yep, yep, well, that's a great question, and I agree with that. And we talk about that in the book, and we talk about it in a lot of Mercatus research. Eli's been doing a lot of research on drones, for example. There's clearly a very serious safety concern associated with drones, but we don't wanna stop a lot of that wonderful innovation. We, however, have a legal apparatus, a legal set of rules and torts on the books, that can actually handle a lot of these problems associated with drones through property law or privacy torts. The same applies to many other sectors of our economy and other technologies. So there's been some wonderful work done on this, including by people sort of to the left of center. This is a very non-partisan affair. John Villasenor of UCLA wrote a paper for the Brookings Institution about these liability issues associated with driverless cars and drones, and he basically says, look, be patient. Let this stuff work itself out in the courts, the same way it did for every other technology like this where we heard the same horror stories. The problem is that that's not enough of a security blanket for some people, for some policymakers and for some activists. For some regulatory activists, they want the security that's provided by the precautionary principle saying, we'll make everything okay up front. 
Which is to say, you make it so perfectly sanitized that innovation can't exist, and then there can be no harm, and hey, we've solved the problem, right? I think the way we counter that is to basically just point out all of the innovations we've had because we did not have that approach for other technologies. And the really exciting thing about all these new technologies is that now that they're out there in the wild, we've got people who are becoming essentially the lobbyists for the future. There's a famous phrase by some lawmaker, I'm gonna forget who, it's in the book, that basically says the future has no lobbyists, right? The incumbents of the past have tons of lobbyists, right? But consumers are the lobbyists for the future, if we can get them to get a feel for these technologies, right? Where's the mic? Hi there, I'm Peter Somerville with StreetShares, a financial innovation company. And one of the things I notice with the regulators that we work with is that, if you do sort of a public choice analysis, they have every incentive not to exercise that sort of restraint. In my area, no one gets a pat on the back for allowing the Bitcoin world to evolve, but if you are the guy who let the Bernie Madoffs of the world through, you're gonna get hauled before Congress and browbeaten. Yeah, exactly. Are there efforts to change those incentives? Are there ways to recognize and reward the use of restraint? That's probably the hardest question to know how to answer. I mean, risk aversion runs deep in regulatory circles and in public policy circles, right? They wanna, at all costs, avoid those sorts of problems or pass the buck onto somebody else. It's hard to know how to answer that question because all you can do is ask them to take a leap of faith with us and say, you have to embrace an uncertain future. I highlighted a few of the policymakers who have been willing to do that. 
If you look at what Cory Booker's been doing, or Senator Ayotte, or some of the others, they're willing to talk about it that way, but most instead wanna come up with a set of rules that would say, well, here's the playbook for when this goes wrong or that goes wrong. I mean, there's amazing work being done right now, troubling work, by some senators to basically stop driverless car technology until such time as we can guarantee that driverless cars are hack-proof. And you're like, well, there's no such thing as a hack-proof anything in this world anymore, right? But here's what we do know. The current state of affairs is that almost 100 people die every day in America in vehicle crashes, and something like 6,500 people are injured every day in the United States. Those are real people hurt or dying because of human error behind the wheel. I have to believe that by taking a leap of faith and actually allowing a little bit more permissionless innovation with driverless car technology, we're gonna save a lot of lives. In fact, it might be the greatest public health story of our time if we could bring down vehicle deaths, right? And that's the way I think you have to do it. You have to frame it in terms of risk trade-offs and opportunity costs, right? I think that's the only way you can get them to understand it. Still, they'll be very cautious about taking that risk and allowing deregulation, or not trying to preemptively regulate. Back over here, question over here. Frank Manheim, School of Policy right across the way. Wonderful presentation. But I noticed your book did not have the word environment in it. And that brings up a point. Most of the innovations that you refer to are either in IT or other areas that do not involve the environmental regulations, which control almost the whole field of manufacturing and industry. So have you thought about addressing the fearsome problems and the emotions that are involved in that area? 
Yeah, great question. Well, one book at a time. You can only cover so much in one book. We have other people who cover a lot of those sorts of regulations here at the Mercatus Center. I like to point out to people that many of these technologies can create wonderful benefits for the environment. I just mentioned driverless car benefits for human health, but there are also amazing potential benefits from networked vehicles and driverless cars for the environment. And if we could talk about those sorts of trade-offs, we might actually get some traction with these things. But yeah, I agree, there's no doubt about it. The field of environmental law, much like the field of food and drug law, is the most heavily precautionary in character of all sectors or fields of law. And so that's the toughest nut to crack. I mean, I'm doing a lot of work right now with my colleagues Bob Graboyes, Richard Williams, and others here about food and drug regulation, FDA regulation, and this amazing intersection of the world of digital technology and the old world of FDA regulation. And something's gotta give. You think about 3D-printed prosthetics, or advanced medical devices that are now embeddable in your body. You think about all of the health and fitness technologies that are in your phone. Is this a phone or is this a medical device? The FDA's currently trying to figure that out. But luckily, technology's moving so fast that they can't slow it down. So some of them might try, but it's gonna be hard. So yeah, I'd love to do more on the environment. Maybe you wanna write a paper for us on this? Ha ha. Sir, in the middle. Thanks, I'm Jim Hasik. I'm with the Atlantic Council across the river. I wonder how much of this is about risk or, as you say, risk aversion, and how much of it is about vested interests, because you mentioned Uber particularly. 
As I was walking to work this morning, I saw this line of, I don't know, like 100-something people lined up at the Tesla dealership, or not dealership. It's not even a dealership, right? That's the whole point, it's not a dealership. The Tesla store, to order a car that nobody's seen yet. I can't do that at my other place in Austin, Texas, because you can't buy a Tesla in Texas. I don't think this is about risk aversion. I think it's about car dealers handing out, you know, walking-around money. So which is harder, and why is it, do you think, that Uber could crush the taxi cab commissions effectively over most of the world, but Elon can't do the same thing with the car dealers? Well, I think in time that will change too. To answer your question more generically, though, you're right, it is often about vested interests, and there's a public choice story to be told about some of these sectors. Clearly with the sharing economy, there's a powerful public choice story. Clearly with the Tesla question, with driverless car technologies, things like that, there are vested interests who don't necessarily want to see change, and they control the legislative levers that basically make sure it doesn't change. On the other hand, there are all sorts of other issues where there isn't a clear public choice story. I deal mostly with the five buckets I identified, sort of the major fault lines of these technological innovation issues where the precautionary principle comes in. Privacy was first on the list for a reason, because most of these technologies are threatened by sort of regulations that would say, oh my gosh, drones will invade our privacy, intelligent vehicles will invade our privacy, wearable technology will invade our privacy. There are so many regulations pending based upon the idea that private corporations are gonna have too much data about us, and there are some legitimate concerns about those data, but there's not as much of a public choice story there. 
There are actually maybe some more legitimate concerns about how these technologies will affect our lives personally. It doesn't mean that there won't be vested interests there eventually. So it depends, is the answer to that. And then to your question about how we solve this problem, well, I think the good news is that when it comes to things like the sharing economy and Tesla, we've got a lot of powerful real-world evidence now based upon where it is allowed and where it isn't. I mean, you look at the example of what happened in Las Vegas, which my colleagues Christopher Koopman and Matt Mitchell and others have identified, about how it was forbidden there and they had a huge problem with long-hauling taxi fraud. Next thing you know, you've got the threat of Uber, and then you've got the potential for solving that problem, because you can track where an Uber's at at all times. You can't literally be taken for a ride now in Las Vegas by someone who drives you out through the desert on the way down to the Strip, right? The same with Tesla. People who enjoy it in some states obviously brag about it, and in other states people are jealous and want one. I mean, people are lining up today for them, right? So I'm optimistic that dynamic changes things. It does take time. It does take a lot of time. We've got a question in front, and then I know there was another one over here somewhere. I'm Bob Hershey. I'm a consultant. What can be done to get people to do the math, to look at it on the basis of what they call mathematical expectation: the probability of gaining something times the amount of gain, minus the probability of losing something times the amount of loss? So Bob's an old friend of mine, and Bob has been asking me the same question for probably 25 years because he wrote a wonderful book on these topics. So that's a great question, Bob. I think one of the things we do here at Mercatus is we crunch the numbers. 
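Bob's expected-value test can be put in a few lines of code. This is a minimal sketch of the formula as he states it; the probabilities and dollar figures below are purely hypothetical illustrations, not numbers from the talk.

```python
# Expected-value test for allowing an innovation, as the questioner frames it:
#   E = P(gain) * gain  -  P(loss) * loss
# All inputs here are hypothetical; figures are in millions of dollars.

def expected_value(p_gain: float, gain: float, p_loss: float, loss: float) -> float:
    """Expected net benefit: probability-weighted gain minus probability-weighted loss."""
    return p_gain * gain - p_loss * loss

# Hypothetical example: a 90% chance of $100M in consumer benefit
# against a 10% chance of $50M in harm.
net = expected_value(p_gain=0.9, gain=100.0, p_loss=0.1, loss=50.0)
print(net)  # 85.0 -> a positive expectation argues for permitting the innovation
```

The hard part, as the answer that follows notes, is that for variables like privacy or safety the probabilities and magnitudes themselves resist quantification, so the formula is only as good as the proxies fed into it.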
Whenever we can, we try to create models and run the numbers and do cost-benefit analysis. I will say this. For a lot of the issues that we've identified in our work here in tech policy, it's not always easy to do that. Sometimes you're dealing with variables that are not quantifiable. For example, going back to the privacy and security issues that I raised, I've spent a bunch of time trying to figure out how to do cost-benefit analysis for regulatory proposals that would regulate or limit the collection of big data sets by companies, data sets that basically bring us all sorts of benefits. It could be cheaper loans for cars or homes or whatever else, or better insurance. But there are people saying, no, that's too invasive. It's too intrusive in our lives. It's too privacy-invasive, or there's a security threat. Well, how do I model that? When I get into privacy preferences, we try to weight these things. It's complicated, right? Because with these variables, there's a bit of an eye-of-the-beholder problem with how you value privacy or security or safety, or how you even define them. So this creates a real challenge for us when we're trying to deal with it. And in other contexts, I've got some examples from the past. I used to deal in endangered species law. Like, what's the value of a spotted owl, right? We've tried to deal with that, but it's hard, the value of a pristine natural environment. So doing the math would be easier if these were things that were easily quantifiable, is the short answer. When they are, we do it. We come up with proxies when we can't. We try to find other ways to solve it. We had another one over here somewhere. Sir? I'm just from the Ballston-Virginia Square Civic Association, so just this neighborhood. Very quickly: Apple and the FBI, or James Comey versus Tim Cook. The order was vacated, but the issue is not resolved. No, no. We're refighting the crypto wars. This is probably the single most important issue being debated in the field of technology today. 
Will our digital systems, our modern connected information networks and platforms, be safe and secure thanks to strong cryptography or not? And there is a real danger that our government is going to take steps to try to intentionally weaken these things, on the grounds of national security or law enforcement, only to open up huge vulnerabilities that make us less safe and secure in the long run. So I don't know how this turns out. I thought when we fought the crypto wars 20 years ago, in the late 90s, that I was done with them. I washed my hands of them, saying, hey, we won. We can move on. Wow, was I wrong. Nothing's ever settled in this town. Sir, back there. David Barinski, my company is Come Home Baltimore. We renovate houses in blighted neighborhoods in Baltimore and sell them to market-rate buyers, and we attract market-rate buyers by having a heavy community development component to what we do. And because of that, we are very involved with city agencies, and city agencies are very involved with us. And I find, even with the best of intentions, that the agencies sort of can't get out of their own way. So there's an element of system design, a lack of resilience or a lack of adaptability, that is in part intrinsic to government function, but it's partly just a system design issue. So in listening to you talk, it occurred to me that it's not a vested interest like an ill-motivated politician. It's the head of the housing department trying to do something and not being able to do it, to their own detriment and to the detriment of the people in the community. That's exactly right. In the book, I talk about the danger of good intentions. Sometimes the most well-intentioned laws do the most destructive things for innovation. Exactly. You get too stuck in the ways of the past, or you keep thinking, well, we're doing this for all the right reasons. We just need to try harder. 
I can't remember if it's the title of a section of this book or a paper I did recently, but it's about building a better bureaucrat. Like, we just need to build the better bureaucrat who can make this happen, right? Well, we've been there and done that, and it hasn't worked out so well. Sometimes you need to change. You need to let loose of the past. One of the things we suggest in the book, and in our work more generally, is that we need to get serious about sunsetting old rules that cover new innovations. And I basically say that what we need is essentially a Moore's law for technology policy. Moore's law refers to the well-known principle that the power of a microchip basically doubles every two years while its price usually falls in half. Well, if the whole world is expected to innovate at the pace of Moore's law today, why shouldn't government? And so basically I say any technology policy enactment, or anything that's already on the books, should be sunset every two years. And if it's a good policy, you can make the case for it and bring it back. But so many of the things we fight over, and that sounds like some of the things that you're struggling with, are just old rules that seem well-intentioned but aren't having positive results for consumer welfare. And it's so hard to get them off the books. We need to find some sort of mechanism that gets them off and requires people to make the case for putting them back on. That's hard, though. Question here and there. This gentleman was first, let's have him. Somebody let me know what we're doing on time, by the way. Seven minutes. Hi. I'm also a lawyer, more particularly in the field of intellectual property law. In the broad sense, I was curious to see you note intellectual property amongst the buckets of the precautionary principle. 
And I'm curious if you could explore that a little bit, given particularly, for example, the patent law sense, where in principle, and you could set aside the implementation, the principle of patent law theoretically is to encourage innovation, exactly. And in that context, you had made mention of a more preferable embodiment of innovation being, we're gonna innovate first and ask for forgiveness later. Which kind of speaks to the next innovator, but what of, effectively, the prior innovator? Sure. So several of my colleagues are looking at me right now smiling, because they know how difficult this issue is for me. And you can go down a real rat hole talking about intellectual property issues. In 2002, I wrote a book on intellectual property issues called Copy Fights and thought I was gonna sort of stake out some common middle ground in the fight among free-market-oriented people about IP issues. I got nothing but heat and grief from people on both sides, so much so that I gave up the issue and walked away from it forevermore. It's a challenge, and I don't mean to suggest by putting it on the list that all intellectual property issues, all copyright and patent issues, necessarily deter innovation. Many of them, of course, incentivize innovation. I believe that. On the other hand, I think today intellectual property provisions, especially in the copyright code, are starting to greatly discourage forms of innovation. And in the patent sense, I'm worried about software patents and business method patents also discouraging forms of innovation. I find that there's an extraordinary amount of litigation and sort of rent-seeking that goes on in this space that a lot of people who defend intellectual property just wanna turn a blind eye to, and I don't think I can turn a blind eye to it anymore. 
It doesn't mean you throw the baby out with the bathwater, but it does mean you need to be skeptical about some of the claims for how much of this protection we need, because if you ask me, there seems to be, and I'm as surprised about this as anybody, as someone who used to be a strong defender of IP, a whole heck of a lot of innovation that happens outside of the world of strong IP, which is hard to explain except for the fact that there must be other things that incentivize innovation. So I'm torn about it. I don't have any answers that will make anybody happy. There needs to be a balance, as I like to say, but I am concerned that increasingly that balance is tipped too far in terms of strong protection. Okay, we're down to five minutes, so maybe a couple more questions. Oh yeah, that's right, I'm sorry, over here. Anika Green, White House Writers Group. So you may not have an answer to this, but my question after reading the book was, who is going to lighten the regulatory load? I mean, we know that bureaucrats and regulators sort of live to justify their existence, and you even cited the White House technology office, I don't remember the exact title of the office, that had some great policies on technology issues to try to have the mindset be that you have to justify new regulations, and yet you said they don't adhere to those. So where is the mechanism to get the old laws off of the books? Congress or these lobbyists? What do we advocate for in the public sphere? Yep, yep, that's a great question, because as I pointed out in one of those slides, it's very, very hard to get any sort of change done through the legislative process or the regulatory process. Most of these agencies, the FCC, the FDA, the FTC, every F agency under the sun, are very slow to adapt to new technological realities. What instead happens is, Jason, what was the name of the article you sent me from Harvard Business Review? Come on, spontaneous deregulation? 
Spontaneous deregulation, as two scholars call it, is what happens when essentially innovation takes hold in certain sectors and starts the process of deregulation naturally, without the law ever changing. Again, you might think of this as a form of technological civil disobedience. We're essentially saying the law exists, but we're gonna kind of do an end run around it. And that's exactly what's happening in many of the sectors that we cover here in the Mercatus Technology Policy Program. People are putting drones up in the air and not registering them. I know I didn't register the drone I got my son for Christmas. I'm breaking the law. He's breaking the law. Maybe he'll go to jail for me. Same with the sharing economy. It's the same thing in many other sectors. So it could be that these things just happen, and the laws just stay on the books but aren't really enforced. We just basically turn a blind eye to the fact that they even exist and say, yeah, sure. I think what happens is that with the really tough issues that policymakers care about, usually it's things like taxes. You'd be surprised. I was recently interviewing a bunch of people for a documentary we've been making here about sharing economy issues. And the Airbnb owners and the Uber folks, when we talk about policies and regulations, they all say all the government cares about is the taxes. And we're starting to solve that problem. You see now on your Uber bill there's the DC amount and so on, and in Airbnb it's already been factored in. Other things, though, like disability access in Airbnbs, they're not enforcing. If they did, Airbnb wouldn't exist, right? I mean, nobody's Airbnb is compliant with the Americans with Disabilities Act. So even fire codes that apply to hotels don't apply to Airbnbs. Well, they do, but they're not enforced. So I think basically we get this spontaneous deregulation. And in some cases there may be some dangers associated with it, but I think at the end of the day we're better off for it. 
Maybe we have time for one more in the back. Is that Katie? Speaking of taxes. So, Katie McAuliffe, Americans for Tax Reform, Digital Liberty. So I have kind of two questions, and I'll make it quick. One of the things I see, and let's use Uber again as an example, is that they tried and tried and tried to regulate it, and now they're going after the independent contractor classification. So I was wondering about your thoughts on that, and then also your thoughts on Airbnb proposing how they should be taxed, and how they can raise $2 billion in taxes, because they would suddenly like to be taxed. That second one is a hard one. And it does raise an interesting question, which is, could some of the larger sharing economy operators start to become too comfortable with public policy and with regulation? And again, my friends at the Mercatus SAMCAP team, the Center for the Study of American Capitalism, have documented how sometimes Uber and others are writing laws at the state and local level to basically authorize or legitimize their particular business model as it stands today. And that's a problem. We don't wanna just make the world safe for the sharing economy only to make it less safe for the next thing that comes along, whatever that may be. It may be robotic cars, right? That may be what makes the sharing economy, or Uber, obsolete. And that answers the first question, about these labor designations, whether Uber drivers are independent contractors, and the sort of whole body of labor law that could just crush this sector. Well, if it does, that's the best incentive in the world for Uber to continue its massive investments in robotics labs to basically make sure that there are no drivers behind the wheel. Which is exactly what Uber did. They raided the Carnegie Mellon robotics lab and took their best computer scientists to develop autonomous vehicles. 
So I think that's another example of how regulation can have perverse incentives, but in this case, it may be what the future holds anyway.