Fantastic. And now, since the order has changed, I will do my best. So first, Matt Donne is a senior associate in Booz Allen's high-tech manufacturing business. He advises senior clients and leads project teams in driving innovative cybersecurity analytics and risk management solutions, particularly for automotive, industrial, and consumer product companies. Then next to him is Seth Carmody. He's with the FDA Center for Devices and Radiological Health, focusing on cyber safety and medical devices. Next to him, going back to the other page, is Josh Corman. He's the founder of iamthecavalry.org and the CTO of Sonatype. Next to him is Adam Thierer. He's a senior research fellow in the Technology Policy Program at the Mercatus Center at George Mason University, and his latest book is Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. And last but not least is Hilary Cain. She's the Director for Technology and Innovation Policy, Government and Industry Affairs, nice title, at Toyota. And she handles policy issues relating to connected vehicle technology and in-vehicle telematics, including spectrum, data privacy, and cybersecurity.

Now, this is a fantastic panel, and so I'd just like to open up by asking all of you, where do you see the likely greatest impact of IoT technology over the next couple of years?

All right. Thanks, everyone, for having us today. I'll start with the word culture. I'm not going to go down a technology route here, but when I look at the term IoT, I think it's actually a bit limited, because it's not just about the devices. The term doesn't put the human, the consumer, the citizen at the center of the topic. And that's really what it's all about: how our lives are changing. So when you think of these things, IoT, I actually like to flip the terminology towards connected society, because that's what it's all about. These devices enable us to live in a different way.
This always-on connectivity is actually changing the way that we live, where the internet is inseparable from us as we go over the next few years. So we're at a point of, I think, cultural demarcation. That line is coming up, and we're going to have to really rethink how we look at this world and how we interact with it. I view it in three different pieces that are changing: government, commercial, and the individual itself. With government, you've got people that are much more nomadic, both physically and virtually. There's a lot less allegiance. The nation state, that comes up a lot. What does that mean? So in this realm, we're going to have to have governments look and re-engineer, re-imagine what they are, and engage people through new creative ways, new types of agile agencies, new types of partnerships, alliances with commercial entities attracting new types of investment. It's a whole new re-imagining of government. On the commercial side, you're seeing all sorts of companies get closer to the consumer. They don't just build a product and throw it out there into the world, right? Tesla gets closer to the customer. Amazon's all up in your face, where you push a button and it automatically orders Tide again for you, and things of this nature. So the value chain has condensed; it's gotten a lot smaller, a lot more in your face today. And then the last point I'll make is the individual. You are truly a consumer of one, a customer, a unique entity. And all the data we spin off as a person from the time we get online is captured forever. The sum of the data that we leave this world with is honestly going to be what people know of us going forward. It will live on. And I'd like to just end real quick on a story. One of my family members passed away recently. And we looked up his will. And it was kind of comedic to read. But he said, I don't want any parties, any type of funeral.
Because with the data on the internet, I'm going to live on forever. So you make sure you celebrate me there. So we threw all sorts of embarrassing videos on YouTube and everything to celebrate him. But it's true. It really makes you think differently. So that's my kind of perspective: it's on culture.

Any others want to jump in, IoT innovations?

So I'll jump in. Selfishly, as a person from Toyota, I'm going to say automotive. I think we're already starting to see pretty remarkable advances in terms of in-vehicle telematics and infotainment systems and remote services where you can interact with your car from your smartphone. But I think what we're going to see in the next few years is fundamentally more transformative than that, when we're talking about vehicles communicating with each other and the infrastructure around them for crash avoidance purposes, and also self-driving cars, which will just blow our minds in terms of how many lives they're going to save and crashes they're going to avoid. So I'm going to go with cars.

Anyone disagree?

No, I wouldn't disagree with that. In fact, we're going to see significant cost savings and life savings because of connected vehicle technologies. It could be one of the great public health stories of our time. But I'm actually going to go with health, awareness, wellness, and fitness types of applications of IoT, because that's what we already see the greatest demand for. Everybody wants to make themselves healthier, feel better, or otherwise improve their health. And so you're going to see a significant ramping up of the already hot demand for those types of technologies. You already see it with the quantified-self movement more generally. I myself know this firsthand. I am wearing my Fitbit; I lost 30 pounds in a month and a half. So real-world results right there, right? And a lot of other people have similar stories. So I think there's going to be continued demand there.
I think the obvious answers are the kind of efficiencies you can get in a smart home from environmental and power consumption savings, but also the cars. I think one of our sources of both excitement and fear when we started the cyber safety group of iamthecavalry.org is we looked at cars and we said, look, self-driving and semi-autonomous vehicles. I think the stats were that in 2014, there were 32,000 fatalities in automobiles, which is down from where it used to be, but 94% of those involved human error. So the promise of semi-autonomous vehicles is you can cut out massive chunks of that 94%. Think of all the lives we'll save. And since we come from the hacker culture, we know that if you put software on something, it's vulnerable, and you connect it to something else, it's exposed. We kind of see that if software is eating the world, it's actually infecting the world. We're putting Bluetooth and Wi-Fi and a 4G LTE Wi-Fi hotspot standard in all vehicles, giving easy access to anyone on earth to assert their will on you and your family. And we're insufficiently guarding against those threats and exposures. So what we want to do is make sure that the trust that we place upon these innovations is worthy of that trust, and we're pretty far from that. So we don't want to scare everyone away from driverless cars. In fact, the fear that we have is for the promise we just outlined: if there is a Jeep hack that's not on stage at DEF CON, but rather in the wild, my mother and I will never trust a Jeep again. And if there's a crisis of confidence, not only is it where bits and bytes meet flesh and blood, but it's also gonna have a material impact on GDP. I think one in nine Americans are employed by automakers, parts makers, service dealers, and the general ecosystem, and this isn't funny. So I hope that we can catch up and actually make defensible, resilient, survivable platforms out of this IoT, or else, looking back, it'll be like asbestos.
So I have a personal note and a professional note. As Josh alluded to, on a resource conservation angle, personally I would love an autonomous car to drive me to work so I could think about FDA problems instead of driving through DC traffic problems. On a professional note, I think people have this concept that the FDA is technology averse, and really that's not true at all. We welcome all sorts of technology. We do ask that you back up your claims such that we can substantiate safety and effectiveness, which is our mandate at the FDA. But other than that, I invite people to bring their innovative technologies into the Center, and especially to leverage the presubmission process at the Center, which allows folks to come in and have a conversation about their innovative technologies.

Great. So we've talked about some things that I think people would expect you to give answers on. And so I'd like to push you all to think about what are the unconventional cybersecurity risks that you think will emerge in the next five to ten years? Not just the obvious ones, but can you think of any risks that go beyond what we might be thinking about today?

Well, I'll mention one that's way out there. Biohacking, and people actually engaging in truly innovative types of experimentation on their own bodies, extending the point I just made about quantified self and human augmentation. That's going to be a huge issue. I see Joel Garreau sitting in the audience, who wrote the best book on this ever written, called Radical Evolution, I guess 10, 11 years ago. And it's still very fresh. And it points out that increasingly, with these technologies, people are going to want to use themselves as sort of the experimentation platform. And so a lot of IoT technologies that today are wearable are eventually going to be embeddable and ingestible.
And as that continues to take hold, we're going to have all of these concerns about security, vulnerabilities, hacking, whatever else, start to take effect. We of course hear the pacemaker story again and again, but that's just the beginning of it. So the question is, can we devise reasonably effective security practices or privacy practices so that those technologies can go into the body, and not just be worn on it?

I'd like to say these are far out, but they're the adjacent possible, and they're actually manifesting. Most people, when they think of hacking medical devices or cars, presuppose a financially motivated criminal actor, and this is a very dangerous assumption, by the way. There are as many motivations to use hacking as there are in the human condition. So it might be ideological, it might be nation state, it might be ISIS types. Most of the government workshops I do, they're much more concerned about sub-nationals and ideological groups out of the Middle East than they are about, say, China or Putin. So it's a very different motivational structure. And I think we assume that, yeah, you can hack that insulin pump and give a lethal dose without any authentication, but no one would do it, there's no money in it. So in the adjacent possible, we've talked a lot about ransomware for medical devices, or having to pay a Bitcoin to start your car. And what we were talking about as the adjacent possible has happened. There was a ransomware attack that took out an entire hospital in California for several days. That's actual patient care, actual effect, actual denial of service for critical services, and moving people from one place to another. And whether it was targeted or not, what we were joking about as science fiction is materializing. We've said for a long, long time that no one would ever hack a power plant. And now we finally have a confirmed deliberate hacking of the Ukrainian power grid.
So I think it's potentially dangerous to, A, assume that motives will be financially motivated or rational, and B, that these things are the distant future. I think they're happening in real time.

So I can add something in the car space. I mean, we talk a lot, or hear a lot, about the idea of someone hacking into a vehicle and being able to take over electronic control of a vehicle. But what's really interesting is if you take that and then put President Trump in the car, for example, and the ability to maybe do some sort of assassination attempt by causing a car crash with somebody very key and critical. And I know it's something law enforcement's very worried about, not only in terms of hackability, but just self-driving cars in general: the ability to put a bomb in a car and send it anywhere you wanna send it and have it go off. There's just a law enforcement aspect to this, and a keeping-important-people-safe element, that I think gets lost sometimes in the conversation.

This very scenario came up in Munich, because you can have physical security and physically hardened cars, but they can be bricked. Katie talked about bricking a pacemaker earlier: you brick a pacemaker, you brick a person, probably the quote of the day. If you look at that for, say, the president's convoy: if you could disable the cars from getting away, if you could disable all the emergency response vehicles in an urban environment or in a city, no cops, no firemen, no ambulance. And there's a remote kill switch in many, many vehicles now for an otherwise legitimate purpose. So these types of things don't have to be taking control; they can simply be disrupting egress.

I see a lot of talk about IoT, and a lot of scare and worry from security on connected products, and that's very true. But I love to eat, so I think about problems with my food. And maybe, Seth, you can think about this too a little bit, and you probably dealt a lot with it, but food safety, I'll put that out there.
If you see the end-to-end nature of production from farm to fork, that whole chain, whether it's beverages or food of any sort, you look backwards and you start to see the smart agriculture industry coming online, automating the way that fertilizer and seeding are done, and the massive set of tiers it takes to bring a raw ingredient all the way through production until we're eating it at a restaurant. You see things coming up these days like agro-terrorism, things that are really focusing on cyber intrusions all the way back to the farm, and when those things come into your mind, it really makes you ask: what is this embedded sensor technology and connectivity we're putting out there? It's not just the vehicle, which is very scary; literally, I could be ingesting something at a restaurant that contains a substance that's gonna kill me or put me into a very bad state. So putting those things into perspective really, I think, shows you how much of a societal impact this can have.

Fantastic. So I think that one of the things I hear here is that you have what's essentially aggregate risk: when you have a system of systems, or IoT, you're essentially introducing complexity, and you introduce multiple failure points. From the FDA perspective, we want folks to be able to manage that risk even as it emerges. In our recent policy decisions, the policy that we put forth on cybersecurity, we've tried to leverage that flexibility and allow folks to account for emerging risks, whether it's in a single system or in a system of systems.

Great. And so, having laid out a bunch of possible risks, I wanna shift the conversation a bit towards innovation. So I'd like to ask the panel, what should be the constraints, if any, on innovation in this world?
Now Adam, I know you've written about this in your book, so I wonder if you could give us the brief synopsis of the thesis, and then I'd love to see whether the panel agrees or disagrees.

Sure. Well, Betsy, as you mentioned in your opening remarks, each and every new sort of disruptive emerging technology has a different set of privacy, safety, and security concerns associated with it. But from a public policy perspective, the question is, do we make those concerns become policy defaults? Do we have a policy default of either innovation allowed or innovation prohibited? And you might think of this as a contrast between sort of precautionary-principle thinking versus permissionless innovation, as some in the internet space have called it. With regards to the internet, we made a very conscious choice as a country 20 years ago, when the Clinton administration came out with the Framework for Global Electronic Commerce, that basically took what is definitely a permissionless innovation approach to the internet writ large and said that, generally speaking, we will rely on voluntary social norms, social interactions, contractual negotiations, and multi-stakeholder processes, and our innovation policy default will be innovation allowed, or permissionless innovation. We'll deal with problems after the fact, ex post, as they come up. And I think there's really good evidence, real-world evidence, that says that's had a lot of benefits for humanity. It doesn't mean we haven't had problems. We have had security, safety, and privacy problems because of a permissionless innovation environment. But we try to deal with those problems after the fact by coming up with constructive solutions to complex real-world problems. Will that be the same policy we have for IoT, for wearable tech, for everything else we've talked about here today? And I would hope it would be.
I would hope so, because there is a good case to be made that we shouldn't base public policy on hypothetical worst-case scenarios, because when you do, it means best-case scenarios ultimately don't come about.

So, panelists: agree, disagree, partially agree?

I mean, I think, I agree, absolutely agree. I think, though, that one of the challenges we have with the IoT is there are so many different sectors that make up this emerging ecosystem, and some of those are traditionally very heavily regulated industries, speaking from experience, and the FDA as well on the medical device front, which just have a whole different approach than some of the other sectors that make up the IoT ecosystem. And I think that's a tension that exists right now. I think we're seeing it play out sort of in real time, and it's gonna be very interesting how it shakes out: whether you have sort of a permissionless innovation approach for some IoT sectors, home automation, fitness, though I guess fitness gets a little bit into the FDA's space, and then a different, more heavy-handed approach in the vehicle space and the medical device space, and then you have these sectors developing at different speeds, and innovation occurring more rapidly in some and not others, and it doesn't all come together as one pretty package at the end.

Other thoughts?

I look at it a little bit differently. We're all working towards this vision of IoT where everything's talking and connecting perfectly seamlessly. We know that's not the reality right now; that's kind of the vision state. But then when you look at the commerce model right now, and you try to look at what businesses are doing in creating their own silos, Apple's the perfect example, right? It's the closed ecosystem, the way they control and harness it all. There's beauty in the power and performance of making things come to life through that.
But if you want it all to connect, where you go down the street and the vehicle moves and it's talking to the building and it's all syncing, until you get to something that's open, a common architecture that people adhere to, I can't see innovation getting very far. The innovation is gonna be limited to sectors: in the building we're gonna be all connected, but then we step onto the street, where maybe there's a whole new set of product makers and manufacturers and owners and operators of that whole infrastructure piece. So I really think more and more about the open architecture of this. There are groups out there, like the Open Connectivity Foundation, that are working to do this, to unify manufacturers from different sectors, but until you can break down those walls, I can't really see that it's gonna come to life in the way people are really hoping it will in the next few years.

I think, again, I'd like to emphasize that the FDA is taking a reasonable approach and a flexible approach, and that we have to realize that at the Center for Devices and Radiological Health we have an array of risk profiles. You have weight scales, which present one risk profile, and you have pacemakers, which present a completely different risk profile, and each of those needs a different and appropriate regulatory approach. So when you talk about stifling innovation, or bringing a device in and saying, oh, the FDA is stodgy: we have a certain mandate to the people, safety and effectiveness, that has to be taken very seriously, especially when you have devices with a sufficient risk profile. We do have policy movements within the Center that are focusing on, say, general wellness, like the fitness-type applications we talked about, that say, hey, in certain situations we understand that this is low risk, emphasizing our point that it is a risk-based decision.

I carry a cognitive dissonance on this.
I think there's no political party for the hacker community, but in general, if you squint, it looks fairly libertarian-esque. On the one hand, I'm one of the guys who calls the payment card industry data security standard, the PCI DSS standard for credit cards, the No Child Left Behind Act of information security: highly prescriptive, very expensive, highly ineffective, and it made it really hard for people doing good risk management to continue doing so. It sucked all the oxygen and budget. On the other hand, the idea of come as you are, do as you please, for things where bits and bytes meet flesh and blood, or my family can be hurt, I think commands a different level of care. If we were to do this permissionless innovation in commercial restaurants, we wouldn't have minimum kitchen and safety codes. If we were to do this in automobiles, we wouldn't have seat belts. So in meatspace, if you take the technology out of it, we do have some sort of public safety and public good where we wanna guarantee certain assurances, and people can innovate on top of that. And does that keep certain players out of the market? Yes. Are we, overall, as a social contract, okay with that? Also yes. So I'd like to point out two tiny things. One is that the architects of the internet are now saying we never really designed security in, and it's gonna be impossible to design security in after the fact. And now they're starting to see that permissionless is not easy to undo when failures are too high. The second thing I'll point out is we spend about $80 billion a year, mostly on credit card and financial security, and nearly every single merchant has been breached. So even when they're doing everything right, which cars and medical devices aren't yet, the failure rate is incredibly high. The reason that's okay is because a 4% annual fraud rate is acceptable, but when it's life and death, I don't think an incremental 4% annual death rate increase will be.
So I do think, on that continuum, it's gonna be north of come-as-you-are, do-as-you-please, but hopefully not as prescriptive and brittle as PCI.

And I'll add one point that I should have mentioned when I was making my comments. I do think, and I can speak from personal experience at least at Toyota, and I think this applies to more companies than just Toyota, that we as car companies are actually self-censoring a bit on some of the technology, in light of some of these concerns about how it's gonna be dealt with in a regulatory space. So I'd note that I do think there are certainly cases at Toyota where we haven't gone forward with something for fear of how it will be perceived or dealt with by the regulator.

I think the happy in-between here, which we can see in both the automotive space and the FDA space, is the rise of sort of voluntary best practices, codes of conduct, and various multi-stakeholder processes. The FDA has been doing this on mobile medical applications. The automotive industry has been doing this on privacy best practices for connected vehicles. And increasingly, because we realize that the pace of technological innovation is always gonna outstrip the pace of legislation and regulation, we have to come up with some sort of in-between measures. And I think that's exactly what's happening. All of the mobile medical apps raise questions that the FDA has already considered and come out with sort of best practices for. And I know this is going on right now with 3D printing and in other contexts. But this is what's gonna have to happen for the IoT. There's just no way we could, even if we wanted to, design this from above and say, right now, here are the IoT security practices that we'll legislate. Within 18 months to two years, because of the pace of Moore's law, they'd evaporate. They would just be gone. They would be meaningless. So you have to accept that security is an iterative process, right?
It's one that you have to sort of make up a lot of as you go along, in an organic, bottom-up fashion, through a collaborative multi-stakeholder process.

So going back to what Josh said: many of the founders of the internet say, whoa, if only we would have baked in security from the start. But what does that mean? Well, I think the intermediate, which leads into where we're likely to go next, is that we published two major multi-stakeholder frameworks. One was the Five Star Automotive Cyber Safety Framework, and one is the Hippocratic Oath for Connected Medical Devices. And both of them, essentially, if you squint past the technical language, say: all systems fail, so here are five things you need to do to handle failure. Tell your customers how you avoid failure. Tell third-party researchers you'll take help avoiding failure without suing them. Tell us how you'll capture, study, and learn from failure. Tell us how you have a prompt and agile secure update process to respond to failure. And tell us how you'll contain and isolate failure. Now, again, there's much more beautiful language around them, and lots of maturity details beneath the surface. But our essential offering to the FTC, FDA, and NHTSA was, instead of regulating that you have to have a firewall or antivirus in your car, if people can articulate to the free market how they're responding to those failure conditions, then rational actors can make a rationally informed decision about which car company deserves their business. So that's not meant to be regulatory, but it has also come up in the regulatory environment, and the FDA has taken to heart much of what we just described.

So I do want to leave a little time for audience questions, but before I do that, I've noticed that we've talked a little bit about safety, a little bit about security, and a little bit about privacy. And I'm wondering how the panelists see those terms as related to each other.
In the cyber software world, we tend to use security as the primary term. In the automobile industry and medical devices, I think safety is probably more common. And then there's an entire industry, non-profits, et cetera, built up around privacy, and that's probably the term most familiar to the public. So my question is, is this just nomenclature? Are these concepts all actually overlapping, or are there actual substantive differences between them in your spaces?

I'll give that a whirl to kick us off here. So I think a lot of that has to do with where you sit. Go back to the fundamentals of security, CIA: confidentiality, integrity, and availability. Based on where you sit, the type of organization you are, or the type of consumer you are, those things have a different rank order for you in what you care about the most. A lot of individuals and citizens care about confidentiality of data. That's number one; the others fall below it. I spend a lot of time on the manufacturing floor with a lot of organizations, helping them deal with security in a very connected, smart-factory environment. And you look at things like that, and the first thing that always comes to mind is safety. You have numbers that track how many deaths happen every year on a factory floor, and anything that can trigger that comes to life as number one. So we look at security issues and walk that dog, and you start to see what threads can amount to: a security threat taking advantage of some sort of vulnerability or exposure, leading to an incident where there's a casualty of some sort. That can then extend itself to bad products and bad processes going out the door, and then the people that consume the products, whatever those products might be, might also be at risk. So I think a lot of it has to do with where you sit. I'm sure there are different stories along that line.

So certainly at the FDA, our regulatory purview would probably be couched directly in availability and integrity.
Privacy is outside our purview; it sits with the Office for Civil Rights. However, I would say that if I'm a medical device reviewer sitting at my desk, looking at the risk analysis that medical device manufacturers submit, and I were to sample only your evaluation of confidentiality risks: if you did not perform that, or I thought it was substandard, I'd venture to guess that your safety profile, or your integrity and availability risk analysis, was also suspect. So I would say that they're inextricably linked, and good practices will extend across the CIA triad.

We got burned pretty badly. We wasted nine months trying to work with both auto and medical by using the same words with very different definitions. So on the one hand, when we would say safety, we meant: where can a CIA attack, on confidentiality, integrity, or availability, affect life and limb? And when they heard security, 99.9% of the time they thought confidentiality, or privacy of healthcare data, or we're-not-gonna-sell-the-driver's-data-to-advertisers. So on the one hand, security equals privacy to these highly regulated physical industries. On the other hand, when we said safety, they said, oh, we've been doing physics and safety engineering for over 100 years, we've got that covered. But this nice, juicy thing in the middle, which we're now calling cyber safety for lack of a better term, was completely ignored for the first nine months we were talking to them, and we were just talking past each other. So I think the primary reason for this, which we got to at least with the FDA, is that a scalpel is safe if it can be used properly by the intended user. So it's validation of intended use. And most hacking is deliberate misuse or abuse. It's the kind of thing that lies outside of an intended-use scenario. So if you truncate misuse or abuse, you truncate all hacking.
So we've been trying to show that there's this really dangerous thing in between that can affect patient care, or confidence in the public, that isn't privacy and isn't safety; it's somewhere in between. And that's where I think there are some really good clarifications in the post-market guidance from the FDA this January. In fact, we actually had large manufacturers say we can't do threat modeling, we can't talk about adversaries, because it would screw up our filings to the FDA. And the last thing I'll say, which upsets some of the privacy people, is we make a joke that we love our privacy, but we wanna be alive to enjoy it. So if you simply designed for privacy, you might encrypt the data. If you designed for all problems and all threat models, you'll harden the interfaces, have fewer of them, and solve for both.

I've actually been told that we need to move on to Q&A if we want time, but maybe we can come back to this. Do we have any questions from the audience?

With regards to the earlier discussion on non-traditional threats, how do you see the threats emerging for augmented reality and virtual reality, as opposed to strictly IoT?

Well, I'm writing a law review article on that right now. So... You can field it. That was planned. I think there are some related concerns, privacy- and security-wise, with regards to the AR/VR environment, that we're experiencing already in other environments, including IoT, but of course there are others that are somewhat different. There's a question of distraction, addiction, and other types of concerns about the VR/AR environment, but there are gonna also be privacy concerns.
I think what relates AR/VR to the IoT space is the privacy nexus, in terms of all the kinds of information that will be able to be collected through heads-up displays or wearable-type devices about our surroundings and the other people around us. And it raises this really tricky issue of consent in the information age, because everywhere I go, I'm gonna be able to be consuming or gathering information about other people, in most cases without their consent. And we've been through this to some extent in the old days with photography, but now we're filming and sensing everything in real time, all the time, and these technologies are gonna raise those issues in a really profound way. Other questions from the audience? Over here, there's one there. The mic's right there, oh great, wherever the mic is. Yes, so last week there was a hearing in the Senate on agency capture, and I'm very interested in this because I was quite surprised by the way the FAA ended up regulating drones, which I think could be weaponized, et cetera, and I was quite scared, particularly when I saw my nephew use his drone in a park. But I'm wondering what the implications are for IoT, what the role of corporations is gonna be with respect to lobbying Congress and lobbying agencies, and whether this is gonna lead to an unbelievable amount of litigation. I didn't catch the last part, the litigation. Litigation, so, any thoughts on any of that? I mean, speaking technology-agnostically, my viewpoint working at the FDA is that I wanna see that manufacturers have a repeatable process, so that they know how to assess risk, and they know how to assess emerging risk using upcoming or additional information. So in terms of litigation, that's definitely outside of my purview at the FDA. However, I think that as long as we can establish that manufacturers are attempting a reasonable mitigation of risk, then I would be okay with that. 
For example, we have the quality system regulations, which allow manufacturers to intake information and change the process in response to outside information, and what we're looking for is a reasonable approach for mitigation of risk. So as far as answering your question at large, I'm not sure. I can weigh in. When it comes to, we're starting to see in some states, and some discussion at the federal level, regulation of autonomous driving, of self-driving vehicles. And what we're seeing, at least in places like California, which is really leading the charge right now in this space, are requirements that a tremendous amount of data from these cars be provided to them as part of their regulatory role. And I think it is raising a lot of concerns for us as manufacturers, because some of the data that they're requiring we share with them crosses into a space that may make your average consumer a little uncomfortable, right? Driver behavior information and things like that, which we hear consistently is a space where consumers are not interested in that data being made available widely. And so I think we're struggling with that right now in the automated space, and the regulators are certainly coming down on the side of share a lot with us, and the manufacturers are trying to figure out what the right balance is, and I don't know that we know that yet. Other questions? There's a question from Twitter, from @Chris Bishop, and he wonders if there should be a difference between the internet of things and the web of things. Internet of things versus the web of things. What do you think? There are a lot of different definitions out there these days: IoT, internet of everything, you know, thing-of-verse, things of internet. Things of internet, yeah. Yeah, let's not get too wrapped up in, I hope we don't get too wrapped up in branding this thing, this phenomenon. Because I don't necessarily think it makes a big difference. 
I just don't think it's worth worrying too much about. I do think lumping everything together has been okay to a point. I think Katie was the first one to tell me pretty soon there'll just be things, right? We won't even call it the internet of things; it's just things. But there are tiers, right? There's a kind that can't affect you in any negative way. There's a kind that can affect your privacy. There's a kind that can affect your privacy and your safety. There's the industrial internet of things. So we probably need to start differentiating what types of things we're describing, because the consequences of failure will vary significantly. And I'd like to just say that we're kind of at a point where we can do pretty much anything we want with technology and software. It's infinitely malleable, even in the Apple-FBI debate. I don't think we're in the stage of what can we do. I think we haven't yet brought in the philosophers and the sociologists to kind of say, what do we want from the tech? We're mostly reacting to the tech we create, as opposed to making it a slave to what we want. So if there has to be a persistent ability to do certain lawful search and seizure, we've kind of lost that already. If we want it back, you kind of have to make policy decisions on that, versus the headline. If you don't want to put software and connectivity on everything in your life, no one's forcing you to. So I don't think we're yet asking where we want to embrace technology and connectivity. We're just kind of doing it everywhere. And so that echoes an earlier point that was made in one of the earlier sessions about diversifying the workforce in cybersecurity. So plus one for that. So we only have a few minutes left, I'm sorry. So I'm gonna ask one final question to all of the panelists. I'd ask you to imagine, which I don't think is a stretch for this group, that you're gonna be asked by the incoming administration, by the president, whoever it is, the following question. 
It's a major goal of ours to maximize the value and competitive advantages of IoT over the next four years and to minimize the most troubling risks. What is the one piece of legislation or regulation that we simply must put into place in our first 100 days? I'll start, and I'd say that's the mandating of sharing information between like organizations. Right now it's completely voluntary within ISACs, and now you've got this information sharing organization construct. Very voluntary, very much just how much do you wanna do. People are very uncomfortable, very much like with the CIA being able to give anything to anyone else. But mandating it, and showing that there is a demonstrated flow, just like responsible disclosure, that when an event happens you have to communicate it to certain regulatory bodies, but in this sense actually mandating and showing that you are participating in group information sharing for the common good, and making sure that everyone, as a community, is as well protected as possible, and that benefits you as well. So we recently released a draft policy document for management of cybersecurity risk in a post-market sense, and we thought a lot about whether we need new legislation to enact the types of protections that we want to see, and the answer was categorically no. I don't think we need new regulations at the FDA. We have plenty, and safety is well-defined, so we work within, again, the construct of the quality management system to manage risk in a total product life cycle approach, so I don't believe we need anything. There's one I wanna say, but I'm not gonna... Oh please. You can't do that. I was asked this a couple years ago, and my answer is similar. If you could get one sentence of legislation that would make technology safer, it would be that anything connected to the internet, basically any IoT, must be patchable in a secure way. 
So the table stakes, the you-must-be-this-tall-to-ride the internet of things, is that you have to have a secure update capability. If all systems fail, you have to be able to respond to that. So: patchability. As I've already noted, I hope that the next administration follows the Clinton administration's vision for frameworks, in terms of establishing, if we need one, a framework for IoT that follows what the Clinton administration did for e-commerce. If we need something more specific, I'd like them to follow what Congress passed and President Clinton signed in the Telecom Act, which was Section 230 of that act, an immunization provision that basically immunized intermediaries from liability associated with third-party acts. It became what some of us regard as the most essential and important legal part of the internet's success, because if you had deputized all intermediaries to solve all the problems in the world, you would have stifled and chilled a whole host of innovation online. We need that same principle now for the makers' world, for the world of IoT, for 3D printing, for VR/AR, and everything in between. So the challenge is that there is an alphabet soup of federal agencies that are sort of involved in the internet of things right now. You've got DHS and FCC and FTC and NHTSA and FDA, and it's getting a little unmanageable, I think, for the folks who are working in this space, and so I would strongly urge some sort of mandated coordination across all the agencies, with a clear lead identified to help shepherd this through. There's some legislation, I think introduced last week by Senator Fischer, that sort of started a conversation about this kind of approach, and I think that's what needs to get done. Well, on that note, I think there's a lot for the DC folks in this room to do. I'd like to thank all of our panelists. Thanks so much for participating, and I'd like to let you know there's a break now for about 10 minutes.