Yes. So anyone who's thinking about escaping out to the lobby because maybe the bar is open by now, I have bad news: it's not, and it's not until I say it is, because I'm the co-director. It's good to be the co-director. Come on in. All right, I'm not going to use this; I'm wearing a mic. I know because they put it on me. So as Laura was saying, it's the last session of the day and everyone's a little tired. I can see it in your faces. You just want to go out and have a drink. I understand. So we're going to make this a little more fun: we're going to do a little cybersecurity scenario improv. Has everyone been to an improv show? Everyone? Yeah? Okay. So the first thing we need is a couple of ideas to get us started, so just holler them out. Can anyone give me the name of a country that you haven't read about in the news this week? Last week? Nigeria. Okay, great. Oh, come on. Princes. Too easy. All right, give me a household appliance that is not currently IoT-enabled in your home. Blender. Blender, that's a good one. I like that. Perfect. Is this Mad Libs too? It's Mad Libs. You've done this before. And then finally, somebody give me the name of a celebrity who will be washed up in six years. Kim Kardashian. That's good, but you're on the panel; you're not allowed to do that. Come on, that was an easy one. Anyone? That's fine too. Beaver. Beaver? Oh, Justin Bieber. Got it. Justin Bieber. All right, great.

So, in just a moment we are going to fast-forward ourselves into 2023. For the remainder of this panel we are going to discuss a cybersecurity incident, which I will tell you about in a minute, as it happens in 2023, and we're going to look forward from that and look backward from that. I still need your participation, so please don't tune out. If, in the course of our conversation, you have interesting twists on the scenario that you think would be fun to throw at us, or an elaboration of something that's already been said that you want us to think about, please tweet it to @NewhamCyber, and these fine, diligent compatriots of mine in the front row will holler them at us at, I hope, opportune moments. With that said. We're not scared at all. No.

So, with that said: close your eyes and imagine the year is now 2023. As predicted back in what seems like the far-off land of 2016 and 2017, the Internet of Things is really just the internet now. Everybody in the developed world has, in their home, cameras connected to the internet, microphones connected to the internet, all sorts of random things connected to the internet, devices like today's Alexa, Google Home, and so on. Some of these devices are very well secured, made by companies who know what they're doing; some, shall we say, are not. The same problems that led to the Mirai botnet of last year are still prevalent in 2023. Best practices exist, and Consumer Reports rates devices on them, but most people don't care, and they buy whatever's cheapest.
Sometime in 2022, a spec device manufacturer, that is, a company that makes devices on spec and manufactures them in Nigeria, is hacked by an unknown party, and specifications for all of the IoT devices they've made over the past few years are stolen, including some of the hot new Christmas toys and other devices that will be on shelves come December. Those specifications are sold on the black market to a high bidder with unknown backing. In looking at the specs, the group that bought them discovers a vulnerability in the firmware code and exploits it to gain access to all of the devices made by this manufacturer. Combining it with components already on the open-source malware market, they put together a piece of malware that is released sometime in December 2022 and begins very quietly infecting everything it can find. It doesn't do any DDoS, it doesn't send any spam; in fact, it's very, very silent. It merely tries to infect other IoT devices, hooks into whatever video and audio sources it can find, and starts recording snippets as determined by an AI, which essentially records only when people are talking, or images of people if it's a video camera. These snippets are encrypted and uploaded to various cloud storage services, and also to many peer-to-peer data networks such as BitTorrent, or whatever they're going to have in 2023. (A toy sketch of this capture loop appears at the end of the scenario.)

It doesn't take too long, a few weeks maybe, before security researchers start noticing odd traffic patterns, and before too long, in January or February of 2023, forensic analysis has revealed the purpose of the malware. When the group that wrote it realizes someone's onto them, they release the decryption key, along with a search engine that, using either voiceprint analysis or facial recognition, lets you search the corpus for people you know. Cloud hosting providers who realize this data is on their servers quickly take it down and delete it, but it's on peer-to-peer networks, and you can really never delete everything from the internet, ever, and I'm sure we'll talk about that. It's mirrored by 4chan-style vigilantes who think this is the most hilarious thing to happen since 2016. So video and audio of millions of people's private lives are now online for everyone to explore. The data is tagged with date, time, and IP address, and from there you can determine rough location, obviously. People begin skimming the data dumps, and very quickly a number of controversies crop up, as you might imagine, including one involving Justin Bieber that proves conclusively that he is actually a woman in drag and that he doesn't write his own songs, and so forth; embarrassing things like this. And so on and so forth; you can imagine what is happening here.

There are a couple of different things you can insert here that I think make for interesting pathways for discussion, but the one I hooked onto was this: after all this has occurred, the group takes to Twitter and claims responsibility, saying they were trying to demonstrate the insecurity of the devices we all use on a day-to-day basis, and now everyone knows, and hopefully some good can come of it. But anyway, that is our scenario. Betsy, what happens next? The cyber apocalypse.
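The toy sketch referenced in the scenario above: a speech-gated capture loop that encrypts each snippet and stages it for upload. This is purely illustrative; the crude voice-activity check stands in for the scenario's "AI," and all names, thresholds, and file layouts here are invented, not taken from any real malware.

```python
# Illustrative sketch of the scenario's capture pipeline: keep audio only when
# crude voice activity is detected, encrypt it with a key only the operators
# hold, and stage it for later upload. All names and thresholds are invented.
import os
import time

from cryptography.fernet import Fernet  # symmetric encryption (pip install cryptography)

OPERATOR_KEY = Fernet.generate_key()  # in the scenario, only the attackers hold this
cipher = Fernet(OPERATOR_KEY)

def speech_detected(frame: bytes, threshold: float = 20.0) -> bool:
    """Crude energy check on unsigned 8-bit samples, standing in for the 'AI'."""
    energy = sum(abs(b - 128) for b in frame) / max(len(frame), 1)
    return energy > threshold

def capture_loop(mic_frames, out_dir: str = "staged") -> None:
    """Encrypt and stage any frame that looks like speech; silence is dropped."""
    os.makedirs(out_dir, exist_ok=True)
    for i, frame in enumerate(mic_frames):
        if not speech_detected(frame):
            continue
        token = cipher.encrypt(frame)  # unreadable until the key is released
        path = os.path.join(out_dir, f"snippet-{int(time.time())}-{i}.bin")
        with open(path, "wb") as f:
            f.write(token)
        # a real implant would now trickle the file out to cloud or P2P storage

# The silent frame is skipped; the noisy one is encrypted and staged.
capture_loop([bytes([128] * 1000), bytes([200, 40] * 500)])
```

The detail that matters for the rest of the scenario is that last step: the snippets are worthless ciphertext until the group later publishes OPERATOR_KEY, which is what turns a quiet data heist into a public archive.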
I thought that's what I just described. Yeah, exactly. Well, I think what happens next is a real change in the average day-to-day life of people on the internet. I admit I get to cheat a little, because the organization I run, the Center for Long-Term Cybersecurity, developed scenarios for the future of cybersecurity in the year 2020, and one of the scenarios we developed was about what happens if the hackers win. In essence, what you've just described is a particular vector in which the hackers win. And we think people would do three things on the internet if the hackers win. Some people would just say: okay, you know what, the threshold has been reached. I no longer think my interactions are going to be safe online, so I'm going to assume they're not, take everything I have, and just put it on the internet, so there's no question about whether or not you're going to release it. It's just out there. So that's some people. Other people, or potentially people in different portions of their lives, would say: I'm just getting that stuff offline. I don't want anyone to be able to touch these sensitive things. So online banking? No thanks, I'm going back to paper. Medical records? I'm not dealing with my health insurer over the internet anymore. That's the second possibility. And the third, and potentially most difficult, is that some people will say: I still want to be online, it's so important to be online in some spaces, but I need to find a way to protect myself within that space. So we use the analogy of gated communities and neighborhoods: we think a bunch of gated communities would grow up on the internet, within which people would exchange information. And so the key question is, how do you get from there to here when you're surprised by this big event all at once? Because up until then, what you've described looks pretty similar to what we have today.

Right. And so the interesting question about the gated communities is, can you create that? Is that a technological gated community or a policy gated community? Can you create that through policy at all, Alan?

So, first, with the caveat that I'm not speaking on behalf of NTIA or the Department of Commerce: I think it's a technical question that is quite similar to something we've already talked about in some other cases. Because the use of private information that I have, and a far server has, with the assumption that no one else has this information, looks an awful lot like public-key crypto, and one of the things we've talked about is what happens when public-key crypto breaks. NIST has publicly said: please stop using exponentiation-based encryption algorithms, please switch to ECC, elliptic-curve cryptography, and use these standards. And we know there's a decent chance, and in fact at RSA the cryptographers' panel said they think there's a decent chance, that public-key crypto as it was used circa 2013 will be broken in five or ten years, but symmetric-key encryption is going to stay valid. And I think we're going to see that when it comes to authentication, because it's not just users who are at risk. At Commerce, we think about those poor, defenseless large enterprises out there, and they depend on having the ability to remotely interact with their customers.
And if they now have no ability to verify that the person at the other end of the exchange is actually you, and you have no ability to verify the server either, how does any of it work? In both cases, we're going to need to return to something that looks a lot more like symmetric encryption, whether that means physically getting something to you at the endpoint, or scenarios with short-term temporary keys. But there's a cost of delivery; it's clearly going to raise costs. The question is, can we still have a remote mechanism at all, or are we going to start using kiosk-based models for every kind of interaction we want? (A minimal sketch of such a symmetric challenge-response appears at the end of this exchange.)

That raises an interesting point, and I think you were edging in this direction. One really interesting question is: how do you authenticate anybody in a world where everything, right down to your voiceprint, is on the web? Everything identifying is probably out there somewhere if you look hard enough. Which, by the way, makes me imagine this: if we are truly all at risk of having our intimate details published, the nice thing about it being 2023 is that we've gotten really good at CG. So why not corrupt this massive data dump with false data? If there is something truly sensitive out there, you can imagine those of us with resources poisoning the well. It will take a little time to do it in a sophisticated fashion, and certainly the security researchers among us will be trying to tell the real dump from the fake dump, but to the point a number of people have raised today, that too is resource-constrained. Right. That sounds like a business model. Exactly. And so this is going to be an area where you will have privacy and security if you have the means.

Right. So that sounds a little bit to me like hackback. I'll play the government lawyer in the room, even though I am no longer a government lawyer; you're retired from that position by a few years now. What authority would you have? How would somebody lawfully poison that? Well, it's interesting: what's the law preventing you from poisoning the well? Is it a CFAA violation if all you do is upload another video? That's just uploading a new video. Right. Well, are you taking an old video or uploading a new one?

Well, actually, I want to talk about the hackback question too, because this is prime vigilante stuff here. I can imagine myself getting really pissed and wanting to go after somebody. The darker side of Ross; you have no idea. But what's the target? Who do you even try to hack back? Are we going to see attacks on the companies that are unknowingly hosting the material? These people are supposed to be the good guys, right? Your twist was that these were not people trying to make money off this; they were trying to demonstrate insecurity. So if you're hacking back, who are you hacking? Good guys, too. It's probably someone here in the room that started this. Yeah.
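The challenge-response sketch referenced a moment ago: authentication built entirely on symmetric primitives, which would survive the scenario in which public-key crypto falls. It assumes the shared key was provisioned out of band, at a kiosk or delivered to the endpoint, exactly the costly step discussed above; the names are mine, not any standard's.

```python
# Symmetric challenge-response: the client proves possession of a pre-shared
# key without ever transmitting it. No public-key math is involved.
import hashlib
import hmac
import os
import secrets

def issue_challenge() -> bytes:
    """Server picks a fresh random nonce for every login attempt."""
    return secrets.token_bytes(32)

def prove(shared_key: bytes, challenge: bytes) -> bytes:
    """Client MACs the challenge; an old response is useless against a new nonce."""
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify(shared_key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)  # constant-time compare

# Demo: bank and customer share a key provisioned out of band.
key = os.urandom(32)
challenge = issue_challenge()
assert verify(key, challenge, prove(key, challenge))
assert not verify(key, issue_challenge(), prove(key, challenge))  # replay fails
```

The cost Alan describes lives entirely in the first step: getting `key` to the customer without a network you can trust.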
It's an interesting question: who do you go after? If it's a BitTorrent client and the person doesn't even know they're hosting the images, are you going to attack a private person? There's a whole range of things you could go after there. There's an attribution problem. There are attribution problems.

Well, speaking of attribution problems, who do you sue? I mean, we've got a couple of lawyers up here. Who do you sue in this instance? I am not a lawyer, but I assume the answer is going to be: anyone we can. Everybody. It's a valid question. In certain instances there are limitations on liability for hosting providers. True. But as we move to the Internet of Things, companies are now connecting things that were never envisioned to be connected. Who owns the software that device is running? Do I have the ability, as a consumer, as a purchaser, to make changes to it, to update it? Can I hack it myself, in theory, to make it more secure? Because maybe the company that produced the product had no clue what it was doing, and its password, when I bought it, is still 1234. We're in a perilous position, I think, and we really need to start thinking about what the right balance is here in terms of managing the risks and potentially reapportioning liability. Absolutely. It's not going to make some companies happy, but it's a wake-up call, I guess. Well, we'll talk about that at the end, when we discuss what we've learned from this, looking backward, and what our policies should be next. But before we get there. Yes, Alan, you, oh.

I was just going to go back to the question of how you authenticate anyone on the internet, because I think that's a big one, and we haven't really gotten there yet. At Berkeley, we have some students who are trying to come up with ways to relate biometrics to things that are changeable. One of the problems posed by a scenario like yours is that if everybody's voice is public on the internet, then anything that uses voice recognition is potentially hackable. And you can make the same argument about things like fingerprints. If those have been published, as in your scenario, then with 3D printing technology, in just a year or two, maybe even close to now, you'll be able to replicate someone's fingerprints. In fact, some Japanese researchers took a high-resolution picture of someone's fingers, recovered the fingerprints, and replicated them well enough to actually fake out a fingerprint recognition system. So we're probably already there, or close to it, now. So our students are trying to think about what is unique to you but changeable. The problem with a hacked fingerprint or voiceprint is that you can't easily change your fingerprint or your voice, unless you're in an intelligence service, and that's another story. So they've been thinking about brainwaves. They've done some early testing on technology that takes an EEG brainwave reader, puts it in your ear, and has you think a particular phrase. So let's say today's phrase is "Mary had a little lamb." If I think the phrase "Mary had a little lamb," you will see a certain pattern of brainwaves each time I think it.
But if Alan over here thinks that same phrase, you will see a similar pattern for him, but not similar to mine. So you're able to distinguish, at a pretty high level, the brainwave patterns of the same thought in two different people. The idea is that this could be a new way to authenticate people on the internet: you wear this device, you know it's you because it's fitted to you, and then you think a particular phrase. If that phrase gets hacked, you just pick a new phrase. You take my "Mary had a little lamb," I move on to "Three Blind Mice," you know? (A toy sketch of this kind of template matching appears at the end of this exchange.) It's technologies like this that we really need to start thinking about now, because if we wait until we're in this situation and all of our biometrics are already out there, it will be too late to change consumer behavior.

I can take us a little further into Philip K. Dick territory here. There's nothing new on the internet: back in 2004, at the Kennedy School, we did an exercise on what happens to digital identity in different scenarios, authoritarianism and chaos, and we delved into this same question of biometrics. Every sci-fi movie's favorite biometric is the retina, and the retina is highly impractical for modern biometrics. One of the reasons is that it's susceptible to body chemistry. In fact, there's a story, and I've never chased down whether it's true, so don't quote me on this. You're being live-streamed today. Yes. The story was claimed by a major defense contractor that was using retinal scans in the early nineties: they found that one of the ways women discovered they were pregnant was that they were locked out of their offices, because body chemistry changes your retina. So you can imagine a world where, if your retinal scan is compromised, there is a defined amount of hormone you can take that will not affect your phenotype but will actually change your retinal scan. And we can imagine other types of biometrics that are mutable in a controlled way, which we're going to have to start studying now.

How many times do you have to inject yourself with that, though? How many times can you change your retina? That's my question. If you get hacked four times, and you're pumping yourself full of more hormones, is that really the right way? Well, not to mention you have to re-key the whole backend system, and that's really the painful part. Right. Cybersecurity jokes, man. Good old PKI.

What happens in the international space here? Do countries band together after this, or does it fragment, presuming we still have an international system of any kind in 2023? Is there some sort of treaty? Because this implies all kinds of jurisdictional problems; this data is all over the place. Do you foresee this being good or bad? Do we get the Federation out of this, or not?

We already have too many countries pursuing data localization, and I think this potentially is a boon to their arguments for why data needs to stay in a particular country. Which is why, if I read correctly last week, Mr. Bossert's comments suggesting that we as a country are going to push back strongly in trade agreements against data localization were, I think, a positive sign.
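The passthought sketch referenced above. Mechanically, the scheme is ordinary biometric template matching: enroll by averaging feature vectors from several trials of thinking the phrase, verify by similarity threshold, and "re-key" by enrolling a new phrase. Real EEG feature extraction is hand-waved here, and the numbers and threshold are invented; this is an illustration of the idea, not the Berkeley students' actual method.

```python
# Toy passthought matcher: the stored template is the average of enrollment
# trials; verification is cosine similarity against that template.
import math
from statistics import mean

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def enroll(trials):
    """Average several (hand-waved) EEG feature vectors into one template."""
    return [mean(dim) for dim in zip(*trials)]

def verify(template, sample, threshold=0.95):
    return cosine(template, sample) >= threshold

# Same user thinking "Mary had a little lamb" three times:
mary = enroll([[0.9, 0.1, 0.4], [1.0, 0.2, 0.5], [0.8, 0.1, 0.4]])
print(verify(mary, [0.95, 0.15, 0.45]))  # same user, same phrase -> True
print(verify(mary, [0.10, 0.90, 0.20]))  # different pattern      -> False
# If the template leaks, discard it and enroll "Three Blind Mice" instead.
```

The property doing the work is in that last comment: unlike a fingerprint, the secret here is the phrase, and a phrase is cheap to replace.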
Yeah, I mean, I think you could see the EU not enjoying this scenario. When it comes to core internet values, this makes things quite interesting, right? This is the sort of thing the press would love to talk about, and there's a lot that could be litigated. At what point can this be published? At what point do international laws step in? Is it going to be "anyone can publish anything because it's the internet," or an extension of the right to be forgotten? Or does it empower countries with very strong libel rules to go after anyone with exposed assets?

I will go on the record as saying I don't think any kind of treaty is going to help us in this space, by any stretch. Not after that. Particularly when you think about those countries, among the many, who care very little for their citizens' so-called privacy: what interest do they have in trying to pursue this? Unfortunately, I think most of you can guess who I'm referring to. But I think we're in a space where we have some existing international agreements that would allow us to move forward, by which I'm referring specifically to the Budapest Convention. Who knows, and I'm not an IP lawyer, whether there's some way you could shove this under WIPO or something like that, but I think there are existing instruments that would probably allow countries to make some progress in this space. Certainly they could be enhanced, and we can get into that when we reach the next phase of our discussion; I won't jump the gun again. But I think hoping for a multilateral agreement to fix this is a long shot. I'd expect a sort of dividing between regions of the world.

And it's not too dissimilar to what we have now, because there are different ideas of what privacy means in Europe versus the United States, and this sort of scenario teases out exactly those tensions. I can't imagine an easy kumbaya where we just resolve this and move on. But I think it's important to focus on what might happen in the developing world too, because all of a sudden, with more information being public, the barrier to entry falls for companies that rely on data, and the ability to learn how developed-world businesses operate, how consumers behave, et cetera, becomes a much greater opportunity. So I would also imagine there could be some evening out between the developed and developing worlds if such a scenario came true, because data that right now is proprietary pretty much no longer would be. Right, so our trade secrets are out the window. Right. Potentially. Excuse me.

So, here's a twist. What if, instead of everybody... And where are the audience twists? Come on, we're waiting; come on, internet. There will be no cocktails unless there's audience participation. Oh yeah, I did say that; I don't know if anybody heard it when I started. Okay, so here's a twist. Let's say it's not everybody; it's just one political party. How about that? I guess this makes it US-centric rather than global, but roll with it. What if it's the Republicans? Just Republicans, no Democrats, and only their data is sloshed all over the internet. Does it change how we view this? Does it make fixing it a partisan issue?

Well, I think it makes it even more important for both political parties to stand up and say this is unacceptable.
For the Democrats not to take advantage, in your scenario, of the fact that this has made Republicans vulnerable. It's even more important for Democrats to stand up and say this is unacceptable, in the same way that attacking an election or other major US institutions is unacceptable. Because if both sides don't come on board, you have real issues, since clearly the media has been unable to help people sort out how they should feel about these sorts of things. So we need all the key players to stand up and speak strongly on such a question. It reiterates this theory of restraint: we as the US need to demonstrate and carry out our actions in an informed and thoughtful manner, and that goes for our politicians as well. They would need to not take advantage of situations like this, even when it would be easy to. Any chance of that happening in real life? No comment.

So I'll take on one other. Oh, go ahead. Do we have one? No. Just yell it. A twist. Here's a twist: let's go by income bracket. Let's say the data released was based on income bracket in the US: bottom 40 percent, top 10 percent, middle 50 percent, three different scenarios. So the data was tagged by income bracket? Interesting, okay.

Well, one of the things we talked about in preparing for this scenario is the equity issues that come into play. If, as Alan was suggesting, there could be a business model for getting yourself protected, then the top income bracket would be the most able to take advantage of it. So you could imagine a world in which certain people's data is protected and certain people's is not. If you release the top income bracket's information, you move immediately toward that model; if it's a lower income bracket, maybe there will be a societal recognition that this could happen to anyone, and we should build institutions to protect all folks. That would be the positive spin on what could happen. I see skepticism from Alan.

No, it would be nice if, by 2023, digital-divide issues had been dramatically addressed, and we are working hard on that. But hypothetically, if they have not been, then I would argue that the at-risk population, if we're drawing this separation, is not those at the high end of the spectrum, who can rely on direct personal services, and not the low end, which is not as dependent on the digital infrastructure, but the middle band that has come to rely on digital services for healthcare, for education, for really every aspect of their lives, and for whom disconnecting would be a dramatic change in quality of life.

I would push back slightly. I agree for sure that the middle group could be very highly impacted, but I actually think the lower income bracket will be too. Think of that comment that came out in the healthcare debate about why people don't buy health insurance instead of a smartphone. There was a great piece in the New York Times showing how much people rely on their smartphones for their day-to-day information, even text messaging: who's going to pick up the kids when this person has to work late for overtime, or how do I find out when my services are being delivered?
I mean, increasingly, government services can only be delivered if you get online, and folks may not have home internet, so their access is limited to their phones. So while that may be true today, by 2022 or 2023 you may see equal reliance by pretty much everyone on the ubiquity of the internet. Which would be victory, up until the apocalypse. Right.

So one question, and I'll take a step back, up a level, and go back to my multilateral comment: is the apocalypse the straw that breaks the multi-stakeholder internet governance model? Oh, that's a good question. Do the countries who don't appreciate the multi-stakeholder model say, see, it's broken, we told you it was, so forget it? Oh, yeah. I could see that happening easily. I mean, in 2017 they use any excuse they possibly can to beat up on it, so some sort of apocalypse would absolutely be jumped on, I'm sure. I could see that too, yeah.

In the wake of the attack, inevitably some countries are going to start passing draconian laws on the security of devices sold in their markets. Let's say it's the EU, given its big buying power. It would most likely be the EU, yeah. Assuming the EU still exists, yes. What does that do to the pace of innovation? What does that do to manufacturers around the globe who suddenly face huge fines unless they do security, absent clear standards that are actionable?

I think it would be devastating to the EU market. It is not enough to just say "be secure." In the crypto world we talk about "nerd harder," and this is the security equivalent: we are going to start amputating pinkies until all of your devices are secure. Well, you're going to have a whole bunch of nine-fingered hands waving in the air asking, how do we do this? You can't make things secure by fiat. You actually need the tools to empower organizations to build and deploy security, as well as the incentives. The incentive side you can help with the regulatory approach. The fingers, I think, are the incentive. The fingers are the incentive, right. But when it comes to actually making devices that are secure and interoperable and still have the features people have come to know and love, these are things that need to be engineered carefully in open, consensus-driven standards processes. Or, alternatively, built in exquisite vertical silos that don't interoperate at all. Right. Those are the two models we have around security, and if you try to push either one of them as a shock, I don't think you're going to succeed in the short run.

I would like "nerd better" as the new hashtag for this panel. Nerd harder. We got a twist? All right, good. Detailed US defense specs are leaked because an ally used an IoT device in violation of policy. Yes, this is a twist I've been thinking about as well. What happens when the general in charge of NATO has a smart TV in his home that is recording the papers on his table that he shouldn't have taken home? Does that change a lot of things for you?

I think one of the things that's interesting about that is that it would show how difficult it is to do this well.
That, essentially, in day-to-day life, doing the right things on the internet and being careful at all times is hard. Take the example used earlier about updating: how many of you actually update when it tells you to? Pretty much no one, even among the cybersecurity professionals in this room. And I think that would be really hard to swallow in a military setting. But it would also force some hard questions about default settings. Why are people able to choose whether to activate these things? Why, for instance, is your camera set by default so that it can be activated remotely, rather than having to be turned on when it's being used? Same thing for updates: if an update does not require an immediate restart, why not just force it to happen? Or at least let's debate those risks, because there are risks on both sides, and right now we're not having that conversation. If the military were involved in such a thing, that would really force the conversation.

Does anyone else like to walk into their friends' houses and say, "Alexa, order 100 pounds of Silly Putty. Confirm purchase"? No. Have you done that? Is that just Alan? I saw that. You're not coming over to my house. That was stolen from Randall Munroe; cite your sources. Yes.

Okay, let's pause here and turn our gaze backward. What policies do you wish we had enacted in 2017 to avoid this horrible fate in 2023? Alan? I'll jump in and say I want to reiterate the point that you can't just get people to nerd better. There are some very easy things, like default passwords, but security is a very complex issue, and as Katie said earlier, you can't jump straight into the high end of security; you need an organization that is actually capable of doing it. So some of the work we've been doing at Commerce is around making sure the entire community is on the same page on issues like vulnerability disclosure, which Katie has been very helpful on, as has Ross, in thinking about what the expected practices are. Acknowledging that there is no one-size-fits-all approach, how do we foster collaboration between the security research community and those who make and maintain the systems on which we all depend? And once organizations can learn about vulnerabilities, how do we make sure they can fix them? In the IoT world, it's not enough to say "build it more securely." It's a little like when you ask someone, "Excuse me, do you know where I am?" and they say, "Well, you're lost." It's not wrong. It's true. It's just not actionable. So what we've tried to do at Commerce is figure out the path forward, and IoT patching is really important. It took us twenty years to learn what patching meant in the software world. We don't have that kind of time; the apocalypse is coming in 2023. You have six years. So what we need, at least, is a shared vision of what a security update looks like and what field-upgradability means for a consumer-grade device (a minimal sketch of the verification side appears at the end of this answer), detailed enough that manufacturers can build to it, but also translatable into consumer-understandable language, so that Consumer Reports can say, don't buy anything that doesn't have this on the box, and people can walk through CES and say, you don't have this property that everyone now believes you should have.
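The update sketch referenced above: one concrete reading of "field-upgradability" is a device that ships with its vendor's verification key and refuses any image that doesn't check out. A minimal sketch using Ed25519 signatures; the names are illustrative rather than from any standard, and if the scenario's broken-public-key future worries you, hash-based signatures are the usual fallback.

```python
# Minimal verified "field upgrade": the device applies only update images
# signed by the vendor key it shipped with. Real systems add version numbers,
# rollback protection, and staged rollouts; all names here are illustrative.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- vendor build server ---
vendor_key = Ed25519PrivateKey.generate()
DEVICE_TRUSTED_KEY = vendor_key.public_key()  # baked into the device at manufacture

def sign_image(image: bytes) -> bytes:
    return vendor_key.sign(image)

# --- device side ---
def apply_update(image: bytes, signature: bytes) -> bool:
    try:
        DEVICE_TRUSTED_KEY.verify(signature, image)
    except InvalidSignature:
        return False  # refuse unsigned or tampered firmware
    # flash_firmware(image)  # hypothetical: write the image and reboot
    return True

firmware = b"firmware v2.0"
assert apply_update(firmware, sign_image(firmware))
assert not apply_update(b"tampered image", sign_image(firmware))
```

Something like this is what a "has signed updates" checkbox on the box would have to mean before Consumer Reports could rate it.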
And I think we can actually make progress, not by saying everyone must do this now, because that's how you get lobbyists in the room, but by getting the engineers in the room, and the civil society advocates in the room, and figuring out what will actually protect everyone in a way that moves progress forward faster.

Go ahead. Actually, I just lost my train of thought. Well, I guess I'd say three things. One, bouncing off that: making the consumer aware is obviously incredibly important, and the standards idea, something like the equivalent of Energy Star or the nutritional labels recommended by the White House commission, those are all possible ways to bring consumers awareness of what is secure. But I think there's a bigger fundamental underlying problem, which is that right now the consumer doesn't care, and that's because they haven't seen any negative effects. I'll often ask, and I'll ask this audience: how many of you have actually seen real detrimental financial or other consequences from any of the hacks I'm sure you've all been subject to? So I have two hands, three hands, and one maybe. The average consumer, and we are not the average consumer, has seen even less. So we're at a point where we need to educate the consumer and make it real. The analogy I like to use is anti-smoking campaigns: we did not actually understand how bad smoking was until those ads that showed your lungs turning black or your brain being affected. That made it real for people, and I think it had real, tangible impact. We need to come up with the public awareness campaign that can do the same in the cybersecurity space.

And then, finally, we've talked a lot about policies, but I always like to remind people that, if anything, companies matter more than government in this space. At least they do today; that can change, but at least they do today. Companies are making decisions on a day-to-day basis about what data to collect and how to manufacture devices, and they're often weighing quickness to market, as we like to say in Silicon Valley, over security. So we need to put policies in place that encourage companies to take their time figuring out security, and to find resources when they don't have them: many of these devices are coming from companies that are not traditional internet manufacturers, so they may not have those security resources. And we need to encourage people to consult with each other and grow a community in which we take corporate interests into account, as well as government's, in building this space.

I think part of the challenge, though, with consumers not understanding the cost to them, is that there is not a whole lot of transparency in this space. I think it was Intel and CSIS that just published a report in the past couple of weeks noting that six years ago, cybersecurity was not in the top ten items informing company boards' knowledge and decision-making. Nowadays, we have the SEC potentially requiring that you disclose whether somebody on your board has a cybersecurity background, and if not, why that is satisfactory for your business. So we need a little more transparency from companies.
I think one of the challenges in this space, not to say "nerd harder," is that we're thinking about this, and I'm not the first person to say so, in the wrong way, in the sense that we're spending money on cybersecurity products. We need to continue spending money on those, but we also need, as Betsy and others have said, to spend money on education, so that we're producing products with more secure and stable code, and so that companies and boards, rather than asking "how much are we spending on cybersecurity?" or "how secure are our networks?", are asking: How secure is our product? How are we managing and handling the information we obtain about our customers? Are we being appropriate with it? Are we handling it ethically? Are we giving that information adequate security? Are we giving our customers enough choice? I know you can follow the FIPPs framework there. But we as consumers need to demand greater transparency from the companies we buy from, so that we can make informed decisions. And I think this is where the Consumer Reports digital standard effort is really going to help, both for consumers and for boards, which can say: the product you're manufacturing just got, whatever it is, half a circle. Why is that, and what do we need to do to improve it? Right.

And the other thing I'd add, regarding the cyber talent pipeline and education, is that we also need to grow a workforce better able to handle this than the one that exists today. The panel earlier with Angela and Kirsten had a number of great ideas, and I think we really need to invest in those now, because the reality is that unless we do both training for people who are going to work in this space and training for the broader community, who just by living in the digital age are going to have to engage with it, we're never going to catch up to the scenario you're describing. Yeah, I heard someone the other day saying we basically need another class in high school, alongside civics, that is basically "how not to get owned." Right. Or whatever you want to call it; you can call it something else. As opposed to the after-school special about two kids who are about to buy a Nest and learn a very special lesson.

So we haven't talked about this issue: what do we need to do now? There's been an effort that a couple of governments have been working on for a number of years. Part of what we're talking about here, with education, is capacity building. So we have our Nigerian princes who've decided to form a company and... Manufacture. Manufacture IoT devices. Chrome-y devices, I think, right? Blenders is what it was. Yeah, blenders. But, to tie some of these concepts together, not very eloquently: some of the work NTIA is doing is to help those companies, and eventually, hopefully, consumers, talk the same language. We have standards, but which standards do we think are the most important? What does it mean to have a bill of materials for software development? Which of the particular standards in that space is the most useful and most effective?
But we need to translate this effort internationally, because obviously we don't want the internet shut down and balkanized, and we don't want the end of multi-stakeholder internet governance, because we don't think that's in the best interest of the internet. So how do we, in the US but also in other mostly Western countries, take these lessons we are learning, painstakingly in some cases, and help those who are still coming online avoid enduring the wounds we've all suffered? Exactly.

And I'll just bounce off that: cyber insurance is another area we haven't really talked about, but it would be really important to start figuring it out now, before all the claims come in. There's a lot of interest in that space; certainly a number of companies are asking, hey, are you doing any work on cyber insurance? Careful, it's been "the next big thing" since 2002. Well, maybe it'll always be the next big thing. Earthquake insurance, for instance, where I live, is still the next big thing I'd love to have. But I do think it's time to start really thinking about what types of solutions we're looking at. Are we looking at the private sector? Are we looking at cyber insurance, or at liability? What are the comparable areas that have struggled, and maybe earthquakes aren't a bad one to start with, and how can we resolve this? Because the alternative is that when something bad happens, taxpayers pay to clean up the mess as a sort of emergency response. One way or another, we're going to end up paying; let's figure out how to do it in the most efficient manner.

So, the last question I want to throw at you: when we think about all these policies we're discussing, how do you get mom and pop to care in the first place, and how do you educate them enough to actually make what they're doing secure? I'm thinking about app developers, or somebody who's just learning. The reason I ask is that in my head I'm imagining: take 3D printing, fast-forward six years, printing circuit boards is perhaps a possibility, maybe people are trading circuit board designs online. How do we get that community to understand and implement this? You know what, forget it, let's go drink.

So, garage biology is going to be one of the new big things, but I'm not really worried about someone saying, hey, here's a syringe, I've been working on something really cool, try it. There we have an intuitive risk model. Darwin's going to take care of that one for you. And I think the path forward is to make sure the garage hobbyist remains this key built-in source of innovation that we have in this country, but, as we've scaled to a digitally connected infrastructure, there needs to be a little bit of pressure against that. And I think that pressure doesn't come from making everything that comes out of a garage go through full-on eight-year government testing. It comes from having the other parts of the ecosystem push back a little and say: listen, if you're going to connect that to my trillion-dollar cloud business, it can make me look bad.
And so one of the things I love is the first question on the questionnaire Google has if you're going to be part of their software supply chain, building products for Google: do you have a vulnerability disclosure process? Making sure we have that spirit of connectivity, I think, is going to spread a little further.

I'll go back to the mom-and-pop question, because, hi Dad, my dad still has his password on a Post-it note. Well, he should; it's a good password, right? Oh yeah, really good. It's not 1234, so it's okay. But I think that to get the consumer to take this seriously, there has to be some cost. I don't think it should be the full cost, but there does need to be some. Right now, if your credit card information gets stolen, even if it's essentially your fault, your credit card company just sends you a new card, and the only real cost to you, in the vast majority of cases, is waiting three or four days for it. So I don't think consumers should pay the full cost, but maybe if they had to pay ten dollars to replace the card, or something like that, it might be just enough incentive for people to take this a little more seriously. And I think we need to think about that in every aspect. Even when you think about training people at work: when they fall for a fake phishing test and click on the wrong link, maybe they have to spend an extra two hours in cybersecurity training. We need to think of small ways to incentivize people who don't think about this as their job, and who don't really see why it matters right now, to understand a little of why it matters.

Back to the idea that every year you have an annual physical, or you do monthly health checks on certain things: do we need to start doing something like that for consumers, through their apps? "This is your quarterly reminder that if you get an email saying you need to change your Gmail password..." Or, God forbid, no offense to AOL, you're still on AOL and there's a new AOL app, which is probably a remote possibility. This whole idea of Stop. Think. Connect., or "think before you link," or whatever it is; we need to do it. It's got to happen more often than I go to the dentist, though. Probably, yes. That's twice a year, right? Twice a year, that's right. Again, this is the notion we heard earlier today of using behavioral science to better understand how to connect with consumers. And I think it was Bobby, and a few others much earlier today, who talked about the need to bring in many more people, experts in different areas; it's a multi-disciplinary approach, rather than the nerds nerding harder and some of the bar, some of the lawyers out there, just chasing after any kind of lawsuit. Let's try to think collectively, a little more carefully, about this. I'm now envisioning village doctors making house calls and turning on two-factor authentication for you.

Anyway, you've been a wonderful audience. Thank you, everybody. Please join me in thanking Betsy, Alan, and Megan for giving us their thoughts this afternoon. Thank you.