First up, we have a law professor round robin for you. We couldn't find any robins that were round, so instead we've brought some sparkly other birds that we'll pass around to create some visual interest in case we get boring. So behind me is a banner for a policy lab that I run at Penn State. And up here, I have a stack of stickers that feature a cow wearing a wig, hopefully in an aesthetically pleasing fashion, with the handle for the lab's Twitter feed. So if you're interested in the intersection of security, policy, and law, please do grab a sticker. Or if you just like cows, that's fine too. Grab a sticker, follow us on Twitter, and we look forward to your thoughts in the discussion section today. So first, we have this law professor round robin. After that, we have a one-on-one with me and Anthony Ferrante, former FBI and former National Security Council cyber lead. I'm not sure what his official title was during the Obama years. After that, we have Dr. Suzanne Schwartz from the FDA talking about the new medical device guidance that's coming out, also one-on-one in conversation with me, in each case followed by open season for questions, of course. And then after Dr. Schwartz, we have Josh Steinman from the White House, who is the current National Security Council cyber lead. And then after that, we have Erie Meyer, the chief technologist for Commissioner Chopra at the FTC. So please stay for as much of the afternoon as you are able to, and give us your best questions, because we're hoping to trigger an engaging conversation. And thanks for being here. So without further ado, first up we have a discussion among three opinionated law professors. I'm Andrea Matwyshyn. I am a professor of law and a professor in the engineering faculty at Penn State, and the associate dean of innovation at Penn State Law. This is my 16th DEF CON, so I've been around a while.
And to my immediate right is Professor Stephanie Pell from West Point. And to Stephanie's right is Professor Margaret Hu from Washington and Lee Law School. So what we're going to do is just start talking about some of the key issues that we see happening at the intersection of security and law, interrogate each other a little bit, and hopefully trigger some questions for you all. And then we're going to open it up for discussion. Since this is the ethics village, can I give my ethical disclaimer? Please. OK, so as Andrea said, I work at West Point. And in doing so, every time I open my mouth, I have to say: these are my personal views and do not reflect the views of West Point, the US Army, or the United States government. So thank you. Sure. I'm ethically compliant now. You can complain to my deans about my views if you want. It's OK. I'm tenured, it's fine. So Andrea and I actually met almost a decade ago at a privacy-related conference. But what I learned about Andrea over the years is that a lot of her interest, her scholarship interest, her policy interest, and her long-time interaction with the security community was based on her interest in security, information security, and what we now have to, I guess, say, cybersecurity. But she was doing this well before the cybers came along. So can you talk to us, first of all, about how, as you approach various policy questions, you distinguish security from privacy? Sure. So I'll start with fleshing out the cyber allergy that you referenced. So here are the reasons why I tend not to use the word cybersecurity. The first is just a logistical one: in law, we've had a generation of courses called cyber law that referred to internet law. So by calling security courses cyber law, you end up with cyber-cyber law. And that just sounds like I'm stuttering or something.
But in reality, part of the concern is that when we talk about cybersecurity, we seem to be emphasizing an internet component. But of course, security is physical as much as it is digital. If you have physical control over a box, you can do as much or more damage than you can over a remote connection. So the digital and the physical need to be interwoven when we're thinking about the attack surface of different situations and the possible harm that attackers can cause. So with that, I'll shift to the question about the distinction between privacy and security. In policy and legal circles (I almost said circuses, which I guess isn't wrong, but it's a Freudian slip), those two issues get confounded. And why does the distinction matter? Well, it matters partially because of the unit of analysis. In security, we're talking about whether systems can successfully defend against attacks on the confidentiality, integrity, and availability of the system. In privacy, the unit of analysis is an individual, an individual's expectations, and what the negotiated deal was around those expectations in light of a particular party's data collection, storage, handling, and sharing practices. So security is about successful defense against the attacker, Mallory; the privacy side is about what Alice and Bob negotiated with each other, whether Alice kept her promises to Bob, and what consequences follow if Alice doesn't keep her promises to Bob. What that means is that from a policy standpoint, you can see many security win-wins in ways that you can't as neatly frame win-wins on the privacy side. And it means that security challenges are by design more tractable when they're framed that way in policy conversations and in legal conversations.
But this frame is often blurred or blended, and that's why sometimes we don't get progress on security issues that those of us in this room would probably almost all agree are low-hanging fruit, like encouraging companies to patch known vulnerabilities that have a significant negative impact on their users, on consumers, on national security, et cetera. So there's the security-privacy divide. The last piece that I'll highlight, which comes from my scholarship, is a recognition that is intuitive in this world but not intuitive in policy and legal worlds: that vulnerability is reciprocal. It never stays in just the private sector or the public sector. The vulnerability exists wherever the code exists. So you can't think about policy approaches or regulatory approaches in a truly segmented way, because you're not going to address the problem as a whole. The attacker is going to go wherever the easiest point of entry is, whether that's a private system or a public system. Similarly, in the national security compromises that we've encountered during the last 10 years, it's often private sector contractors that are the point of entry, or where the insider attack happens, rather than the public sector organization itself. So the blending of private sector and public sector in performing public tasks, and the fact that the code is going to be vulnerable wherever the code resides, regardless of whether it's a private sector or public sector entity, is what I call the problem of reciprocal security vulnerability. And again, while that's intuitive here, it's not intuitive in policy circles. So with that, maybe I'll shift over to Margaret and ask Margaret to tell us a little bit about the privacy side and the kind of stuff that she researches. Yeah, great. Thank you so much, because I am relatively new to the security world, and it's such a pleasure and privilege to be here today.
As Andrea said, a lot of my research focuses on the privacy side. In particular, my research focuses on the constitutional privacy rights that I think can be brought to bear on the way that these technologies now influence what I consider our core fundamental freedoms and liberties protected under the Constitution. So I'm working on a book called The Big Data Constitution. And the thesis of the book is that our Constitution was forged in a small data world. Our founders thought about the small data power that was available to the government and what kind of restraints and constraints you need to build into the Constitution to try to preserve forms of democratic governance. But now that we have big data power, we have to really reconceptualize what it means to interpret our Constitution in order to protect those rights, liberties, freedoms, and privileges. So I wanted to quickly summarize a few pieces that I've published, which I think helps to contextualize what I mean by reinterpreting the Constitution to encompass these new types of algorithmic decision-making tools that the government has: big data cyber surveillance, machine learning, et cetera. One of the first pieces that I wrote on this topic was called Big Data Blacklisting. In this article, I said: look at what the government is doing with things like the no-fly list. They're creating forms of governance that have huge amounts of asymmetric power. There's no way to interrogate exactly what algorithms are being used or what data is being collected, but suddenly you're being told you can't fly. And what kind of constitutional remedies do you have if you find yourself on the no-fly list? Right now, in the no-fly list litigation, for example, it's procedural due process. You can say, I was denied due process under the law.
I don't know how to challenge the process by which I was nominated to the no-fly list, and I don't have a way to challenge how to get off the no-fly list. So what I said was that we actually probably need to switch over to something called substantive due process. Under substantive due process, you are basically arguing that no amount of process by the government can cure the violation; that once the violation has occurred, there's no way the government can remedy it at all. And I said that these types of big data systems, and the types of harms they create, have to fall within a different type of remedy, such as substantive due process, because a lot of the harms with these big data systems are correlative. They're stigma harms. They're harms that focus on basically harming our digital selves, our cyber self, our digital footprint, not necessarily even targeting a human or a person. And that's not how our Constitution first envisioned our rights; it envisioned rights that would impact us as humans, not our cyber selves or our digital selves. Another piece that I wrote was called Algorithmic Jim Crow. In this piece, I look at the need to vindicate equal protection rights in light of the fact that you can have disparate impact on, or targeting of, minority communities through algorithmic decision making, but you can't challenge that under the Fourteenth Amendment Equal Protection Clause, because a lot of these systems are applied equally to everyone. Yet you still might have the same types of harms flow that we saw with the Jim Crow regime. Jim Crow regimes were set up on the basis of classification and then sorting: race-based classification, and then sorting systems, or segregated systems. In Algorithmic Jim Crow, I said: now we're gonna flip it around, and you're not going to have a race-based classification system, but a risk-based classification system.
But you're still gonna have sorting. And because it's not race-based, you're not gonna have the Equal Protection Clause. Then in another piece that I wrote, called Orwell's 1984 and a Fourth Amendment Cybersurveillance Nonintrusion Test, I look at the warrantless GPS case, which some of you are aware of, that was argued before the Supreme Court. During that case, 1984 was brought up half a dozen times. Why? Why would the Supreme Court justices talk about 1984 in light of the Fourth Amendment, which prohibits unreasonable searches and seizures? And I argue that we're now in the realm of not having the proper legal vocabulary. We have to reach to science fiction or dystopian literature in order to try to preserve our constitutional rights. And so I think that summarizes some of what I'm trying to achieve with my research. Thank you. So I'll jump in here, because some of the questions or issues that interest me most have both privacy and security elements. And if you don't separate them and think about the kinds of harms that can flow separately, I think you miss the bigger picture or problem. To backtrack a bit, prior to teaching at West Point, for several years I was a federal prosecutor. As a federal prosecutor, you obtain orders from a court and issue subpoenas for the collection of various kinds of information; law enforcement, as you're all well aware, uses various kinds of technologies to surveil and collect information. There was one kind of technology that I didn't come to know about until after I stopped being a prosecutor, and frankly I have to credit this community with it. For a long time there have been presentations here about IMSI catchers. And it wasn't until a long-time member of this community, someone named Chris Soghoian, who is currently working for Senator Wyden, told me about them. We had collaborated, after I was no longer a prosecutor, on another piece about location data.
And we were sitting on the steps of a yogurt shop in Dupont Circle, having some yogurt on a hot summer day, and he started telling me about what IMSI catchers do. And it floored me a bit, I gotta tell you, the fact that this particular piece of equipment could impersonate a cell tower and, in doing so, collect all sorts of information, spoof another phone. At the time, this technology was known generally in this community, and as Chris was doing his due diligence and investigating it, it was certainly being used in the government, federal, and then it trickled down to state and local law enforcement. But again, I had never encountered it in my work as a prosecutor. And what really struck me as it started to become more of a public conversation: the first big article on it in a national newspaper, if you will, was in the Wall Street Journal. I think it was Jennifer Valentino-DeVries who did a story on a case about a very smart defendant who figured out that the way law enforcement located him was with the use of an IMSI catcher. This is Daniel David Rigmaiden. He is now, happily, a free person. But he figured out that the only way they could have tracked him down to his data card, if you will, was by using an IMSI catcher. So as this conversation started to develop in a more public way, what became interesting is that, appropriately, some people who normally focus on privacy interests were asking: well, wait a minute, how is the government using the IMSI catcher, or, as it's more commonly termed, the Stingray? What kind of court order, if any, is it getting? In other words, they were concerned about whether there was potentially a Fourth Amendment violation.
And that's a very, very important conversation, and of course one that should occur, because potentially Congress or the courts need to step in and say: look, if the government is gonna collect location data or various other kinds of non-content data that the Stingray can collect, law enforcement perhaps needs a warrant. And just to jump a little bit to the end of the story, DOJ has now issued a policy, which is not the law, that any time law enforcement uses an IMSI catcher, it is supposed to get a warrant. But what was more interesting to me, and what my now longtime friend Chris Soghoian explained to me, was that IMSI catchers exploit vulnerabilities in our cellular networks. So dealing with the privacy problem wasn't gonna fix the security implications of using Stingrays. I didn't personally attend Black Hat this year, but I understand from reading a piece in Wired that security researchers have now found vulnerabilities in the 5G network that Stingrays can penetrate. So this is all to say that when we approach surveillance issues, I like to think about both the security and the privacy implications, and find that most interesting. So I will toss it back to you on that point. And so now let's talk about the next generation of technologies that cross the civil and criminal boundaries. I call this the problem of the internet of bodies. So we know the internet of things: it connects all of the things to the network, from our talking toasters to our refrigerators that self-order food, or the apocryphal story of the refrigerator spamming people. But what we also see, and all you have to do is walk into the biohacking village to see this, is that there is a host of new technology that is embedded in and connected to the body. So we're all comfortable with the Fitbits and the Apple Watches and, some of us, Google Glass that films things and stays always on.
But when we're starting to talk about an artificial pancreas that needs security updates, when we're starting to talk about digital pills that talk to your phone over Bluetooth from the inside of your stomach, things that have already been FDA approved, there's a whole generation of these technologies, some of which are medical devices and some of which probably are not going to be classified as medical devices. What that means is that all of the challenges that we've seen on the security side with the internet of things, or, as some of you undoubtedly know, a Twitter handle that offers a colorful renaming of the quality of the internet of things, we're gonna have those same problems in the context of the internet of bodies: extreme interconnection, sometimes gratuitously so; high levels of security vulnerabilities; the problem of what I call builder bias, in other words, shipping fast without securing adequately; and a problem of impoverished choice, basically the lack of choice about the degree of technological connectedness. And so I view this as a competition problem, basically. When we look at the question of what we can buy in the marketplace, if you try to buy a car now that isn't powered by hundreds of millions of lines of code, good luck, right? Everything is now so code reliant that you don't functionally have the choice to pick the degree of technological connectedness that you want, whatever your estimation of the appropriate degree of connectedness is. So if you think about Battlestar Galactica, there's the story of the Galactica surviving because it was not connected to the network, right? But when we're building internet of things or internet of bodies devices and we aren't thinking through the extent of connectedness, we can sort of predict, in this community, how that's gonna play out, right? Things won't end ideally.
And legally speaking, there are gray areas around whether these kinds of devices are going to be regulated as medical devices or classified as just internet of things consumer devices. In the first case, they would be held to closer scrutiny by the FDA. In the second case, it's only the FTC that would functionally be looking at these devices, and it's a much smaller agency with limited resources, so the extent of enforcement would probably be significantly lower. We have an identified regulatory gap there, but the fun doesn't end there. So think about every end user license agreement you've ever clicked on. They've become progressively more draconian over time. They've become longer. They're written by lawyers, for lawyers; as a corporate lawyer who used to write them, I can say with certainty, they're written by lawyers, for lawyers. So imagine that instead of a random website you're clicking through, it's the interface for your internet-reliant enhanced vision from injected contact lenses, an invention already patented by multiple companies. Your injected contact lenses, whether for AR gaming, or for vision correction, or for archiving what's happening in your life. With the embeddedness of that device, if it's reliant on the internet, suddenly these secondary issues start to come into play. For example, which data plan did you pick? Is your provider going to be able to push that critical update wirelessly to your eyes when you need it? What happens when you click yes and it says that they disclaim any liability for any malfunction in the code of your injected contact lenses? What happens when the provider of those services needs a new line of funding, and the venture capitalists that invest decide to change the terms of the data aggregation that you thought you were agreeing to? What happens when the company goes bankrupt?
The bankruptcy court doesn't consider the interests of consumers generally. The bankruptcy court considers the interests of the creditors. Consumers frequently aren't even represented at the table in a bankruptcy proceeding. So all of those data aggregation and repurposing issues that we see now in the first generation of bankruptcies of internet of things companies, including bricking, sometimes for patent reasons, those are all going to show up, one might predict, in the internet of bodies world, except the consequences are going to be physical harm to human bodies. And that's where the legal mess is gonna get really interesting, because courts won't be comfortable with the same kind of power balance that we have now in the world of software, in terms of as-is, where-is: you build it, it doesn't work, oh well, your operating system crashes, you lose the document you've been working on for four hours, it sucks to be you. When it's your eyes no longer working, or it's your robotic arm, if you're, say, a veteran with a prosthetic that is internet reliant, suddenly things start to look very connected to the physicality of humanity. And so courts are not gonna be comfortable seeing plaintiffs come in and say: your honor, my arm doesn't work anymore, or I can't see out of my right eye because my left injected contact lens had a patent problem and they wouldn't pay the money for the license fees. While this may sound like sci-fi, I think many of you are gonna be along for the ride with me on this, because you know exactly how IoT security works. And we've had a similar problem before, in the context of medical procedures and patents. In the 1990s, doctors started patenting medical procedures, and because of those patents, other doctors started to shy away from performing certain kinds of surgeries that they considered to be in the best interest of the patient, for fear of being held liable for patent infringement.
And to make a long story short, it is primarily because the doctors had a professional association, the AMA, that lobbied Congress that they got a change in the law saying that no one could recover patent damages on those medical procedure patents. So there are a few interesting tidbits in that story. One is the fact that a concerted community that cared about patient safety managed to get a change to the law, which was a big deal; getting intellectual property law changed is an extra big deal. It's really, really hard. Anyone who's dealt with patent, copyright, or trademark law knows that it's a free-for-all, with bare-knuckle lawyering at the extreme. So that's one part of the story, the success story. The other part of the story is that it required a really concerted push in an organized way. And so when I take a step back and I look at our world here, I see the same level of care about what happens to humanity and what happens to people getting hurt. But in this community, there isn't the same level of organization to be able to ensure that when bad things start to happen, we're there to nudge and to help correct the course; to talk to the companies that are building some of these technologies about making sure that they're building things in the safest way possible, and that the regulators are minding the store in the optimal way. So that's the world of the internet of bodies. And I'll leave you on a particularly dystopian note and then turn it over to Margaret, because this was all the happy part, by the way, in case that wasn't clear.
So any of you who have been reading Wired or the Wall Street Journal, or following the adventures of Elon Musk, know that there is a company he's involved with called Neuralink. They have a cortical interface technology, and they're not the only company in the valley doing this, that functionally has a live read and write feed back to the cloud, intended to augment the processing capability of the wetware of the human brain. And these are healthy bodies they're talking about, people choosing to augment the processing power of their brains. The way the video that Neuralink released recently described it, there is going to be a component that sits behind your ear, and then there's a Bluetooth connection. Footnote: Bluetooth, problems with Bluetooth. All right, so they're going to build this read and write technology. So now let's shift a little bit and think about what it means to have other people being able to write things to your brain. Think about what it means to be a human, a citizen, a voter: in order to form our opinions about how we should be governed and how we should live our lives, we rely not only on the exercise of autonomous choice in voting; there's actually a precondition. Kant called this heautonomy. He talked about autonomy, which is going out, doing things, and demonstrating what your opinions are. But Kant also had this thing called heautonomy, and it was the inside voice. It was you talking to yourself, just trying to figure out what you think. In other words, the acts of autonomy that we all exercise in choosing courses of action, in choosing our leaders, require the precondition of a hermetically sealed thought process where we talk to ourselves about what we think. In a world where other people can write things to our brain, how does that self-contained process of thinking actually work? Are we sandboxing pieces of our brains off? How are we doing this?
And these technologies that are already being built, are they thinking about this enough? Are we potentially undermining the future of liberal democracy by running too fast, too hard, in trying to augment healthy human bodies with extra capabilities, in ways that may bring unintended negative consequences, particularly for security and privacy? And with that, I will turn it over to Margaret with some other thoughts, about elections, I think. Yes, definitely. And I think that dystopian segue is really very appropriate, because as you point out, Andrea, a lot of the harms that we're now addressing with the challenges of these technologies are really society-wide harms. Going back to the theme of what our Constitution was intended to do: it was, in many ways, intended to protect individual rights against individual harms. But what if the harm is a harm to society? What if the harm is a harm to democracy? What if the harm is a harm to how we conceptualize human rights? What do we do then? And because we're in the ethics village, I think it's really important for us to think about whether or not the law is even capable of protecting some of these rights and privileges and core fundamental values, or whether we need to turn to something like ethics, or whether we need to turn to things like art and literature. So I think talking about dystopian literature is really important, because to the extent that we as lawyers maybe lack the imagination, maybe we need to turn to Philip K. Dick's Minority Report as a way to really understand what's at stake and what we need to do moving forward with these technologies. So I wanted to talk a little bit about foreign interference in US elections, because I think that is what bridged my interests between data privacy and cybersecurity.
When you have something like Cambridge Analytica, you have the allegation that this contractor for a campaign was capable of collecting anywhere between 2,000 and 7,000 data points on every US voter, over 200 million voters in the United States, and was able to aggregate that data and build psychographic profiles in order to influence those voters in campaigns. And then you have the other allegations of foreign interference, with the cyber propaganda and other commandeering of our social media platforms, the ability to spread disinformation, et cetera. What can we do under the law, especially in the United States, when we have the First Amendment, and when we rely on these new modes of communication and digital economy? What is the role of the government to then step in and regulate? And what do we do when we rely so heavily on the private sector, not the public sector, to secure those forms of communication and economy? I think it poses unprecedented challenges. There was an excellent discussion here yesterday on, for example, offensive cyber measures as a deterrent against future cyber attacks, or trying to prevent and preempt future forms of cyber warfare or cyber disinformation and propaganda campaigns. But to what extent do we swallow up democracy by trying to protect democracy? I think that's a core ethical question that spans the worlds of both law and cybersecurity and data privacy. And so that's not something that I necessarily have an answer to, but I would love to invite a discussion on it here. So I'll end on an ethical point too, I think, and particularly on an unfortunately ongoing debate that is, I think, near and dear to this community's heart: the going dark debate, or whatever iteration of the crypto wars we're in.
I find it interesting, and honestly troubling, that a number of government officials, when they talk about a need for back doors or law enforcement access to our systems, frame it as a privacy issue. What I mean is, they invoke the language of the Fourth Amendment and say: but wait a minute, we're going to a court to get an order requiring a company to disclose information, or to have their systems built in such a way that they can disclose that information. When you frame this, and I don't think I need to name names, there's a former FBI director and a current attorney general who are doing this, but when you frame this in a Fourth Amendment context and try to shift it to just, well, this is how our Constitution balances government power against citizen privacy, you lose the fact that you are damaging information security in ways that, well, you all get it, but I think we can't even appreciate the full iteration of how those vulnerabilities will manifest as different technologies interact. And so one of the things that I'm working on with a colleague who is a philosophy professor is to try to talk about the ethics of framing this debate in the right way, so that the security issues are better understood and protected. Stephanie, how would this play out in a world where we could read and write information from brains directly? Oh my goodness. Well, I think we certainly have privacy, security, and literally bodily integrity security issues. Botnets of body parts, you know, it's your term. And on that happy note, we will open it up to questions, discussion, thoughts. Yeah, please. So you were talking about how to frame this debate. What about a Second Amendment challenge to some of these crypto and cyber laws? I mean, we hear cyber arms control, we hear export controls on encryption; it's very clear that the government views these as some kind of weapon.
It's also very clear that the Second Amendment says the right to bear arms shall not be infringed, and the Heller decision says that there is an individual right to keep and bear arms. Therefore, do I not have an individual right to strong crypto? So I'm gonna do a lawyer thing, and partly a DC policy thing. That's an interesting theory. Given what is going on with Second Amendment and gun control issues right now, though, I'm not sure that the issue, at least the way I see it, of wanting to prevent backdoors, is best framed as a Second Amendment issue. Perhaps as a legal challenge; but insofar as it is my hope, at least, that we get better gun control laws, and I don't think that the Heller opinion, you may differ on this, prevents gun control in the way that some argue, I am slightly concerned, just living in crazy DC (I live in DC, actually on Capitol Hill, within spitting distance of the House of Representatives buildings), about couching the discussion that way, at least in some circles. I don't know if that answers your question. I'll throw in one line. Whenever we're talking about legal strategy, you always have to think about the interaction of different strategies. You don't want one strategy to damage the protection offered by a different strategy. In particular, when you consider the interaction of the First Amendment with this kind of legal argument, you may end up at the end of the day with a less protective approach in the aggregate if you bring in a novel legal approach kind of from left field. So you want to game it out. And the First Amendment questions, to the extent they've been developed, I would say will probably get us to a greater level of protection than a novel Second Amendment argument.
Yeah, and I think the Heller opinion is very controversial in part because of the way that it divided the text of the Second Amendment into parts and ignored the first part. The Second Amendment says: a well-regulated militia, comma, being necessary to the security of a free state, comma, the right of the people to keep and bear arms shall not be infringed. The focus in the Heller opinion was on that second part, the right of the people to keep and bear arms. But I think there are a lot of constitutional law scholars who would question whether you can really cleave the Second Amendment in half and just focus on the second half without engaging with the first half. Yeah. You guys mentioned. Wait, really quickly, if you don't mind trying to get to the microphone just so we can all hear. Thank you. You mentioned Fourth Amendment legislation that would potentially harm information security. I didn't quite follow that. Could you go a little more in depth? So it was more that an issue can be framed simply as a Fourth Amendment privacy issue. The Constitution has struck a balance, basically saying that law enforcement cannot search persons, places, or effects without a warrant. So what happens when law enforcement goes to a company wanting information and the company says, look, we don't have access to it, we encrypt it in a way that we don't keep the keys? Law enforcement would say, well, wait a minute, that's not necessarily in line with the way our Constitution balances privacy rights. If law enforcement comes with a warrant, then it should be able to get the information.
What I would respectfully submit is that that framing of the going dark debate as a privacy issue misses the big pink elephant in the room: if companies are required, via a statute or otherwise, to build their systems in a way that always gives law enforcement access to data, the Fourth Amendment doesn't consider the security implications of all of that. The premise is the warrant, whether or not the warrant can access all of the information; the Fourth Amendment looks to a warrant as the right way to strike that balance. And so law enforcement says, and again, I'm not sticking this on all of law enforcement, but we've had public statements by high-level officials, they would say, well, in this going dark debate, or the crypto wars, let's look at how the Constitution would treat law enforcement access to data. And I would just say that's probably the wrong way to frame it. These are competing visions of security. On the one hand, you have the public safety concerns that law enforcement traditionally investigates, but on the other hand, you have these information security or cybersecurity issues that the Fourth Amendment just doesn't seem to be the right tool to address. Are you aware of any cases where companies had to decrease their security level or encryption level because of legal requirements? So there have been some cases where there's been a fight. I mean, you're aware of the Apple iPhone debate, which sort of petered out in some respects because, as it is reported, I don't have inside information, law enforcement was able to find a third-party vendor that could access the phone.
There's another case, and I don't know that I'd quite put it in that category, but the Company case, going back maybe 15 years, where law enforcement wanted to compel a car company that provided various onboard services to wiretap the individuals in the car. They were relying on language in the Wiretap Act that talked about technical assistance that companies had to provide to law enforcement. But the problem, and what ended up preventing law enforcement from being able to compel the company to use the microphone, is that when the microphone was used, it disabled the emergency component of the car that would allow a user to tell this third-party company, I need help, my car has broken down, I'm in danger. And so the Wiretap Act's language limits compelled assistance based on that kind of concern. But presumably in a car today, such things might not be so interconnected, and perhaps the microphone feature could be enabled without disabling the safety features of the car. Again, though, it wasn't a matter of the company not having the ability to wiretap, if you will. The issues that are coming up now, and there was some reference in a news story, I believe, about WhatsApp fighting a request to compel encrypted data. In fact, I believe that the ACLU and Riana Pfefferkorn from Stanford are actually fighting to get those pleadings unsealed, because they're currently under seal. But there are companies that are considering the extent to which they really need to be collecting certain information, because to the extent they collect that information, they are subject to a warrant to share it. And all of the user agreements give them the right to share that information even without a warrant. So we consent, at least in theory, every time we click yes on one of those end user license agreements that none of us reads very carefully.
And so the principles of data minimization, while a good security practice, also play into this conversation about the extent of information that a company can be compelled to share with law enforcement. Hi, first, this was fascinating. Thank you so much. A quick comment: I think we definitely need a season six of The Wire where the whole Barksdale group is using Signal and VPNs and the major case squad launches a spear-phishing campaign or something. My more serious question: I'm curious about your thoughts on the balance between security and privacy around required disclosures of breaches, particularly the 72-hour disclosure requirement in GDPR. Me as a consumer, yes, I want to know as soon as possible so that I can take my own protective action. But me as an incident responder, in the first 72 hours, I might not even know if there's persistence in my network yet. How do we balance that? Is there some way that we can notify law enforcement but it's not publicly announced yet, or something like that? I'm just curious what your thoughts are. So the approach that we're taking in the US is really one that varies state to state. The extent of the breach that needs to be disclosed, what constitutes a carve-out, what constitutes encrypted data that's not within the purview, it's all varied. So we have a bunch of different approaches. The thing about GDPR is that it objectively does raise that level, and we can debate whether the 72-hour period is an adequate turnaround period. But the thing that is an important, and I think positive, shift with GDPR is that in the preamble there is an imposed duty to stay up to date with the state of the art of security. So it shifts the whole conversation from one of throwing a bandaid on the severed arm to trying to prevent the children from playing in the street in the first place.
It's trying to shift to a proactive model of patching, maintenance, and investing adequately in security teams. So while it's fair to debate the efficacy of the particular formulation of certain pieces of GDPR, that overarching paradigm shift is important and, I think, ultimately a positive one, moving us toward a more proactive risk modeling, threat assessment, attack surface analysis conversation. I think we're out of time. Okay, so we'll cut it off there. Give us a few minutes to reconfigure for the next discussion, and stay with us for Anthony Ferrante. If you want to continue the conversation, you're welcome to keep talking, we'll just have to kick you out of the room. Thank you. And take stickers, because I have a whole stack here and they have a cute cow on them.