Good morning, Las Vegas, and welcome to Policy at DEF CON. This talk is Putting Your Money Where Your Cyber Is, a guided discussion of software liability and security, with our speaker, Andrea Matwyshyn. A few announcements before we begin. The talk is being held on the record, and it is being streamed as well. Cell phones, as a courtesy to our speaker and audience: turn them off or put them on silent, please. There is a microphone up here. When the speaker allows questions, I'd really appreciate it if everybody could come to the microphone to speak. Since it's being streamed, if you don't come to the mic, she might hear you, but nobody else will. And with that, let's get started. Please welcome Andrea.

Good morning, everyone. Thanks for being here, especially in light of the exciting reported bomb thing yesterday. Most importantly, there are some stickers up here, first come, first served, while quantities last. They have a cat riding bacon, holding the scales of justice, and hacking a box. Just saying, it's some of my best Microsoft Paint work. Okay, so apologies in advance for any animation that goes awry in these slides, but if it works, hopefully there will be some useful things.

So, welcome. And for those of you who might not yet have had the pleasure of talking with and knowing me, this is who I am. I wear a bunch of hats. I'm a law professor. I'm a professor in Penn State Engineering as well. I founded a couple of labs, policy labs; please check them out, they do good work, I think. I used to be a corporate lawyer. This is, I think, my 20th anniversary of coming to Vegas for the hacker circus, which is a little frightening to me. I'm not sure how that happened, honestly. I'm gonna try not to feel old on that one. And I've held some government appointments, most recently with the CFPB and the FTC. But, of course, anything that I say is just me in, you know, slightly cranky professor mode, and not reflecting the opinions of the nice people that I work with in government.

Also, this is part of my original academic research. So, please discuss it to your heart's content, but please do remember where you heard it, because it's kind of soul-crushing when you see years' worth of your research as an academic get casually discussed on a blog with no citation. This is from a forthcoming article called Exploit Machina. And, yeah, so, away we go.

All right, we start with the White House cybersecurity strategy. Some of you are undoubtedly very familiar with this. On March 2nd, the White House released this document, and part of what it included was a noteworthy third pillar, which articulated the readiness of the Biden administration to take a step toward nudging potential liability for, as they termed it, the consequences of poor cybersecurity, in order to impose the burdens where they are best able to be borne, rather than on consumers and other entities that are impacted without the ability to address things. So, here's the exact language. But as you can imagine, there was a bit of legal FUD that happened on the internet. I know that never happens, but there was a little bit. And that made me think, huh, maybe I should pull together a talk just to frame the conversation a little bit, perhaps, and engage with folks about it.

Okay, so the second point is this question of: what is security liability? And I see my animation has already started to go awry. Okay, so in reality, when we're talking about security liability, we're actually talking about liabilities.
There are many different ways that we legally construct responsibility for failures or inadequacies or malfunctions slash harms in information security settings. The one that you hear a lot of talk about is the one with the arrow, which is tort. So, what is a tort? Tort means civil wrong. It's the situation where you have some sort of private harm that happens, generally, and one person wrongs another person. I'm shortcutting severely, but torts come in various flavors: intentional torts, negligence (there are various different flavors of negligence torts), and product liability, which has a slightly different articulation. And of course, some of you are familiar with privacy torts, which go back a long time, though they're arguably not the most robust of what courts are willing to enforce. But of course, privacy and security are not the same thing, which is a point that a lot of lawyers, unfortunately, don't understand. And therefore they get wrapped around the axle on confidentiality questions, which, you know, is defined differently in security, of course, and don't even look at the integrity and availability harms, which are increasingly the point of the scariest problems in security. So, just a quick note on torts: torts without an E are never tasty. Tortes with an E are often scrumptious. Just a little point there. But I digress.

So, when we're talking about various different bases of security liability, we're talking about at least three buckets of issues. We're talking about private bases for liability, which primarily arise under state law. Contract relationships, where one party is contractually obligated, say, to be pen tested every month; a failure on that front would result in a private contract suit. It's not... oh my God, yes. I would give you a round of applause, but I'm weak from decaffeination, and so I can't do it. Thank you so much. You are the best. You are so the best. Thank you. Hero, thank you. Extra stickers for you.

So, tort law is also a creature of state law. Corporate law is as well. Securities regulation is, too. There are securities regulators on the state level, but there's also, of course, the SEC on the federal level. When you're a shareholder, if there are issues that happen in the governance of your entity and those things were not disclosed, or you believe that the board of directors or the officers are violating the baseline duties that exist in corporate law, you can sue. There are also some interesting common law claims that could emerge; those are just bases for liability that have bubbled up across time, and I think we'll see some action there in the future, but I'll skip that.

Civil and regulatory liability. These are amounts of money, or promises to take or not take certain actions, that are the result of enforcement by various federal or state regulators. So, is it that a judge is necessarily forcing you to pay X number of dollars? Not necessarily. It may be a settlement that you reach with a particular regulator, but that doesn't negate the fact that you have now accepted certain duties to change, hopefully improve, your processes around these issues, and you may have a fine associated with it. It depends on which agency you're dealing with and whether they have fining authority. So there are a lot of specifics that are relevant here. The CFPB has some recent enforcement action.
I'll highlight one for you: Google "CFPB and ACI" and then read the resulting consent order from that enforcement carefully, because the definitions in particular will look, I think, familiar, and reflect a recognition of a shift toward more of a technocratic framing around security responsibility from a federal perspective.

Finally, there are criminal situations that can arise from security failures. The one that is perhaps least hyped, but I think incredibly significant, was the EPA's approach to the defeat devices that were in Audis and Volkswagens, and the fact that there were criminal sanctions because of those devices' intentional gaming of accurate data reporting, as part of the reporting requirements that exist to the public and to the EPA. In other words, it was an integrity game with the output. So you have this interesting new involvement, and even though the EPA wouldn't necessarily call it a security case, I view it as a security case. You have criminal enforcement coming out of HHS over HIPAA; they have that authority. We of course know about DOJ's prosecution of a CISO, who was here, and some of you may have been at that talk. We have criminal state laws, too: the CFAA exists not only on the federal level, but there are variants on the state level. You get the point. This is a very complicated, intersecting kind of arrangement. So focusing on just one little piece misses the forest for the trees. You kind of have to look at it all at the same time.

Here's where I really hope the animation works. Please work. Yes. Okay, so you start by looking at the standards: the technical, professional, and cultural standards. Then you look at the state regime, but you also have to simultaneously look at the federal regime. What you have here is borrowed from a really smart psychologist who unfortunately is no longer with us, named Urie Bronfenbrenner. This is an ecological model of security. What you have on the outside there are the security interests writ large: democracy, defense, harmonization, critical infrastructure. And at each point, each of these layers is pressing on each of the other layers. So if you tweak one layer without thinking through what it's going to do to the rest of them, something will end up out of balance, and there will be consequences down the road. The ideal situation is, of course, to assess them all at the same time and to evolve them in a way that results in constructive directions on every level.

So what we don't want is a situation where we fix something because it's allegedly special, with a focus that's too narrow on tort and, ostensibly, only confidentiality harms. What we instead, I would argue, want to do is to think about tech history a little more broadly: to think about where we've been, what we've learned, what we've done well, what we've done less well, learn from it, and then realign each of those layers in a way that's complementary and doesn't break other things that we need, because you can break other legal things. If you take the position that tech is exceptional, you are basically threatening to undermine some of those other, even private-sector, regimes that organize corporate engagements with each other. We can talk about that more in Q&A, but I have lots of thoughts on that as an ex-corporate lawyer. So the basic premise of all of this is the goal to do no harm as we discuss it.

So let's talk about what already exists. Let's start with some tech history: what's new, and what is new to us but not really new.
So part of what I've had the pleasure of digging into is thinking through some models that we might be able to learn from, in the way that we've addressed new technologies when things have had the stakes of literally life and death, much like we face in security today. So liability, as we've discussed, for confidentiality and integrity and availability failures: that's already a thing; it's just sometimes called other legal names. But a reframe that I think we might want to engage with is that non-liability is a form of legal tech debt, and that when you misalign the way that you treat tech products and the rest of the products in our economy, we are at higher risk of breaking things that, down the road, we will regret.

So with that, let me tell a quick story about the Hoover Dam. The Hoover Dam has been around for almost 100 years, and it stands as a monument to engineering success and prowess: people working in tandem, with precision and rigor and peer review, to build a government project that has stood the test of time. But the dam that I recently started learning more about was the St. Francis Dam. It was the predecessor to the Hoover Dam, and that one ended very tragically. It ended with hundreds dead, with lots of people missing, and with millions of dollars of damage. And the story of the St. Francis Dam is partially a story about the man that you just saw flash by in those shots of the Smithsonian article. His name was William Mulholland, and there was a cult of personality around William Mulholland. He was building things, and he was perceived to do no wrong, and so he was given a lot of runway to build as he wished. The problem is that as this particular dam was being built, people started seeing that there were problems with it, but those reports were ignored. And so we ended up with 1,200 homes destroyed and over 500 people ultimately dead or missing. We had a case where a dam that looked great at first ended up being nothing but a little stump of a dam, and it meant that there was an erosion of public confidence in the process of engineering. There was a crisis of confidence in engineering as a profession, particularly because there were warning signs that were ignored by choice. When you have this lack of oversight or structure, you end up, unfortunately, with situations such as this one, and the residents of the area felt they had been led astray into believing in the safety of this structure.

So, as you might expect, many commissions investigating these failures ensued; litigation ensued. The city of LA paid out significant amounts of money to the families of the deceased. And so it is the combination of this liability, and the outcry over the degree of control that was concentrated in this one person who ignored reports of danger and literal patching requests for cracks in the dam, that ultimately led to the professionalization of the engineering profession, in order to restore public confidence. So when the Hoover Dam build rolled around, you had a profession that looked very different and a process that looked completely different. There was no star of the show. It was teams of people checking each other's work at every step, and the result, as we know, is a dam that stands to this day.

And now, unexpectedly, I'm going to tell you something nice about Richard Nixon.
So Richard Nixon faced an environmental disaster that was a little different from the one we're facing today, but it was very serious, because, in particular, the Cuyahoga River in Ohio had caught on fire many times from pollution: at least 13 fires between 1868 and 1969. Things were getting worse by the day. So there was public outcry, there was a coalition that was built, and ultimately the EPA was created to address some of these issues. And the Clean Water Act, the Clean Air Act, the Superfund law: they were all passed to generate one of the most robust regimes, in terms of penalties, in U.S. law. And what Nixon pointed out at the time was that it was basically a situation where a debt had come due. The good news is that the Cuyahoga River, despite being on fire in 1969, is cleaned up to the point where there are fish again and people are using it again. So this is a story where things really did improve, through coordinated effort and through legislative effort. And Josh Corman has told this story in the context of the Cavalry, and I appreciate that he credits me, so I'm giving him a little shout-out for that.

So what we have is this question of whether the legal tech debt, as I'm calling it, is coming due. I think it partially comes from the reframe that happened around the Y2K problem. For some of you, this is ancient history that you read about and weren't part of. For some of you, you are so scarred by the memories of the fire drill that you lived through for years that you don't talk about it, but I hope some of you will. And for me, just to give you a picture of where I came in: I started practicing law in 1999, so I came in at the tail end. But I understood how much had been built up to the point of the year 2000 changeover. This was something we knew was coming, and it's actually a security success story, but it gets told as a punch line. It's not a punch line. It's a good story, where there was a concerted whole-of-government effort, a whole-of-industry effort. And because I am that nerdy, I recreationally watch C-SPAN hearings from long ago, and there are some really great C-SPAN hearings from the Y2K era. You can see how seriously those teams of private sector and public sector folks were working together to address these problems.

Now, we ended up with a congressional statute that assumed that there would be liability, and therefore they passed a type of shield. So note what the default was: the default was the assumption that there would be liability. But somehow, in the last 23 years, we've spun this into an exceptionalist presumption of non-liability for tech, in situations where we see the risk of physical harm to human bodies increasingly come into play. I would argue that this is not a sustainable course, and we need to go back and pay down the legal tech debt that we incurred with the Y2K situation, which at the time, I think, reasonable people would definitely argue made sense. But we're at a very different point now, in our economy and in the history of technology.

Why do I think we really have to grapple with this? Because there is a potential dark outcome here. And I think we need to start talking about what kind of a society we're building and what we want to see in 20 years. We dodge it; we hear people invoke that empty vessel of a word, innovation. But not all innovation is constructive. Not all innovation ends well.
So the question is: how are we going to build a better society that focuses on making our lives better, not just on a particular idiosyncratic definition of success for a few people? So, the story of this town in Pennsylvania, which some of you may not be familiar with, and which is the basis for the video game Silent Hill: Centralia, Pennsylvania is a town that no longer exists, basically. It's a zip code that was eliminated by the post office, because there were underground fires burning in an uncontrollable way. And it turns out, and I didn't know this until recently (it's been a very long time since I was there, either), there are dozens of fires burning all over the country, uncontrollably, underground, that we haven't been able to put out. In many cases, these are coal seam fires that have arisen through interaction effects because of the design of neighboring structures. So it's not that someone is setting a match to these things; it's not that anyone, in some cases, necessarily even took a shortcut. It's just that they didn't look at the whole system. They didn't look at the literal underground dumpster fire that would happen when, say, a town dump was built too close to the coal mine. For those of you who are familiar with the engineering here: when you're shutting down a coal mine, the goal is to tamp down any airflow, in order to ensure that embers don't get accentuated and start conflagrations. But if you're building a dump, you want that air, because you want decomposition. Those two things, you put them together, and they don't work together. And so one of my concerns is that we will end up in a situation where our society has a bunch of stuff that doesn't play well together, and it will lead to a Centralia-like consequence.

So let me move on to some thoughts about how to simplify this conversation, and then I have an exercise that I hope some of you will find at least vaguely amusing. You've seen this before, so here's where I simplify it. One way that we can think about simplifying this conversation is to focus on three elements in every situation: What is the context that we're dealing with? What is the full scope of possible harm that can result? And what is the intent of the folks who are involved? We can call this CHI; I might pronounce it "shy," because I'm from Chicago, but that's taking liberties. So, CHI. What do I mean? Context: did a duty of care exist? What promises were made in this context? What are the reasonable expectations of folks in these contexts? What other baselines are set externally? Harm: what is the nature of the possible harm, and is it fixable? Who was in the best position to prevent it? Loss of life is not fixable, and one of the ethical points that I think we need to start putting cards on the table about is whether the technologies that we're building are going to knowingly take X number of lives, and what our position is on whether that's acceptable. I prefer the goal of zero deaths, but I can tell you that not everyone agrees with me on that. And sometimes I've been surprised who hasn't agreed with me on that. Finally, the issue of intent is operative in many different legal frameworks. It's also part of what the First Amendment requires. So what I'm talking about here is sort of pre-harmonized with the First Amendment constraints around it. And if you'd like to read more about that, I have some hundred-page articles I can point you to later, but we're not gonna talk about those today.
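To make the CHI framing concrete for the builders in the room, here is a minimal sketch, in Python, of those three questions rendered as a checklist data structure. This is purely illustrative: every field name and the red-flag logic are hypothetical stand-ins for the questions above, assumed for the sake of the sketch rather than drawn from any statute, case, or the forthcoming paper.

```python
from dataclasses import dataclass, field

@dataclass
class CHIAssessment:
    """Hypothetical checklist for the Context / Harm / Intent (CHI) framing.

    Field names are illustrative stand-ins for the questions in the talk,
    not terms of art from any statute or case.
    """
    # Context: duties, promises, and externally set baselines.
    duty_of_care_existed: bool = False
    promises_made: list = field(default_factory=list)       # e.g. "monthly pen test"
    external_baselines: list = field(default_factory=list)  # e.g. an industry standard

    # Harm: scope and reversibility of what can go wrong.
    harm_is_fixable: bool = True                  # loss of life is not fixable
    best_positioned_to_prevent: str = "unknown"

    # Intent: affirmative steps taken against foreseeable misuse.
    documented_care_steps: list = field(default_factory=list)

    def red_flags(self) -> list:
        """Collect the kinds of facts a finder of fact might weigh."""
        flags = []
        if self.duty_of_care_existed and not self.documented_care_steps:
            flags.append("a duty existed, but there is no story of care to tell")
        if self.promises_made and not self.documented_care_steps:
            flags.append("promises were made with no documented follow-through")
        if not self.harm_is_fixable:
            flags.append("the possible harm is irreversible")
        return flags


# A quick pass over a company that promised security but never substantiated it.
assessment = CHIAssessment(
    duty_of_care_existed=True,
    promises_made=["told investors accident and litigation risk is low"],
    harm_is_fixable=False,
)
print(assessment.red_flags())
```

The point is not that liability reduces to a script; it's that the CHI questions decompose the same mechanical way a threat-model checklist does: context and documented care go in, red flags come out.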
So a little bit of good news is that there's action happening all over the country, with various different cases that involve responsibility for security dynamics. Some of these are a bit tangential, but they have a connection that I'll explain in greater detail in that paper I'm working on. You have some jurisdictions, like Georgia, where you have courts starting to see repeating problems and being willing to turn the dial up on the level of liability and responsibility that they find to exist for the builders and maintainers of these technologies. So that's the good news. The slightly messy news, which I've already alluded to, is that these cases may be using slightly different legal doctrines and tweaking them in slightly different ways. So you don't wanna disrupt the good stuff that's happening here on the state level. You want to nudge it forward, and that, again, means the use of a scalpel, not a sledgehammer, when we're talking about national initiatives.

Okay, how does this connect with the threat modeling that many of you do in your day to day? There are so many different frameworks; how does this all mesh together? So this slide is cut off, unfortunately. (We have, let's say, 15 minutes? Okay, I will talk faster, thank you.) This is from Adam Shostack. This is not a list that I put together myself, so shout out to Adam for this. What these things all have in common is that they're already basically doing that CHI analysis I was mentioning: you're looking at context, you're looking at harm, you're looking at intent. So this is not something that is out of left field, in the way that we're talking about it. But what I will point out is that traditional threat modeling, while it constructively creates a conversation around technical substantiation for the degree of care you're using, does fall short in assessing the risk presented by failures of the human internal controls, in governance ways. Those concerns are not represented in any of these frameworks; I looked. Entity-level governance is something that we think about a lot in corporate law, certainly, but it's not something that has shown up in an integrated way. Neither have law's normative baselines. And these are some of those points that, cards on the table, I think it's important to recognize as assumptions of much of the law that already exists, and to be aware that they could be at risk if we approach the questions of security liability in ways that erode the foundations of what exists in law already.

So, just to finish up here, here's an idea for a meta-model, a threat meta-model: TROLL. Technology, which is the regular threat modeling, organizational and product specific. Response: look at it from a user perspective. What is the organizational response? Is incident response effective? Can someone who is a regular consumer, who doesn't know what's happening, but weird things are happening and they're trying to get a reaction out of a company and ask for help, is that capacity robust? Is it considered in the context of the sensitivity of the particular use? Organizational: are there members of the board of directors who actually understand security? The SEC wants that now. Unfortunately, my sense is that most even multi-million-dollar companies still do not really have a board member who knows much about security basics, and that is part of what I perceive to be an important threat to the sustainability of the technology economy. And Legal consequences: can the harms that result even be mitigated? If there is a mass casualty event, money is not gonna solve that. Those are people's families.
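Purely as an illustration of how this meta-model stacks on top of the per-product threat modeling you already do, here is a short sketch. The layer names follow the talk; the questions themselves, the dictionary layout, and the review function are illustrative assumptions, not an official checklist.

```python
# A hypothetical sketch of the TROLL threat meta-model as a layered review.
# Layer names follow the talk; every question here is an illustrative
# assumption, not an official checklist.

TROLL_LAYERS = {
    "Technology": [
        "Has regular, product-specific threat modeling actually happened?",
    ],
    "Response": [
        "Can an ordinary user who sees weird behavior reach a human and get help?",
        "Is incident response capacity matched to the sensitivity of the use?",
    ],
    "Organizational": [
        "Does any member of the board actually understand security basics?",
        "Is the security team resourced in proportion to the risk it reports?",
    ],
    "Legal": [
        "Can the worst foreseeable harm even be mitigated after the fact?",
    ],
}


def review(answers: dict) -> list:
    """Return (layer, question) pairs answered 'no' or left unanswered."""
    gaps = []
    for layer, questions in TROLL_LAYERS.items():
        for question in questions:
            if not answers.get(question, False):
                gaps.append((layer, question))
    return gaps


# Example: a company that only ever did the technology layer still shows
# gaps in every other layer.
answers = {"Has regular, product-specific threat modeling actually happened?": True}
print(review(answers))
```

The design choice worth noticing is that the organizational and legal layers become checkable in the same mechanical way the technology layer already is, which is exactly the integration the existing frameworks are missing.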
All right, a little bit of what should exist. We face this choice: Hoover Dam or St. Francis Dam? Cuyahoga or Centralia? So, as we ask ourselves whether we are pushing to advance suitability in design, substantiation in security, and sustainability for society, there's this question of what kind of a vision we are advancing for society through government as well. And here's where I'm gonna say something that is going to be controversial, no doubt. I think we need a technology regulator of last resort. Not of first resort, but of last resort. A coordinating entity that fills some of the gaps that exist and prevents gaming. And there's a lot of gaming by entities playing regulators against each other. You want coordinated, thoughtful action that keeps things smooth and, in particular, helps us look internationally, because the product deployments, of course, are in fact international. It also allows for a centralized scaling of technical expertise. There aren't enough security pros to go around for all the agencies in the way that we'd like. So having a centralized point (and I'm not saying to not have them inside agencies; absolutely have them), a particular point of focus for a regulator that has a core group of people to work with agencies and keep things smoothly moving, particularly when we're talking about supply chain security issues that potentially cross over many different sectors: that's messy stuff. This regulator needs fining authority, and it needs a robust toolkit, to avoid situations such as the ones we have now in, for example, transportation, where a particular car company is simply ignoring what regulators are asking it to correct. So, back to the same question from 1994 that these folks are talking about: it's about preserving those baselines.

And so with that, here's a hypothetical for you. I'll tell you a bit about this hypothetical. There's a site for inputting your thoughts; I'd love to hear your thoughts. And it kind of pulls together some of these moving pieces that I've been talking about. So: Stinkin' Haul, a garbage hauling company, transports garbage between states on behalf of government entities. It has self-driving trash barges with remote override capability, for safety reasons. Because of this IoT trash capability, and one human being on each barge, the company tells its investors: accident risk, litigation risk, it's low; we've got this, we're all good. The company is a public company and one of the three dominant players in its market. No officers or directors regularly ask about the state of security in meetings, so there's very little upper-level management discussion. The company has a dedicated security team that is under-resourced. Regular threat modeling doesn't happen in the way that the security team would like. They need more budget; they've been asking for it; they perceive themselves to be under-resourced. The company's slogan is "Be trashy. Our cyber barges keep it safe," with ads featuring raccoons putting garbage into a safe with blinking lights, floating in a swimming pool. A little humor. This is what law school is like, by the way. If you enjoy this hypo, you too may be a future law student or law professor. The security budget is less than 1% of the marketing budget. And so here I tried to create that ad, but it turns out that...
...the helpful tools on the internet (note what I did not say) cannot yet produce that advertisement for me. All right, so there's a barge. It's named The Putrid. It operates along the Atlantic coast. It is not great with its patching. It's staffed by a captain with a history of accidents and, unfortunately, drinking on the job. A remote shore attendant monitors 13 barges all at the same time. Attackers exploit a CVSS 7.0 vulnerability in the nav system. It's been known for a while; we know there are exploits in the wild. The attackers send the barge off course, and neither the captain nor the remote attendant notices. The Putrid crashes into a container ship carrying thousands of rubber duckies full of pink soap, which now float in the harbor, causing a soap bubble fiasco that kills wildlife, stops commercial traffic, and potentially impacts local drinking water. Tons of garbage are dumped into the harbor. All right: who is at fault, and what should the response be?

So here is a site. You can scan the QR code at your own risk. I believe it to be trustworthy, but it's DEF CON, and especially since people are watching this on the internet, I cannot promise someone hasn't helped themselves to a new QR code. So here is the URL, and I would ask you to take a sec, think about it, and put some thoughts into that page. The mobile version cuts off the last two questions, but I will pull up the page so that you can see the full questions. And I think I am running reasonably... oh no, I have another 10 minutes. Awesome. So we have 10 minutes for questions, for discussion of the barge, and for putting in your thoughts about this unfortunate rubber duck deployment and Putrid incident. And so, yeah, thank you for your attention. Please grab stickers, and if there are any questions, I'm happy to engage. And I'm hoping you might be kind enough to go to the mic, just so that the people of the internet can hear.

I want to make a distinction between the two realms of liability. When they talk about software liability, most of the discussions we're seeing in Congress and places like that are focusing on the software producers. And many of the examples, when you start talking about context, are, you know, software for an airplane or software for a vehicle: things that can do harm and that can be misused. The challenge is that when I build a kernel driver that could go into a plane or into a video game, I don't have any knowledge of the use down the road, and that distinction is lost.

That's the point. And I want to make that a clear point: when it comes to context, you can only go so far down that supply chain until you lose context with what is being developed, because the kernel driver developer could never build the driver if they had to deal with, say, car-based liabilities. So that is why the context of use, and what you knew and when you knew it, and what efforts you've made, matter. And that is why I think we really need to be nuanced in the way we approach this. And I think we can be nuanced in the way we approach this, but there isn't some one-size-fits-all, Y2K-like blanket we can throw over this to, you know, fix it. That's just, I think, going to take us in the wrong direction. So thank you for your comments. I agree, intent is everything.

So this is where we get into... so the question was, wouldn't this just end up in court, and wouldn't a finder of fact be able to adjudicate this? The way that a finder of fact would look at this (and this is why it's really case specific) depends on the context.
Let's say that there is a properly positioned, clear disclaimer on the website. That's a different circumstance than one where someone just posts something and says YOLO, or a third scenario where someone posts something and says: this is a research project, it is to be used only for this very narrow context. Right? So those nuanced analyses go to that context point and that intent point, and to what affirmative steps someone took to try to protect against the risks of harm that could arise in a repurposing scenario. Not to say that you have to protect against every single repurposing scenario, but you tell a story of care, and that is something that a finder of fact will look to.

That's gonna be a huge challenge for everybody.

Well, yeah... [partially inaudible] ...I think that this is partially why it would be really great to take a senior group of folks and to reach some engineering-code-of-ethics-like principles about doing no harm, et cetera, and to share those principles with young people, so that they understand that there are baselines; that if you wanna be a member of this thoughtful community of folks who approach life with these shared views, here are the ways that you can reach out for guidance, for example. Right? This is a structure that leads to apprenticeship, leads to mentoring, leads to communication. What the engineers did in my Hoover Dam story is that they stopped the bleeding on the trust erosion that happened after the collapse of the St. Francis Dam. And I think we're at the point where there needs to be an affirmative intervention from multiple directions, creating structures of building trustworthiness through humans. I think Chris was next.

My question interacts with the first two questioners', and I wanna ask you to deepen your explanation of the no-death principle that you're following, because all innovation... the no-death, the no-acceptable...

No, that's just my personal goal, one that other people don't necessarily agree with.

Well, that's exactly what I wanna ask you about: is it a good goal? Because all innovation carries... well, let me see if I can express the question. This is why I think we need to have the ethical conversation. So just think about pharmaceuticals: there's gonna be some known injury, and we put up with it. And then, specifically to the first question, the potential foreseeable loss, and the unforeseeable loss, from a software error could be fantastic in scale. And I'm wondering what policy levers we could possibly think about to deal with that, including: have you considered tinkering with the value of a statistical life?

So that's a classic tort question, right? The value placed on a life in the St. Francis Dam case was approximately $5,000 per life, in 1928. So, the conversation about the value that tort or other sources of law place on life: I mean, we should have it, sure, but I think it's fraught, because it has placed value, arguably, disproportionately on certain types of people, rather than valuing humans as having inherent worth on dignity grounds. So not everyone will agree with me that our goal should be killing, ideally, no one, or as few people as possible. But I think it's important to understand where people's flags are planted on that question when you're walking into policy conversations. I think the law does have a baseline of: your goal is to not kill people. Murder is, you know, obviously a crime, right?
The issue becomes a little slippery in, for example, securities regulation, where companies whose products have in the past resulted in the deaths of large numbers of children have not necessarily felt that those were reportable events in their securities filings, partially because the children's lives, particularly when they are in the global South, are not necessarily viewed by those companies as financially worth a lot. Now, you know, this is the ethical conversation that I really think we need to have. I don't like that position. I think all life has value. But without, you know, cards on the table, and in particular asking what kind of a world we want 20 years from now, we're gonna just keep going along and end up somewhere that is gonna look more like the St. Francis Dam, as I see it. That's not to say that we can't have a good conversation; we absolutely can. But I think these kinds of baseline questions need to be forthrightly discussed, along with the shared vision for the next generation of our society. I think we just haven't had that conversation. There are lots of dystopias, but can anyone articulate a utopia that everyone likes? I don't think so. Has there ever been a utopia? I don't think so. There are just shades of dystopia, right? But some dystopias are worse than others. I don't wanna live in some of those dystopias. And I think we need to get on the same page about which dystopias we don't wanna end up living in. And I don't think we're there yet, unfortunately. I think your hand was next, yes.

So the question was, what could a plaintiff's attorney do to reduce the legal tech debt? Well, thanks to the SEC, there's the SEC's new rule, when it kicks in, on accurate disclosures around security issues and on the presence of people who can address threat modeling and incident response, et cetera, from an oversight position. I would be reading those filings very carefully in the future if I were a plaintiff's attorney. Not yet. I think there's a lot of space for that.

I'm curious: the aviation industry has dealt with a lot of these sorts of questions from a safety standpoint. We don't define zero deaths as the goal; there's a very, very high bar for safety instead, acknowledging that there's inherent risk in flying, and that if we want zero deaths, we can't fly. By the same token, we've struggled with some of the same supply chain considerations, suppliers wanting indemnity, and so on and so forth. It's very difficult to make changes to software. So that seems very much like an end state resulting from some of these proposals. So I'm wondering if you'd use that as at least a conceptual model for what you're proposing.

So I think the context of aviation and the history of aviation are really interesting, and it's a history that I'm teaching myself, among other things: the way that you've handled it. The redundancy structures that exist are noteworthy. We don't have the same level of redundancy for failure situations that aviation has traditionally had. For example, something that an aerospace engineer explained to me: the reason that there's a co-pilot, it's not really to fly the plane; it's partially to ensure that there is a second person there to calibrate the inputs for the pilot, and a technology can't read a human, and the needs of a human in a particular space, the same way that the co-pilot can. And the last data I saw, there was a relatively high rate of pilots changing defaults on what exists in the cockpit, in terms of how their systems operate.
And we know from the Therac-25 incidents that defaults are not necessarily correct out of the box, and defaults can kill people. So there's this very complicated relationship, particularly in the aerospace context. I agree with you; I think it's a great example to study in depth. I think aiming to not hurt people, in general, is a good goal. We won't necessarily achieve it, but I think that's what, at least, my version of an ethical society does: you aim to not hurt people. You do no harm. So, and we're out of time, unfortunately, but please stick around. I'm happy to take your question if they won't kick us out of the room, please.

Okay: you can't build a house without hiring an architect. You can't build a bridge without hiring a civil engineer. These are all certified professions. Are we headed toward professional certification for software engineers?

So I'm not sure that the model that exists in those professions is the perfect fit for software engineers or for security engineers. And I think that the profession itself needs to figure out what the best model is. There are lots of different models out there. Maybe about five years ago, a co-author of mine and I did a talk at BSides Las Vegas that had about four people in the room, which I understand, but I'm going to, I think, reinvigorate that talk. It looked at, I think, 15 different professions and the way that they self-manage: the way that they do entry training, leveling up in the profession, and management of violations of shared norms and rules. The legal profession is certainly not ideal, but we've been kicking a few folks out of it lately. So there is something to be said for a group of like-minded professional folks setting up a structure where those ethical baselines have some teeth behind them. And it's not that other people can't do stuff, but they just don't belong to this group of people who share these public commitments. So I would expect there to be, say, a group of CISOs who get on board with a core group of tenets that they subscribe to. Yeah, it's like, software is like writing: you keep at it and you grow through your craft. But it's a different situation when you're tinkering with your website versus when you're coding up critical infrastructure applications that people's lives rely on.

Well, we are out of time, but there are still some stickers up here. So please help yourselves, and I'm happy to chat with you.