I want to thank Eleanor for being here today; she's going to talk to us about how to think about complex adversarial systems. Please give a big round of applause for Eleanor.

Thank you very much. It's been a lot of fun so far and I'm really looking forward to the rest of the conference as well. So I'm going to talk for a while, and we should have some time for questions at the end. Because a lot of what I'm talking about is theory, if there is something where you want an example, call out right then. Hold everything else to the end, but if you want an example for something I'm saying, because I forgot to give one because I spend too much time in this space, just shout out.

So we talk a lot about specific systems and specific problems and specific software stacks, but what we don't end up speaking that much about is the meta level: thinking about the larger scope of the problem. If you want your work to scale, whether you are a defender or an attacker, you need to think about that stuff. You need to think about the larger structure of the problem. Now, most of the time I really only speak to defense. That is where my background lies, and that is where most of my interest lies. But when you're looking at adversarial systems, it actually doesn't make that much difference; there is a lot more symmetry to the problem structure than we normally think about.

So I'm going to start with some definitions. I will try to keep these brief, but I think it's useful to set the stage a little for the scope of what I'm talking about and for the way I think about security. Security is the set of activities that reduce the likelihood of some adversary frustrating some set of users. You will notice that "computer" does not appear anywhere on this slide. That is because security is not actually about computers. We were slightly misled when they called the field computer security; the basic issue is that what we care about is outcomes for humans in the world, right? That is why we do all of these things. And when you think about the security problem from the humans down instead of from the bits up, you end up looking at the problem pretty differently. Among other things, you realize that security is just another property of systems, like performance, like reliability, et cetera. It's just another thing that comes out of systems, and it can only be evaluated in context. Is this system performant? Well, I don't know, what are you trying to do with it? I've got one box and it can send 100 emails a minute. Cool, great. You're trying to run all of the mail for a Fortune 50 through it? No, you have a problem. Security is in the same place.

So when we talk about complex systems, we are talking about systems that are mostly created by composition from smaller individual nodes. If you're familiar with the terrain of the complex systems conversation, this is the Annapolis complex-systems model, not the Santa Fe strange-attractors model. We're talking about systems where you can analyze the individual components, but not necessarily the outcome. And socio-technical systems are all the systems that we care about for security. Really, socio-technical systems are all the systems that we care about, because if someone cares about them, there is a person involved in the system. Socio-technical systems can still be analyzed in structured and rigorous ways. You can look at social structures.
You can look at behavior both at scale and at individual levels, but these systems are significantly weirder than purely mathematical or purely technical systems, basically for all the reasons that people are weird. When we talk about adversarial systems, and all adversarial systems are socio-technical, we are talking about systems where you have multiple actors with conflicting goals. Again, this is almost all systems, right? There are almost always multiple actors in any real-world system, and there are almost always going to be goal conflicts if humans are involved.

So this is an exploration of a space of thinking. I do not want anyone to come away from this with the idea that, oh my god, there are all of these things I'm not thinking about while I'm securing my relatively small startup network. That is not what this talk is about. This talk is about tools for thinking at scale. If you are in a position to start bringing some of these things into your work, wonderful. They can shape the way you think about things, they can make it easier to do work, they can prompt you to do stuff. This is not a talk about things you must all be doing before you're even out the door. Get the basics right first, then start thinking bigger.

So, on to the lessons. Also, you're going to get a lot of otter GIFs; I decided that was my theme for this talk, so you're going to get ridiculously cute animals. Absent luck and a few other things around starting conditions, all adversarial situations are fundamentally resource conflicts. Take two roughly equal parties with nearly identical starting positions. If you look at chess, the two sides are not technically completely identical, but they're very close, closer to identical than we ever get. Both play by the same rules; there isn't a structural advantage for one side or the other, just a structural difference in the way the two sides interact. But if you take a chessboard and give one side 20 pawns and the other side none, the side with more pawns is probably going to win the game. It is a resource conflict when other things don't interfere.

Now, not all resources are interchangeable. If you have a terrain advantage, if one side is trying to fight uphill into the sun and the other side is fighting downhill, that is not necessarily something you can just convert into "oh, well, it's the equivalent of this many more people." You can look at it in terms of statistical outcomes, but the shape of the way those conflicts move is going to be very different; they're not commensurate resources. Money and person-hours tend to be, though. You can trade money for time and time for money, and in many contexts that actually works fairly well.

When we are building systems, we need to understand the resources that are actually in play. Our goal in system building in an adversarial context is to build asymmetric problems for our adversary. You want to pick fights that are easy for you and hard for your adversary, never the other way around. And we don't actually understand the resource balance between attack and defense in computer security. This is actually really fascinating to me, the depth to which we don't understand it.
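To make "build asymmetric problems for your adversary" a bit more concrete, here is a minimal sketch of one everyday designed asymmetry that is not from the talk: password key stretching. The legitimate side pays the hashing cost once per login; an adversary who steals the hash database pays that same cost again for every guess against every account. The parameters and names below are illustrative only, not a tuning recommendation.

```python
import hashlib
import os

# One everyday example of designed cost asymmetry: the defender pays the
# key-stretching cost once per legitimate login, while an attacker who steals
# the hash database pays it again for every guess against every account.

def hash_password(password: bytes, salt: bytes) -> bytes:
    # scrypt is deliberately CPU- and memory-hard; n, r, p set the cost.
    return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1,
                          maxmem=64 * 1024 * 1024, dklen=32)

def verify(password: bytes, salt: bytes, stored: bytes) -> bool:
    # Defender's cost: one scrypt call per login attempt.
    # (A real system would also use a constant-time comparison.)
    return hash_password(password, salt) == stored

if __name__ == "__main__":
    salt = os.urandom(16)
    stored = hash_password(b"correct horse battery staple", salt)
    print("legitimate login:", verify(b"correct horse battery staple", salt, stored))

    # Attacker's cost: the same expensive call repeated for every guess in a
    # wordlist, for every stolen hash -- that product is the asymmetry.
    wordlist = [b"password", b"letmein", b"hunter2"]
    print("wordlist attack :", any(verify(g, salt, stored) for g in wordlist))
```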
All of us, if we've been working for a while, probably have some degree of intuition about resource balances in specific regions of the problem space that we've seen. You have a pretty good idea that if the problem space is Mossad versus your grandmother, your grandmother is going to get owned by Mossad very quickly, and it doesn't really matter what she does: it is not going to cost the Mossad agent ten times as much to own your grandmother as it costs her to defend, even if your grandmother used to work at Bletchley Park. It's a scale problem. But there's a lot of really interesting work coming out that looks at the economics of vulnerabilities at scale, for instance whether you can actually force an adversary into depleting a zero-day pool over time. And while in the small, for small networks, things currently seem to be fairly attacker-biased, it's not clear that's actually as true at scale as we think. So there's a lot of interesting research to follow there.

So, intuition. Any complex adversarial system that's being run with real stakes is going to be too complicated for you to actually understand in one go, right? Think about a cell phone. What is the software stack running in your cell phone that determines whether or not it is going to leak location data under a given set of circumstances, and who it's going to leak location data to? One of the things I just found out today: apparently Qualcomm gets a real-time location feed off your phone. It's part of the assisted-GPS location trustlet that you can't even see from within the host OS, right? There are more layers. So even if you have a very good understanding of, okay, this is what Android OS security looks like, how is your baseband security, et cetera? There are always more layers than we can keep in our heads in full detail. Rigor helps: you can write down the list, okay, these are all of the possible channels, et cetera, and go through it. But you probably don't have time to do that work rigorously in the field. Sometimes you do, but a lot of what you need is intuition to guide it.

Intuition comes from breadth of experience, not depth of experience. One of the single best things you can do to improve your intuition for how systems work is to go look at more and more examples of systems. While the work was sometimes incredibly boring, I spent probably the first eight years of my career mostly doing application review, doing code audits. That meant that every two weeks I got a completely new system to sit down with and stare at until it made sense. And I didn't have time to read all the code; you never have enough time for the actual scope of the audit. So you get very good at learning new systems quickly. That is where the intuition comes from for: okay, I'm staring at a thing, what are all the channels, and which are the interesting ones?

Intuition is about finesse and about shifting resources. It can shift the cost of an attack. It can shift the speed of response. It can shift the speed of audit, et cetera. In chapter three of the Zhuangzi, there's an allegory about a butcher. This butcher is cutting up an ox for some duke, and as he's putting on this amazing performance of things just falling apart while barely being touched, he says: a good cook needs a new knife once a year. You sharpen it and eventually it wears out, because they cut cleanly.
A common cook needs a new knife once a month, because they hack, they hit bone, and they chip the knife. And an amazing cook? Well, you just put the blade where the bone isn't and where the animal isn't, and it just moves, right? That is the thing you are looking for with intuition: that ability to just know the right place in the problem. If you only study from within security, you are never going to learn very much, because we are very bad at metacognition as a community; we do not think very much about how we think. Steal ideas from elsewhere as much as you possibly can. I've gotten some great stuff from chemical engineering, from aviation, from medicine. Any time you are looking at systems reliability, team organization, these kinds of problems, there are a lot of people who've had those problems before. Unfortunately, there are not actually very many well-theorized adversarial problems out there. You can find a lot of texts talking about adversarial structures in a business context, but they're mostly fairly basic; they don't get very far into thinking about the structure of the thinking. Similarly, a lot of this stuff comes out of the military, but then you end up with the military mindset getting applied to everything, and I apologize for using that language in a lot of places here. I don't think it's a very good model, but it is the best one I've had for places where there is actual theory around the structure of adversarial systems.

So there are four kinds of flaws that you find in complex systems, or rather in the components of complex systems; you can get all sorts of emergent weirdness out of them, and they combine in all sorts of weird ways. First, there are weak primitives. This is basically: you've got a lock that turns out to be a terrible lock, you've got a weak encryption algorithm, you've got a math problem that is not as hard a math problem as it is supposed to be. That is one category. Second, you have subsystems that have negative asymmetry for the defender around resource use, some kind of denial-of-service amplifier. Third, you have elements that allow state transitions their state machine isn't supposed to allow, or that let you start from an illegal state. And fourth, you have places where two different elements in theory both implement the same interface, but actually implement something different: you've got a mismatch. Every problem at the component level resolves into one of these things.

One of the things that is really nice about this for defenders is that two of these shouldn't exist and don't need to exist. We can, with some care, basically eliminate bad state machines and interface mismatches. How many people here have heard of LangSec? The Church of LangSec, Pastor Manul Laphroaig presiding? There's an entire group of people that's basically looking at: what if formally provable security, but actually useful? And basically saying that it turns out a parser is as dangerous to write as a cryptographic algorithm, and you should not let normal people write parsers. And by normal people, I mean basically all of us. Whenever possible, you should let computers write parsers and have humans write descriptions of the things that are supposed to be parsed, because that ends up in much, much tighter parsers. And you eliminate half of the categories of vulnerabilities that way.
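As a toy sketch of that "humans describe, computers recognize" idea: the record format below is invented purely for illustration, and real LangSec practice derives the recognizer from a formal grammar with a proper parser generator, but the shape is the same. A declarative description of the input language, a recognizer built mechanically from it, and hard rejection of anything outside the language before any processing happens.

```python
import re

# The human writes a description of the input language; the recognizer is
# derived mechanically from it. The "name=value;..." record format and its
# fields are invented for this illustration.

FIELDS = {
    "user":   r"[a-z][a-z0-9_]{0,31}",
    "action": r"(?:read|write|delete)",
    "count":  r"[0-9]{1,4}",
}

# Derived, not hand-written: one anchored, full-match-only pattern per field.
_RECOGNIZERS = {name: re.compile(rf"\A(?:{pat})\Z") for name, pat in FIELDS.items()}

def parse_record(line: str) -> dict:
    """Accept exactly the described language; reject everything else loudly."""
    out = {}
    for part in line.strip().split(";"):
        name, sep, value = part.partition("=")
        if not sep or name not in _RECOGNIZERS or name in out:
            raise ValueError(f"unexpected field: {part!r}")
        if not _RECOGNIZERS[name].match(value):
            raise ValueError(f"malformed value for {name}: {value!r}")
        out[name] = value
    if out.keys() != FIELDS.keys():
        raise ValueError("missing required fields")
    return out

print(parse_record("user=alice;action=read;count=42"))    # accepted
# parse_record("user=alice;action=read;count=42 OR 1=1")  # raises ValueError
```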
This is not so easy with humans. You get these same four kinds of vulnerabilities, but it is really difficult to have a machine implement a human's parser. We are sort of trying it with things like Mechanical Turk; it's not working very well, and I do not recommend that route.

So, all systems have three components: infrastructure, structure, and superstructure. Structure is the thing the system nominally does; you have an application that's supposed to accomplish some work in the world, and that's the structure of the system. Infrastructure is the thing we normally think about attacking and defending; that's where a lot of us spend all of our time, down in the infrastructure. Superstructure is the political and management and development framework around the thing. All of these layers have technical and social response structures. Actors who are involved in the system have incentive models at each of these layers. You can attack and defend at all of these layers, separately and together.

How many people here are actively defending some systems? How many of you know the incentive model for your adversaries? Like, can you tell me what their ROI model is? A few? A couple? Awesome, that is surprising. Oh god, I forget his last name. Chris at Dyn, ask me after, did some really interesting work. He was curious: okay, we have these scammers who are trying to get free DNS hosting service from us; what do they actually get out of this? So he went through the entire supply chain and figured out their ROI model, who they sold to, who those people sold to, and figured out all of the closed loops in the system. And from that, you can do really interesting things. You don't necessarily have to intervene at the infrastructure level to make the attack they're doing more difficult; you can just make it not profitable. It doesn't fix your problem, but it does make your problem go away, which is sometimes what you actually care about. You care about the outcome more than you care about technical perfection. Political motivations at the superstructure layer are a resource, too. If, on your side, you can actually offer some kind of political and ethical and moral inducement to the people you're working with, and your adversary has nothing to offer their team but money, you are in a much better position. Motivation and team structure make a huge difference to outcomes.

So I'm going to talk a little bit about trust. Who here knows everything they trust? Good. We have gotten very, very, very lazy about what we actually trust, right? We have DNS. DNS isn't trustworthy; it mostly works, but it's not trustable. The CA model is a joke. Almost no one can actually prove that the OS distribution they're running hasn't been tampered with. It mostly works; it's kind of okay. And that's fine too, right? Certainly individually, we can't fix those problems, even in production systems that we care about. But if you forget the things that you are trusting that you shouldn't actually be trusting, they will eventually bite you. Remember those things. Know where you can and can't trust. Know the shape of that trust. Trust is not a binary structure, though we often build systems where it is.
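Here is a minimal sketch, not from the talk, of what a non-binary trust decision can look like in code. The function, fields, and thresholds are invented for illustration, and the password is deliberately only one signal among several; the concrete scenario it's gesturing at comes next.

```python
from dataclasses import dataclass

# Invented example: deciding whether to honor a bulk export of confidential
# data sheets. The password check is only one signal; device, location, and
# request volume shape the decision too.

@dataclass
class ExportRequest:
    password_ok: bool
    managed_device: bool          # company-controlled endpoint?
    geo_matches_itinerary: bool   # request origin consistent with where they should be?
    sheets_requested: int
    typical_daily_sheets: int

def decide(req: ExportRequest) -> str:
    if not req.password_ok:
        return "deny"
    unusual_volume = req.sheets_requested > 10 * max(req.typical_daily_sheets, 1)
    if req.managed_device and req.geo_matches_itinerary and not unusual_volume:
        return "allow"
    if not req.geo_matches_itinerary or unusual_volume:
        # Right password, wrong context: don't just hand over the data.
        return "step_up_auth_and_alert"
    return "allow_with_extra_logging"

# Desk in Portland on a managed laptop: fine. Same password from Minsk asking
# for a thousand sheets: flagged, even though authentication succeeded.
print(decide(ExportRequest(True, True, True, 40, 30)))
print(decide(ExportRequest(True, False, False, 1000, 30)))
```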
Say you've got a big company with a bunch of fairly confidential sales data that traveling salespeople need to pull down. Somebody asks for 1,000 confidential data sheets and they're sitting at their desk on a company-controlled PC? Cool, give them the data. They ask for 1,000 data sheets and they're supposed to be in Portland, but they're actually coming from Minsk? No. Even if the password is right, it doesn't matter; they're not getting those. Most auth systems can't tell the difference. Build systems that understand trust more deeply.

Also, understand where your authoritative data sources are and under what circumstances they're correct. How many people here have dealt with a corporate environment that had more than one canonical source of truth for users? Yep. Okay, now you need to remove a user in a hurry. Where do you do it? Try really hard to only ever have one canonical source of truth at a time; it will make your life much easier. Also know where your single points of failure are. Sometimes it's fine to have single points of failure; they're not necessarily bad. Sometimes you want to put all of your eggs in one basket and then guard that damn basket. Fine, maybe you want it to be a redundant basket, but it is not necessarily the case that you always want to eliminate SPOFs. Just don't be surprised by them.

So, time is a fundamental resource in all adversarial systems. Any time you have two parties actually in conflict over a real system, that conflict is occurring in time, and the outcomes change depending on when things happen. How many people here have heard of an OODA loop? John Boyd? A couple, okay. This is another military-sourced thing. Boyd was, I forget if it was Korea or Vietnam, I think it was Korea, a highly decorated pilot who ended up teaching American fighter pilots how to fly for quite a while, and he came up with basically an entire doctrine for how dogfights actually work. As part of that, he came up with this model for how people think: observe, orient, decide, act. When you are in a time-contentious situation like a dogfight, one of the things that determines the outcome of the interaction, all else being equal, two pilots in the same planes at the same starting altitude coming in head-on, is that the person who can think faster is going to win. Alternately, the person who can turn faster, the person who can maneuver inside the other's turns. In a dogfight it's very literal, because you're looking at gun-sighting lines. But for system response, if you can respond at the same depth faster than your opponent, and you are in an otherwise symmetrical conflict, you are going to win.

Now, that means we can design for speed. The responsiveness of an operating structure is a thing that we can design for and optimize for. We are already doing this, and we're doing it for reasons that have nothing to do with security; they have to do with the return-on-investment rates that venture capitalists want. If you look at a lot of the work in Agile and the whole DevOps movement, that kind of stuff, a lot of what they're doing is designing for response speed. They are dealing with an adversarial system, because capitalism is adversarial, and they are designing to be able to turn inside their adversaries.
If you have teams that are responsible for the security of the code that they look at and poke at every day, and they have the tooling to let them practically respond to threats and breaches there, they are going to do much better than a siloed-off security team that is responsible for the security of that code but doesn't interact tightly with the people writing it. The communication latency, the latency of loading all the state of "okay, wait, what is this thing that's getting attacked?", will slow down response significantly enough to make a difference in outcome.

Let's talk about terrain. Unlike physical conflicts, we are not stuck with a specific piece of terrain; we get to shape where we fight. Now, often by the time we get involved it's a bit late, because terrain shaping is mostly a function of business models. Your business model chooses the terrain that you fight on and that your adversaries fight on. However, it's not an all-or-nothing choice. If you're a marketplace selling something online, there are a lot of fine-grained decisions: how do you pay vendors? What's the time frame for payment clearance versus incoming payment processing, et cetera? That massively changes the entire game for fraud. Now, we are rarely in the room when those decisions get made, mostly because we don't know how to speak the right language, and also because we haven't asked to be, because most of us didn't realize we wanted to be in that room. Understanding how to talk to your business about the security impact of business decisions, and of technical decisions that shape user behavior, is critical. You get to design the conflict you would like to have instead of accepting the conflict your adversaries decided you're going to have. You get to force your adversaries into a position that's more favorable for you. If you are in a startup, or if you're joining a security team for that matter, think very carefully about committing to terrain before you understand it. Don't build a business model without actually running through it from a security perspective.

So, switching gears a bit. Yes, you need to focus down into the trenches. You need to go read all of the code, you need to go hunt specific bugs, you need to patch things. You also need to understand the context of the task that you're doing. Your brain will not do this on its own; we are very, very bad at that kind of dual-stream thinking. So help your brain do it. Help yourself visualize the strategic space of security response. Ensure that your team has a shared understanding of the context of their work. Draw pictures and shit, literally, whatever it takes. Have a big picture on the wall of your response model. Hopefully it's some kind of closed loop; if it's not, you should think about that. Enable decisions at the edge. This is one of the ways you get faster decisions: you let the people who have all of the data make the decision. In order for them to do that, they need to have the context so that they're going to make the right decision. They will screw up sometimes. Let them screw up; that is how they get better at making those decisions. And there is no value in punishing them, because you are probably not going to make a better decision yourself: you don't have the context that they have.
You would just fail in different ways, so allow the failures that they make.

So, you don't want to think alone; you need tools to think with. You need principles. Some of these are principles you're familiar with, that you've seen: availability, confidentiality, yes, these are things we think about a lot in the context of systems that we build. There are a lot of principles you can optimize for in a system, and this is a fairly small selection. Figure out the set of principles that are useful for the kinds of problems you're dealing with, and then ensure that the entire team working on those systems understands the principles you're designing to, so that you get systems optimized for those principles across all of the little micro-decisions that every developer makes every day.

So let's talk a little about information and tells. Even if you don't go all the way out into corporate-espionage, deep-state woo land, your adversary watches what you do in the same way that you watch them. If you run a website that has accounts, and you've got somebody trying to take over those accounts because they have some monetization model, and you've been checking the logs and started tracking down a few adversaries and blocking them, then they're going to notice: oh, I just lost access to 50,000 accounts that I had cracked into. And then they're going to change strategies. That's fine; the fight is a process, not a single static point. But maybe you don't want to just act. Maybe instead you'd like to explore what their response is: go up a level and try to actually model, okay, how fast is this guy responding when I change something? So you do a bunch of little tweaks that aren't likely to lose you access to the adversary; you don't tweak any of the signals you're actually using to find them. You send some fake signals that they'll react to, and then you watch their reaction pattern, and you learn some interesting things about how your adversary is moving through this space.

Maybe you'd like to train your adversary to do something that's more useful to you. You've got two kinds of accounts, high-value accounts and low-value accounts, and you know that you can't get rid of account takeover entirely because of various things about behavior in your user base. So you train your adversaries: if they go after high-value accounts, you destroy everything they've got; if they go after low-value accounts, you ignore them and mostly just revert the stuff they did, but you let them keep some access. And so you train your adversaries into who they can touch. Is that great? Probably not; that's not the relationship you might like to have with your users. Is it better than just letting the adversaries get more access and do more harm? Yeah. You don't have to play a single game here.

Attackers, remember that you have infrastructure too. If you do offense, you still have things to defend. Don't forget to patch it. Don't write bad code in your exploits because you think you don't have anything to defend. Defenders, remember that they're going to totally forget to do this. Their infrastructure will be really, really bad; you can probably knock it over by blowing on it.
Don't do anything that's a felony, but be aware that the systems attacking you are also systems.

So, stuff will break. Things breaking is a reality. Then what happens? You have a team that gets woken up in the middle of the night, and they try to do something. What happens then depends a little bit on the technical problem, but a lot on the human structure of the team. What you want from your team in that moment, at 3 in the morning, is adaptive capacity. You want them to be in a position where they can deal with the fact that the system, and the context they were operating in, has just changed significantly. That means slack, and not the crappy chat client that logs everything forever. It means you need smart people who have strong, rich relationships with the other relevant smart people on the team. It means they've been on vacation recently. It means their to-do list is not a Cthulhoid nightmare and they actually have time to sit and think about problems. If you are running all of your development teams, and everyone else who might be involved in response, at 100%, you have no adaptive capacity. That is a very, very dumb risk-analysis decision. Also, it will burn out your employees, and hiring is way more expensive than you account for. So build in slack. Build in slack because it is a critical security issue if you do not have slack in your system.

Measuring slack is really, really difficult. If you think you're going to have a dashboard with a slack metric on it that's tied to how many tickets people have filed and their vacation latency, it's not going to be useful at all. This is a thick property. You measure it with qualitative data, not quantitative data. It is worth keeping track of, it is worth thinking about, but don't think it's something you can deal with quantitatively.

Also, this is why you don't automate everything. Automated systems do not have adaptive capacity, because adaptive capacity comes from people. What you want are orchestrated systems that have a human in the loop, where someone can actually think about what the response should be and shape outcomes. That human in the loop is probably one of the most important things in your entire security system. Because, among other things, you know what an automated system that makes your infrastructure change and mutate aggressively is? It's a weapon. It's a weapon that you've pointed at your own infrastructure and have now allowed an attacker to trigger arbitrarily, in ways that may do amazing amounts of damage. So maybe don't do that, and maybe have a human in the loop who says, hey, the system just said I should shut down all of our production clusters because something something; I'm going to not do that, I'm going to go think about this for 30 seconds before I take the entire business offline. This is a thing you'd like to have.

So, speaking of rate limiting and time: it's said that Lisp programmers know the value of everything and the cost of nothing. Engineers in modern environments, especially cloud environments, have the opposite problem. They know the cost of everything and the value of nothing, because they've got great, detailed AWS stats, but they don't necessarily understand the actual overhead and headroom they need in terms of the human problems, because it's super easy to just throw something in an auto-scaling group and not think about it.
Let's say you've got some core transactions database with a bunch of information that's totally unencrypted, for some reasonable technical reason that was annoying to deal with at the time, and it needs to serve 50 transactions a day from the tech support team. And it's in an auto-scaling group. Congratulations: an adversary just exfiltrated half a petabyte of data in 20 minutes, because that auto-scaling group scaled right up when you actually needed 50 transactions a day. If you can limit a resource to only what the business actually requires, you are not providing tools to your adversary. Excess capacity is a capability for your adversary, so any time you can get rid of it, you are making your system more secure. And if you constrain capacity, you have the good transactions and the bad transactions; the bad transactions might squeeze out some of the good transactions, but then people are going to complain. If you know that you never exceed 50 transactions a day in that database, and you have a middle box sitting in front of it doing some really annoyingly slow crypto basically just to slow the system down, and all of a sudden you can't get 50 transactions a day out of that machine because an attacker is trying to get 45, your customer support team is going to complain that the system is suddenly super slow. And that's great, because you've now noticed an intrusion that you otherwise wouldn't have noticed until that database got exfiltrated in minutes. Slow is better. This applies in a lot of places.

So, for a while, and I haven't checked in on the current status of this, the Tor team was playing an interesting game with China. China kept trying to block Tor, and Tor looks a fair bit like TLS, so the Tor team would change something, then China would find a new signal and block on it, and back and forth. Now, the Tor team had a list of 42 different signals that could be used to distinguish Tor from TLS. They had already gone through the entire list and figured out all of the possible responses. But instead of just patching all of the bugs, they patched them one at a time, because there was latency in the response from China, and in that latency the network kept working. It was much better for them to force China to respond incrementally, China not having done the mapping, than it would have been to fix everything at once. This is the same thing: this is excess capacity. In this case, since China is the defender in this context, it's capacity that an attacker doesn't want to give away to a defender, because it would harm their ability to continue operating in the space. So this goes both ways.

So let's talk about measurement. How many people here have some kind of program where they measure security risk? You can't measure security risk. You can measure security exposure; you can measure how much it's going to suck when you get owned. But if you want to measure risk, you are measuring the probability that this guy, Igor, is going to wake up Tuesday morning, get really annoyed at you, and decide to come take a swing. Now, if you know Igor, if you are in a position where you know all of your adversaries and you have significant threat intelligence, you can probably actually do this; you can go play that game. You are not in that position.
Almost no one is in that position. And the organizations that are big enough to actually have that degree of threat intelligence, and that aren't just totally lying to themselves about the game they're playing, having spent a bunch of somebody else's money on threat intel feeds they can't understand while they still haven't locked down their authentication database: any organization that's big enough to actually be playing that game is too big to understand the entire organization. It can't understand itself. It's already big enough that it's playing a statistical game. Now, if you are being attacked at statistical scale, if you are Google or Facebook or Amazon, you can start saying statistical things about the probability distribution of a portion of your attackers. If you're eBay, you can say that about a lot of the business-level attackers, because you've got enough of them. Those are places where you can measure risk. If you don't have enough data points to make statistically valid inferences, don't lie to yourself. Do not measure things that are not real; it will get you in trouble.

Measure exposure. How much is this going to cost if I get owned? How much is it probably going to cost to launch the attack? How much is it going to cost to clean up from the attack? How much is it going to cost to fix this bug now? That gives you a nice set of cost ratios, all of which are things you can measure, or at least approximate fairly accurately. You are not making things up and then making strategic business decisions on that basis.

Also, be aware that when you measure things, you change reality, and the bigger you are, the bigger the change in reality. If you decide, as a marketplace, that you are going to penalize vendors for some set of actions, those actions are largely going to vanish. Now, the meta-behavior you were actually trying to fix may or may not change, but because you are now measuring a thing and acting in the world on the basis of that measure, you have changed the system that you're measuring. So you need to understand the impact of measurements. Get some folks who actually understand statistics and field studies; hire some ethnographers, hire some sociologists. There are people who have literally done this; there are entire disciplines about doing this. Also, please learn qualitative methods. Not everything is a number, and a lot of the things we care about are not numbers. Those can be measured too. Again, there are entire disciplines that can teach you how to do this, and it will make your life much easier.

Over a long enough time, the likelihood of system compromise is 100%. All data is eventually deleted or public. When you are designing systems, assume compromise. Assume the system will fail. Assume that all of your countermeasures are not going to be sufficient, and then figure out what the response is going to be. Maybe not all of it in detail, but actively design for compromise. Maybe you don't design for a total compromise of the business, because that's too expensive and the response is, well, we all just go hang out in the park, drink what's left in the beer kegs, and go home. That's fine; that's a response. But any computer that you have is probably eventually going to get compromised.
So you can't have a computer that can't be compromised, and you need to think about every individual technical system, every individual process, every individual business process that you operate, and have thought about how you want to design for that thing failing. If you are playing offense, or if you are playing some of the more aggressive versions of defense where there are nation-states and pointy things involved, the consequences of failure aren't just people losing jobs, and you need to think about this very, very carefully. Anonymity systems in particular: if your work, or your ability to not go to jail, depends on anonymity or unlinkability of some kind, you really need to think about that, because it will probably eventually break. It might not, but understand what will happen when it does. Also ensure that everyone inside your organization or your team or your side or your set of stakeholders or whatever has a shared tolerance for exposure. You do not want one guy on your team who's super cavalier and is going to do stuff that's going to get you owned in ways that everyone else isn't willing to deal with. You need to actually agree on this. Making sure that everyone has at least compatible tolerances for exposure, has talked about it explicitly, and understands how to evaluate their own exposure tolerance is a fundamental business strategy choice.

So here's just a quick review of our 14 lessons. I'll leave this up for a minute if anyone wants to take photos; I see a couple of people. And yeah, any questions?