This is probably the only talk you're going to get here from somebody who's absolutely unqualified, in terms of their academic background, to speak to this audience whatsoever. So I'm going to look here at some history and trends, and try to look at what I think some of the interesting research frontiers going forward are going to be. If I start by looking at the problem of just doing a cryptographic operation, historically we were really limited in the amount of computation we could do. And so you've got these various manual schemes, the Enigma, the M209, various mechanical sorts of cipher machines. And this limitation on computation really limited security. But when things scale, as I think all of you know, that favors the defense from a cryptanalytic perspective and makes offense really hard. So you go from single DES to triple DES: it's only three times as much computation, but 2^56 times as much work to do the attack. This kind of scaling means that basically the defense wins. And I'm going to make what is on one hand the absolutely safest prediction and kind of an interesting one at the same time: AES-256, I think, will absolutely never get cryptanalysed in terms of classical cryptanalysis. So we've reached this point where the algorithms have won.

Obviously, though, if we map the winning of the algorithms over to the real world, you can see just a huge number of different breaches occurring. Security isn't going well in practice, despite the fact that we have these algorithms that have won. And you can think, oh, well, that's a security problem; we're crypto people, things are actually OK in the crypto world. And no matter how you look at it, they're actually not OK. If you're trying to protect a key that's going to protect your Bitcoin, I mean, that's a pretty easy problem: how to hold a key so somebody doesn't steal it, and maybe do an occasional transaction with it. But look at the number of successes at holding high-value keys versus failures. In the places where we actually get data, like Bitcoin, where if somebody steals your coins you can actually see it, in the places where we get that kind of feedback, we can see that we're failing terribly. And even in the places where we don't usually get feedback, the leaks coming out of the NSA are telling us that we're failing in many of those areas as well, at least against that kind of adversary. So our world is not a very good one right now.

One of the things that I like to do is look to other fields and see what's going on there, what the history is. If we go back to the Middle Ages, if you needed an amputation or you needed a surgery of some kind, your physician wasn't going to do it for you. He was going to send you over to a barber, because the barber was the guy who would go chop stuff off. The physician didn't want to deal with that: that was people screaming, and messy, and the patients often died. And so you had this divide between the academic physicians and the barbers who went off and actually did a lot of the applied stuff. And we are kind of in this same era of history. So if we look at this, practice right now yields some really, really bad outcomes, and some that are good for inexplicable reasons. You may be getting the right outcome for reasons that we don't truly understand.
And if you look at research, I think at least the practice people would say that research is really painfully divorced from practice in many cases. Not all, and this has gotten a lot better in the 20 years that I've been involved in this community, but it's still a major issue. And so what ends up happening is quite predictable. Theory is struggling with just the messiness of reality, and the theory isn't applicable then to the real world. And the practice then ignores the theory, because if you're a practice person and you attend a conference and you can't even understand a single word that was said, you don't really gain from that. And if you start to actually implement the fully homomorphic encryption algorithm that was just presented, which seems like it might solve your problem, and you get your code working correctly, and then you realize that it's going to be many, many, many centuries before anything returns, you haven't actually achieved anything useful. So you ignore all of that stuff. And so you get this gap. If you're a practice person, it's really frustrating to go and read some of the kinds of things that get written. And if you're a theory person, you get so frustrated with the kinds of questions that practice is asking. But in the same way that you had well-meaning people in the Middle Ages trying to do their best, practice has to go on. You can't just stop: if you've got a leg with gangrene in it, the amputation is still probably better than doing nothing. You still have to go and do your best to protect these systems that people are using.

So I came into this when this divide was, I think, even greater than it is today. If barbers can do surgery, pre-veterinary students can clearly do cryptography. And so I got curious about cryptography, discovered the sci.crypt newsgroup, which the old people in the audience will know what I'm talking about, and the young ones, maybe not. Discovered from there that there were interesting things in the Stanford Engineering Library: all of the proceedings of the old crypto conferences. And I went down there and discovered all kinds of interesting stuff, and met up with Marty Hellman in the Stanford crypto group. In parallel, I had this problem called tuition. And again, I shouldn't complain, because now it's gotten a lot worse. But I had to come up with enough money to pay my tuition. I discovered that, compared to working in the food service hall, doing crypto consulting was fantastic, and worked for RSA for a bit implementing things. Discovered some issues, like the RC4 related-key problem, kind of early on. Also learned about patents, for better and for worse. Those of you who pay my company licensing money probably wish that I hadn't learned about them, but at the same time, those are an important way to be able to monetize research, despite all of the warts in that process. I also got an interesting project that paid a lot of my tuition, working for Microsoft, where they were trying to do try-before-you-buy software. The CD format was relatively new; you could put lots of programs on it. The idea was you'd be able to distribute a CD to lots of people, and then the people who had paid for the software would buy unlock codes, or perhaps you'd even have some trial period first. And so I probably broke 40 or so of these schemes over the course of a few years. And that paid my tuition.
And the guy in the marketing department thought it was great, because he could just send these things to me and I'd send him back a little hack. And so we had this kind of thing that kept on going, and that basically paid my way through school. It also let me sort of peek behind the curtain. Because whenever you're new to a field, you have this idea: well, gosh, people who are doing this really must know what they're doing. They're old, they give invited lectures, they've got to be some sort of wizard behind the curtain. And peeking behind and seeing that even the companies that were doing as good a job as any weren't really sure what they were doing was kind of an eye-opening and interesting experience. And around when I finished my biology degree, Marty Hellman retired. He kept sending me projects, since he was no longer doing consulting, and that was the seed capital for my company, which started out by doing services: mostly breaking stuff, and then sometimes trying to pick up the pieces afterwards.

So there was a presentation at Stanford, I can't remember who gave it, introducing mainly linear cryptanalysis, which was relatively new, but also talking about differential cryptanalysis. I thought this was kind of cool, so I went off and figured I'd try to improve this and implement it and make it work as best I could. And it was really, really frustrating, because the correlations were just terrible. I mean, dealing with things that were 2^-50, when all you've got is a PC of that era, wasn't very fun. And I couldn't get the attacks to work. But I had known, from profiling the code that I had written, that there were these non-constant timing effects popping up, because sometimes the code would run faster than other times, and I understood the reasons why they were there. And that led to the timing attack paper that Benedict mentioned in the introduction. I'm not going to explain the attack here, because I think we've reached the point where everybody in this audience knows it, or if you don't, you can find the paper. But the one-sentence summary, though, is that given a whole bunch of timing measurements from a device with a key, you can use some statistical techniques to identify correlations that let you figure out what that key is.

Now, the implications of it are maybe a little more interesting. From just a practical perspective, this gave me the kind of strong correlations that I wanted: the kind of thing that I could go play around with on the PC that I had. So this was enormously fun compared to trying to make differential cryptanalysis actually work against a live cipher. And there are some things here that, when you look back in hindsight, seem absolutely, incredibly obvious, like the fact that even tiny side channels can expose keys. I mean, we have cryptanalytic techniques that exploit much, much smaller correlations. So when you get something strong, like a difference of many clock cycles that you can measure, that ends up being important from a cryptographic perspective. It also is now, I think, obvious to people that the real implementations that we have in the world aren't these tidy black boxes where M comes in and C comes out, and everything just happens through some completely opaque machinery that we have no insight into. There are all kinds of things going on inside there that become important.
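To make that one-sentence summary a little more concrete, here is a toy sketch of the statistical idea. It is not the attack from the paper: the victim, the leak model, and all the numbers are invented for illustration. The only point is that a small key-dependent contribution to the running time can be pulled out of much larger noise by correlating over many measurements.

```python
# Toy sketch of the statistical idea behind timing attacks (illustrative only).
# We pretend each '1' bit of the key adds a small, message-dependent amount of
# work that the attacker can predict from public data; everything else is noise.
import random

random.seed(1)
KEY_BITS = 16
NUM_SAMPLES = 3000
key = [random.randint(0, 1) for _ in range(KEY_BITS)]

samples = []
for _ in range(NUM_SAMPLES):
    # per-bit "extra work" the attacker can compute from the (public) message
    predicted = [random.uniform(0.0, 1.0) for _ in range(KEY_BITS)]
    noise = random.gauss(0.0, 1.0)          # everything else the device is doing
    total_time = sum(p for p, k in zip(predicted, key) if k) + noise
    samples.append((predicted, total_time))

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Bits that are really 1 correlate with the measured time; bits that are 0 don't.
recovered = [
    1 if correlation([s[0][i] for s in samples], [s[1] for s in samples]) > 0.1 else 0
    for i in range(KEY_BITS)
]
print("actual   ", key)
print("recovered", recovered)   # with a few thousand samples these should match
```

The real attack is of course against modular exponentiation and needs a model of where the time actually goes, but the correlate-and-threshold structure is the same.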
Also tied to those hindsight-obvious observations is optimization: the whole quest of computer science to get the M translated into the C as fast as possible can actually make things a lot worse when you're trying to worry about security. And tied to this, then, is the disconnect between the algorithm and what its requirements are in the real world, versus what you actually end up with: the security assumptions underpinning the strength of that algorithm. So you end up with these incorrect assumptions that the implementer makes, and we don't even typically have any language for expressing what the assumptions really are and verifying that those assumptions are met by an implementation. And then, last of all, is a theme I'm going to touch on several times through this talk: this idea that cryptography is a lot more than just the algorithms and the mathematics. It's actually how you solve a problem for somebody. And I'll talk about that a little bit more later.

So back to history. I started Cryptography Research, and had clients who were having me do various projects. A lot of them were deploying smart cards at this time. And there were some just crazy security claims being made, around how it'd take $15 million and this massive equipment lab to actually break anything. I had student loans, still just out of school; I didn't have $15 million to go attack these the way you're supposed to attack them. So instead, I took on protocol reviews and started looking at the protocols people were using. And they were horrible. There were things like time-memory trade-offs against DES, or places where you could get lots of identical plaintext under many keys, and if you could break any key, you could load arbitrarily large amounts of money onto it. MACs that were using DES in ways where you could actually use the same key for encrypting chosen messages and break the MAC. All kinds of terrible stuff. I mean, RSA where you did a cert by putting the public key, with no padding whatsoever, inside a slightly larger certifying public key. And many of these systems had proofs of security, with big quotation marks around them. So lots of things were bad in these kinds of systems.

It was kind of fun, because, again, you could go out and make these systems better, see what was going wrong. But one of the other things that you quickly find if you're doing analysis in practice is that as soon as you break somebody's system, they get defensive. And defensive means they deny that the attack works. So I had to go and implement the attack, to show that I could load money into these things, or break the protocol in practice. And when I did this, I also figured, OK, these guys surely have read the timing attack papers that I wrote, so they must have fixed that. So I looked for timing attacks on these devices. And they were just consistently bad. Again, you could break RSA through leaks of whether the result mod p or mod q is larger in the CRT computation. And PIN verifies are still like this in lots of devices, where there's a little comparator that runs through and checks the number of characters that match. The other thing you could do, which wasn't cryptographically interesting, was see what the decision tree in the software was for commands coming in. So if you send a command byte, then another one, then another one, and it says illegal command, how long it takes to tell you that tells you how many plausible bytes were submitted.
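Both of those last two leaks, the PIN comparator and the command decision tree, come from the same early-exit pattern: the code stops at the first mismatch, so how long it takes tells you how far you got. Here is a minimal toy of that pattern; the device behavior and the PIN are invented, and this isn't any particular card's code.

```python
# Toy model of an early-exit comparison: the "device" checks one character at
# a time and stops at the first mismatch, so the number of comparison steps
# (standing in for measured time) reveals how many leading characters matched.
SECRET_PIN = "4918"
ALPHABET = "0123456789"

def device_check(guess):
    """Returns (accepted, steps); 'steps' stands in for the response time."""
    steps = 0
    for g, s in zip(guess, SECRET_PIN):
        steps += 1
        if g != s:
            return False, steps
    return len(guess) == len(SECRET_PIN), steps

def attack():
    recovered = ""
    for position in range(len(SECRET_PIN)):
        best_digit, best_steps = None, -1
        for d in ALPHABET:
            # pad so every guess has full length
            guess = recovered + d + "0" * (len(SECRET_PIN) - position - 1)
            ok, steps = device_check(guess)
            if ok:
                return guess
            if steps > best_steps:         # longest response = most digits right
                best_digit, best_steps = d, steps
        recovered += best_digit
    return recovered

print(attack())   # recovers "4918" with at most 10 guesses per digit
```

Applied to command bytes instead of PIN digits, the same measurement is what lets you map out which commands a card actually accepts, which is where the next part picks up.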
And so you can find the space of all commands that the thing can support. And so finding these undocumented testing and backdoor commands was another thing that just kept popping up. There'd be the magic global password for all the devices, with a memcmp on it, and so we found a bunch of those. Other issues, like being able to reset counters. In a lot of EEPROMs, if you're going to have a counter, the write function will do a bit-clearing operation first, and then it will go and set the bits back in. So if you have a retry counter that says you're only allowed to check a PIN 10 times as a user, well, the write zeroes your counter and then sets the bits, and if you trigger a reset right in the middle, the counter goes back to zero. And we still see this problem regularly today. So this isn't the kind of thing that's quote-unquote fixed; it's just not cryptographically interesting. And we also did some glitch attacks as well. But again, with the glitch attacks we just caused the device to output its entire memory space, as opposed to breaking the crypto in some more mathematical sort of way.

So I was doing these and I wanted better data. So I went off to Fry's Electronics and bought the cheapest oscilloscope that they had, I think it was about $230, with a CRT screen. I had a Radio Shack electronic project lab that my parents, who are actually in the audience here today, had bought me, which I happened to have lying around, and which had resistors in it. So I took some alligator clips and stuck a resistor in on one of the smart cards I had. And I could instantly see keys. It was just one of those, oh my gosh, people clearly haven't ever had the cheapest oscilloscope, a resistor, and a cryptographer together in the same room; what on earth is going on here? You'd think that some reaction would have occurred. So you could see all of this sort of stuff. Even DES, which you might think, OK, you're not going to instantly see that on an analog scope. But a lot of the software implementations would do the C and D shifts, where you've got this 28-bit shift register: you shift the thing, the bit comes off the end, and if the bit's a zero, well, you just do a shift, but if it's a one, you have to move the bit back in. And so you could see the DES keys falling out as well. With DES, though, we could only do that at night, because the CRT on the oscilloscope had such a crappy phosphor that if there was any light coming in, you couldn't see it with enough persistence. So this was the problem.

So finally, OK, there was enough stuff I was breaking here that we went and spent several thousand dollars. And "we", at this point, was myself and Josh Jaffe, a friend of mine whom I'd hired; he was my first employee. So we went and got a digital storage oscilloscope. And this was absolutely amazing, because we could do stuff in the daytime now; we were no longer like vampires. And this had storage, so we could see one-time events, and we managed to get the data pulled into the PC. This was a picture from one of the press articles. Both of the oscilloscopes that we owned at this time were carefully stacked in the background, to show how much equipment we had, to give you a comparison to the labs that had real equipment. And that's Josh and Ben, who were my collaborators on this.
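Just to spell out why those C and D shifts were visible on a cheap scope, here is a toy version of the code pattern described above. The software details are invented for illustration and this isn't any particular vendor's implementation; the point is that the wrap-around work happens only for 1 bits, so the sequence of long and short steps in the trace simply reads out the key half.

```python
# Toy version of the DES key-schedule shift pattern described above.
MASK28 = (1 << 28) - 1

def rotate_left_28(value, trace):
    carried = (value >> 27) & 1          # the bit about to fall off the end
    value = (value << 1) & MASK28
    if carried:                          # extra work only when the bit is a 1
        value |= 1
        trace.append("long")             # visibly longer / higher-power step
    else:
        trace.append("short")
    return value

key_half = 0b1010110011110000101011001111   # made-up 28-bit C register contents
trace = []
value = key_half
for _ in range(28):
    value = rotate_left_28(value, trace)

leaked = "".join("1" if t == "long" else "0" for t in trace)
print(leaked)
print(format(key_half, "028b"))   # the two lines match: the trace reads out the key
```

In other words, the key schedule itself was acting as a serial readout of the key, one bit per shift.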
So at the same time, we did this major push on countermeasures, because when you're seeing a new attack that's causing all kinds of devices to fail, saying things are bad and not doing anything about it, on one hand, isn't useful. And also, there's the question of, OK, I've got a company here, I need to actually make some money. So one of the things that we did was to file a bunch of patents. I never actually got around to submitting a paper to conferences describing the countermeasures we came up with. If you want to see what the muddled state of mind at that time was, though, you can actually go and see those filings. But in parallel, we were looking at smart cards and, as I said, breaking everything we tested. There was one smart card we couldn't break, because we put it in a reader and it wouldn't do anything at all. And it turned out there was actually no chip on it at all; they had just put a little piece of foil on. And that was the first smart card review we delivered where we actually said there were no security problems that we were able to find in the smart card. This was a bank that was trying to impress their customers with their new smart card technology.

At the same time, there were some fairly complicated ethical questions coming up here. There were customers we had who were panicked about the problem. They had payment systems in the field, including some where you could, in a non-auditable way, load arbitrarily large amounts of money into them. And what do you do? I was under NDAs, and things got complicated. Fortunately, somebody at one of our clients had briefed all of their senior VPs about what to do if they were asked about the issue. And a reporter asked a question that had nothing to do with cryptography, and the guy gave the response about this power analysis thing. And the reporter realized, hey, there's something kind of interesting here. His name was Jeremy Flint, I think; I forget the reporter's name exactly. But he wrote an article in Australia, and finally that kind of allowed us to talk about what was going on. In retrospect, though, I don't quite know how long keeping something like this quiet amongst one's clients and their inner circle is the ethical thing to do. And the responsible disclosure discussions around software obviously don't always apply the same way to hardware. Adi's question last night about Philips with their light bulbs is another classic example of what you do when you have lots and lots of light bulbs in the field. And that's a lesser issue than something where you have a significant payment scheme where large amounts of money can be stolen.

So again, in retrospect, this is absolutely, completely obvious. I mean, the idea that electrons moving around affect your power consumption: they have to come from somewhere, and you can see them coming in. And that you're going to get EM emanations based on those movements. I mean, this isn't even physics; this is sort of pre-physics here. And that these measurements are correlated to the secret intermediates: obviously these transistors are processing your secret intermediates. And again, the same kinds of things as with cryptanalysis, being able to use tiny correlations. And I mean, I'm still running into people all the time who think, well, gosh, I have a little tiny AES circuit in my giant big ASIC, surely you can't pull the key out from that.
And if anybody is in doubt about that, the answer is: yes, you can. You don't need strong correlations to break keys. But even that is not obvious to a lot of people who haven't looked at the cryptography. And I think the most important thing, though, that, again, is obvious in hindsight, is that the strong algorithms are the beginning of cryptography and not the end of cryptography.

So I've been using this phrase "obvious in hindsight" for a while. And anybody who's a practitioner here is going to be thinking, well, telling me that this was obvious in hindsight really isn't very useful, because you want to actually know in advance what the problem is. Now, it actually is useful if you want to assign blame. But aside from that, again, it doesn't help you protect a system. So the question of why the problems aren't obvious beforehand is one of the things that we actually have to understand if we want to make practice better. And the analogy that I like to think of for this is fractals. If you think of a system, it's kind of like a fractal: you've got these different levels that you can look at in detail. And at the very finest level of detail, when you stare at something, it can be absolutely obvious that you're looking at a bug. I mean, if I give you the lines of code from Apple's goto fail bug, it's pretty quick for you to understand that this is not the right way to actually implement your certificate verification. So the individual vulnerabilities are really obvious, especially if you put in this sort of n-squared amount of effort where you can take every line of code and every academic paper that has been written and try to see whether there's a match between the problem that might be in that line of code and that academic paper. But the amount of human brain power required to do that is absolutely staggering. Now, there's another view of the system, where you can step back and say, OK, let's look at this whole system in its totality, and you can actually get a pretty good idea of what the risk profile is. I mean, if you're going to tell me that you're going to launch a new Bitcoin exchange using the same kinds of key management approaches as some of the other Bitcoin exchanges, I can give you a, you know, within-an-order-of-magnitude guesstimate that your risk is pretty high that your bitcoins are going to get stolen. Now, I have no idea what your bug is going to be, or which employee is going to be the insider that breaks you, or what is going to go wrong. But I can understand very quickly that the risk profile of what you're doing is actually pretty high.

So if we step back and look at this, there are three trends tied to it that are really driving the technology industry today. We have exponential growth in the number of devices that we're making. We have probably also exponential growth in the economic value of the data that's going onto those devices. And through Moore's law and related trends, we have exponential increases in the complexity of those devices. So if you put the bad guy hat on for a second, this means I have more targets, more reward when I break something, and more vulnerabilities. These three trends together make the defensive job very difficult. And you can look at what happens as the complexity of a system increases: your odds of getting all the bugs out fall off, basically, exponentially.
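One back-of-the-envelope way to write that down, under the very optimistic assumption that every potential flaw is independent and equally likely to be caught; the next few sentences walk through the same thing in words.

```latex
\text{Let } p = \Pr[\text{one line of code is free of exploitable bugs}], \quad n = \text{lines of code.}
P_{\mathrm{correct}}(n) = p^{\,n}
  \quad\Longrightarrow\quad
P_{\mathrm{correct}}(2n) = p^{\,2n} = \bigl(P_{\mathrm{correct}}(n)\bigr)^{2}.
\text{If bugs instead track the } \binom{n}{2} \approx n^{2}/2 \text{ pairwise interactions, then doubling } n
\text{ gives } P_{\mathrm{correct}}(2n) \approx \bigl(P_{\mathrm{correct}}(n)\bigr)^{4}.
```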
So if I double the number of lines of code, I roughly square my chance of being successful at getting the bugs out, if I assume that my number of bugs is proportional to the number of lines of code. Reality is usually worse, because you have these unintended interactions that occur between components. If you look at the number of interactions, and you say that my bug rate per interaction stays constant, my probability of success falls off at the fourth power when the code base doubles. And it's even worse than that if your number of engineers working on the system doesn't scale with its complexity or number of interactions. So lots of things work against you as systems get more complicated. And I will come back to that in just a second.

In 1924, there was the Silver Bridge built over the Ohio River, and it was a really innovative design. Instead of using steel cables, they used high-strength steel eyebars, which you can just barely see here. So this isn't a cable; you've got chunks of metal that are holding up the sides of this bridge. And this being a crypto talk, everything always, of course, is depressing, so you kind of know what happens to this bridge: it collapsed in 1967, and created awareness of what structural engineers call fracture-critical components, things where if they fail, something catastrophic occurs. So how many fracture-critical components do you think there are in a typical mobile phone or connected device today? You start looking at papers like the Rowhammer papers, and the number of software lines of code you have, and the things that you actually assume are working perfectly, and you're at somewhere around maybe 10 billion fracture-critical components in a typical device right now. Now, if we continue scaling at Moore's law, which is a doubling every 18 months, which isn't exactly right but good enough for an approximation, it means in about 10 years we're going to have a trillion individual single points of failure in the devices that we're making. And you might think, well, maybe we're not going to scale that way, but you look at things like the coming of AI and its use in computer systems, and we may actually end up scaling faster than that rather than slower.

Now, the defenses we've got today have failed to scale in time to the problem that we have right now. And when you start looking at these more complicated devices and these connected things, we're taking a problem that we're failing at right now and making it orders of magnitude more difficult for ourselves. Things like the question of how sophisticated the vendors are: you know, Philips with their connected light bulbs that Adi talked about are pretty sophisticated in terms of their security understanding, compared to a lot of the vendors that are currently making devices that are not connected. And you look at the product lifespan: we're going from disposable connected products to very long-term products. We're looking at devices where I'm willing to spend an hour making sure my PC works properly; I'm not willing to spend an hour to make sure every light bulb in my house works properly. I'm tolerant of my PC or my phone having some issues, because it's a very valuable thing to me; an individual light bulb is not nearly as valuable. You know, my car has a much more physical impact on my world than my mobile phone today. So the potential for damage, again, is changing.
And the number of software platforms we have, with iOS and Android and PC and Mac, and maybe Linux on traditional computing devices: maybe five or six of them. We're going to start having this huge proliferation of different firmware images that aren't even really platforms. You're going to have firmware version 1.7 that somebody cranked out for that particular release of light bulbs. And it may not even have a traditional OS and all the stuff attached to it, and the on-device security tools that you're used to. So all of these things that we've been able to do to manage security and make the problem even be survivable: we're going to start losing a lot of those tools.

So what do we do? I'm going to look at two parts to this here: one focusing on outcomes, and one looking at some foundations that we need to build. When you give a talk, one of the first things you want to try to do is figure out what absolutely everybody agrees on. Well, we kind of agree that the probability of cryptanalysis for our best algorithms is very small, and the probability of a mistake is very large. So I think everybody wants to narrow this gap. And there are sort of two general approaches. The FBI has one: we make the probability of cryptanalysis really, really huge, and this gap goes away. That's not what I think we should do. I think we need to figure out how to get this probability of mistakes to be a lot smaller. So we need to start thinking of probabilities, not certainties. And this means that a proof is no longer a proof of anything; it's evidence. You don't have 100% confidence in a proof itself, because people can make mistakes. I have certainly had math exams where I was told to prove something, and I came up with a proof and it was wrong. And we've had errors in proofs before. You can have automated proof checkers; they can have bugs. But even more important is that the assumptions, and the relevance of that proof to the real world, may be very tangential. And in many cases, a proof that gives you confidence that is misplaced is a dangerous thing. In fact, generally, if you look at the history of cryptography, the place where things really, really go wrong is where you believe something is strong and it's not. One other thing to keep in mind about proofs, when you're looking at a real world that's got exponential scaling, is that you may have some gap between your proof and reality, but the size of that gap typically scales exponentially as the systems get more complicated. So if you say, well, OK, I'm assuming that the software is correct: if I have 50 lines of software, that has a certain probability of being a correct assumption. If that software's complexity scales up to 5 million lines of code, I have a very different probability of actually being able to achieve a system that meets that assumption correctly.

So if you look at the different kinds of systems that we've built, there's a question of what the probability is that we're actually going to get the outcome we desire. And certainly we can see in retrospect that the Germans with the Enigma machine did not get the outcome they desired. Or the Bitcoin exchanges. Or our operating systems. And with SSL 3, as the lead author on that, certainly there were some surprises there, where the protocol on one hand did certain things fine, and certain little things that you miss are critically important from a security perspective. And we have a history in our community of being massively overconfident.
And there are some reasons why this happens. It's partly that our brains are tricking us. Because when we think we understand something, like I think I understand transistors, that doesn't mean I actually understand operating systems, even though I can map the operating system to some model in my mind that connects these two together. But you have these emergent properties: you feel comfortable with the things you understand, and you tend to be a lot more blind to the things that you don't. The same thing happens in crypto. You feel like you understand an algorithm, so you infer that you might understand the protocol built with those algorithms. But really we don't. And when you go from a protocol to an implementation to an executable to some absolutely critical use case that the world needs, that connection gets very, very hard. And our brains tell us we understand it when we don't, and we get overconfident.

So one of the important research questions here is: what does cryptography for fallible humans look like? Not cryptography for people who are working in a world of just pure mathematical symbols, but in a world where we have biological people who are building things in the languages we have today, with the transistors we have today. And one of the things that you very quickly realize is that our goals need to be around safety and assurance. We don't generally need to be 10 times faster; we need to be 10 times safer. And we need to come up with systems where a mortal practitioner can usually succeed. If you look at medicine, most of the doctors who are practicing today are not absolutely state-of-the-art researchers; they're ordinary enough GPs working in ordinary enough environments. And there are certain things that they can do that mean that, for most of the patients walking through, they help more than they hurt. And we need to be able to get to the point where we can do that. And it means making some pretty different assumptions and different metrics and different trade-offs. We need to look at implementation risk as a critical part of what makes one system better than another system. How many lines of code and special use cases are needed? How do we achieve high test coverage for something? If you have a security-critical branch that needs to be handled correctly by software, which the attacker can tickle but the defender's test harness won't actually explore, that's something with a very high likelihood of being a failure. And to go build a system that has that kind of design property in it is something where you're putting the people who ultimately use it at a significant amount of risk. We need to figure out how to build safety margins, and what they mean. If you look at this building around us: if it was designed by computer scientists, they would have optimized out almost all of the structural supports, because they are clearly unnecessary. But when the earthquake comes, we actually would like to have those there. And if you think about how much money you have spent indirectly paying for completely unnecessary things like airbags and redundant structural supports, you're spending hundreds if not thousands of dollars a year on things like safety. And yet we're in a world where you walk up and you say, look, I'm going to add two cents to the price of this thing to make you safer, and people say, well, I don't know if I can afford a two-cent budget.
Yet the amount of steel that went into this building is probably two cents for each of us, just sort of amortized over the time that we're here. It also means we have to be very different from how we currently are about our terminology and our understandability to other stakeholders. So when you write a paper in math terminology and you hand it over to a programmer, there's a big language gap there. And if you can't clearly understand what it means in terms of bits and bytes, you don't have a system that is safely implementable. You have an interesting mathematical construction, but you don't have something that should probably be used in the real world. It means we have to be very precise about things. What is the internal state that you're going to track? What are the computations you do? What are the messages that come in and out? When I think about protocols in the post-SSL world, having had lots of scars, covered up by my shirt here, from having designed that, I've learned that if your protocol isn't completely clear from the message perspective, the computation perspective, and the state perspective, whichever one of those three was ambiguous or underspecified tends to be the place where the security problems are lurking. So we need to document those things. We have to document our assumptions a lot better.

And then you get questions like best practices. Now, there are probably a bunch of you in the audience who are now thinking, oh, this is just a practice guy; best practice has nothing to do with what I'm doing. But if you actually want your stuff to have real-world impact, we have to figure out how to translate it into what the real world can actually use. What should a cryptographic API look like? If you look at PKCS#11, that is probably not what a cryptographic API should look like; I think we can do better than that. But I don't think we know, even from a research perspective, what properties we can reasonably achieve, or what it should look like. I mean, giving somebody just raw mod exp as your sensitive API, and assuming that you've got an untrusted thing that can do a mod exp with a secret key, doesn't feel good. And I think we can do better than that. (I'll sketch one possible contrast below.) And we ultimately have to figure out how to get things to be more resilient.

So this is actually a really, really important graph, if you think about it. If you were to go back to the 1960s, and had the 1960s aviation safety success rate today, several of us would be dead. But through time, there has been exponential improvement in the safety of aviation travel. If you were to draw a similar plot, and I don't have good data for it, looking at the safety of electronic systems, we're probably getting exponential increases in the danger that we're exposing people to. And this means that, in order to do this, there were a couple of things that the aviation industry figured out pretty early on. First of all, that they had to build a culture of safety. It wasn't about whether it was possible for the plane to get from point A to point B; it's what's the probability that the plane fails to get from point A to point B.
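Backing up to the API question a moment ago, here is a hypothetical sketch of the contrast. It is not a real library and not a claim about what the right answer is: on one side the raw-math style the talk is skeptical of, on the other an opaque key handle where the complete operation, and any policy around it, lives inside the boundary. All the names and the design are invented for this illustration.

```python
# Hypothetical illustration only; names and design are invented for this sketch.

# Style 1: the "raw mod exp" style. The caller sees the private exponent and is
# responsible for padding, hashing, parameter checks, and not leaking anything.
def raw_modexp(base: int, private_exponent: int, modulus: int) -> int:
    return pow(base, private_exponent, modulus)

# Style 2: the caller only ever holds an opaque handle. The boundary owns the
# key, performs a complete operation, and can enforce policy on how it is used.
import hmac, hashlib, os

class KeyStore:
    def __init__(self):
        self._keys = {}                      # handle -> (purpose, key bytes)

    def generate(self, purpose: str) -> str:
        handle = os.urandom(8).hex()
        self._keys[handle] = (purpose, os.urandom(32))
        return handle                        # only the handle leaves the boundary

    def mac(self, handle: str, message: bytes) -> bytes:
        purpose, key = self._keys[handle]
        if purpose != "mac":
            raise ValueError("key not authorized for this operation")
        return hmac.new(key, message, hashlib.sha256).digest()

store = KeyStore()
h = store.generate("mac")
tag = store.mac(h, b"firmware-update-v1.7")  # key material never crossed the API
```

Whether something like this is the right shape, and what properties it can actually guarantee, is exactly the open question; the sketch only shows the direction of travel away from handing out raw math on secrets.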
Aviation also required realizing that it was a whole lot more than just aerodynamics. The question of what the user interface in that cockpit looks like is an issue that gets a huge amount of thought, because if that user interface is something that a somewhat distracted, tired, moderately competent pilot is going to make a mistake on when one component fails, that's not a good thing, because there are going to be people who make that mistake.

So if we want to step back and say, what can we do to try to make this situation better besides just bemoaning the scope of the problem? My argument here, and my thesis for the rest of my talk, the final part of it, is that we actually have to be able to make foundations that can bear what I term the security pressure. When I think about a security system, there's a certain amount of desire attackers have to break it, there's a certain amount of work they'll put into it, and it's hard to quantify this, but you can get a sense of how much pressure is going to come to bear on the system. And if it can't withstand that pressure and it fails, then you end up with the attacker winning and some catastrophic outcome, generally, for the person who's on the defense. Conversely, if it can take the pressure, then you end up with a system that can function pretty well in the real world.

So what do we need to do to make foundations here that are strong enough? Now, we know that at the lowest layer, we have crypto algorithms that look pretty good, and we understand this problem. We have the basic building blocks that we need: ciphers, hashing, signatures, key agreement, and I put secret sharing and threshold schemes on here as well, and they're vastly underused. But if you pick the basic building blocks that you need to construct this building around us with, we have those mathematical constructions. Now, there's one little asterisk around quantum resilience not being as boring as a practitioner would like, although of the things that are causing fatalities in the near term, quantum attacks are pretty low on that list, and we don't have scaling in qubits at this point. So it's an important issue, and I think it's one that's being dealt with in the right way, but it's not the most important issue that the world faces in terms of cryptographic security at this point. So the basic crypto problems we have largely solved now.

Protocols and constructions are where it gets a lot harder. I should say we really do understand how to build strong protocols in theory. The theory of how to do it is great, but when you actually map it to the real-world messiness, and this is the distance between the Middle Ages physician and the barber, you have a much, much messier set of constraints that don't always map very comfortably back to the theory. We are not really at the point where you can easily say, oh yeah, it's really easy to go make a protocol that will deal with all of these real-world things. And some of these are things that result in cryptographic failures, and some of them result in other problems. But if you look at issues like compatibility between versions, or ECC curve proliferation right now: that is an absolute security problem, because it means you can't go and hardwire a nice single state machine for your accelerator. You have all these different choices, which means you end up with the security surface area of your system growing dramatically.
Certificate syntax: if anybody needs to break something, go look at the X.509 parser, because nobody who actually had a choice in which job they were going to be assigned chose the X.509 parser. So there's always a bug there if you need to go find a bug; that's just a little hint if you ever need to break something. We often ignore the economics. In the SSL 3 protocol, I had the assumption that there would be one, two, three root CAs. It never even occurred to me what the economic forces were going to be that led us into the non-cryptographic disaster that we have, with a whole lot of CAs who manage the root key to the kingdom and who have economic incentives that are actually diametrically opposed to security in many cases. Side channels, implementation complexity: there are a lot of these things where, when you actually ask the question of how you go make a protocol where somebody can actually verify that the implementation works in practice, it becomes a much, much thornier problem. We've made a huge amount of progress here, but it's an area where we still have, I think, a ways to go to deal with many of the real-world cases. Some of them work much better. But if I ask the question: it's been 20 years, and I think SSL/TLS has probably been the most reviewed protocol family that we have today. Do we actually understand this? Or is there going to be another surprise at some point? I'm not sure I'm comfortable promising that there won't be another surprise. Or another way to ask the question: if somebody needs to build a system that they cannot upgrade, can they put TLS 1.3 in and be confident that that's going to last for the next 100 years? I'm not sure I can say yes to that either. So we need to get to the point where we can tell a practitioner that it is OK to go ahead and do this, and that you're not taking an unacceptable risk by committing to something. And if you tell the practitioner, no, you can't rely on any of your cryptography, well, they're going to then go around and hard-code their own software update mechanism, which is going to create a whole bunch of new risks. So it's not a question where you can say, look, I'm going to punt on this problem and tell the person to be totally flexible, and actually get a good outcome from that either.

Now, this is what I call the $2 trillion question, and in many ways it actually is: how can we enable secure computations to actually be done in the real world? This is really a prerequisite for all of the fancy cryptography that a lot of us are working on. And when we look at what's actually happening today, in practice there are these massive, massive failures for even very simple use cases. Can your web server hold your private key for your SSL session in a way where somebody can't steal it? Can you hold the key for your Bitcoin in a way that somebody won't actually steal it? You look at the empirical evidence that we're getting, and the answer is that the failure rates here are not just significant, they're very, very, very high. Some of you might be thinking, oh, we'll solve this through fast fully homomorphic encryption or some kind of obfuscation scheme, some miracle occurs. That won't help us, because you still need secure compute. You've got to have a trusted compiler to go make your system, which has to run on some computer. You need to have somebody who's talking to the fully homomorphic encryption system, and that somebody generally needs to know some secrets to do something useful.
And you're going to create a whole lot more buggy code. So there's no miracle that's going to save us through some kind of new primitive that comes along. We're also not going to have a miracle where somebody finds the last bug in Windows and we're all good here, because a new product will come along, the old one will be obsolete, and there'll be new bugs that get added. There are bugs being created faster than they're being retired, still. So even if you're trying to bail out the water with a bucket, the river is still pouring in faster than we're emptying the lake. And again, maybe artificial intelligence can come along and find all of our bugs, and then we get the singularity, and that's probably not good either. Actually, I think AIs are super important, but I don't think they're going to save us from our problems either.

So then maybe you can say, okay, how do we take where we are, and what do we do to get towards having a place where we can run secure compute? There's the sort of traditional computer science approach, which is that you scale things bigger and bigger within one security perimeter. You put SGX into your Intel processor, you assume the processor is good, and you pack more and more security stuff into this big perimeter; your ring-zero code keeps getting larger and larger and larger. And this is sort of like this Serbian ammunition depot that we saw, where you pack more and more stuff in, and when you put more things in, the odds of failure go up and the consequences of failure go up. And I believe it was this depot that exploded in an absolutely gigantic explosion shortly after this picture was taken. Or you do like what the US does for storing large amounts of ammunition. This is a satellite picture of the Black Hills Ordnance Depot, and you have lots of little security perimeters that can live and die independently of each other, so you have small, much more survivable failures that can occur.

So the architecture that I'm advocating for here is one where we start building little bits of security, and we do these in an additive manner. It starts with the assumption that the legacy platforms we have are too complicated to debug. We will not get the bugs out of them; they will remain this sort of cesspool that we have today. And they're too valuable to abandon, so we can't run away from them. But we can add stuff, and in fact it's possible to put on a chip, or into a larger device, a secure thing that runs separately and manages your more important things than the Pokemon Go game that happens to be running on the main processor. And hardware in this regard is absolutely unique, in the sense that it is the lowest level. If you try to go do something nice in a piece of application software, there are always these lower levels that sit underneath. So when I want to attack a system, for example, looking at the USB stack is a great example of where to go, because it has lots of power, it comes on first, and, again, it's written by somebody who probably wasn't thinking about security too much. Whereas if you go do something in hardware, the software running that USB stack in your legacy device is not going to be able to build little mechanical arms that come up from the chip and edit some other part of the chip. You can actually get some kind of security separation here.
And it's the one place where Moore's law helps us, because if structural components for buildings were getting exponentially cheaper, we could make our buildings very, very strong at a very low cost. And we actually have this benefit that the transistors we use to construct our systems are getting exponentially cheaper. So something that we might put in that costs a dollar today may cost a penny in 10 years, assuming we get 18-month Moore's law; maybe it takes a little longer than that. But if you start building something now which is slightly expensive, by the time it comes to market it's not going to be so expensive. And that's an exponential improvement that we're seeing. It means, finally, that we can get this separate scaling between the security-critical pieces and the overall complexity of our system.

So one of the things that my company has been doing for quite a few years is building crypto hardware bits that do useful things for customers, whether it's helping to decrypt their pay-TV signals or authenticating devices from an anti-counterfeiting perspective. But when you look at this from a research perspective, we have a model that is pretty straightforward at some sort of academic level: okay, we have some kind of persistent secret, we have some crypto, we have some countermeasures to side-channel attacks and glitch attacks. But when you actually ask the question from an engineering perspective, what is the research community suggesting that we do in terms of building that crypto block? What should it look like? I mean, "it's just AES" doesn't really answer the question very well. Should it have redundancy; should it have five pieces of hardware that work separately, and five pieces of the key that go into them, so that if one fails the system doesn't fail? Can we build our side-channel countermeasures at the algorithm level rather than having to build them at the hardware level? (I'll sketch a toy version of that idea below.) What does an anti-glitch system really look like in a piece of hardware like this? And then you have to ask the question: what's the probability of failure against a non-invasive attack, or against an invasive attack? I think we're getting to the point where we can have pretty good confidence that we'll survive the non-invasive attacks; for invasive attacks, trying to actually map things to probabilities gets more difficult, and we don't have as good tools there. Again, our sort of ad hoc results, the barber going out and doing stuff: the chips that we're actually building are doing pretty well for the most part. We've been building cores that manage keys and seem to survive, and we're building DPA-resistant things that seem to do pretty well, but there's a lot more work that needs to be done even to build just this simplest construction of a keyed function that holds a secret.

Now, when you expand this problem just a little bit, the solutions actually get a lot more useful. In the previous slide I was talking about just doing something that might do challenge-response authentication or some basic operation like that, but if we increase this to actually include true compute, we want to run arbitrary algorithms in here in some interesting ways.
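As a toy version of that algorithm-level idea, here is the simplest form of splitting a secret into shares so that no single intermediate value, and hence no single measured leak, depends on the whole secret. This is only a sketch of the splitting step for an XOR-linear operation; protecting a real cipher's non-linear parts this way is much harder, and that difficulty is exactly the kind of open question being described.

```python
# Illustrative only: secret splitting / masking at the algorithm level.
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int = 3) -> list:
    """Split 'secret' into n shares whose XOR is the secret; each share alone
    is uniformly random and carries no information about the secret."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

key = os.urandom(16)
data = os.urandom(16)

shares = split(key)
# Operate share-by-share: XOR the data into just one share. The device never
# holds the recombined key in any register, yet the recombined result is right.
shares[0] = xor_bytes(shares[0], data)

recombined = shares[0]
for s in shares[1:]:
    recombined = xor_bytes(recombined, s)
assert recombined == xor_bytes(key, data)
```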
At this point, I don't think we really even have a consensus as to what that secure compute element should look like. Should it be a thing that can do NAND operations, so it kind of emulates a fully homomorphic type of operation using some trusted gates? Should it be an FPGA-like thing, where I can run some kind of diagnostics on it first to understand that it hasn't been trojaned, and I can force my hardware to be built before my bit file, so that I can separate some things? Should it be, I mean, I'm pretty sure it's not a current Intel processor, or even an ARM processor, in terms of the best way to do it, but I don't think we know yet. And this is a critically important question for us to figure out: what this ought to look like, and what the things it needs to do are. So there are lots of interesting crypto problems here that have a really, really big impact in the real world, and the impact here ultimately needs to be measured in terms of: what is the probability that the bitcoins being protected by that key get stolen, or what's the probability that the SSL private key that is ultimately secured back to this piece actually gets stolen?

Now, there's another thing I put sort of in gray here on the left, this shadow around manufacturing and so on. In terms of the amount of engineering work, the live-in-the-field use case is generally a small minority of the engineering work that goes into actually building something. Now, you can say, well, okay, that's the applied stuff and I'm a theoretician. But no, this actually needs people who understand the cryptography to be thinking about it. So when a cryptographer just writes, "there will be a secret key known to the device, and it's going to be computed as some function of a master key": pretty straightforward, all of us kind of understand what that means mathematically, but what does that actually mean in the real world? Okay, so now I've got a master key; where am I storing that? How does this derived key get into the device? Most implementers will think, okay, great, I'll go send the master key to the device so that it can go compute its derived key, and believe me, I've seen plenty of times where the master keys end up in devices, because somebody took this literally and didn't build the plumbing in the correct way. (A minimal sketch of what that plumbing might look like follows at the end of this paragraph.) So you end up with these sorts of statements, but when you look at what actually has to happen here, you don't even have a single use case you need to solve, because my mobile phone has more and more use cases being piled onto it. So I actually have to build a system that can work for many keys and many product types and many different component vendors and many different protocols and use cases and security requirements. And I can't grow my factory downtime in a way that grows basically exponentially with the number of boxes; if I have to put in a new box for every point here, my factory is never going to operate if those boxes each have 90% uptime, whereas if I have one system, okay, I can understand how that's actually going to work. So if I look at the things that my team puts work into, the back-end stuff actually just dominates the simple in-field use cases; in our CryptoManager business, getting boxes that can work in factories is really, really hard. And then the last thing here is this question: when we start having secure computing, what is the program that we run on it? If we have this beautiful foundation and some structural supports, what is the thing that we're going to build on top of that?
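Here is a minimal sketch of the plumbing that "some function of a master key" implies, with all the names and the flow invented for illustration: the master key lives only inside the factory's key-management system, each device receives only its own derived key, and the back end can recompute any device's key from its serial number without keeping a per-device database.

```python
# Illustrative sketch only: per-device key derivation at personalization time.
import hmac, hashlib, os

class FactoryKeyServer:
    """Stands in for the HSM / key-management box on the factory floor."""
    def __init__(self):
        self._master_key = os.urandom(32)        # never leaves this object

    def derive_device_key(self, device_serial: bytes) -> bytes:
        # K_device = KDF(K_master, serial); HMAC-SHA256 standing in as the KDF
        return hmac.new(self._master_key, device_serial, hashlib.sha256).digest()

class Device:
    def __init__(self, serial: bytes, device_key: bytes):
        self.serial = serial
        self._device_key = device_key            # the only secret it ever stores

# Personalization: only the derived key crosses over to the device.
server = FactoryKeyServer()
serial = b"SN-000123"
device = Device(serial, server.derive_device_key(serial))

# Later, the back end recomputes the same key from the serial alone.
assert server.derive_device_key(serial) == device._device_key
```

The failure mode described above is exactly what happens when this boundary isn't built and the master key itself gets shipped to devices so they can compute their own derived keys.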
So, coming back to that question: what are the programs we're going to write? There will be new problems that arise there, but in many ways the perspective that I have, again coming from something of a practitioner view, is that this is completely irrelevant right now. I mean, dreaming about these advanced surgeries when we don't have basic sanitation: it's nice, but that's not really a problem that's relevant yet. We have to go and figure out how to get some secure compute before we can devise enormously complicated calculations that run on it. Now, there certainly is some co-evolution that may occur there, for example if we can adjust the compute to accommodate the programs that we need. But the basic question, how do we run operations in the real world in a way that's as strong as the crypto that we have, is the critical issue that I think we face.

So, in conclusion here: if I step back and look at what's happened to the world of cryptography over the time that I've had the honor to be involved with it, it started out as this fairly clean circle involving keys and algorithms, and I and many others have done things that have kind of blurred the edges of that. You know, I remember after giving the timing attack paper, somebody said, you know, why is this crypto? This is an engineering paper rather than a cryptography paper. I thought about that, and I'm not sure I even know what is cryptography and what is engineering. In fact, you get this sort of old view of what cryptography was, and you get this new notion of what cryptography is, and it's this very large set of different components and things that go into solving problems for somebody, in the same way that if you think about what medicine is, and what it takes to get a patient who comes in with an ailment and comes out healthy, or what it takes to do structural engineering or aviation. You know, aviation isn't just one little thing; it's many different things that all blur together. And we are realizing very quickly that we're in a really wonderfully diverse set of different problems that all connect together in terms of solving a problem for somebody at the end of the day. And when you swirl all these things together, you get this really sort of psychedelic mess of stuff. But in terms of solving these issues, I believe that the cryptography community is much more likely to succeed at solving these problems than other areas are likely to go and figure out cryptography and then successfully apply it. So of the groups in the world that can solve these issues, it's the people in this room, and we're not going to have somebody coming and saving us from the outside. We have to go out and figure out how to solve some of these really, really hard problems. And part of that requires going out and thinking across disciplines. You know, having studied biology and then looking at cryptography, you get a somewhat different perspective. And if I could get a single thing that I think would have an impact here, it would be to get everybody in this room to try to go out and just have lunch with somebody in a different field, and talk about the problems that you see in cryptography and what their field does to solve whatever is most connected or related to that. You know, how does a structural engineer think about a problem like contractors cutting corners? Go have a lunch, ask these questions, and think about how some of these problems map back to, again, the real world of what cryptography is and what it needs to be. And, sort of in conclusion, the results of these
problems really matter. If you look at what happens as systems scale, you put the most valuable things in first. So making my phone go from two processor cores to 16 processor cores makes it a little bit more useful, but it's not eight times more valuable, because you put the most useful stuff in first. But the risk of these systems is growing with complexity, which means that the net value we derive from technology at some point starts to decrease. You know, if this 1995 Mercedes had Windows 95 built into it and was connected online, it would be a less valuable car than that car is as an offline car. And that's really important in terms of asking the question: if I take the car and the operating system and I put them together, am I making something more valuable? If we can't make it more valuable, it means that society does not get the benefits of the things that are possible. So we have to figure out ways to make it so that these systems get better.

Now, from a macro perspective, there is no magic technology that is going to make the security situation, at a global sort of overall level, better in the next three to five years. Things are just going to get worse; they have been getting worse. It's hard to say exactly how they will get worse, and you can't say exactly what the specifics in that fractal will be, but there will be an overall worsening. And there will be individual systems that do fine and do better, and as a company I'm trying to deliver to my clients that individual success that is better than the average. But from a macro perspective, it's going to be a while until we can even start sort of leveling out the curve of where things are. But the technology industry's future depends on finding solutions to these, and in terms of doing that, cryptography is the place we have to look to. It's a very broad, very weird, very wonderful set of problems that we have, and a really fantastic community that's been just wonderful for me. So we all have to kind of work on this, think about what it is, and there's a huge amount of really great research that lies ahead that will be connected to these problems. So thank you very much for listening to me as a failed veterinarian. I hope all of you will grab me in the sessions afterward; I'd love to talk with you more about this. And depending on how much time we have, I'm happy to take some questions. Thank you very much.