And we're live! Hi, I'm Dazza Greenwood, a scientist at MIT Media Lab, and I run civics.com. This video is a flipped-classroom advance lecture from one of our featured speakers in the computational law course starting next week, January 15th through 17th, at the MIT Media Lab. My friend and colleague Christian Smith and I met when Christian was at MIT; he's since ascended to grander heights in industry with Stranger Labs, the company he founded. One of my favorites among Christian's talks at my classes and events in the past has been laying out the fundamentals of cryptography, which is so essential to the integrity and security of the web and networks today, in a way that lawyers can begin to understand. So I asked him, and he's kindly agreed to do a lecture like that, tuned for our class this year. I want to thank you for taking the time to do that, Christian, and I'll hand the screen controls over to you now. All right, thanks, Dazza. It's a pleasure. I wanted to give a little primer here today, like we talked about, on cryptography. Just starting out: we've heard an awful lot about cryptography in the last few years because of Bitcoin and Ethereum, cryptocurrencies and smart contracts. In parallel, the subject is getting a lot of attention because of the evolving privacy landscape, between GDPR and incidents like Facebook and Cambridge Analytica. Cryptography and the role it plays is getting a lot of discussion. And there are other applications of cryptography, like secure messaging platforms. We hear about Signal and Telegram. You don't hear about those quite as much; there's been some controversy, but not as much attention. Yet that's actually where some of the most interesting work in cryptography is being done today. When you put it all together, we seem to be in the midst of a cryptographic version of the industrial revolution.
But so much of what we hear comes from a marketing perspective and not necessarily a view that's well grounded in theory and the realities of practice. The fact is, cryptography is not new at all. It's a rich field with a very long history. Just to put this in perspective: Ethereum dates back to something like 2014 or 2015, I think 2014, and Bitcoin to 2009. But we've been using public key cryptography on the web for more than 20 years now, and it's twice that since it was discovered. Going back further, Alan Turing was inventing digital computing during World War Two, and the motivation for that work was actually cryptography. So if it weren't for cryptography, we might still be thinking of a computer as a person rather than a machine. But it goes back way further even than the early 20th century. The first known use of cryptography was in the form of non-standard hieroglyphs carved into the wall of a tomb in Egypt around 4,000 years ago, and that's probably where the discipline gets its name. Now, most people, even a lot of highly technical people working in the space, have some major gaps in understanding. And this can be a real stumbling block when we're talking about bigger issues like privacy, personal data, and many other challenging information problems. A few come to mind, like classified information, digital rights management, nondisclosure agreements, and other things that have a legal context to them. And it's really not good to have flawed expectations; flawed understanding just sets everyone up to fail. So what we're going to do is try to quickly level up everyone's awareness. A deep understanding will take a long time to develop, but an awareness is something we can achieve in the next hour, and that's really useful. It'll set you on a good path. So we're going to start with a kind of survey of the things cryptographers think about.
And if you're thinking, I already know what encryption is, you might be in for some surprises. There's way more to this field than you think, and it's mostly not about the primitive algorithms. We can't talk about cryptography without understanding some basics of information security, because it's pointless to talk about the methods without understanding our motivations. It might be good to use some technique, like a hash function, for one purpose, and then counterproductive for another. So we need to start with a basic understanding of information security goals. This stuff's not complicated; it'll go really quick. Traditionally, there are three information security goals: confidentiality, integrity, and availability. Confidentiality is about only sharing information with authorized parties. Integrity is the characteristic that you have an assurance that data is complete and unaltered. Sometimes authentication and authenticity get mixed up with integrity because of that "unaltered" part. And then there's availability: the information has to be there when you need it. Some people initially get confused by this one, because availability sounds more like scaling large software systems in the enterprise or something like that. But one way you can mess with someone is by cutting off their access to a system or deleting their data, so availability is actually a security goal. Now, some people say that any other kind of security goal is really one of these three. Others argue for a larger set. Donn Parker, back in '98, wrote a piece called "Toward a New Framework for Information Security," which appeared in the fourth edition of the Computer Security Handbook, and he adds three more: possession or control; authenticity, distinguishing that from integrity; and utility.
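As a tiny illustration of the integrity goal, here's a sketch in Python of how a cryptographic hash can detect that data was altered. The messages are hypothetical, made up just for the example:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest; any change to the data changes the digest."""
    return hashlib.sha256(data).hexdigest()

original = b"Pay Alice $100"
tampered = b"Pay Alice $900"

# If the stored digest matches, we have assurance the data is unaltered.
print(fingerprint(original) == fingerprint(original))  # True: intact
print(fingerprint(original) == fingerprint(tampered))  # False: tampering detected
```

Note that a hash alone gives integrity checking, not confidentiality: anyone can still read the message.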
Utility, I think, is similar to availability, but the distinction he was making was, for example, you could have access to your encrypted data, you could have it as encrypted data, but if you no longer have the keys, it's no longer useful. There are some other security goals that people talk about. Non-repudiation is one of them. This is a very difficult one to guarantee; I don't think you can actually guarantee non-repudiation, but you can have it as a goal and try to make it harder to repudiate. That's valid. Deniability is one, and provenance is one. These are really simple concepts, but the devil's hiding in here, waiting for a chance to push you off the cliff. For example, it's possible to have subtly conflicting goals, like establishing provenance while maintaining deniability. This certainly keeps things interesting, but it also illustrates that sometimes we really can't have everything; we have to make trade-offs. One of the classic trade-offs is between security and convenience. It's really hard to do both well at the same time. If you make things easy for users, you're increasing the opportunities for attackers to take advantage of that. And if you make it really airtight and secure, users are going to complain. So that's it in a nutshell. If you're working on a problem or evaluating a solution and you lose track of your security goals, you really need to rewind, start over, and figure it out. Otherwise, everything else you do will be ineffective and potentially dangerous. There's no math and there's no algorithm that can save you from yourself. So another thing we want to be aware of is well-established security principles.
And by principles, we're not referring to ideological principles or the things you see in manifestos, but at the same time, they're not quite like mathematical axioms or logical postulates or first principles in physics. They're also not some kind of arbitrary rules that consultants made up. You could probably best think of these as essential defensive strategies. If you ignore these security principles in your countermeasures, it's probably going to hurt. One of the easiest places to start reading about them is on the web; there's a set of principles called security by design. You'll hear about these in a lot of different places, but it's good to go over them and have some idea. The first one is to minimize the attack surface area: if you create fewer opportunities to be attacked, it's easier to be secure. Another good one is the principle of least privilege. A lot of attacks are based on having too much access, being authorized for more than what somebody needs. This isn't really a technical thing, it's more of an operational thing, but you have to make those decisions when you're designing, configuring, and deploying systems and deciding on policies. Defense in depth is another one. The idea here, and sometimes you hear "security is an onion," is that it's in layers: you may have one measure that fails, and you need another one behind it to catch things and back it up. There are a few others that are good here. Fail securely is important. One way attackers will try to break your security is to cause your system to fail, because in its failed state, it becomes vulnerable.
So when you're designing systems, whether they're human systems or technical ones, you need to make sure that in a state of failure you still have some kind of protective measures. I won't go through all of these; there's a lot you can read on them, and it's good, but this is by no means an exhaustive list. So I want to show you another resource: the Open Reference Architecture for Security and Privacy. I just cherry-picked a few of these that were different, to illustrate that there are quite a lot. They have 84 security principles, or maybe I miscounted, but it's approaching a hundred, and then they have another hundred or so principles on privacy. There's a little bit of overlap between them, and they have really good references and citations. It's good to familiarize yourself with the notion of security principles and look at a few different sources. There are some other aspects of security you should be aware of. One is that security is probabilistic. There are no guarantees; what you're doing when you're trying to make things secure is risk management. Another is that security is a process. This is something you do. It's not something you buy, it's not something you have, it's not a feature of a product. You're either actively doing things to be secure or you're not. And that brings us to another one, which is that you have a security posture: you're either prepared, alert, and ready, or you're something else that's probably not very good. Then there's another idea we should get into, which is the security life cycle. There are a few different takes on this. Mostly it's important to be aware of it so you understand the context of timing. Security activities can be roughly categorized by this life cycle, and it's important to know where cryptography fits into it. Here's the TL;DR.
You can get the longer version in the NIST Cybersecurity Framework, which is a good resource on this. Identify is about knowing what you're trying to protect. Then you can choose protections, like encryption and access controls. But some problems can't really be addressed with protections, and sometimes protections fail. So what do you do then? Well, you need to be able to detect problems; you have to know there's a problem. You can do this by auditing, and log auditing is a really important practice, or by monitoring, watching things in real time as they come in. Usually there's some combination of the two. Respond: once you detect an anomaly, you probably need to do something about it, and that could involve some planning ahead so you're prepared for day zero. But it should at a minimum involve stopping the bleeding, whatever's gone wrong, then communicating with stakeholders, and some kind of analysis: what happened, what went wrong here? The last part of this is recovery. You don't want it to happen again, so you might need to implement some new countermeasures that close the gap. There's not much sense in knowing that you're being hacked without doing something about it and fixing it. And what's really interesting here is that protection, which is where cryptography is really useful, is about one fifth of what's going on in the security life cycle. It's not the total of it. Okay. So we talked about goals and we talked about principles. Before we dive into cryptography itself, we need to understand threats. This is something legitimate cryptographers spend a lot of time on. You have to figure out why people might want to attack you, how they'd go about doing it, and how likely they are to succeed using any given method. And after that, can you think of ways to protect against those threats?
Then you need to analyze their probable effectiveness, because not every protective measure is necessarily a good one. You need to assess the cost and decide which protections really make sense to implement. This entire activity is called threat modeling, or threat analysis. And it's pretty much the same kind of thinking whether you're doing it for national security, in the enterprise, for personal privacy, or as software developers. As it turns out, threat modeling is quite a well-developed discipline. It's unfortunate that we don't hear more people talking about it, especially because this part's not rocket science; we're going to get to that later, but it's a very approachable subject. There's a seminal paper I'd like to share with you. It's very accessible. It's called "Toward a Secure System Engineering Methodology." What this paper presents, and I've highlighted it here, is a methodology for enumerating the vulnerabilities of a system and determining what countermeasures can best close those vulnerabilities. The high-level overview is that you start by characterizing adversaries by resources, access, and risk tolerance. Then we want to map vulnerabilities to the system through its life cycle. Next, we want to correlate an attacker's characteristics with the characteristics of the vulnerability to see if an actual threat exists. We want to think about countermeasures, but only for the attacks that meet the adversary's resources and objectives. That's kind of important; otherwise, you could spend a lot of time and effort and money and not have any impact. And then finally, viable countermeasures have to meet the criteria for cost and performance and the rest of the context. The most important idea in this paper is something called attack trees.
This is kind of the meat and potatoes here. Bruce Schneier has a good write-up on this on his website as well, and I stole this example from him. To make an attack tree, you start with the attacker's goal as the root. In this case, it would be opening the safe. From there, you work backwards and look at all the ways an attacker might go about getting what they want. This is an easy thing to understand in the abstract, but to get good at it, you have to be just a little bit naughty. Once you have an attack tree, there are a few more steps. You can classify the things you see here. In this one, take a look: you can pick the lock, learn the combination, you can eavesdrop, you can blackmail. And for each of these, until we get down to the leaves of the tree, we're not really done breaking it down yet. So the next step, once you have this attack tree, is to apply weights to the leaves. There are some really sophisticated ways of doing this; you'd have a great big table that measures a lot of different aspects. But in some way you want to weight the leaves. Then you want to prune the tree, so only the exploitable leaves remain. You drop the things that aren't practical to deal with, like maybe we're not going to worry about people with quantum computers in our threat model, or something like that. Only deal with the exploitable leaves. Then, for the things that are left, you find all the possible countermeasures for each of those potential threats. And once you've identified all of them, you can start to optimize: you decide what the best countermeasures are that fit the situation and whatever resources you have available. If you're interested, there are many other approaches to threat modeling that help solve for specific needs. There are reams of literature on this stuff.
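To make the weight-and-prune steps concrete, here's a minimal sketch in Python. The tree shape follows the safe example, but the dollar costs and the adversary's budget are hypothetical numbers invented for illustration; real attack trees score many more dimensions than cost:

```python
# A toy attack tree: the root is the attacker's goal, the leaves are concrete
# attacks, and each leaf carries an estimated cost (hypothetical figures).
attack_tree = {
    "goal": "open the safe",
    "leaves": [
        {"attack": "pick the lock",           "cost": 30_000},
        {"attack": "learn the combination",   "cost": 20_000},
        {"attack": "cut open the safe",       "cost": 10_000},
        {"attack": "bribe the installer",     "cost": 100_000},
    ],
}

def prune(tree, budget):
    """Keep only the leaves an attacker with this budget could exploit."""
    return [leaf for leaf in tree["leaves"] if leaf["cost"] <= budget]

# Assume our adversary has $60k of resources; only cheaper attacks are threats.
exploitable = prune(attack_tree, budget=60_000)
cheapest = min(exploitable, key=lambda leaf: leaf["cost"])
print([leaf["attack"] for leaf in exploitable])
print(cheapest["attack"])  # the attack to prioritize countermeasures against
```

With these made-up numbers, bribing the installer drops out of the model, and "cut open the safe" is where countermeasure money goes first.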
If all you ever understand is attack trees, you're doing pretty well. For our purposes today, it's really just important that you know these things exist. All by itself, that puts you a few steps ahead in your thinking. But here are a few other approaches to threat modeling. One is called STRIDE. This is from Microsoft. STRIDE is an acronym, and each term in the acronym is a classification for threats and vulnerabilities. This is useful during design phases: if you're designing a product and you need to think about the different ways someone would break it when you're generating that attack tree, think about all the ways they could spoof an identity, tamper with a component, or repudiate an action; how information might be disclosed in a way it's not supposed to be; and how an attacker might deny service or elevate privilege to own the system. There's a lot written on this, and it's good stuff, really useful in that early stage of building up an attack tree. Another one is called DREAD. This is about risk assessment. DREAD is also an acronym: damage, reproducibility, exploitability, affected users, and discoverability. Damage asks how bad an attack would be. Reproducibility asks how easy it is to reproduce the attack: is this something where someone figures out how to do it and then everyone can do it, or is it a one-off? Exploitability: how much work is it to launch the attack? It may be that someone knows how to do it and it's a very solid method, but it's so much effort they probably wouldn't be able to pull it off. Affected users: how many people would be impacted by this? And discoverability: how easy is it to find out that there is a threat or a vulnerability? A few others I won't talk about too much: there's the OCTAVE framework from CERT at CMU, and there's Trike, which is aimed at security auditing.
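As a rough sketch of how DREAD-style scoring is often applied in practice, you rate each of the five categories and rank threats by the average. The 1-to-10 scale, the example threats, and every rating below are hypothetical, chosen only to show the mechanics:

```python
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Average the five DREAD ratings (each on a 1-10 scale here) into one risk score."""
    return (damage + reproducibility + exploitability
            + affected_users + discoverability) / 5

# Hypothetical threats with made-up ratings, just to show the ranking step.
threats = {
    "SQL injection in login form": dread_score(8, 9, 7, 9, 6),
    "Physical theft of one laptop": dread_score(6, 2, 3, 1, 4),
}
ranked = sorted(threats, key=threats.get, reverse=True)
print(ranked[0])  # the highest-risk threat to address first
```

The value isn't in the arithmetic; it's that the five questions force you to compare threats on the same axes instead of by gut feel.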
And there's PASTA, the Process for Attack Simulation and Threat Analysis. Almost all of these are acronyms; it's very nerdy. There's one called VAST: Visual, Agile, and Simple Threat modeling. So, yeah, it's just useful to know that this stuff exists. And now we can get to the part everybody wants to hear about: cryptographic primitives. We're not going to dive into the math here; we would need a lot more time. But we will talk about encryption, digital signatures, and maybe some related techniques. The purpose and general characteristics of the cryptographic primitives are fairly easy to understand without going under the hood and looking at the math. Just like you don't need to understand thermodynamics or metallurgy to learn the rules of the road and drive a car, it's kind of the same thing here: we can get a lot out of this with just a little bit of knowledge. Now, I mentioned earlier that cryptography goes back a few thousand years, and the easiest way to understand this is to go back closer to the beginning. The first use of cryptography was obviously communicating in secret, and the way that's accomplished is by using ciphers. The Romans had what's now called the Caesar cipher. It was a fairly simple method of substituting one character for another. In case it's hard to understand what's going on here, we're basically establishing an offset for the characters. If I had a message, breaking it up into letters, I would replace an E with a B and an F with a C. There's an offset; you count back. So I'm going to count one, two, three steps back in this case. And reversing that would then get you the decrypted message. This one is pretty uncomplicated, but it illustrates well that all ciphers work on this principle of substitution. It's not magic.
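Here's a minimal sketch of that Caesar-style shift in Python, using the three-steps-back offset from the example (so E becomes B); decryption just reverses the shift:

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def caesar(message: str, shift: int) -> str:
    """Shift each letter by `shift` positions, wrapping around the alphabet."""
    return "".join(ALPHABET[(ALPHABET.index(ch) + shift) % 26] for ch in message)

ciphertext = caesar("HELLO", -3)   # encrypt: count three steps back
print(ciphertext)                  # EBIIL
print(caesar(ciphertext, 3))       # decrypt by reversing the shift: HELLO
```

With only 25 possible shifts, you can see why this cipher is trivial to break by just trying them all.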
A cipher is a procedure, an algorithm, for transforming a message into a code that can only be reversed, or deciphered, if you know how to do it. And the state of the art has become increasingly sophisticated through the course of history. At some point, people developed ciphers that used a secret key. This is important because then you could know the method of a cipher but still not be able to decipher an encrypted message. The Caesar cipher is pretty easy to guess: you would just guess what the offset is, trying all of them until the words start making sense. But you could get more complicated and have every character use a different offset, and that series of offsets is the key. So let's take a look at a couple of simple examples. If I had some plain text, just ABC, and I wanted to encrypt it so that it's a secret and send the message to Dazza, I would go through each character. We're starting with A, and I would count forward two, so the next letter is going to be C. For the next character, from B, I would count forward four: one, two, three, four. So the next letter is F. We've got C, F. And the offset for the next character is six, so from C I would count one, two, three, four, five, six, and the letter is I. So the ciphertext is CFI. And Dazza, are you with me here? Yep. So far, so good. I'm able to decipher your message to me. Okay, can you decipher this message right here, based on this key? Based on five, one, five... oh, okay, so I would take Q, and the offset for Q is five. So let's see, darn it, the way my video sharing works, I can't see all the letters. But basically, I would reverse the offset in order to see what the letters were. Oh, here, actually, here's the Q. So I go Q, P, O, N... that's one, two, three, four, five. K, I think, is the first one, right?
No, you're one off. Oh, I'm sorry, it's five back, not six, so it's L. And then an offset of one for B is A. So A. And then an offset of five for R: Q, P, O, N, M. So it's L, A, M so far, is that right? Yes. And then a zero offset, so B is just B: L, A, M, B. And then is the T no offset? No, you start over with the key all over again. Okay, so then five back from T, which is one, two, three, four, five... Oh, so it's some variation of the question "when Lambo," or something like that. Yeah, it's just Lambo. You did it. That's fantastic. I didn't know, but I guess when I hear Lambo and cryptography, my mind has been totally soiled by ICOs; I always wonder, "when Lambo, when Lambo." It was a bad joke. But you can see the process, you can see how this works. And this is a very simplistic example. Modern cryptography is much more sophisticated than this, but going through the exercise of just playing with substitution shows you how it works. This is also where the notion of randomness becomes very important. You need to keep this key secret. If anybody has this key, and they know what method you're using, and they have the message, then they would be able to decrypt it with the key. Randomness is really important because, in addition to just protecting the key, not leaving it lying around carved into a stone near the campfire you just left, no one should be able to guess what it is. So randomness is very important; keys should be random. This is an important property in cryptography that pops up a lot. Now, you can see where this gets to be quite a tedious process to do manually, especially if you have large organizations sharing lots and lots of information that they want to keep secret like this. So at some point, you're going to want to automate some of it.
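The keyed version from the exercise can be sketched the same way. The repeating key 5, 1, 5, 0 below matches the offsets worked through in the dialogue (it recovers LAMBO from QBRBT), and the last line shows how you'd pick an unguessable random key, since guessable keys defeat the whole scheme:

```python
import secrets

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def shift_with_key(message: str, key, decrypt: bool = False) -> str:
    """Apply a repeating series of offsets, one per character (a Vigenere-style cipher)."""
    sign = -1 if decrypt else 1
    return "".join(
        ALPHABET[(ALPHABET.index(ch) + sign * key[i % len(key)]) % 26]
        for i, ch in enumerate(message)
    )

key = [5, 1, 5, 0]  # the key from the exercise; it repeats for longer messages
print(shift_with_key("LAMBO", key))                 # QBRBT
print(shift_with_key("QBRBT", key, decrypt=True))   # LAMBO

# Keys should be random so nobody can guess them:
random_key = [secrets.randbelow(26) for _ in range(4)]
```

Notice how the fifth letter of LAMBO wraps back to the first key offset, which is exactly the "start over with the key" step from the dialogue.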
By World War Two, there were electromechanical ciphers like the German Enigma machine. These machines used much more powerful methods of substitution that are impractical for humans to compute manually with a large amount of data. And part of the reason we have computing today as we know it is because of the efforts at Bletchley Park to crack Enigma in particular. Here's a photo of the machines they were using for this; see the vacuum tubes and everything, pretty cool. Now, where things started to get really interesting, and more relevant to the current technology we're talking about these days, was in the mid-70s, when Whitfield Diffie and Martin Hellman published a paper called "New Directions in Cryptography." In this paper, they present the notion of asymmetric, or public key, cryptosystems. This is an idea that was known to government cryptographers in the UK a bit earlier in the 70s, but they kept the lid on it. Eventually, Diffie and Hellman devised this scheme where you could use two related but different keys, a public key and a private key, for encryption and decryption respectively. What was important about this is that you could share a public key with anyone. Alice here could give her public key to Bob, and if Bob loses the public key or accidentally gives it to someone else, they still won't be able to decrypt Bob's messages or anyone else's messages to Alice. This really eliminated the problem of a secret key being compromised by the sender of the message. But the other thing that's really significant about this is that you could use the keys in reverse. This one's a little confusing to begin with. If you encipher a message with your private key, anyone that has your public key could decipher it.
If you're trying to keep a secret, that's completely counterproductive; it doesn't make any sense. But what is useful is that no one else but the holder of the private key could have created that cipher, and no key except for the corresponding public key could decipher it. So if you can keep your private key private, the cipher is effectively unforgeable. Suddenly cryptography wasn't just for keeping secrets. Now we could use ciphers as a form of digital seal, or a signature. And authenticity based on this kind of cryptography is probably one of the most important applications in the field at this point. But primitives like encryption and signatures aren't very useful on their own. That's why we talked about security goals and threat modeling first. Ciphers and hash functions are all well and good, but as Adi Shamir pointed out, cryptography isn't usually broken, it's circumvented. A simple example would be encrypting your data but failing to keep your keys secret. That sounds obvious, but it's harder in practice than it is to say, and opportunities for circumvention usually aren't so obvious. That's where cryptographic protocols come in. These protocols specify communication between participants and incorporate various countermeasures to accomplish some task while satisfying a set of security goals. Here's an example diagram; this is from OpenID Connect, and it shows a little protocol. A protocol is essentially a multi-party algorithm. This is similar in a lot of ways to any other algorithm, but there's a very important distinction, and it's this: you can perform a series of steps on your own, or you can program a computer to do it for you, but that is not a protocol. To be a protocol, there has to be more than one party involved, and there's necessarily a change of control during the procedure. This is why designing protocols is hard, and it's why we hear so much about trust in applications of cryptography.
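Before going further, the public/private key mechanics described above can be sketched with "textbook RSA" using tiny primes. This is purely illustrative: the primes are absurdly small, and real systems use huge keys, padding, and vetted libraries rather than anything hand-rolled:

```python
# Toy "textbook RSA" with tiny primes, only to show the key mechanics.
p, q = 61, 53
n = p * q                 # 3233, the public modulus
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent (public key is (n, e))
d = pow(e, -1, phi)       # 2753, private exponent (modular inverse of e)

message = 65

# Encrypt with the public key; only the private key holder can decrypt.
ciphertext = pow(message, e, n)
print(pow(ciphertext, d, n) == message)  # True

# Use the keys in reverse for a signature: "encipher" with the private key,
# and anyone holding the public key can verify who created it.
signature = pow(message, d, n)
print(pow(signature, e, n) == message)   # True: verified with the public key
```

The two final lines are the whole point: the same key pair gives you secrecy in one direction and an unforgeable seal in the other.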
Now, compared to designing a system, as a protocol designer you're giving up some control to those other actors, and you can't really make people do anything. They probably have different interests than you, and some people are going to break the rules. Some people will play nice, but some people will always break the rules whenever there's an advantage for them in doing so. Because of that, protocol designers have to assume that everyone is potentially adversarial. This is true to such an extent that two respected cryptographers, Ross Anderson and Roger Needham, wrote a fantastic paper about it called "Programming Satan's Computer." It's kind of summed up in one sentence in the abstract: "In effect, our task is to program a computer which gives answers which are subtly and maliciously wrong at the most inconvenient possible moment." So a big part of this work is about understanding human behavior. Designers spend a lot of time reasoning about roles, risks, trust, incentives, and so on, and the primitives are just part of that. That's why you'll so often see protocol descriptions that include these little Alice and Bob stories and stick-figure drawings like we saw a few slides back. There's actually a conventional cast of characters used to discuss different kinds of protocols and scenarios. If you Google Alice and Bob, you'll find a fantastic Wikipedia entry with a lot of background on this. Here are just a few of them. There are the usual Alice and Bob and Charlie, the generic participants. Then there's Chuck; if you see Chuck, Chuck is usually malicious. Craig shows up as a password cracker. Eve is an eavesdropper, but Eve is passive: she doesn't modify messages or actively subvert the protocol or deny service or anything like that, she's just trying to listen in. Faythe is a trusted advisor and sometimes an intermediary.
There are a few funny ones. Grace is a government representative who's probably trying to force back doors into protocols or weaken standards. She said she was here to help us. Yeah, exactly. Mallory is like Eve, but Mallory is malicious: Mallory is active and will do things to mess with you, where Eve is just trying to listen in. You'll hear about Olivia in some systems; you hear about oracles in blockchain, and Olivia is the name for an oracle in these scenarios. Olivia is a source of information from outside the system to inside the system. It's possible to have an opponent who is not necessarily malicious but doesn't have the same interests as you, and that would be Oscar. There are some others, like Trudy the intruder or Wendy the whistleblower, but there are a couple I've highlighted here for a reason: I wanted to point out that there are some different categories of protocols you could think of. There's a judge and an arbiter there. You can have arbitrated protocols, you can have adjudicated protocols, and you can have self-enforcing protocols. Bruce Schneier has written about that, and I've seen it pop up somewhere else as well. It should be particularly interesting for lawyers. So, let's look at some example protocols. There are communication protocols and authentication protocols. You'll hear about things like TLS, PGP, Off-the-Record Messaging, and Signal; MTProto is the protocol behind Telegram. These are communication protocols that are intended to be secure, and they all use cryptography to do it. Then there are authentication protocols: Kerberos and SAML, plus a few older ones I haven't listed here, quite a few newer ones, and things I've forgotten or left out. Authentication protocols all use signatures, and some use encryption for various purposes.
So we have these protocols. That's great, but let's talk about formal analysis. This is the real rocket science of applied cryptography. It cannot be overstated that designing protocols is very hard to do correctly. You remember that paper we looked at, Programming Satan's Computer. There's a much more recent piece by Catherine Meadows about specifying requirements for protocols such that they can be formally analyzed, and it's called Ordering from Satan's Menu. So even trying to express requirements correctly is really hard, let alone knowing they're the right ones or actually solving for them. It's virtually certain that whatever we design is going to be fatally flawed. So the next logical questions are: how can we know if a protocol is flawed? How can we find the flaws? How can we eliminate them? Can we prove that we've eliminated them? It turns out there's quite a long history in the field of trying to answer those kinds of questions. And you should really pay attention here, even though it's unlikely you'll ever do this kind of work yourself, because a lot of practicing engineers out in the field building stuff, especially blockchain folks, don't even know that this part of the discipline exists. A good way to evaluate them is to see how familiar they are with some of these ideas. So one approach is to start with a catalog of known attacks and try them all out. But that's not very efficient; it's an expensive way to confirm the obvious, and it doesn't help you find the things you don't know about yet. So there's got to be some way to reason about correctness more effectively. And there is: it's called formal analysis, and formal analysis is the subject that deals with reasoning about protocols to prove certain aspects of them. Researchers have been working on ways to do this going back to the early 80s, if not earlier.
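One way to see what "reasoning systematically" can mean in practice is a toy symbolic model: treat messages as symbols rather than bits, and compute everything an attacker could derive from what they've observed, by splitting concatenations apart and decrypting anything whose key they already hold. This little Python sketch is my own illustration of that closure idea, not any published tool; the message encoding and the example values are invented.

```python
# Symbolic messages: atoms are strings; ("enc", m, k) is m encrypted
# under key k; ("pair", a, b) is the concatenation of a and b.

def close(knowledge):
    """Closure of the attacker's knowledge under two derivation rules:
    split pairs into their parts, and decrypt with any known key."""
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for m in list(known):
            derived = []
            if isinstance(m, tuple) and m[0] == "pair":
                derived = [m[1], m[2]]           # split a concatenation
            elif isinstance(m, tuple) and m[0] == "enc" and m[2] in known:
                derived = [m[1]]                 # decrypt with a known key
            for d in derived:
                if d not in known:
                    known.add(d)
                    changed = True
    return known

# Eve observed a layered ciphertext, and the outer key k1 has leaked:
observed = {("enc", ("pair", "secret", "k2"), "k1"), "k1"}
derivable = close(observed)
assert "secret" in derivable   # k1 opens the outer layer, splitting exposes it
assert "k2" in derivable       # the inner key falls out too
```

Real tools do something far more sophisticated, exploring how an active attacker can also intercept, forge, and re-inject messages into protocol runs, but this fixed-point computation over symbolic knowledge is the basic flavor.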
One of the first papers on this, at least that I'm aware of, was written in 1983 by Danny Dolev and Andrew Yao. It's called On the Security of Public Key Protocols. It's an extremely dense paper; you probably won't make it through it, and I don't think I did. Pretty much the whole paper, top to bottom, looks like that example. But the authors developed two different symbolic models and an algorithm for checking protocol security. And you don't necessarily have to understand the paper to be able to use the ideas, because a lot of tools have been created for putting this stuff to work. Here's a list of some that are based on what's called the Dolev-Yao model, because the ideas come from that paper. You'll probably never have a reason to look at these, but it's good to know that this discipline exists. Another important contribution came from Michael Burrows, Martin Abadi and Roger Needham. They wrote a piece called A Logic of Authentication in 1989. In this paper, they offer a notation and a set of rules for reasoning about beliefs and behaviors in the context of authentication protocols. Their method is sometimes known as BAN logic, B-A-N for the authors' last initials put together, kind of like RSA. It's a so-called logic of belief. Let's take a look at some of it. BAN logic involves very simple statements that look like this: replace P with one of our cast of characters, like Alice or Bob, and replace X with some message or data that's important in your system. Alice believes X. P said X. Then you put these statements together and start to reason about them, and there are rules for doing that. They look like this: you can combine statements, like Alice believes Bob controls X, and so on. There's quite a collection of rules for reasoning about them, and learning how to do this takes some time.
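As a rough illustration of how these rules combine statements, here's a toy Python version of one BAN-style rule, the jurisdiction rule: if P believes that Q controls X (has authority over it), and P believes that Q believes X, then P believes X. The tuple encoding and the "Server"/"key_is_fresh" example are invented for this sketch; real BAN derivations use several more rules (message meaning, nonce verification) plus an idealization step that this doesn't attempt.

```python
# BAN-style statements as nested tuples; a toy derivation step, not real BAN.
# Forms used here: ("believes", P, S), ("controls", P, X), where S may itself
# be a ("controls", ...) or ("believes", ...) statement.

def jurisdiction_rule(beliefs):
    """Apply the jurisdiction rule once:
    P believes (Q controls X)  and  P believes (Q believes X)
    together yield  P believes X."""
    derived = set(beliefs)
    for b in beliefs:
        if b[0] == "believes" and isinstance(b[2], tuple) and b[2][0] == "controls":
            p, (_, q, x) = b[1], b[2]
            if ("believes", p, ("believes", q, x)) in beliefs:
                derived.add(("believes", p, x))
    return derived

premises = {
    # Alice trusts the server's authority over key freshness...
    ("believes", "Alice", ("controls", "Server", "key_is_fresh")),
    # ...and believes the server itself believes the key is fresh.
    ("believes", "Alice", ("believes", "Server", "key_is_fresh")),
}
conclusions = jurisdiction_rule(premises)
assert ("believes", "Alice", "key_is_fresh") in conclusions
```

The point is that each rule is a small, mechanical inference, which is exactly what makes this style of reasoning checkable by a machine rather than only by careful humans.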
Very few people even know this exists, let alone practice it. And it's important to know that BAN logic isn't perfect. There are a lot of valid criticisms of it, but it has successfully uncovered flaws in protocols like X.509. The thing to take away here is not that you can prove you're secure; it's that you have some systematic ways of finding flaws in your designs. There's a choice quote in this paper that I love. It couldn't be more relevant if it was written today: "Protocol designers often misunderstand the available techniques, copying from existing protocols inappropriately. As a result, many of the protocols found in the literature contain redundancies or security flaws." I think in academic circles this is getting better; people are a little more aware of the trouble they can get into, and there's a growing awareness in practice about the difficulties of protocol design. But out there in the wild, with some of the blockchain stuff, it's really scary. People are misusing techniques they don't fully understand, and they're not going through the effort to find the problems. So there are a lot of time bombs out there. There's another approach to formal analysis that's been getting some traction lately. It's from 1998, but you see it quite a lot, and it's called strand spaces. There are a lot of other approaches like this. Most methods like this assume that the primitives are valid: you're not trying to break the cipher, crack the signature, or crack the hash. You assume those things hold, and then systematically explore how attackers interact with the protocol. And that, I think, is about what we have time for today. Wow. Well, that was really a tour de force. I am impressed and grateful you were able to fit so much essential knowledge into such a short amount of time, and you covered it at a level of abstraction that made it understandable.
But you also referenced the sources where people can learn more. I want to just double-check one thing: could I get a copy of or a link to your slides, so I can put them on a session page for you? And do you happen to have links handy to some of the references you mentioned, so I can make those available? Absolutely, I'll make that available to you. Okay, thanks. And I guess by way of wrap-up, there's so much in there. Really, that was like years' worth of grist for the discussion and education mill. But two things popped out at me that I'd like students to be thinking about. Number one, that gorgeous cast of characters, Alice and Bob and the others, which I know has intrigued us both over the years. In addition to being a way to express information security protocols and processes, it's at least adjacent to the way we express legal use cases as so-called legal fact patterns. The more we can align our legal fact patterns and legal analysis with this technology way of expressing situations and scenarios, the better. So I encourage everyone to take a look at that; we'll be sure to put a link to that Wikipedia page in the notes. The other one was formal analysis, which really is the culmination of so much of the thinking and work behind getting trustworthy computation together. I'd like to challenge people to start to imagine how a future could work where legal analysis was capable of formal verification, or where the processes by which legal instruments are designed and deployed could be formally analyzed, at least as a thought experiment. So with that, I want to thank you so much, Christian. I know you had a frog in your throat today, but like a trooper, you were willing to go through and do the talk.
So I wish you buckets of tea and honey, and I just want to express the gratitude of everybody on the MIT computational law team for you coming through, providing a lecture, and making yourself available for the class discussion next week. Thank you. Thanks a lot. And we're almost off. It seems to be hung; I think it's probably still broadcasting. I'm going to just terminate the session and give you a quick call, because I think it's still broadcasting. All right, thanks.