Yes, okay. Excellent. Okay. Thank you for coming to the last session of the day for CFail 2020. We're going to have an invited talk here and then some time for Q&A afterwards. So please put some questions in the IACR chat as you think of them, and we'll monitor that to pose to the speaker at the end. And so I'll pass it over to our program chair, Nikki, to introduce the talk. Thank you. I'll make my introduction very short so that you have as much time as you would like to give the presentation. I just want to say, Steve, that we're extremely excited to have you here. Not many people have accomplished as much and have been in the field as long as you have. Also, even among those who have, there's only a small fraction who are actually willing to talk about some of the things that didn't work and that failed. I'm really, really excited and looking forward to what you're going to tell us about IPsec, and I think I can speak in the name of all of the participants here that we are very, very much looking forward to your talk. The floor is yours. Thank you. Let me share the screen here, get my slides up. I saw the comment in the chat room about me being one of the people who invented Usenet. If there's time at the end, I will talk a little bit about the non-history, the almost-history, of Usenet and cryptography. There's interesting stuff there, and it actually is in a blog post I'll point to later. Anyway, so what is IPsec? How did it evolve? Why is it the way it is? The origins, the technical constraints, and the organizational, political, and other non-technical issues that contributed to the way it came out, good and bad. One of the lessons is that non-technical issues really matter. This was cryptography meets the real world, in many different dimensions. So IPsec was encryption at the IP packet layer, an IETF effort. The goal was to protect all packets without changing applications. And it had to conform to the IP service model.
That is to say, it was going to be stateless; every packet had to stand by itself. And of course the IP service model assumes that packets can be dropped, duplicated, damaged. Correctness is end to end and handled by TCP, the transport layer. But IPsec is at the network layer, the IP layer, and this lets it protect all packets, even those from naive applications. We envisioned three different scenarios: end system to end system; end system to gateway, typically a road warrior back to the corporate firewall; or gateway to gateway, firewall to firewall, for things like branch offices. The generic structure, as you know: the IP header at the bottom for routing through the internet, then an encryption header, then, for example, TCP and user data. Or, if you're going to encrypt end-to-gateway or gateway-to-gateway, inside the encryption header you have another IP header. The outer IP header is for routing on the open internet; the inner IP header is for, say, routing within the corporate network. And the history of it actually goes back to the mid to late 80s. The Defense Department had a project called SDNS, Secure Data Network System, and they devised something called SP3, the Security Protocol for layer 3, the network layer. In the 80s, partly because it was a government project, they believed the world was going to convert to the OSI protocols rather than IP, which they called the DoD internet protocols. And so the spec is written using OSI terminology, PDU for protocol data unit, NSAP for network service access point, and military terminology like the red net, the inside unencrypted net, and the black net, the outside untrusted network. Interestingly, they made confidentiality and integrity both optional services, a decision that played a very important part in the later history of IPsec. And in keeping with the style of the OSI protocols, it used variable-length fields.
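As a rough sketch of the two layouts just described (placeholder byte strings standing in for real header formats; the function names are mine, purely illustrative):

```python
# Illustrative only: each "header" is a placeholder byte string, not a
# real IP or ESP header layout.

def transport_mode(ip_hdr: bytes, esp_hdr: bytes, tcp_segment: bytes) -> bytes:
    # End system to end system: the single IP header stays outside the
    # encryption header; only the transport payload is protected.
    return ip_hdr + esp_hdr + tcp_segment

def tunnel_mode(outer_ip: bytes, esp_hdr: bytes,
                inner_ip: bytes, tcp_segment: bytes) -> bytes:
    # End-to-gateway or gateway-to-gateway: a complete inner IP packet
    # rides inside the encryption header.  The outer header routes the
    # packet across the open internet; the inner header routes it
    # within the private network after decryption.
    return outer_ip + esp_hdr + inner_ip + tcp_segment
```

The only structural difference is the extra inner IP header, which is what lets a gateway decrypt the packet and then re-route it internally.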
So SP3 had a clear header and a so-called protected header, which could provide integrity and/or confidentiality. The clear header was just a length, a type field, and a key ID. The protected header also had a length, flags, and padding, especially to deal with block size and alignment, and then the IP header and payload and such. The protected header was always integrity protected, which is necessary for access control in the DoD world, where you have your labels for Top Secret, Secret, and so on. You want to do integrity protection to make sure that something that's supposed to be top secret will not be routed to an unclassified network. There was an integrity-only mode for the entire packet, possibly for export control. There was a confidentiality-only mode, and one of the reasons for this is that when this was designed in the 80s, cryptographic processing was expensive. So if you were dealing with high-speed bulk data that can tolerate occasional bit errors, video in OFB mode, for example, you didn't care too much about every last bit of the packet; you eliminate the integrity check and save some of the expense. And of course, because they expected to be using things like NSA Type 1 algorithms, the SP3 spec gave no details on the algorithms to use. One of the interesting aspects is that all of the cryptographic details, the algorithm, the block size, the size of the IV, the length of the integrity check value, are identified by the key ID. The key ID also indexed the permissible source and destination addresses, which were used for access control. And you used a separate process for key negotiation; this is just the over-the-wire packet format. A flag field indicated the direction of the packet, to prevent reflection attacks while using the same key ID in both directions. Key management was separate, partly to separate policy from mechanism, partly because it's a slower, much more complex process, and it's done much less often.
So, thinking in these terms, you put the per-packet encryption in the kernel, though in those days NSA would only do crypto in hardware. Key management at user level can have complex policies, checking CRLs and so on. You can negotiate multiple keys for different directions, integrity versus confidentiality, forward secrecy if you needed it, and so on. That can be arbitrarily complex without affecting the over-the-wire encryption. Policies could be encryption and/or integrity protection. How do you select the encryption? By destination IP address, by network address, by host and port names. Crucially: what should have been encrypted when you receive a packet? This prevents someone from spoofing a packet coming at you. You can say, a packet coming from this source address should have been encrypted; if it is not, I will drop the packet. And this is checked at the decryption point, which could be a gateway. That, you know, was inside DoD, and a few years later John Ioannidis and Matt Blaze devised something called swIPe, very much along the same lines. It was a simplification of SP3 that eliminated most of the options. And they said, this is the internet; we're going with IP only, no OSI support. They made one very crucial change that was not in the SP3 spec: they added a sequence number. Why? The paper says, quote, to protect against replay. No further explanation. In fact, when I quizzed Matt on this a few years later, he didn't have any better explanation than that; he had a gut feeling that it was needed. And this struck me as very curious, because the IP service model, as I said, permits packet duplication. So why do you need to prevent replay when the underlying service might already permit replay? So, one of the things the IETF does, for better or worse, is when they take a design from the outside, like swIPe and many other protocols over the years, they assert the right to change it.
If you're going to give it to the IETF, the IETF has the right to change it. So the IETF wanted a standard for packet-level encryption at the network layer, the IP layer. And so IPsec was a descendant of SP3 and swIPe, and the people who designed IPsec were very familiar with both of these protocols. In fact, John and Matt were both part of the IPsec process, especially early on. Why the changes? An internet standard does need to be more general, and to have more options, than swIPe. For example, there was a desire to support multicast IP, back when we thought that was going to be very important, and mobile IP, which is used somewhat these days in IPv6 and in cellular telephony, but not very much on the open internet. Again, we didn't know this in '94 or so. We wanted to make network-layer encryption ubiquitous, or at least be able to protect all traffic, maybe not actually protect it all, because computers were too slow then to encrypt everything. Remember, the only really strong cipher generally available and widely accepted in 1994 was DES, and DES is horribly slow in software. And people were still doing address-based authentication: instead of SSH you had RSH, which used address-based authentication. And with IP addresses having become dynamic, we knew that was not going to work; we wanted cryptographic authentication. We decided to generalize, which turned out to be too much: encrypt based not just on IP addresses, but host names, port numbers, which are a TCP concept, and even usernames. And so you could have two sessions between one pair of hosts, each encrypted with a separate key, and multiple granularities of encryption: per network pair, per host pair, per user, per connection, and so on, to really spread out the traffic among multiple keys. But one of the problems we ran into was that the US still had export controls on cryptography, which did not restrict authentication technology, but did restrict confidentiality technology.
The state of the cryptographic art was considerably more limited then than it is today. And the designers of IPsec, at least the ones who could talk openly and freely (there were some people with too many NSA connections), had somewhat limited cryptographic knowledge; we did not know as much as we thought we knew. So the first version, RFCs 1825 through 1829, had two different cryptographic headers: a confidentiality header, ESP, the Encapsulating Security Payload, and the Authentication Header, AH. And you could use them separately. There was explicit transport versus tunnel mode, for end-to-end versus end-to-gateway, and a separate key management and policy protocol, though that was never defined. The SPI, the Security Parameters Index, was very much like the key ID in SP3. And there was no sequence number. Why was there no sequence number? Because I said there shouldn't be one: absolutely a bad idea, an unnecessary field. Remember, the IP service model allows duplication. Matt Blaze said I was wrong, but he couldn't give a reason, and I said we don't want these extra four bytes in the packet. So, the packet layouts; you can look over here. I'm not able to draw on the screen here, but you can see on the left IP with the confidentiality payload, or just with AH for authentication only. If you look all the way on the right, you can see the Authentication Header covering the ciphertext, the encrypted ESP payload, and a fair number more combinations are possible. The SPI was defined to be a random number. Why random? Someone said this will help prevent traffic analysis. Rather than using a flag to indicate direction, we had a separate SPI for each direction, partly to avoid traffic analysis, so you couldn't easily link traffic in opposite directions, but also because you had multicast, which would be a unidirectional channel.
So we said separate SPIs in each direction, but the SPI implicitly said what the source address was supposed to be. For confidentiality, as you see here, you had the SPI, the initialization vector for your block cipher, your data, padding at the end, the length of the padding, and the next protocol: say it's IP coming next, or TCP coming next, or what have you. The padding was partially to deal with the block cipher's block length, but it could be increased up to a maximum of 255 bytes, again as an attempt to evade traffic analysis. The Authentication Header was pretty similar, but it was defined to cover the IP header that came before it, not just stuff that came after it. This was a layering violation and made implementation very messy. And by the way, some parts of the IP header change en route; you can't just protect all of the IP header, you need to know a fair amount of the semantics, including of the optional parts. But the advantage of using a separate header, rather than just a policy, is that it's export friendly: you can say, my system is not doing encryption, it's only using the Authentication Header. And a firewall could inspect packets going to and fro that are protected with AH, without needing to know the key that it would need to inspect an encrypted packet. There were reasons; I don't know that they were good enough reasons, but there were reasons for it. The sequence numbers were controversial. There was a desire for RC4 for encryption; again, we're talking mid 90s, and RC4 is so much faster. But RC4, of course, is a stream cipher, which you really don't want to use with manual key management. If you had an automated key management protocol, it could have worked, but we hadn't defined a key management protocol at that point. And a lot of people said they wanted manual keying for simplicity of implementation.
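That padding arithmetic can be sketched as follows; the function name and the extra_blocks knob are my own illustration, not anything from the RFCs:

```python
def esp_pad_len(payload_len: int, block_size: int, extra_blocks: int = 0) -> int:
    # ESP-style trailer: padding bytes, then a one-byte pad-length field
    # and a one-byte next-protocol field, with the total filling whole
    # cipher blocks.  extra_blocks adds whole blocks of dummy padding,
    # the traffic-analysis countermeasure, limited by the one-byte
    # pad-length field (255 bytes maximum).
    pad = (-(payload_len + 2)) % block_size + extra_blocks * block_size
    if pad > 255:
        raise ValueError("pad length must fit in one byte")
    return pad
```

For a 100-byte payload and an 8-byte block cipher like DES, two bytes of padding suffice; each extra block pads the apparent length further, up to the 255-byte limit.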
So you had the desire for fast encryption, but you also had a design that wouldn't work well with such fast encryption; we had this conflict there. And then we started moving on to define key management. The framework that was eventually adopted was called ISAKMP, the Internet Security Association and Key Management Protocol. The framework came from the NSA, and that generated an amazing amount of paranoia about why the NSA was pushing an open key management framework on the community. What was wrong with it? What was sabotaged about it? There was no crypto in ISAKMP; the IETF defined the key management protocol, IKE, the Internet Key Exchange. The key exchange, roughly speaking, was a signed Diffie-Hellman exchange, with the option of a fresh Diffie-Hellman exchange for forward secrecy. There was an alternative proposed, much simpler than ISAKMP, called Photuris; I'll come back to that. ISAKMP and IKE are horribly, horribly complex. It includes session management as well as key negotiation, multiple phases, multiple authentication schemes; I'm not going to go into it today. The original ISAKMP, in my opinion, was a disaster, and had serious functionality bugs as well as a serious design mistake which made it not very usable in one important scenario. But there was another choice on the table besides ESP and AH, and that was a protocol called SKIP, which came out of Sun Microsystems. Their reasoning was this: IP, the Internet Protocol, is a stateless datagram protocol. If you're going to use something with state, like doing key negotiation and setup, then your gateways have to have state; it's no longer a pure datagram protocol. That, in fact, is why ISAKMP does session management: when do you delete these keys? So Sun Microsystems suggested SKIP, completely stateless. In the packet format, and I'm not going to try to go into detail, every host was going to have a certificate for a Diffie-Hellman exponential. Host A would have a Diffie-Hellman exponential; host B would have a Diffie-Hellman exponential.
When you have these two half-exponentials, you calculate the Diffie-Hellman exchange, and that lets you get a key without negotiation: you just send the packet, and you use this Diffie-Hellman-derived key to encrypt the actual traffic key. That would not give you forward secrecy, for example. So, a fairly complex packet format, with many of the fields being optional: integrity, encryption, a sequence number, even compression were all optional and controlled by a flags field, and the varying offsets, depending on your options and algorithms, made parsing difficult. All of these identifiers were sent in the clear, which concerned some people worried about traffic analysis, and your policy was less flexible. And the really serious problems: you were not negotiating algorithms. You needed universal agreement on what algorithms were going to be available, and universal, permanent agreement on the Diffie-Hellman parameters. Ultimately, that last was the killer issue. But here's where organizational politics stepped in. First of all, there was quite a split about Photuris versus ISAKMP and IKE. A lot of people preferred Photuris because it was much simpler. Let's just say personality conflicts ruled out Photuris; that left ISAKMP as the only choice if you wanted to use ESP and AH. There was a bitter split and no consensus about ESP and AH versus SKIP. And Sun kept adding features to SKIP to add more functionality, which added more complexity, such as adding optional forward secrecy. Ultimately the working group deadlocked, and the security area director at the time, Jeff Schiller, had to make the call himself. The inability to easily change the Diffie-Hellman parameters was a showstopper for SKIP. And Sun Microsystems itself had recently had a security problem with bad Diffie-Hellman parameters: they did not think that a 256-bit modulus was insecure. And of course it was; Brian LaMacchia had cracked it, rather to their surprise.
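SKIP's stateless key derivation, and why permanently fixed parameters were so dangerous, can be sketched like this (a toy modulus and invented names; real SKIP of course specified its own formats and parameters):

```python
import hashlib
import secrets

# P and G are fixed for every host, forever: that universal, permanent
# agreement on the Diffie-Hellman parameters was SKIP's killer issue.
# This toy 127-bit prime is hopelessly small for real use, much as
# Sun's 256-bit modulus turned out to be.
P = 2**127 - 1
G = 3

def keypair():
    x = secrets.randbelow(P - 2) + 1   # long-term private exponent
    return x, pow(G, x, P)             # certified public half-exponential

def pairwise_key(my_priv: int, their_pub: int) -> bytes:
    # Each side computes g^(ab) mod P from the other's certified
    # half-exponential: a shared key with no negotiation and no state,
    # then used to encrypt the actual per-packet traffic key.
    shared = pow(their_pub, my_priv, P)
    return hashlib.sha256(shared.to_bytes(16, "big")).digest()
```

Both sides derive the same key from certificates alone, which is exactly what made SKIP stateless, and exactly why its parameters could never be upgraded.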
And if you look back and realize what happened, say, with Logjam, when people did not increase the size of their Diffie-Hellman moduli when they could have, you see it was a very wise decision to avoid SKIP for that reason. So the final outcome: ESP and AH won over SKIP, sequence numbers were deleted from the standard, and there were no design concessions for the export rules. We decided we're going to do it right technically and fight the government at the political layer. And integrity-only was still a rational alternative, because of the expense of DES and the need for firewalls to inspect the packets, independent of the political issues. And ISAKMP and IKE was the only key management protocol left available. And so that went out. And that's when we realized there were problems. The integrity algorithm: the original scheme said, tack your key onto one end of the packet and do a hash over that, and that will be your integrity check. HMAC had not yet been invented; when it was, it was at least a drop-in replacement. The lack of sequence numbers turned out to be a mistake. The lack of mandatory integrity checks was a mistake. The suggested IV selection method, a simple counter, was a mistake. ISAKMP was a mistake. We just didn't have quite enough expertise. And we had a little bit of luck. A rumor had gone around that the NSA could break CBC encryption. It didn't seem right to me, but I decided to investigate, and that was a good idea. Well, we all know the properties of CBC in this audience; I'm not going to go over them, just how it was possible to abuse them here. I made certain assumptions that were very reasonable in 1996, 1997. You had a host-pair key, a single key between each pair of hosts. Remember that you still had time-sharing machines, not just personal computers, so it was a reasonable assumption that you had multiple people logging into a computer.
So you're doing encryption only, with no Authentication Header, and maybe the attacker has got a login on one or both of the machines, and of course the usual assumption about access to the network. So what can happen? I send an encrypted packet protected by ESP: confidentiality, no integrity. So you have the IP header and the ESP header and TCP and the data. The attacker, with a login on the same pair of machines, sends a UDP packet, a User Datagram Protocol packet, again encrypted with the same key. And what do they do? Glue together the IP and ESP header from their own packet with the payload of my packet, and IPsec on the receiving machine is going to do the decryption for the attacker. It was a simple cut-and-paste attack, made possible by the properties of TCP and UDP and, of course, CBC mode. You can hijack a session the same sort of way: I could take your TCP header, stick in a nasty command like rm -rf /, and glue it all together. I'd have to fix up the TCP checksum, but TCP only has a 16-bit checksum; it wouldn't be hard to do it by brute force. So I can take over your session and inject shell commands if I want. There are many more attacks like these: generate full-size packets, guess at passwords without a login on either machine, many, many more. What went wrong? We really needed sequence numbers, but benign accidental packet duplication is not the same as malicious retransmission. We really needed integrity checking; CBC's easy cut-and-paste properties make it crucial. And that rumor from the NSA? Probably it was about the IV algorithm: predictable initialization vectors are a serious weakness. When you look back at the mailing list archives, there were various people, people who attended Crypto rather than Usenix Security, and so on.
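The cut-and-paste attack described here works against any CBC-mode cipher under a shared host-pair key. The sketch below uses a toy hash-based Feistel cipher, purely so the demo is self-contained and not real crypto; the splice, not the cipher, is the point:

```python
import hashlib

BS = 16  # toy block size

def _f(key: bytes, rnd: int, half: bytes) -> bytes:
    # Round function for the toy Feistel cipher.
    return hashlib.sha256(key + bytes([rnd]) + half).digest()[:8]

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def enc_block(key: bytes, blk: bytes) -> bytes:
    # 4-round Feistel network: a stand-in block cipher for the demo.
    l, r = blk[:8], blk[8:]
    for i in range(4):
        l, r = r, _xor(l, _f(key, i, r))
    return l + r

def dec_block(key: bytes, blk: bytes) -> bytes:
    l, r = blk[:8], blk[8:]
    for i in reversed(range(4)):
        l, r = _xor(r, _f(key, i, l)), l
    return l + r

def cbc_encrypt(key: bytes, iv: bytes, pt: bytes) -> bytes:
    out, prev = [], iv
    for i in range(0, len(pt), BS):
        prev = enc_block(key, _xor(pt[i:i+BS], prev))
        out.append(prev)
    return b"".join(out)

def cbc_decrypt(key: bytes, iv: bytes, ct: bytes) -> bytes:
    out, prev = [], iv
    for i in range(0, len(ct), BS):
        blk = ct[i:i+BS]
        out.append(_xor(dec_block(key, blk), prev))
        prev = blk
    return b"".join(out)
```

Splice the attacker's own first ciphertext block, their IP/ESP/UDP header, onto the rest of the victim's ciphertext: only the block at the splice point decrypts to garbage; everything after it comes out as the victim's plaintext, delivered to the attacker. (For simplicity both packets here share one IV; in ESP, each packet's IV travels in the clear, so the attacker would just reuse their own.)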
But more to the point, they weren't as engaged with the working group: they contributed to the mailing list, but they weren't there in person, and in-person, heavy involvement in the standards process really mattered. There were people in the IPsec working group who knew all of this, but they had too much involvement with the NSA. When we finally decided to add the mandatory integrity check, the mandatory sequence numbers, and so on, Steve Kent, one of those people, said to me, congratulations, you finally have all the right fields in there. He knew it all along, of course. So we redid IPsec, and after I had led the fight to take out sequence numbers, I led the fight to put them back in, which is what makes this so perfect for CFail. Integrity checking can still be turned off, again for high-speed bulk transmissions. That made me less than happy, but at least we had the sequence number field. So, the new packet formats: again, you always had a sequence number now, and we used HMAC instead of something awful. AH really isn't needed: you don't need to protect the IP addresses, they're bound to the SPI, and the interesting fields in the IP header can't be protected because they change en route. If you wanted authentication only, you could use the null cipher option with ESP. AH is deprecated (people now understand the need for encryption, and of course this stuff is a lot faster today), or obsolescent at least. And if you look at this, the IP header, the TCP header, and the ESP header all have sequence numbers. Are they redundant? The answer is no. First of all, they serve different purposes, but from a security architecture perspective, the ESP sequence numbers are within the cryptographic module's trust boundary; TCP's aren't. And if you're trying to isolate your cryptography in a more secure module, possibly outboard hardware, you want to put the sequence numbers relied on by the cryptography inside the cryptographic module's trust boundary.
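The anti-replay check those in-boundary sequence numbers enable is a sliding bitmask window; this sketch follows the shape of the algorithm later standardized for IPsec (the class and method names are mine):

```python
class ReplayWindow:
    # Sliding-window anti-replay check, kept inside the cryptographic
    # module's trust boundary and consulted only after the packet's
    # integrity check has passed.
    def __init__(self, size: int = 32):
        self.size = size
        self.top = 0    # highest sequence number accepted so far
        self.mask = 0   # bit i set means (top - i) was already seen

    def accept(self, seq: int) -> bool:
        if seq == 0:
            return False                      # sequence numbers start at 1
        if seq > self.top:                    # new highest: slide the window
            self.mask = ((self.mask << (seq - self.top)) | 1) \
                        & ((1 << self.size) - 1)
            self.top = seq
            return True
        offset = self.top - seq
        if offset >= self.size:
            return False                      # too old: outside the window
        if self.mask & (1 << offset):
            return False                      # already seen: replay
        self.mask |= 1 << offset              # late but legitimate
        return True
```

Out-of-order delivery within the window is accepted once, while duplicates and stale packets are dropped: exactly the distinction between benign IP reordering and malicious retransmission.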
You don't want to trust stuff from the outside if you can help it. The IETF adopted a newer version of ISAKMP and IKE. I still thought it was too complex. Some of us, and this was a group of about seven, both systems people like myself and Matt Blaze and crypto theory people like Bill Aiello, proposed a replacement for IKE called JFK, Just Fast Keying. The working group adopted it, but at a lightly attended meeting, the next meeting, reverted back to IKE version 2. Was that right? Well, I still prefer the simplicity and the different layering, but it is what it is. Other changes to IPsec over the years: newer cryptographic algorithms and modes of operation, elliptic curves instead of RSA, AES, combined confidentiality-integrity cipher modes, which we wanted earlier but they didn't exist, and longer keys. We ran into another bug when the crypto community realized that MD5 and SHA-1 were broken and they needed to be replaced. We said, okay, fine; this started the SHA-3 process and so on. And when Eric Rescorla and I looked at IKE and a variety of other IETF protocols, we realized that they couldn't actually negotiate new hash algorithms, because they assumed you had the hash algorithm before you could negotiate whether or not it could be used. The IETF had to go rework the way a fair number of its protocols did negotiation of hash algorithms, to handle the case where there was a newer one known to some machines but not others. And the sequence number field was too small: in 1995, 32 bits seemed reasonable; we have much faster networks and much longer-lived connections today. I won't try to go over this slide in detail, but it compares the five different protocol versions and candidates; these slides will be up on my web page. So the lessons I would take away: real-world cryptographic protocols have to be engineered. It's not just the cryptographic mathematics that matters. People matter.
We didn't always have, or didn't always heed, the right expertise. The process matters. And the requirements will change over time: speeds increase, threats change, algorithms improve. So, IPsec is relatively stable now, but it's not clear to me that it will remain that way. So did we succeed? ESP is pretty clean, AH a bit less so. We realized late in the game that it was very hard for an application to tell if or how a connection was protected, especially since IPsec could be outboard on your Ethernet card, or even at your gateway. Maybe an API would have helped, but the IETF doesn't do APIs. ISAKMP and IKE were not successful because there were so very, very many options. It made configuration and interoperability extremely difficult. I understand the need for many of these things, but simplicity went by the wayside, and even experienced system administrators find it hard to configure. What happened was that a lot of this was overtaken by events: the ubiquity of the web and the spread of TLS made IPsec less interesting. If you wanted to encrypt a stream connection, at the application layer, TLS just worked, and it was easier to configure. Other technologies like NATs and firewalls got in the way of ubiquitous end-to-end IPsec. Username selectors were a bad idea at the wrong layer. We did not get the ubiquitous network-layer encryption that we wanted. We did get virtual private networks, and IPsec is still used for that purpose. So I will call it a mixed success: we partly succeeded, partly failed. Some of the failure was things we couldn't predict, like the rapid spread of TLS; some of it was errors in design, like the complexity. So I will stop sharing my screen now and go to questions. Thank you, Steve. So, as questions are slowly pouring in, perhaps you could comment a bit on Martin's remark; he found out you're one of the originators of Usenet, and you said you had some stories there. Yeah. So, and thank you, Kevin, for the suggestion that people read RFC 3514.
I think it's the most popular thing I've ever written. Okay, so Usenet was invented by myself, I was a grad student at UNC Chapel Hill, and Tom Truscott and the late Jim Ellis, who were grad students at Duke. And one of the things that we realized was that if this was going to be a distributed, decentralized network, we wanted some way to control it. We were designing this in late '79, early 1980. How do you control a distributed, decentralized network when there is no authority and no ability to control what's happening on someone else's machine? Well, we all knew about public key cryptography. We had all seen Martin Gardner's column in Scientific American, I think August '78. We'd all read the original RSA paper. We knew about public key crypto, and Seventh Edition Unix had just come out, which actually had a set of encrypted email tools. They never really were used, but the tools were there, so we even had some source code. And we thought about putting it in. We realized a number of things, which boiled down to: we didn't know how to engineer it, and we knew it. For example, certificates had been invented at MIT; we knew nothing of them. How do you know whose key you've got? Here's a public key; whose public key is it? How do you use that to delete a message that you don't want out there, for example? How long should keys be? We didn't know any of this. And so we said, okay, we're just going to ship this thing without the crypto. In fact, the original announcement in January '80 said: lots of potential for abuse, we don't know what's really going to go wrong, let's get some experience and then we'll fix it, find out what the real problems are. Let's think about this in 1980. Export controls on cryptography existed, but we didn't know about them, which meant that if this had been shipped, we might have had some long, unpleasant conversations with federal prosecutors.
The patents on public key cryptography and on RSA had not yet been issued. In fact, had we been a little bit less humble, and a little bit, but not a lot, more knowledgeable, we could have put all of this stuff into netnews, and it would have been distributed around the world. We were there before anybody really realized what was going on, before the patents came into force, and good luck trying to enforce those patents: by 1984 there were many thousands of nodes, including many outside the US, where the patents didn't apply anyway. So it's interesting to think about what might have been different. This was a discussion we had, but we ultimately decided not to do it, and I wonder what it would have meant. I don't miss the chance to have had those discussions with US attorneys and FBI agents; that would not have been an experience that I would have enjoyed, I suspect. So that was the story. You know, the odd thing is, with the primitives that exist today, like HMAC and so on (you didn't even have cryptographic hash functions in 1980), I think I could have done it without violating the export rules or any patents that had been issued. But I didn't know, and a lot of this stuff didn't even exist in 1980. But no, I don't have the temptation to go back and reinvent Usenet. If you're really interested, about a year ago I did a series of about 10 blog posts where I recounted the early design decisions and early history of it; go back and read that. But the crypto piece, I think, was the biggest missed chance. But again, a lot of the primitives didn't exist. And in an era before the web and widespread access, most people didn't even know about certificates; it was a bachelor's thesis at MIT. How was I to hear of that, especially since I was not doing crypto research? My research area was program correctness and formal verification. Other questions? Yeah, thanks for a really interesting answer.
I see that a question came in from Kevin, so I'll read it out loud. You said that the ubiquity of HTTPS made IPsec less relevant, which raises an interesting point. In the early days of HTTPS, there was a competitor called S-HTTP, which would have been more fun to pronounce. The difference, as I remember it, is that S-HTTP drove things down into HTTP, whereas SSL/TLS worked at the TCP layer. Sometimes it seems better to go to a more general layer, but not always. How do we differentiate? So, HTTPS relies on TLS; it's encryption of the entire connection, just above the TCP layer. It gives you an encrypted channel. S-HTTP was a way to encrypt part of an HTML document, and so it was more closely tied to the web. It would have been very interesting, because you could then digitally sign things; you could digitally sign your order, rather than just putting in your credit card number or what have you. I think the reason that it lost out was that Netscape, which marketed the original web server and web browser, was the energy behind SSL 2.0, and HTTPS was their solution, so they had the running code at both ends of the connection early on. So the S-HTTP proposal, the alternative, didn't catch on. I see Brian LaMacchia gave a pointer to Kohnfelder's bachelor's thesis. Thank you, Brian. Brian is the one who broke that Diffie-Hellman key exchange at Sun. I have a policy question for you. It's about this kind of parallel world that my day job is in now, of sort of technology in the stock market microstructure domain. It's an interesting problem that the policy and the technology are in conversation and conflict with each other all the time. But there's a unique challenge faced by the policymakers, let's say at the SEC, for example, who want to understand the technology but tend to view it kind of reactively: they want to understand the technology and make a policy that corrects it.
But of course the policy leads the technologists down certain paths in response to that. They don't seem to have a structure for anticipating or thinking through those things, and part of the problem is that many of the main technologists are adversarially aligned to the regulators, and spend their time circumventing the regulation more than sharing some of the same goals. Have you seen examples of issues with getting the right technology expertise into policy? And do you think there need to be solutions like regulators building up their own internal technology staffs, or do you see a different way to resolve those issues? I think it's hugely important to have people who speak both languages on both sides of that. I spend at least half my time these days on law and policy; I'm affiliate faculty at Columbia Law School as well as a CS professor. I spent a year at the Federal Trade Commission as chief technologist, advising the chair and the commissioners on technology issues, and spent another year, part time, with the Privacy and Civil Liberties Oversight Board in a similar role; Ed Felten is now actually a board member at PCLOB. It really is very important for people on the policy and regulatory side to really understand the technology and have their own understanding of it, not just what they're told by the people they're regulating. I think it's hugely important to get people who speak both languages. I've become the unofficial advisor at Columbia for CS majors who want to go to law school. I have two who are in law school right now, and I've got three recommendation letters to write in the next month. We need these people; we need them very badly. So, I have a question coming in here as a personal message on the chat. The question is: there are four different authentication mechanisms in IKE.
How come both public key encryption and a revised mode of public key encryption are in the protocol? So the question is, when the protocol is already complicated, why have two options? That's a good question. I should note that I have not really been involved in this in the last 15 years or more; since I became a faculty member I decoupled from the IETF, so I know very little about what's happened in the last 15 years. Different people had different needs. One of my pushes for simplification was to have only public key as an authentication mechanism, and if you need something like passwords, or one-time passwords, or a time-based password, have a separate outboard protocol that does nothing but securely retrieve your private key from some store, using variants of PAKEs, for example, to securely store your private key someplace. You want password authentication? Fine, log in to someplace that will take your password and issue a short-lived certificate. I prefer to have multiple simpler protocols rather than one complex one; I find them easier to analyze. But I did not win that fight. Kevin is continuing his question about the application versus the network layer. Kevin is asking: we are now using Zoom, with the little thing at the upper left that says it's using advanced encryption, but I have no idea whether IPsec would be appropriate for a UDP-based protocol like video that could happily drop frames. Has IPsec been overlooked where it could be used? That question is from Kevin McCurley. Yeah, okay. So, yes and no. IPsec would handle UDP very well; that was always one of the requirements. But this was a mistake we made early on: we wanted more of the application layer talking to IPsec. And that is difficult, because IPsec is at the IP layer; there's the TCP layer in the middle, and IPsec could be on your Ethernet card or your firewall and so on. And I regard that approach as a mistake. You still want it for VPNs.
We were trying to protect everything, but for protecting an individual application, you're better off with much more application knowledge of what has been done, and, I mean, an easier ability for the application to say what it wants done. We tried, and I think that we failed. The example that convinced me that it was bad to try to do application-granularity IPsec was this: suppose you wanted to protect the old rlogin remote login command, which, again, once upon a time used address-based authentication. You connect to the far end. Fine, I'm going to key my side with my key, Steve's key, but the far side is initially answering as root, because it doesn't yet know who it's going to log you in as. So on one end it's Steve's key, and on the other end it's root's key, which then later on wants to switch to Steve's key, especially if you're trying to talk at different security levels. On a multilevel secure system, you can use a top-secret algorithm, I can use a confidential algorithm, and so on. Trying to switch keys in that context got really, really messy. And that was what finally convinced me that it was a bad approach, and one that was rightfully abandoned. So Dan Bernstein has a question, or a remark, that seems to be related to that. He's asking: given how we're struggling to deploy TLS securely, do you think there's any hope for the original plan of encrypting the whole network layer? The problem with encrypting the whole network layer is: who is issuing the certificates to everybody? You know, it's messy enough with the web PKI; we've got how many hundreds of CAs, and certificate transparency is a band-aid on top of that, a very necessary one. I won't say a bad decision, but it's an unpleasant decision that you had to go add another mechanism as a band-aid on top of it.
If I want to connect from A to B and I control both ends, I don't need a CA, but if I want to connect to somebody someplace random, I do need a CA. And is that the web CA? It got complicated. You know, at this point it almost doesn't matter, because so much of the web, so much of the internet, is protected at the application layer. Virtually all web traffic is HTTPS at this point. Email is protected via TLS: SMTP over TLS and IMAP over TLS. Zoom is protected by its own encryption mechanism. There are other video standards with their own; Cisco's WebEx has got end-to-end encryption, and so on. So at this point, all of the protocols being devised have their own application-layer encryption. There are things that are not easily protected, like DNS, but DNS is awfully hard to protect as far as I'm concerned anyway, because you're talking to untrusted parties doing forwarding and caching and so on. You've got the DNS over HTTPS standard. I'm not a big fan of that, because I don't think it's protecting you from the people you really need protection from; it's not an end-to-end protocol. I would love to see more end-to-end email encryption, but no one's yet cracked the human-factors piece of that. Part of that, again, is key recovery, for when you drop your phone in a puddle and don't want to trust whomever with your keys. So it's an extraordinarily hard problem. I don't think we're going to get the ubiquitous cryptography we tried for back then, but I think we have so much of it at this point that we almost don't need it. And there's still the traffic analysis problem. Also related to that is an explosion in the number of passwords that you need to use and remember for all kinds of applications. Yeah. Do you think, looking back at the past few decades, are we going in the right direction, and how do you see things evolving?
Are we helping to make the world more secure, or is the complexity increasing to such a point that it's becoming very difficult to keep track of where problems might be? I think the problem is where it has been for decades, and that's the software. Software is this nasty, unpleasant, buggy stuff. The line I used to use was "bad software trumps good crypto"; now it's "bad software beats good crypto". I changed the verb for some reason. You don't go through strong crypto, you go around it. You know, the primitives that we're using today, AES, the post-quantum algorithms, look to be really, really good. To my knowledge, in 45 years, the only attack on DES stronger than brute force was linear cryptanalysis, and that was never very practical, because you didn't want to send that many blocks before a key change in CBC mode anyway. The only weakness of DES was the key length, and that was designed in by NSA. So yeah, but the software. Maybe we're going to get good secure software someday, but I'm not holding my breath. I mean, Windows (I'm a Mac user) in Windows 10 I think is considerably more secure, and last Tuesday Microsoft shipped 120 patches on Patch Tuesday. Software is this nasty stuff; we've got to get rid of it. The systems today are far bigger and far more robust than they were when I started in this field 50 or so years ago, but the complexity has grown at least as fast. So yes, things are a lot better, but there's still a lot of bad stuff out there. For authentication, I think we're going to move more and more toward the ubiquitous identifier and cryptographic token: our phones. We always have them with us. iOS and Android are pretty well designed, the phones are getting more and more secure, and there are outboard authenticators like the U2F FIDO keys and so on. I think we're solving that problem, but I'm worried about the application layer, the software.
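The time-based one-time passwords that came up earlier as an authentication option are the scheme behind most phone authenticator apps, and they are small enough to sketch: RFC 4226 HOTP (an HMAC over a counter, dynamically truncated to six digits) with RFC 6238 swapping in the current 30-second window as the counter. A minimal standard-library sketch, not any particular vendor's implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over the big-endian counter, then dynamic truncation."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # low nibble of last byte picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """RFC 6238: HOTP with the counter replaced by the current time window."""
    return hotp(secret, int(time.time()) // step, digits)

# First test vector from RFC 4226, Appendix D
assert hotp(b"12345678901234567890", 0) == "755224"
```

The server runs the same computation on the same seed, which is why hijacking a phone number gets an attacker nothing here, and also why the seed itself is the thing to protect; hardware-backed keys like U2F are the next step up on exactly that point.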
I think the question that Kevin has is a bit related to this. He says: Zoom uses a proprietary system, but WebRTC is a standard that allows different implementations. Are standards dying at the application level? There are some that aren't. The web standards are pretty well followed, because there are enough different web browsers and web servers out there to keep the vendors honest. You go back a couple of decades and you had some vendors trying to get proprietary extensions into the web that others couldn't emulate. There's not as much of that anymore. The places where we don't have interoperability are things like video conferencing: I've got about five different video conferencing apps on my computer, my phone, my tablet, because some people want Zoom, and some want WebEx, and some want Skype, and some want Signal, and some want GoToMeeting, and some want Google Meet, and these things don't interoperate, and that's the problem. It would be nice if they did. Oh, Stuart. No, you've got to be kidding. Smart contracts? The existence of buggy software is exactly why I don't like smart contracts. Sorry. What, I didn't mean to provoke you. I knew that would get a rise out of you; I couldn't resist. I have to hear you say that these will be the universal method of authentication, because of the side effects for privacy. Yeah, but it's a deployment issue, and the engineering is there: the phones today are very well engineered. The base security architecture is not perfect, the application layer is not perfect, but it's quite good. Apple's done a pretty good job of locking themselves out of the phone. They haven't managed to lock out Cellebrite and Grayshift, but they're pretty good. The point is that it's something that's ubiquitous, and people will notice its loss. While I was working on my last book, I visited the Metropolitan Museum of Art, and there's a sculpture there.
You can actually find it online on the Met's website, because they've made all their artwork freely available, most of it, I should say. The statue is called "Indian Girl, or the Dawn of Christianity". But when you walk in and see this thing, your first reaction is: here's this girl looking down at her cell phone. She's not even dressed yet. That was my immediate reaction to it, and when I did some Googling, I found I was not the only one who had that reaction. She's staring at something in her hand, and when you look closely you see it's a crucifix, but to a modern audience, especially to a modern non-Christian like myself, it was a phone. You have it with you almost all the time. You know very soon when you've forgotten it or lost it. It's got a lot of computing capability. It's got short-range radio to talk to your computer, and so on. It's not perfect, but it checks a lot of the boxes. Could there be a better solution? Sure. But when you talk about engineering and market forces, that's where my guess is right now. Sorry, what? Google Play Services runs on Android phones. I'm sorry, I didn't catch that. Do you have confidence in what Google Play Services does on Android phones, which are a majority of the phones in the world? I don't have any confidence in any software. I didn't say it was the best choice; I said I thought it was going to be the winning choice. There's a difference. It's getting better. I'm worried; historically Android was not as good, and it was held back by the tremendous variety of different underlying hardware platforms. Apple controls their hardware, so they can put in the Secure Enclave, etc. Google is dependent on Samsung and Huawei and everybody else to put in the right hardware features. I suspect it will often be good enough. I worry tremendously about software complexity and attack surface, but that's a different question.
So I still wanted to ask, Steve: are you worried that somebody might go to your cell phone provider to try to take over your number? Yes. But that's why you don't want to tie the authentication to something as easily stolen as a phone number. You know, I think that SMS two-factor authentication is far better than password-only, but it's not going to defend against even a semi-sophisticated targeted attack by someone who's going to steal your phone number. Then, for the next step up, I've got the Duo security authenticator on my phone; I need it because my university has mandated it, and SIM-swapping, stealing my phone number, is not going to steal the cryptographic keys that drive that. It would be better to have something like U2F on my phone, speaking via near-field communication to my computer, sure, because that's bidirectional authentication of the actual cryptographic session. But it's less convenient to use; it's one more thing to carry around. Maybe next year I'll have that feature; I don't right now. Okay, I guess on that note, it might be a good time to end the session. I would like to thank you, Steve, and I would also like to thank all of the people who presented today, and who attended today, for being here. And I hope there will be a next CFail next year, where all of you will attend again, hopefully in person. We're looking forward to your participation and your submissions. I remember traveling; I used to do that a lot. Yeah, sorry, let me add one more thing to that, which is: yes, please, if you're inspired by all the brave people sharing their failures today, please consider submitting to CFail 2021. We will exist. I don't know in what form yet, but it's very on brand for us to fail at changing, or keeping, the format, so we will definitely do something. Thank you so much, Steve, for being with us today and sharing your insights and your failures. We really appreciate it. Okay. See you all on the net.