[Session chair's introduction; largely unintelligible in the recording.] Hi everyone, I'm Sophia, and I'm going to talk about fuzzy password-authenticated key exchange. This is joint work with several coauthors, and Julia, one of them, is in the audience. So, I want to start with a motivation, which should be a very familiar one. Imagine that we have Alice and Bob, and all they want to do is talk to one another securely, but all they have is an insecure channel. They do also have a shared password, but as passwords tend to be, this password's entropy is very low, so they shouldn't use it directly as a key. The password also has a second problem, which is that it's kind of noisy, meaning it differs in a few places. So here, where Alice has a zero, Bob has a one. Despite these two difficulties, the two of them want to leverage the shared password to agree on a high-entropy cryptographic key. And if there is a man in the middle, someone who can observe, modify, or drop all of their messages, Alice and Bob want to make sure that no matter what this man in the middle does, he shouldn't learn anything at all about their passwords, in case they want to reuse those passwords later, or about the key they agree on, if they manage to agree on a key. So one natural application of something like this would be typo-tolerant password authentication. Imagine that Alice and Bob are trying to authenticate using natural passwords, but one of them might mistype the password. It has been shown that tolerating some errors in password entry improves user experience a lot in applications. Another scenario where this would be useful is authentication based on biometrics.
So here, imagine that Bob has some resource that Alice wants to use, and Bob stores a database of iris scans belonging to all of his authorized users. When Alice wants to access the resource, she takes a fresh scan of her iris and tries to use that to authenticate. Her fresh scan is going to be similar to, but not an exact match for, the iris scan that Bob has stored, so they're going to need to do something tricky. Another application is authentication based on physical proximity. Here, the passwords would be something like noisy, low-entropy environmental readings. This would probably be useful in the very near future when self-driving cars are everywhere, and they want to help one another navigate their surroundings by sharing information about what they see. In a situation like this, it would be pretty important for these cars to prove to one another that they're actually nearby, to prevent some remote attacker from feeding them false information. So authentication based on shared imperfect secrets, or passwords, is a very active area of research, and we can sort of categorize this work based on which password problems it deals with. First, does it deal with passwords that are low entropy? When I say that a password has low entropy, I mean that it's realistically possible to hit on the correct password by brute-force enumeration. So a 30-bit string, say, I would consider to have low entropy. The second question is, does the work deal with passwords that don't match exactly and might have some noise? So we can see the related work as partially filling in this table. The nicest cell in this table is the high-entropy, exact-match cell, and in that cell we have privacy amplification. We still need something there, because even though the passwords are high entropy and exact, they might not be uniformly distributed.
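To make "low entropy" concrete: a 30-bit password space can be enumerated very quickly. Here is a back-of-the-envelope sketch in Python; the guessing rate is purely an assumed number for illustration, not a figure from the talk.

```python
# Rough feasibility of brute-forcing a 30-bit password offline.
guesses = 2 ** 30              # ~1.07 billion candidate passwords
rate = 10 ** 7                 # assumed: 10 million guesses per second (hypothetical)

minutes = guesses / rate / 60  # total wall-clock time in minutes
print(f"{minutes:.1f} minutes of work")  # roughly 1.8 minutes
```

This is exactly why any leakage that enables offline guess testing is fatal in the low-entropy setting: an attacker can simply walk the whole password space.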
So these privacy amplification protocols are very efficient, but they pay for this efficiency by having some leakage about the password used. In the high-entropy situation, that's totally fine, because there are so many bits of entropy that throwing out a few is okay. But because there is some leakage, we can't directly apply these techniques to the low-entropy setting. So in the low-entropy scenario, we have a different set of protocols, called password-authenticated key exchange, or PAKE. In these protocols, it's very important that there be no leakage at all about the passwords. In particular, they even prevent offline dictionary attacks by an active participant or a man in the middle. So anyone who doesn't know the password and sees a transcript of an execution, whether they participated in it or just observed it, shouldn't be able to use that transcript to test their password guesses. Moving to the fuzzy-match setting: in the high-entropy scenario, we have information reconciliation and robust fuzzy extractors. But these also have some leakage, just like their exact-match counterparts, and for the same reason they don't translate into the low-entropy world. In fact, the only thing we have in the literature for the low-entropy, fuzzy-match setting is generic two-party computation in the unauthenticated-channels model. This is something we can use anywhere in this table, but it tends to be pretty inefficient because it's so generic. So in our paper, we focus on this cell of the table for the first time. We introduce a new primitive, which we call fuzzy password-authenticated key exchange, or fuzzy PAKE. We define fuzzy PAKE, and we give two efficient constructions of it. Next, I want to talk a little bit more about our security definition of fuzzy PAKE.
So like I told you before, if Alice and Bob share two passwords that are actually close, they want to be able to talk to one another, and as a result of this conversation, they want to both hold the same high-entropy session key k. Any active or passive man in the middle should learn nothing at all about the passwords or the key, as long as that man in the middle doesn't already know some password that's close enough. If the two of them don't have passwords that are similar, so maybe Alice's password is "curiouser" and Bob's is "can we fix it", then they shouldn't end up agreeing on a secret key, and neither of them should learn anything at all about the other's password. And this should be the case even if one of them is malicious and doesn't follow the protocol. So there are a few complications in defining fuzzy PAKE. The first of these is that, as for any key agreement protocol, it's very important that the protocol be composable. Key agreement is never the ultimate goal: whenever you want to agree on a key, you probably want to then use that key in a different protocol. So you want to make sure that the protocol remains secure even if other things are also being executed. The second difficulty is that, just like PAKE, we really need to be secure against offline dictionary attacks. We're dealing with low-entropy passwords, so we want to make sure that malicious participants and men in the middle, even if they've saved a transcript, can't then use that to test their guesses. So the approach we take is to generalize the universally composable functionality for PAKE given by Canetti, Halevi, Katz, Lindell, and MacKenzie. I'm not going to tell you anything more about this definition; the full definition is in the paper. Next, I want to tell you about our two constructions. The first of these uses regular, non-fuzzy PAKE together with something called robust secret sharing, and the second uses Yao's garbled circuits.
Some of you might find this pretty alarming, because I said earlier that I really wanted to avoid using generic two-party computation because it's inefficient, and here I'm saying that I'm going to use one of the most famous generic two-party computation schemes. But we actually don't use it in the straightforward way; we do something special, specifically for fuzzy PAKE, that is actually pretty fast. These two constructions, even though both of them realize fuzzy PAKE, differ in a few ways. The first of these is which notions of similarity between passwords they support; until now I've just been saying "similar" for some notion of similarity. The Yao's garbled circuit construction actually supports any efficiently computable notion of similarity, but the PAKE and robust secret sharing construction is limited to Hamming distance. The Yao's garbled circuit construction pays for its generality with lower efficiency, both in terms of number of rounds and computation; it's more expensive. So I want to start by talking about the less general but more efficient construction. And because we're talking about Hamming distance, I'm going to assume that our two passwords are of the same length. This construction follows a very natural template. Our two parties, Alice and Bob, do share a secret. It's a problematic secret, but a secret nonetheless. So we're going to have one of them, without loss of generality let's say Alice, pick the key she wants to agree on with Bob and encrypt it to Bob using their shared secret, the password, as the encryption key. And this is going to have to be some very special form of encryption, because it's going to have to tolerate error in the encryption and decryption keys: it's going to have to work even if the encryption key and the decryption key are a little bit different.
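Since this construction measures similarity by Hamming distance, here is a tiny Python sketch of the closeness check the parties care about; the threshold `delta` is an illustrative parameter, not a value from the talk.

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of positions where two equal-length passwords differ."""
    assert len(a) == len(b), "Hamming distance assumes equal-length strings"
    return sum(1 for x, y in zip(a, b) if x != y)

# Two passwords that are "close": they differ in only one position.
d = hamming_distance("10110", "10010")
print(d)  # 1

# The protocol should succeed exactly when the distance is at most some
# agreed threshold delta (a hypothetical parameter here).
delta = 1
print(d <= delta)  # True: the parties should end up with matching keys
```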
But even assuming that we have this magical encryption, we have the problem that the encryption key, the password, is low entropy, and any eavesdropper who sees this ciphertext is going to be able to really narrow down the options for the session key just by going through the password space and trying to decrypt. So we address this by turning every single character of the password into a high-entropy key; we're going to expand the entropy of every password character. Alice and Bob are going to do this using PAKE. PAKE is a tool that takes in low-entropy objects that match or don't match, and gives the parties high-entropy objects that also match or don't match, based on whether the low-entropy objects did. Alice and Bob are going to run PAKE for every single character in their passwords. Whenever the password characters match, they're going to get back the same key, and whenever they don't, they're going to get back different keys. At the end, they're going to have a list of what we're going to call character keys: high-entropy keys that match or don't match depending on whether the corresponding characters did. One crucial observation here is that it's very important for neither Alice nor Bob to learn whether they agree on any given character. Imagine that Alice's password is what I have here, but Bob's password is "we can fix it". If Alice learns whether she agrees with Bob on any given character, then despite the fact that she doesn't know a password close to Bob's, she's going to learn that, you know, his first character isn't P. And for low-entropy passwords, this is a lot of leakage; this is not acceptable. So we introduce a UC definition for a new flavor of PAKE, which we call implicit-only. This new flavor of PAKE specifically has the property that neither party learns whether they succeeded in agreeing on a key, so they don't have key confirmation.
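To illustrate the character-key idea, here is a Python sketch that models only the *outcome* of an implicit-only PAKE run per character, not the protocol itself. `ideal_character_pake` is a hypothetical stand-in for the ideal functionality, and the hash-based derivation inside it is a toy, not a real PAKE.

```python
import hashlib
import secrets

def ideal_character_pake(session_id: bytes, char_a: str, char_b: str):
    """Toy model of what an implicit-only PAKE *outputs* on one character:
    both parties get the same high-entropy key iff their characters match,
    and independent random keys otherwise. Neither party learns which case
    occurred -- there is no key confirmation."""
    if char_a == char_b:
        # Toy derivation so the two outputs coincide; NOT an actual PAKE.
        k = hashlib.sha256(session_id + char_a.encode()).digest()
        return k, k
    return secrets.token_bytes(32), secrets.token_bytes(32)

sid = secrets.token_bytes(16)
alice_pw, bob_pw = "10110", "10010"   # differ only at index 2

alice_keys, bob_keys = [], []
for i, (ca, cb) in enumerate(zip(alice_pw, bob_pw)):
    ka, kb = ideal_character_pake(sid + bytes([i]), ca, cb)
    alice_keys.append(ka)
    bob_keys.append(kb)

# Character keys agree exactly where the password characters agreed --
# and crucially, neither party can tell which positions those are.
matches = [ka == kb for ka, kb in zip(alice_keys, bob_keys)]
print(matches)  # [True, True, False, True, True]
```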
One of the famous PAKE protocols, EKE2, actually has this implicit-only property, and that's the PAKE protocol that we use. All right, so now when Alice does this magic encryption step, instead of using her password as the encryption key, she's going to use her list of character keys, and Bob is going to use his list of character keys to decrypt. This magical encryption, really quickly, works through a combination of robust secret sharing of the secret key K, together with a one-time pad encryption of the secret sharing using the character keys as a pad. This is very similar to the code-offset construction of Juels and Wattenberg, so I'm not going to give you any more details; I'm going to leave it at that. Next, I want to talk about our Yao's garbled circuit construction, which, like I said earlier, is more general but less efficient. One of the tracks yesterday heard a lot about Yao's garbled circuits already, so I'm not going to spend too long introducing them; I'm going to be minimalistic here. Yao's garbled circuits are a two-party computation scheme where the two parties play different roles: one of them is going to be the garbler and one of them is going to be the evaluator. The garbler takes the function that they want to compute and garbles it, and she then sends this garbled function, or garbled circuit, to the evaluator. The evaluator is able to evaluate this garbled circuit and learn the function output. There are a lot of steps here I'm skipping; really high level, this is how it works. The way we can use something like this for fuzzy PAKE is to just have one of our parties garble a closeness circuit. The circuit is going to take in the two passwords as input, compare them, and if they're similar enough, it's going to output Alice's chosen session key; if the two passwords are too different, it's not going to output anything. And then Bob is going to evaluate this.
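As a rough illustration of the code-offset idea (pad the shares of K with the character keys), here is a toy Python sketch. It is an assumption-laden stand-in: it uses plain Shamir sharing plus a brute-force majority vote over subsets in place of actual robust secret sharing, and a small toy field, so it only models the construction at a very high level.

```python
from collections import Counter
from itertools import combinations
import secrets

P = 2**61 - 1  # a Mersenne prime; toy field for the shares

def shamir_share(secret: int, n: int, t: int):
    """Split `secret` into n Shamir shares with threshold t (degree t-1)."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def shamir_reconstruct(shares):
    """Lagrange interpolation at x = 0."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

n, t = 5, 3                       # tolerate up to n - t = 2 character mismatches
K = secrets.randbelow(P)          # Alice's chosen session key, as a field element
alice_char_keys = [secrets.randbelow(P) for _ in range(n)]

# Alice one-time-pads each share with the corresponding character key.
ciphertext = [(x, (y + k) % P)
              for (x, y), k in zip(shamir_share(K, n, t), alice_char_keys)]

# Bob's character keys match Alice's except at position 2 (one mismatch).
bob_char_keys = list(alice_char_keys)
bob_char_keys[2] = secrets.randbelow(P)

# Bob unpads; mismatched positions yield garbage shares, and he can't tell which.
candidates = [(x, (y - k) % P) for (x, y), k in zip(ciphertext, bob_char_keys)]

# Toy "robust" reconstruction: try every t-subset and take the majority value.
votes = Counter(shamir_reconstruct(s) for s in combinations(candidates, t))
recovered, _ = votes.most_common(1)[0]
print(recovered == K)  # True
```

Real robust secret sharing avoids this exponential subset search; the sketch just shows why a few garbage shares don't stop Bob from recovering K, while an eavesdropper without the character keys learns nothing.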
The problem with doing something like this is that Yao's garbled circuits offer asymmetric security guarantees. They're secure no matter what the evaluator does: the evaluator can be malicious and still won't be able to learn anything at all about Alice's input apart from the actual computation output. But Alice, the garbler, has to be semi-honest. If she chooses to instead be malicious, there is a lot she can do to screw things up. She could, for instance, just garble a totally different circuit. She could garble a circuit that always returns the same key, fooling Bob into agreeing with her no matter what his password was. So like I just said, Yao's garbled circuits don't guarantee correctness against a malicious garbler, and they actually also don't guarantee privacy against a malicious garbler: if Alice garbles something along the lines of a circuit that just returns a few bits of Bob's password, then once she learns the function output, she's going to learn information about Bob's input that she shouldn't have learned. The upside is that Yao's garbled circuits are actually really fast, as two-party computation goes. There are a lot of transformations out there that guarantee correctness and privacy in Yao's garbled circuits against either party being malicious. But the downside of all of these is that if you consider pre-processing a part of the computation, the overhead is pretty high. There are a lot of cool works here over recent years that I didn't list. There is one more transformation, called dual execution, which has a very small overhead: it's only twice as expensive as regular Yao's garbled circuits. But the downside of dual execution is that it gives one bit of leakage to the adversary. And especially in our setting, when we're dealing with passwords that have very low entropy, one bit of leakage can be crucial. We don't want that.
So we modify dual execution to eliminate this leakage when it matters, specifically for fuzzy PAKE. All right, let's have Alice garble this closeness circuit like we did before, but instead of outputting "yes, the passwords are close" or "no, they're not close," this circuit is going to output a yes key or a no key: the yes key if the two passwords are close, and the no key if they aren't. She sends the circuit over to Bob, and Bob is going to evaluate it and learn one of the two keys. Because this isn't secure against a malicious garbler, against a malicious Alice, we're going to do this whole thing again in the other direction as well. So now Bob gets to play the role of the garbler, Alice has to play the role of the evaluator, and as the evaluator she can't cheat. What Alice is going to do, once we do both of these steps, is take her own yes key and the one key that she learned from Bob's circuit, XOR them, and the result is going to be her session key. Bob is going to do the exact same thing on his side. If both parties were honest, and if both circuits decided that, yes, the passwords are close (they are close in this picture), then clearly these two are going to agree on a key: they're going to be XORing the same pair of keys. If the passwords are actually far, then Alice is going to be using her own yes key but Bob's no key, and Bob is going to be using his own yes key but Alice's no key. So they're going to be XORing two totally different, independent pairs of keys, and they're going to end up with different session keys. Now let's say Alice is trying to fool Bob into agreeing with her. Well, as the garbler, she can cheat: she can make Bob get her yes key out of her circuit even if his password is far away from hers.
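The XOR combination step can be sketched in Python. This only models the keys the parties would hold after honestly evaluating each other's circuits, not the garbling itself; all key values here are illustrative random placeholders.

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length keys."""
    return bytes(x ^ y for x, y in zip(a, b))

# Each party's garbled closeness circuit encodes a "yes" key and a "no" key.
alice_yes, alice_no = secrets.token_bytes(16), secrets.token_bytes(16)
bob_yes, bob_no = secrets.token_bytes(16), secrets.token_bytes(16)

def session_key(own_yes: bytes, learned_from_other: bytes) -> bytes:
    # Each party XORs its own "yes" key with whatever key it learned by
    # evaluating the other party's garbled circuit.
    return xor(own_yes, learned_from_other)

# Passwords close: both circuits release their garbler's "yes" key,
# so Alice and Bob XOR the same pair of keys and agree.
assert session_key(alice_yes, bob_yes) == session_key(bob_yes, alice_yes)

# Passwords far: each party learns the other's "no" key instead, so the
# two XORs involve independent keys and the session keys differ.
print(session_key(alice_yes, bob_no) == session_key(bob_yes, alice_no))  # False
```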
But since she can't cheat as the evaluator, there's really nothing she can do to learn anything about Bob's yes key, this blue key over here. So there's nothing a malicious Alice can do to agree with Bob when their passwords are actually far. We can view what we just did as a variation of dual execution. In the general case, dual execution has very symmetric guarantees: no matter what the correct computation output is, dual execution guarantees that both parties will either get the correct answer or know that cheating took place, and it lets the adversary get one bit of leakage. We make a trade. We say that, okay, when the correct computation output is yes, we're going to let the adversary still have their one bit of leakage, and we're also going to give up on correctness; we're not going to have any correctness guarantees. But in exchange, we want better things in the case that the correct computation output is no: not only do we want a correctness guarantee, we also want to eliminate the one bit of leakage. This actually turns out to be the perfect trade-off for fuzzy PAKE, because in fuzzy PAKE, intuitively, we only care about security against an adversary that doesn't already know the password, and if the adversary doesn't already know the password, then we're in the no case. In the no case, we're guaranteeing that the adversary can't learn anything about the password and can't fool our parties into agreeing on a key. In the yes case, when the adversary already knows a password that's close enough, the adversary can fool the other party into not agreeing with them, which they could really do anyway just by using a random thing as their password. And they get to learn one additional bit of leakage, which is also arguably okay, because they already have all the information they need in order to agree on a key.
All right, so in conclusion, there's been a lot of work on key exchange based on imperfect secrets. We're the first to look at the low-entropy, fuzzy-match setting, and we give security definitions for this new primitive called fuzzy PAKE, and two different constructions, which are at different points on the efficiency-generality curve, so they're probably useful in slightly different scenarios. All right, thanks everyone for listening. [Audience question, inaudible in the recording.] Right, so a malicious man in the middle gets to execute fuzzy PAKE with each of our Alice and Bob, if they want to, and each online execution gives them one password guess. So they learn the answers to two proximity queries; they get to basically make two guesses, so they get both answers, not just one. Okay, thanks. But that's really as good as we can do for something like PAKE. [Audience question:] Can you say something about the model and assumptions? I assume the first construction works with any PAKE; for the second, do you use a CRS, or a random oracle? What do you need? [Answer:] We do use a CRS, and actually the random oracle in both of them, I think. Yeah. Okay, thanks again.