My name's Andrea, and this is Rio, hello, and this is joint work with Virginia Vassilevska Williams. We're going to be talking about public-key cryptography in the fine-grained setting. In a world where one-way functions don't exist even though average-case-hard problems do, a world called Pessiland, can we still have meaningful cryptography? Three intrepid researchers find out.

All right, so what do we want in this world? Well, we want average-case-hard problems, of course; we need BPP not equal to NP. But we don't just want these problems to be hard: we'd also like to be able to efficiently generate both the problems and their solutions, so we can get some nice one-way functions. And actually, we want more than that. We also want public-key cryptography, so we want to be able to efficiently generate a hard problem together with a trapdoor to that problem. But these rely on increasingly strong assumptions about BPP and NP. So we'd like to work with a different kind of assumption: a fine-grained complexity assumption, an assumption based on the polynomial hardness of problems. By polynomial hardness we mean distinguishing, for example, problems that take n^2 time from problems that take n^3 time.

First, let's talk a little about related work. Work on concrete security reductions focuses a lot on the efficiency of cryptographic reductions, but primarily on the change in the security parameter. We care a lot about the security parameter in these fine-grained reductions too, but we also care about polynomial overheads in the reduction that don't affect the security parameter. There's been some really nice work on combinatorial crypto based on NP-hard problems, for example Subset Sum. And there's been quite a bit of work in fine-grained complexity itself; if you're interested, there's a survey paper from 2018 that's quite good. Talk to us after. More related to crypto, there's been some nice work on fine-grained worst-case-to-average-case reductions. All of that work so far has fundamentally been for counting problems, so we can show that some counting problems are hard on average, but unfortunately there have been some barriers against building crypto out of this. Some of those barriers are under an assumption called NSETH (once again, talk to us after), and there's also been some really exciting recent work showing that, relative to a particular oracle, you in fact can't have fine-grained one-way functions, much less fine-grained public-key encryption. So: can we have a meaningful notion of cryptography even if we don't have exponentially hard one-way functions?

You're still on. Yes. This is still you. All right. So first, let's talk about something that might immediately come to mind: Merkle puzzles. Merkle puzzles do in fact require exponentially hard one-way functions; not just exponentially hard, but very exponentially hard, like 2^n hard. The puzzle itself was described by Merkle, and Biham, Goren, and Ishai later instantiated it under a concrete assumption. The gap you get there is that if the honest parties run in O(n) time, then the dishonest parties must spend n^2 time to violate the security.
There's also been some really nice work by Degwekar, Vaikuntanathan, and Vasudevan in 2016, where, based on an assumption about the hardness of NC^1 with respect to ⊕L/poly, you can get one-way functions and public-key encryption in which the honest parties are NC^1 circuits and dishonest parties must be strictly stronger than NC^1 circuits to violate the security. But what if we don't want to be working with circuits? What if, instead of talking about P/poly, we want to talk about different polynomial running times, to work in the RAM model? Well then, in this work, based on an average-case Zero-k-Clique assumption, we're able to build both one-way functions and public-key encryption: the one-way functions get a gap of n^(k/2), and our public-key encryption gets a gap of n^1.5 minus a little bit.

So, revisiting that question from before: yes, we can have a meaningful notion of cryptography, and we can use these fine-grained complexity assumptions to do it. In this presentation, we're going to go over definitions of exactly what we mean by fine-grained one-way functions and a fine-grained key exchange; we're going to go over our complexity assumption, an average-case assumption based on Zero-k-Clique; and then we're going to go over our construction, including the security reduction.

So, let's get started in this wild, wild world of fine-grained crypto. To start, we'll want to compare what a classical one-way function looks like to a fine-grained one-way function. Classically, you need to be able to evaluate your function in polynomial time, and we say that if an adversary inverts your function, that adversary has to run in super-polynomial time. So you get this huge gap between what honest parties do to evaluate and what an adversary does to invert. In the fine-grained setting, an n^c fine-grained one-way function takes n^c time to evaluate, and if an adversary inverts it, that adversary has to take strictly polynomially more time, n^(c+δ). So, that's one-way functions. We're going to use a similar sort of definition when we talk about key exchange. In a fine-grained key exchange, just like any other key exchange, Alice and Bob exchange some messages to produce a transcript. By the end of their protocol, each of them can produce a bit that represents their key, and because this is fine-grained, they each run in, say, O(n^c) time. And again, if an adversary breaks this key exchange, that is, figures out what that bit b is, then that adversary has to run in strictly more than n^c time, n^(c+δ).

All right. So this is what we want to get by the end of this talk. Let's talk about the tool we're going to use to get it: our fine-grained assumption on Zero-k-Clique. Now, we said this is an average-case assumption, so we need to start with a distribution. Our distribution is this: we start with a complete k-partite graph. In this example, k is three, each part has n nodes, and the graph is complete, so every edge between parts exists. We then label each edge with a weight drawn uniformly and independently from Z mod R. Finally, we plant a zero k-clique inside: we choose k nodes (in this case three nodes, a, b, and c), one in each part, uniformly at random, and we set one of the clique's edge weights to the negative sum of the other clique edges' weights, forcing this to be a k-clique of total weight zero.
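To make that distribution concrete, here's a minimal Python sketch of an instance generator. The representation (an instance as a dictionary of edge weights keyed by node pairs) and all the names are our own choices for illustration, not the paper's actual encoding:

```python
import random
from itertools import combinations

def gen_instance(n, k, R, plant=True):
    """Sample a complete k-partite graph with edge weights uniform in
    Z_R, optionally planting a zero-weight k-clique. Toy sketch only."""
    # Nodes are pairs (part, index).
    parts = [[(p, i) for i in range(n)] for p in range(k)]
    w = {}
    for p, q in combinations(range(k), 2):
        for u in parts[p]:
            for v in parts[q]:
                w[(u, v)] = random.randrange(R)  # uniform, independent
    if not plant:
        return w, None
    # Pick one node per part at random, then overwrite one clique edge
    # with the negative sum of the others, so the clique sums to 0 mod R.
    clique = [(p, random.randrange(n)) for p in range(k)]
    edges = list(combinations(clique, 2))
    w[edges[-1]] = -sum(w[e] for e in edges[:-1]) % R
    return w, clique
```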
Our assumption is that finding a, b, and c takes at least n^k time, meaning you essentially have to check every k-tuple before you find the clique you're looking for. Notice also that this is a search problem and not a decision problem, so it's an even nicer assumption than your classic yes-versus-no.

Now, when we talk about fine-grained key exchanges, the first thing that will probably pop into your head is a Merkle puzzle. Let's go over why a Merkle puzzle doesn't work in this case; this will be a good warm-up for what our protocol eventually looks like. So you have Alice and Bob, and they have a public input universe, typically all inputs, {0,1}^n. Alice chooses a square-root-size subset of those inputs, so 2^(n/2) inputs for Alice, and Bob similarly chooses a square-root-size subset. Since they're choosing square-root-size subsets, we hope they collide exactly once, and there's a constant probability of this occurring. Then each of them publishes the evaluations of f on their chosen inputs, and because they can see those evaluations, they can tell whether they've evaluated the same point. In this case, they've each chosen four inputs and exactly one input overlaps, so their key is the preimage of that shared evaluation. Now, in the fine-grained world, Alice and Bob have to evaluate this function 2^(n/2) times, and if your function takes, say, n^2 time to compute, that's 2^(n/2) times n^2, and one of those factors is way bigger than the other. And an eavesdropper looking at this transcript can see, oh hey, f(4) equals f(4), let me just invert that one point; inverting that single point costs only polynomially more than the honest parties' work. So this isn't going to work. We need to do a Merkle puzzle, but different.

For this, Alice and Bob each generate L Zero-k-Clique instances, where L is a parameter we'll define in terms of n later. Then each of them chooses √L positions among those instances and plants solutions in those spots. Hopefully, again, because they each choose √L positions, they collide in exactly one index. Then Bob brute-force checks, inside Alice's list, every index where he planted a solution (so he is solving Zero-k-Clique instances), and Alice does the same brute-force check on Bob's list. They then find that, oh hey, they both planted a solution at index i. Now, for entropy reasons, we can't just use i as the key itself. So Alice publishes a random 0-1 vector v of length log L, and the key is the inner product, mod 2, of i, viewed as a bit string, with v.

So let's go over how much time it takes Alice and Bob to run this protocol. Well, they have to generate L Zero-k-Clique instances, and each instance has about n^2 edges, so that's L·n^2. Planting the √L solutions takes constant time per instance, so that's a lower-order cost. Then they have to do the heavy lifting: they solve √L of these instances, which costs √L · n^k. Now, it turns out the value of L that minimizes the amount of time Alice and Bob spend is L = n^(2k-4), giving a total time of n^(2k-2) for Alice and Bob.
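Putting those steps together, here's a hedged Python skeleton of the exchange. The helpers gen_plain, gen_planted, and solve are hypothetical stand-ins for unplanted sampling, planted sampling, and the brute-force n^k solver (for example, gen_instance above with plant set accordingly), and a real run would retry until the parties collide in exactly one index:

```python
import math
import random

def key_exchange_sketch(L, gen_plain, gen_planted, solve):
    """Toy skeleton of the Merkle-style exchange over L instances.
    Interfaces and names are ours; retries and the symmetric
    transcript checks are elided."""
    m = math.isqrt(L)  # each party plants about sqrt(L) solutions
    a_idx = set(random.sample(range(L), m))
    b_idx = set(random.sample(range(L), m))
    alice = [gen_planted() if i in a_idx else gen_plain() for i in range(L)]
    bob = [gen_planted() if i in b_idx else gen_plain() for i in range(L)]
    # Bob brute-force solves Alice's published instances only at *his*
    # planted indices; Alice runs the symmetric check on Bob's list.
    shared = [i for i in sorted(b_idx) if solve(alice[i]) is not None]
    if len(shared) != 1:
        return None  # exactly one collision happens with constant probability
    i = shared[0]
    # For entropy reasons the key is not i itself: it is the inner
    # product mod 2 of i's bits with a public random vector v.
    v = [random.randrange(2) for _ in range(max(1, L.bit_length()))]
    bits = [(i >> j) & 1 for j in range(len(v))]
    return sum(b * x for b, x in zip(bits, v)) % 2
```

The cost accounting above falls out of this skeleton: generating the lists costs L·n^2, planting is lower order, and the brute-force checks cost √L·n^k, which is what balancing the two terms at L = n^(2k-4) is minimizing.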
All right. So how much time should the adversary take? Well, we would hope that an adversary who just sees these lists, and doesn't know where any of the solutions are planted, has to go through and brute-force check all n^(2k-4) instances, that is, n^(2k-4) times n^k, so n^(3k-4) time total, to find the place where Alice and Bob intersect. But this needs a proof. What we start from, the thing we assume is hard, is this average-case Zero-k-Clique problem, and what we want to reduce to is the problem where, given two lists of instances, finding the index of a shared solution is hard.

So first, let's talk about the three properties we use to do this proof. The first is list-hardness. List-hardness roughly says that finding which index in a list of L instances has a solution is hard. Now, that's not quite our key exchange, so we need something else. We also need these instances to be splittable. What we mean here is that if we start with a random-looking instance that has no solutions, we can split it into two random-looking instances that still have no solutions; and if we start with a random-looking instance that has one solution, we can split it into a pair of random-looking instances that each have a solution. Finally, we need instances to be plantable, both for the key exchange itself and for the security reduction. So crucially, not only do we show that Zero-k-Clique has these properties: any problem that has these three properties will give you a public-key encryption scheme in this way.

Let's take a little look at a pictorial diagram of why these properties give us what we need. We start with some problem that is n^k hard. Then we split it into a list: if this list is formed by L instances of size n, you need to spend L·n^k time, by the list-hardness assumption. Next, we split every single instance into its pair, either both halves empty or both halves containing a solution, and finding the special index continues to be L·n^k hard. Finally, in order to get back to looking like the scheme Alice and Bob actually run, we plant roughly √L - 1 additional "fake" solutions into the remaining entries.

Let's look specifically at why Zero-k-Clique is average-case list-hard, to get a sense of what's going on. We take our average-case Zero-k-Clique instance, and what we do is split the vertex sets. As a simple example, we split each vertex set in two, and then we look at all the sub-problems generated by taking one choice from each vertex set. So here we take the yellow moon, sun, and star; here the green moon with the yellow sun and star; and so on. We generate a bunch of these; with k = 3 and each set split in two, there are eight sub-instances, and exactly one of them contains our planted clique. So if we can point out which sub-instance the clique lives in, we can go back to the original problem and find the clique in the correspondingly labeled sections. So why should these sub-problems still be brute-force hard? Well, if we split into L groups and we're dealing with a k-clique problem, we end up with L^k sub-problems, each of size n/L. So we've got a large list of relatively small problems. And crucially, the hardness of the original problem is roughly the number of sub-problems times the hardness of each individual sub-problem: brute-forcing one size-n/L sub-problem takes (n/L)^k, and L^k · (n/L)^k = n^k, matching brute force on the original instance.
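As a toy illustration of that splitting, here's a hedged sketch that cuts each vertex part into ell groups and enumerates the induced sub-instances. It reuses the dictionary format from the generator sketch above; ell is our name for the split parameter, and the correlation issues mentioned in a moment are ignored:

```python
from itertools import product

def split_into_subproblems(w, n, k, ell):
    """Enumerate the ell**k sub-instances induced by cutting each of
    the k vertex parts into ell groups of size n // ell (assumes ell
    divides n). Exactly one sub-instance inherits a planted clique."""
    g = n // ell
    subs = []
    for choice in product(range(ell), repeat=k):
        lo = [c * g for c in choice]  # lowest node index kept per part
        sub = {((p, i), (q, j)): wt
               for ((p, i), (q, j)), wt in w.items()
               if lo[p] <= i < lo[p] + g and lo[q] <= j < lo[q] + g}
        subs.append((choice, sub))
    return subs
```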
So each of these sub-problems is, for its size, just as hard as the original; this is an efficient self-reduction, in worst-case land. Now, something you might be feeling a little nervous about right now is that these edge sets overlap: Alice and Bob are producing these random instances, but there are correlations between them. It turns out there's a way to deal with these correlations, and if you're interested, please talk to us offline or read our paper. But yes, there is a way to produce sub-lists such that these correlations become non-existent.

So, just to recall: we've got list-hardness, splittability, and plantability, which together let us take a single instance of a search problem, turn it into an instance of this key exchange, and show that finding the shared index faster than brute force would solve the original problem faster than brute force. And I brushed over some details that make this a little less elegant: there are correlations to deal with, there are carries to deal with when we split our instance, and we sometimes double-plant. As it turns out, all of these things can be handled; once again, talk to us after if you're interested in how.

So now we'd like to summarize. We gave definitions of what we mean by fine-grained one-way functions and a fine-grained key exchange. We showed you our assumption: an average-case assumption that is based off a worst-case-hard problem, though there is currently no worst-case-to-average-case reduction for it. And we gave you a construction that achieves an n^1.5-minus-a-little-bit gap between what the honest parties do and what the adversary has to do to break it. In doing this, we showed that you only need three properties. There's the list-hardness property, proved by splitting your instance into multiple smaller instances. There's the splittability property, which we may have rushed past: you take an edge, an edge has a weight, and you literally put the weight's higher-order bits into one problem and its lower-order bits into another, and you get two instances (see the sketch below). And finally there's the plantability property: you can simply take an instance that doesn't have a solution and put one in.

So this is a very new area. There have been worst-case-to-average-case reductions, but as far as we know, there's been no prior work showing an actual key exchange from these fine-grained assumptions. So there are tons of open problems. First, n^1.5 isn't that great. Also, we are fundamentally limited by using Merkle puzzles: we're bounded by this n^2 gap as long as we use a Merkle-style key exchange. Can we do better than that? In the full version of our paper, we do get an n^(2-ε) construction, but we chose to present the n^1.5 one today. Another open question is, can we build fine-grained crypto that has other useful properties, say, fine-grained fully homomorphic encryption? And because we are dealing with a relaxed security notion, one might think we could get better properties, like more efficient encryption; maybe you don't have to deal with error or bootstrapping. It would be really nice to get something like that.
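As promised above, here's a minimal sketch of that bit-splitting on a single weight, assuming weights modulo a power of two; the names are ours, and it deliberately ignores the carry handling we just mentioned, which is where the real work lives:

```python
def split_weight(wt, low_bits):
    """Split one edge weight into a high-order piece and a low-order
    piece; one piece goes to each of the two new instances. A set of
    weights summing to zero gives a low part summing to 0 mod
    2**low_bits only up to small carries, which the high part must
    absorb; that carry case analysis is exactly what we elide here."""
    lo = wt & ((1 << low_bits) - 1)
    hi = wt >> low_bits
    return hi, lo
```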
Finally, when you have a one-way function in the traditional sense, you get all of symmetric crypto out of it: you get pseudorandom generators, pseudorandom functions, your symmetric-key stream ciphers, and so on. But in the fine-grained world, it's much more difficult, because all of those constructions take one-way functions and add polynomial overhead. That overhead buys you something much more useful, but at the same time it adds to the time it takes to run all of these protocols. In our paper, we show how to get fine-grained hardcore bits, but even that was tricky. We don't know how to get anything else out of it, so that's a big open problem. If you have any ideas on how to solve these problems, or thoughts on what these objects should look like, please let us know; we're very interested. Thank you for your attention. We'll take any questions.

Question: Yes. In your paper, you consider only time and ignore space. Now, the Zero-k-Clique problem sounds very similar to the problem of having a number of lists and trying to find one element from each list such that they sum to zero, and that problem has well-known time-memory trade-offs, which can reduce the time to below n^k at the expense of using more memory. So have you considered the effect of having some amount of memory on the fine-grained complexity?

Answer: Yes. So for the Zero-k-Clique problem specifically, there isn't such a space-time trade-off known: the best time known for log-space algorithms, even in the worst case, is the same as the best known when you have as much space as you want. It doesn't appear there's a nice trade-off there, unfortunately. The extra structure of the cliques, as opposed to the lists, makes that kind of trade-off a lot harder. The fine-grained equivalent of what you're describing is the k-SUM family of problems, and it's absolutely true that those have space-time trade-offs, and often there are funny thresholds where if you try to get to sublinear space, it takes way longer. But that structure doesn't appear to happen here.

Question: I naively thought that fine-grained would mean non-asymptotic security. Do you have any concrete parameters to propose?

Answer: What do you mean by non-asymptotic security?

Question: Your security results are purely asymptotic, right? And I thought fine-grained would mean something more concrete than asymptotic.

Answer: Sorry to disappoint you. We wanted to mimic the traditional world of crypto, but use fine-grained complexity techniques, so everything is still very asymptotic. Your point about the naming makes sense, but we're following the naming used by the complexity community.

Any other questions? Okay, let's thank the speakers again.