Hi, thank you, Manoj. So this is fine-grained cryptography. I'm Akshay Degwekar, and this is joint work with Prashant and Vinod. Modern cryptography is based on a lot of assumptions: for example, that AES is secure, MD5 is collision-resistant, factoring is hard, discrete log is hard, and so on, plus newer assumptions on lattices, codes, et cetera. It's probably a good idea to have a healthy skepticism about some of these assumptions, because sometimes they are broken, and in the case of factoring and discrete log, we know some surprising sub-exponential and quantum algorithms for these problems. So it is conceivable that a surprising algorithmic advance could render some of cryptography vulnerable. A key reason for this is that most of modern cryptography is based on the hardness of very specific and structured problems. So we would like to ask: is this inherent? Can we do cryptography with really minimal assumptions? In our standard setting, where the honest parties are polynomial-time algorithms with small running times and they have to be secure against adversaries that are probabilistic polynomial-time algorithms, we know that the assumption that NP is different from BPP is necessary. Ideally, we would like this assumption to be sufficient as well; that is, if NP is different from BPP, then we have one-way functions, and so on. This question has been around for a long time, but unfortunately, most of the results we know are impossibility results: for certain very natural classes of reductions and constructions, it is not possible to base cryptography on NP-hardness alone. And we would like to ask whether this is inherent, that cryptography requires the hardness of very structured problems. To ask this question, we would ask it not only in this setting but also in other settings, and for that we need a definition. We define this notion of fine-grained cryptography; it has also been called moderately hard cryptography by Dwork and Naor. Here we relax the setting slightly: we have honest parties that lie in some computational complexity class C, and what we would like is that they are secure against all adversaries in a slightly larger complexity class, say C'. Some examples of this kind of notion: we could consider the honest parties to be, say, linear-time algorithms, in which case we might want them to be secure against adversaries that run in quadratic time. Or we could take the resource to be space, in which case we would similarly have honest parties that run in some space S and require security against some a priori decided larger space bound, say S cubed. Or we could take the resource to be the amount of parallel time they get, or the depth of the circuits computing them; again, there is a gap between the amount of power the honest parties have and the amount of power the adversary gets. Just to drive home the point, let's consider a non-example: Nisan-Wigderson style PRGs. These PRGs are classically used in complexity theory for derandomization, for example toward showing that BPP equals P. In that setting, the PRG typically has a longer running time than the adversaries it is designed to fool, so we would not really consider it cryptographic.
So this is not really a new notion; the idea of thinking about different settings in cryptography has been around for a while. The first notion to consider is that of Merkle puzzles. Here we know that, in the random oracle model, they give a moderately secure key agreement protocol: if the honest parties make n queries to the random oracle, then it is secure against adversaries that make up to quadratically many queries. Or we could consider notions like proofs of work, introduced by Dwork and Naor; here, again, generating the puzzles is easy, while actually solving them is moderately hard. Or we could consider a resource like space, in which case there is the bounded storage model of Maurer, where what we bound is the amount of memory the honest parties and the adversaries have; again, we get a moderately secure symmetric-key encryption scheme, and it is unconditional. Finally, let's consider an example about parallel time: time-lock puzzles, the classic work by Rivest, Shamir, and Wagner. Here, again, generating the puzzle is cheap and solving it is moderately hard, and on top of that, you want this to be inherently sequential, so that parallelism doesn't buy you much. This list is not exhaustive. Finally, we come to the example which is closest to our work, the work of Håstad. Håstad considers adversaries that are small-depth circuits, more specifically AC0 circuits. To remind ourselves, AC0 circuits are constant-depth, unbounded fan-in, polynomial-size circuits, and, for later, NC1 circuits are logarithmic-depth circuits with constant fan-in; so NC1 circuits can compute more than AC0 circuits can. Håstad constructs a very simple one-way function in which every output bit depends on only two bits of the input. He shows it is one-way based on the hardness of computing parity: a long line of work in the 80s showed that constant-depth circuits computing parity must have exponential size. Using this, we can see that the function is one-way, because inverting it would require computing the parity of the output, and this is hard for AC0 circuits. In this work we also consider small-depth circuits, primarily because we understand their complexity to some extent, and we hope that using lower bound techniques from computational complexity theory, we can actually construct cryptography against them.
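Before moving on to our results, here is a minimal runnable sketch of a locality-2 function in the same spirit as Håstad's construction (the exact function in his paper may differ in details; the one below just XORs adjacent input bits). The point it illustrates is that any inverter is, in effect, forced to compute prefix parities of the output, which is exactly what AC0 circuits cannot do.

```python
import random

# Candidate locality-2 function: each output bit depends on two input bits.
def f(x):
    return [x[i] ^ x[i + 1] for i in range(len(x) - 1)]

# Recovering a preimage (up to the free choice of its first bit) requires
# computing prefix parities of the output -- the function that Hastad's
# lower bound shows AC0 circuits cannot compute.
def invert(y, first_bit=0):
    x = [first_bit]
    for bit in y:
        x.append(x[-1] ^ bit)
    return x

x = [random.randint(0, 1) for _ in range(16)]
assert f(invert(f(x))) == f(x)  # the recovered string is a valid preimage
```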
So our results come in two classes. The first is against AC0 circuits. In this case, most of the results we get are unconditional: we can construct one-way functions, PRGs, symmetric encryption, collision-resistant hash functions, and so on. A construction in this regime means something like a small fixed-depth circuit of size about n squared which is secure against all constant-depth, polynomial-size circuits. One thing to note here is that the classical generic transformations we know, for example HILL and GGM, don't really work in AC0; they require more computational power. So it is not clear a priori whether all these primitives are equivalent in this setting. The second class of results is against NC1 circuits. In this case the results are not unconditional, and for a good reason: we do not know any explicit lower bounds against NC1 circuits, and hence at least some assumption is inherent. Our results here are based on the worst-case assumption that logspace is distinct from NC1. Assuming this, we can construct one-way functions, and we can construct public key encryption which is also additively homomorphic and lossy; this generically gives us oblivious transfer and collision-resistant hash functions. The main technique behind these results is a non-black-box use of the randomizing polynomials, or randomized encodings, construction of Ishai and Kushilevitz. In this talk we will mostly focus on the NC1 results, and since they are based on randomized encodings, let's briefly review what those are. Say there is a function f and I have an input x; I want to reveal to you the value f of x and hide everything else about x. One easy way to do this is to simply compute f of x and give you the answer, but let's say I'm lazy and don't want to do that. Ishai and Kushilevitz give us another way. We start with the input x and encode it, where the encoding is a randomized process, and this gives us a fairly long string. What we know is that if f of x is 0, then x gets mapped into one set of strings, and if f of x is 1, it gets mapped into another set. Furthermore, we require that this encoding is extremely efficient, in the sense that it is computable by NC0 circuits, that is, circuits of constant locality. For correctness, we also require that we can actually decode these randomized encodings: given one such encoding, you should be able to retrieve f of x from it. And for efficiency we also require that they are samplable: if I just tell you the output value, you should be able to sample a random string from the corresponding set. The famous work of Applebaum, Ishai, and Kushilevitz shows that all of logspace has such constant-locality, in fact locality-4, randomized encodings. This gives them a really nice compiler: you plug in any one-way function, PRG, or collision-resistant hash function, and it gives back another one-way function, PRG, or collision-resistant hash function which is computable in NC0. This is quite close to the kind of goal we would like, so what are the differences? First of all, AIK clearly does much better in terms of the adversaries it is secure against: the constructions you get from AIK are secure against all polynomial-time adversaries, while our goals are more modest in that we only want security against small-depth circuits. In terms of assumptions, AIK requires a one-way function, or the corresponding primitive, which is computable in logspace, and this boils down to our traditional average-case assumptions like factoring, discrete log, and so on, which is precisely what we would like to avoid; we want to base our results on unconditional or worst-case assumptions. Finally, there is a slight difference in the parameters you can achieve: the PRGs and collision-resistant hash functions you get from AIK have only additive expansion and shrinkage, while we would like multiplicative constant expansion, shrinkage, and so on.
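To fix the encode/decode/sample syntax before the constructions, here is a toy, runnable illustration of the randomized-encoding interface for the two-bit XOR function. This is only meant to show the interface; it is not the logspace-complete encoding of Ishai and Kushilevitz.

```python
import random

# Toy randomized encoding of f(x1, x2) = x1 XOR x2.
# Encode: mask both input bits with the same random bit r.
def encode(x1, x2):
    r = random.randint(0, 1)
    return (x1 ^ r, x2 ^ r)

# Decode: recover f(x) from the encoding alone.
def decode(a, b):
    return a ^ b

# Sample ("simulate"): given only the value f(x), produce a string
# distributed exactly like a fresh encoding of any x with that value.
def sample(fx):
    s = random.randint(0, 1)
    return (s, s ^ fx)

x1, x2 = 1, 0
enc = encode(x1, x2)
assert decode(*enc) == x1 ^ x2   # correctness
# Privacy: encode(1, 0) and sample(1) are identically distributed,
# so the encoding reveals nothing about x beyond f(x).
```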
So let's prove some theorems. The first thing we will see is one-way functions that are secure against NC1. To begin with, we have a worst-case assumption: there is a function f which is computable in logspace but not computable in NC1. From this we would like to get one-way functions, and randomized encodings provide a very nice way to go from this worst-case assumption to the average-case one-way function we desire. Our construction is the following: the one-way function is simply the output of the simulator. The simulator takes as input some randomness and outputs an encoding, and the claim is that this is a one-way function. Why is this secure? The argument is a reduction: we would like to say that if we were given an adversary that could invert the one-way function, then we could decide f in the worst case, which we assumed is not possible. The output of the one-way function maps into one of the two sets in the earlier picture. Now, if we are given some input y and want to decide whether y is in the language, what do we do? We simply compute a randomized encoding of y and feed it to the adversary. If y is in the language, then these two distributions are identical, and hence the adversary would sometimes be able to invert. On the other hand, if y is not in the language, that is, f of y is 0, then the two sets are completely disjoint, and the adversary cannot invert at all. So now we have a way to distinguish: if y is in the language, you can sometimes invert. This is one-sided error, which can be amplified, and it gives us a worst-case decider. Since we allow non-uniformity, this gives a family of circuits that compute f in NC1, a contradiction. This argument gives us something more: it tells us that the output of the simulator on 0 and the output of the simulator on 1 are indistinguishable, by the same argument, because if you could distinguish between the two, you could decide f, which we have assumed is hard. Next we would like to go towards public key encryption, where both encryption and decryption are in NC1. In terms of the picture, you would want encryption to correspond somehow to the encoding of the randomized encoding, and decryption to correspond to the decoding. The issue, as we saw earlier, is that while the encoding is cheap, the decoding cannot simultaneously be cheap: if both encoding and decoding were cheap, you could again decide the language in NC1. So we seem stuck. But we do not just have the abstract notion of randomized encodings, we have an actual construction of them, and we would like to use that construction, and embed trapdoors into it, to get public key encryption. So let's briefly recall the Ishai-Kushilevitz construction of randomized encodings. Roughly speaking, the encoding takes the form of a sort of upper triangular matrix, where the whole bottom part is fixed to 0 and the top part is essentially random, and the value of f is encoded in the determinant, or the rank, of this matrix: if the matrix is full rank, then f of x is 1, and if it is not full rank, then f of x is 0. Combining this with the simulator observation, we get that there exist two distributions on matrices, one supported on full-rank matrices and the other on non-full-rank matrices, and these two distributions are indistinguishable by NC1 circuits. Now, matrices clearly have a lot more structure than generic randomized encoding strings, and we can use this to get public key encryption. The first observation is that we can sample a matrix M from the non-full-rank distribution together with a vector k that lies in the kernel of the matrix; we do this by modifying the simulator of the randomized encoding. Once we have this, a public key encryption scheme follows. The public key is the non-full-rank matrix M, and the secret key is the kernel vector k. How do you encrypt? To encrypt a 0, you output something in the image of the matrix: pick a random vector and multiply it by the matrix. To encrypt a 1, you output something that is not in the image: again pick a random vector, multiply it by the matrix, and then apply an affine shift to get outside the image. Decryption is simple: you take an inner product with k. If the ciphertext was in the image, that is, it encrypted a 0, then the inner product with k is always 0; on the other hand, since M has rank n minus 1, we can show that if it encrypted a 1, the inner product is 1. Finally, how do we show security? Security more or less follows from the fact that this encryption scheme is lossy. To see that it is lossy, we replace M with an M prime sampled from the full-rank distribution. With a full-rank M prime, the encryptions of 0 and the encryptions of 1 are identically distributed, because the encryptions of 0 span the whole space of vectors and so do the encryptions of 1, and since they are identical, this implies security. The scheme is additively homomorphic, and hence it generically implies oblivious transfer and collision-resistant hashing.
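Here is a minimal runnable sketch of the mechanics of this scheme over GF(2). It only illustrates correctness: the toy key generation below builds a rank-deficient matrix directly from a known kernel vector, whereas the actual construction samples the matrix via the randomized-encoding simulator so that it is indistinguishable from full rank; I also simply assume the last coordinate of k is 1, so that shifting by the last unit vector lands outside the row span.

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)

# Toy key generation (see lead-in): secret key k with last coordinate 1,
# public key M whose rows are all orthogonal to k over GF(2), so M @ k = 0
# and M has rank n-1 with high probability.
k = np.append(rng.integers(0, 2, n - 1), 1)
M = rng.integers(0, 2, (n, n))
M[:, -1] = (M[:, :-1] @ k[:-1]) % 2        # fix last column to force M @ k = 0

e_last = np.zeros(n, dtype=int)
e_last[-1] = 1                             # affine shift with <e_last, k> = 1

def encrypt(bit):
    r = rng.integers(0, 2, n)
    # r @ M lies in the row span of M (all orthogonal to k); adding e_last
    # moves the ciphertext outside that span, which encodes the bit 1.
    return (r @ M + bit * e_last) % 2

def decrypt(c):
    return int(c @ k % 2)                  # <c, k> = 0 + bit * <e_last, k> = bit

for b in (0, 1):
    assert decrypt(encrypt(b)) == b
```

Lossiness corresponds to replacing M by a full-rank matrix: then both r @ M and r @ M + e_last range over the whole space, so encryptions of 0 and 1 become identically distributed.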
Now let's look at the second set of results briefly. These are against AC0 circuits, which, to remind ourselves, are constant-depth, unbounded fan-in circuits. These results are based on a sort of sparse learning-with-errors, or rather learning-without-errors, style statement, and it is not really an assumption because we can prove it. Here is what we do: we pick a matrix M, say a poly(n)-by-n matrix, such that every row of M is sparse; we need this sparsity. Then, in one case, we pick a completely random key k and output the pair M, Mk; in the other case, we pick a random vector r and output M, r. This is quite similar to, say, learning with errors, the difference being that M here is sparse. We need M to be sparse because we actually want to compute the product Mk, and AC0 circuits cannot compute inner products unless one of the two vectors is sparse. Indistinguishability follows from a beautiful result of Braverman, which shows that distributions that are polylog-wise independent, that is, k-wise independent for every polylogarithmic k, fool AC0 circuits. This implies that these two distributions are indistinguishable to AC0. Now we would like to get some cryptography out of this. PRGs follow pretty quickly: to get PRGs, we need an explicit matrix M with this property, namely that its rows are polylog-sparse and the distribution Mk is polylog-wise independent. We can get such matrices using codes, more specifically expander codes and the like.
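As a concrete illustration of these two distributions, here is a small runnable sketch with arbitrary toy parameters; the real construction needs an explicit matrix with specific sparsity and independence guarantees, which the random choice below does not provide.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, sparsity = 32, 64, 3      # toy parameters, not the ones from the paper

# A random m x n matrix over GF(2) in which every row has few 1s, so each
# output bit of M @ k is a parity of only 'sparsity' key bits -- something
# an AC0 circuit can actually compute.
M = np.zeros((m, n), dtype=int)
for i in range(m):
    M[i, rng.choice(n, size=sparsity, replace=False)] = 1

k = rng.integers(0, 2, n)            # random secret key
pseudorandom = (M @ k) % 2           # "real" case: (M, Mk)
random_vec = rng.integers(0, 2, m)   # "ideal" case: (M, r)

# The claim (provable via Braverman's theorem for a suitable explicit M) is
# that AC0 circuits cannot tell (M, pseudorandom) from (M, random_vec).
print(pseudorandom[:8], random_vec[:8])
```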
In the case of symmetric encryption and weak PRFs, we just pick these rows at random: we pick random sparse vectors. This gives us symmetric encryption, and this encryption is also additively homomorphic. To get collision resistance, a natural attempt would be to use the work of Ishai, Kushilevitz, and Ostrovsky, which constructs collision-resistant hash functions from a variety of homomorphic primitives. But this doesn't directly work, because AC0 cannot compute the long inner products needed for the IKO construction. We can make it work by modifying the symmetric-key encryption scheme to ensure that we always have sparse inner products to work with. Finally, we would like to ask: can we actually get public key encryption in the AC0 setting? We saw earlier the statement where we have a sparse matrix M and a random key k, and the resulting distribution is indistinguishable from random. If we flip the game slightly, so that M is completely random and the key k is a random sparse vector, and if even in this case the pair M, Mk were indistinguishable from M, r, this would actually imply public key encryption. But we cannot prove this. Proving it would have some very interesting consequences. First of all, it would yield public key encryption. Secondly, from Braverman we know one family of distributions that fool AC0 circuits, namely distributions that are polylog-wise independent for every polylog. What we would need here, for public key encryption and so on, is a distribution which is not polylog-wise independent for every polylog and yet fools AC0 circuits; surprisingly, we do not know of anything like this. And finally, it also has connections to learning: for example, if you could learn AC0 circuits very efficiently, then we could not have public key encryption in AC0. OK. So to summarize, we constructed a set of primitives which are secure against AC0 and NC1 adversaries, in one case unconditionally, and in the other case under a worst-case assumption. And we have an open question: can we construct public key encryption against AC0 using similar techniques? Thank you.