So let's dive right in. First, a confession: I'm not actually going to talk about Bitcoin in this talk, or about any motivating applications. Instead, I want to use this time as a platform to motivate a larger program of ours, in which cryptography is the application of complexity theory.

So what do I mean exactly? The win-win on the screen is the sort of thing that naive grad students, such as myself, dream of proving: a theorem of the form "either we have interesting cryptography, or we have efficient worst-case SAT algorithms." Why do we want a win-win like this? It's probably pretty obvious: we have this amazing, rich theory of NP-completeness, so breaking the cryptographic scheme would have far-reaching consequences. Unfortunately, as many people sitting in this audience know, there are explicit barriers to proving statements like this; we know barriers even to proving average-case hardness from NP-hardness.

So in this work, we consider the following win-win instead. We prove theorem statements of the form "either we have interesting cryptography, or we have non-trivial worst-case SAT speed-ups." Again, these speed-ups are only interesting if they hold in the worst case. And we think this is particularly interesting because it connects to the emerging theory of fine-grained complexity, which has seen an explosion of activity in recent years.

Before we go any further, what exactly do I mean by interesting cryptography here? What we're going to talk about today is proofs of work, a notion introduced by Cynthia Dwork and Moni Naor in 1992. A proof-of-work scheme is parameterized by a parameter t, which quantifies the amount of work we want proved. We have two players, a prover and a verifier, and the scheme consists of three algorithms. First, a challenge-generation algorithm generates a challenge; for the purposes of this talk, this just selects something uniformly at random. The prover takes this challenge and runs the algorithm Prove to produce a proof. The verifier then runs the algorithm Verify, which checks that the proof is indeed correct.

What properties do we want? First, efficiency: the honest prover should run in time roughly t(n), and the verifier should be very efficient, linear in the parameter n. Second, completeness: the honest prover actually works, in that the verifier always accepts an honestly generated proof. And finally, hardness: you can't cheat. If you do substantially less than t work, you're unable to convince the verifier.

But let's push on this a little further. We actually want a slightly more robust notion. Consider a verifier interacting with a sort of mega-prover: the verifier sends this prover m challenges all at once, and the prover sends back m proofs in return. We want this prover to be forced to do the work for every challenge; a sketch of the syntax follows.
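To fix the syntax, here is a minimal sketch of the three-algorithm interface in Python. The names and types are mine, not from the paper, and the hardness property is of course only a comment — it's a complexity claim, not something code can enforce.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ProofOfWork:
    """Syntax of a proof-of-work scheme with work parameter t(n).

    gen(n)        -> challenge c   (for this talk: sample c uniformly at random)
    prove(c)      -> proof pi      (honest prover runs in time ~t(n))
    verify(c, pi) -> bool          (verifier runs in time ~O(n))

    Completeness: verify(c, prove(c)) always accepts.
    Hardness (batch version): any cheater answering m challenges in total time
    less than ~m * t(n)^(1 - eps) convinces the verifier with low probability.
    """
    gen: Callable[[int], Any]
    prove: Callable[[Any], Any]
    verify: Callable[[Any, Any], bool]

def completeness_check(pw: ProofOfWork, n: int, trials: int = 10) -> bool:
    """Sanity-check completeness on a few random challenges."""
    return all(pw.verify(c, pw.prove(c))
               for c in (pw.gen(n) for _ in range(trials)))
```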
Basically, the best the prover should be able to do is produce the proofs for the individual challenges one by one. That's captured by the hardness statement: given m challenges, any batch prover producing m proofs in time less than m · t^{1−ε} — that is, shaving an ε factor off the exponent of the parameter t — succeeds with low probability. We'll see loosely where this ε comes from in a couple of slides.

So the statements we prove in this work fit into a generic Mad Libs format: assuming some worst-case conjecture from fine-grained complexity — it's not totally generic; we instantiate it with specific conjectures, but there is a variety of them — the recipe I'm about to describe yields a proof-of-work scheme requiring an amount of work that corresponds to the conjectured hardness.

So what's the roadmap for the rest of this talk? First, I'll guide you through some of these conjectures from fine-grained complexity. Hopefully you were at Virginia's talk on Sunday, but if you weren't, or you don't remember, I'll jog your memory. Second, we'll instantiate the recipe I alluded to on the previous slide with a specific example, orthogonal vectors, and carry it through. And finally, if we have time, we'll talk about some nice properties that come out of this specific proof-of-work scheme.

So, fine-grained complexity — some of you may not be familiar with this notion. What is fine-grained complexity in a nutshell? It starts from the basic observation that for many natural problems, the brute-force procedure — often the obvious procedure — is essentially comparable to the state of the art, despite lots of work by very, very intelligent people to improve it. And what do I mean by "essentially"? I mean that if brute force — searching over the entire solution space — takes time t, then it's hard to even shave an ε factor off the exponent: the best algorithms we have fail to improve by even that much. This is actually where the ε in the hardness definition comes from.

This observation led to a theory in which many people, including Virginia, prove statements of the form: if you can improve the exact complexity of problem Y, then you can improve the exact complexity of problem X. How do you prove this sort of thing? There's a framework of fine-grained reductions, which should feel familiar to cryptographers — they're tight reductions in disguise. And this theory has led to conjectures that for many of these natural problems, the simple, obvious brute-force algorithm is in fact essentially the best you can do, up to small polynomial factors.

So what sorts of problems are we talking about in fine-grained complexity? Maybe one of the most important is k-SAT: given a k-CNF, a CNF with clauses of width k, decide whether it's satisfiable. Lots of work has been done on this problem, obviously, and so far the best algorithms known still take essentially 2^n time; as k grows, we can barely shave off anything, not even a constant factor in the exponent.
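As a baseline for what "brute force" means here, this is the 2^n enumeration for CNF satisfiability — a sketch, with my own encoding convention that literal −i denotes the negation of variable i:

```python
from itertools import product

def brute_force_sat(n: int, clauses: list[list[int]]) -> bool:
    """Decide satisfiability of a CNF over variables 1..n by trying all 2^n
    assignments. Literal i means variable i is true; -i means it is false.
    Total time is 2^n * poly(input size); SETH (defined next) conjectures that
    for k-SAT this cannot be improved to 2^((1-eps)*n) once k is large enough."""
    for assignment in product([False, True], repeat=n):
        def literal_holds(lit: int) -> bool:
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(literal_holds(lit) for lit in clause) for clause in clauses):
            return True
    return False

# Example: (x1 or x2) and (not x1 or x3) and (not x2 or not x3) is satisfiable.
assert brute_force_sat(3, [[1, 2], [-1, 3], [-2, -3]])
```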
And so this led Impagliazzo and Paturi to formulate what is known as the Strong Exponential Time Hypothesis (SETH): for every ε > 0, there exists a k such that k-SAT cannot be solved in time 2^{(1−ε)n}.

What else do we have? Another foundational problem here is the Orthogonal Vectors (OV) problem. We have two sets U and V of n vectors each, in {0,1}^d for very small dimension d — think of d as polylog(n) — and all we ask is: is there a vector in U and a vector in V whose inner product is zero? This is essentially a set-disjointness question. And again, the best we know how to do, in the worst case, is essentially to enumerate over all pairs and check (the brute-force baseline is sketched below). This led Ryan Williams in 2005 to conjecture that this problem fundamentally requires quadratic time when d is slightly super-logarithmic; in fact, the strong exponential time hypothesis implies this conjecture.

There are many other examples. You can generalize the previous problem to what is called k-OV, where orthogonality is k-wise — you multiply component-wise across a k-tuple of vectors — and this problem is conjectured to be n^k-hard: the best you can do is enumerate over all k-tuples. For every constant k, this is again implied by SETH. And there are even more examples: All-Pairs Shortest Paths, where there's been basically no improvement over the algorithm you learn in Algorithms 101; 3-SUM; Zero-Weight Triangle. I'm not going to go into the definitions; there are a lot.

So let's zoom out for a second. This is actually a relatively small map of the area, just the part relevant to this line of work — a map of hardness-conjecture islands, if you like, where the arrows represent fine-grained reductions, i.e., implications. Over here on the left, we have the worst-case conjectures. In the middle, we have reductions to algebraized versions of these problems, from a previous work of ours from a year ago, where we succeeded in proving that they are hard on average — conditionally hard on average, that is. These are networks of conditions: if you can prove any one of these islands is in fact hard, all of its descendants inherit the corresponding hardness. In this work, we additionally prove that the algebraized problems in the middle are non-amortizable — we prove a direct sum theorem for them — and finally, we prove that these hardness conjectures imply the various proof-of-work schemes on the far right.

One caveat: for anything above cubic hardness, the proof-of-work schemes are not exactly what I described — they're interactive, and if you want to compress them, the best way we know is to apply Fiat–Shamir. Okay, since there's not a ton of time left, for the rest of this talk we're going to focus on one specific instantiation of this Mad Libs template: orthogonal vectors. Oh, I should also mention that very recently, Oded Goldreich and Guy Rothblum showed a worst-case to average-case reduction from counting t-cliques to counting t-cliques itself — not an algebraized version — and this also leads to a proof of work.
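Since everything that follows is built on OV, here is the quadratic brute force that the conjecture says is essentially optimal — a sketch, with vectors as 0/1 lists:

```python
def has_orthogonal_pair(U: list[list[int]], V: list[list[int]]) -> bool:
    """Check all n^2 pairs for an orthogonal one: O(n^2 * d) time.
    The OV conjecture asserts no n^(2-eps) algorithm exists for d = omega(log n)."""
    return any(all(u_l * v_l == 0 for u_l, v_l in zip(u, v))
               for u in U for v in V)

# Example: the pair (1,0,1) and (0,1,0) is orthogonal.
assert has_orthogonal_pair([[1, 0, 1], [1, 1, 1]], [[1, 1, 0], [0, 1, 0]])
```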
Right, so what are we doing? As a reminder: we're showing that the orthogonal vectors conjecture — or SETH, whichever you like — when run through the recipe I'm about to describe, gives a quadratically hard proof of work.

So what is the recipe? First, we represent orthogonal vectors as a low-degree polynomial, which I'll call FOV. What's the benefit of doing this? First, we can apply a fine-grained worst-case to average-case reduction. Second, we can prove a direct sum theorem, which is a fancy way of saying the problem is non-amortizable. Then we proceed to the second step: efficiently delegating the evaluation of FOV to the prover, using some tricks from Ryan Williams and the basic sum-check protocol from the IP = PSPACE line of work. This allows the verifier to check that the prover actually evaluated this hard-to-batch polynomial — and if a prover can do that too quickly, he breaks the earlier conjecture.

So how do we represent orthogonal vectors as a polynomial, and what does that mean in the first place? It means that if (U, V) contains an orthogonal pair, the polynomial should evaluate to something nonzero, and otherwise to zero. Recall what the orthogonal vectors problem looks like: we have two matrices of 0-1 vectors, and we want to determine whether the inner product of any pair is zero. The inner product already looks like a polynomial, but it's equal to zero at the wrong time. So we apply a trick and flip things around: for two specific 0-1 vectors u and v, the product over coordinates of (1 − u_l · v_l) is 1 if their inner product is zero, and 0 otherwise. Then we simply sum over all pairs of vectors:

FOV(U, V) = Σ_{i,j ∈ [n]} Π_{l ∈ [d]} (1 − u_{i,l} · v_{j,l}),

and we define this polynomial over F_p. On binary inputs, FOV is essentially counting the number of orthogonal pairs, so we take p > n² to avoid any wraparound; and note that the total degree is just 2d.

Why do we care? Why did we embed this thing in a low-degree polynomial in the first place? This goes back to old ideas of, I think, Lipton, and many others since. This is the idea from our previous paper, the fine-grained worst-case to average-case reduction: if we consider the truth table of this polynomial FOV — it's not really a truth table, since it's over a finite field, but whatever — it defines a codeword in a very nice error-correcting code (essentially a Reed–Muller code). Roughly, this means that if we have some algorithm A that does well most of the time, and we're interested in evaluating the polynomial at some particular input x_i, then because the code has very nice local reconstruction properties, we just need to call A on a few randomly drawn instances, and we can reconstruct the value at any x_i with very high probability. So it allows us to correct something that's good on average into something that's good all the time. The actual proof of the theorem is a bit more complicated than what I just showed you, but the theorem statement is this: the orthogonal vectors conjecture implies that any algorithm running in sub-quadratic time is correct on a uniformly drawn instance with low probability. A sketch of both the polynomial and the local-correction step is below.
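To make both objects concrete, here is a sketch of evaluating FOV and of the local-correction step. It assumes p is prime and, for simplicity, that the average-case solver A errs rarely enough that all 2d+1 correlated queries come back correct with good probability; the real reduction handles larger error rates with noisy interpolation.

```python
import random

def f_ov(U, V, p):
    """FOV(U, V) = sum_{i,j} prod_l (1 - U[i][l]*V[j][l]) mod p.
    On 0/1 inputs this counts orthogonal pairs, so p > n^2 avoids wraparound;
    as a polynomial in the 2nd input entries, its total degree is only 2d."""
    total = 0
    for u in U:
        for v in V:
            term = 1
            for u_l, v_l in zip(u, v):
                term = term * (1 - u_l * v_l) % p
            total = (total + term) % p
    return total

def locally_correct(A, z, d, p):
    """Reconstruct FOV at an arbitrary point z in F_p^(2nd) (flattened) from an
    average-case solver A. Restricted to the random line z + t*w, FOV is a
    univariate polynomial of degree <= 2d in t, and each query point with
    t != 0 is marginally uniform, so A is probably right on all of them;
    Lagrange interpolation at t = 0 then recovers FOV(z)."""
    w = [random.randrange(p) for _ in z]
    ts = list(range(1, 2 * d + 2))                        # 2d + 1 sample points
    ys = [A([(zi + t * wi) % p for zi, wi in zip(z, w)]) for t in ts]
    value = 0
    for a, (ta, ya) in enumerate(zip(ts, ys)):
        num, den = 1, 1
        for b, tb in enumerate(ts):
            if b != a:
                num = num * (0 - tb) % p
                den = den * (ta - tb) % p
        value = (value + ya * num * pow(den, -1, p)) % p  # p prime: den invertible
    return value
```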
But I promised you more: FOV is non-amortizable. So what's the basic intuition for non-amortizability, and how do we prove it? The key ingredient is downward self-reducibility. (If we only cared about the worst case, or if we had a Karp-style worst-case to average-case reduction like we do for, say, discrete log, downward self-reducibility alone would be enough; ours is a Turing-style reduction, so a bit more work is needed, as you'll see.) So what does downward self-reducibility look like here? It's very simple. We have our two sets of vectors, U and V. We split each of them into two smaller sets, and we compare all pairs of the smaller sets: FOV on (U, V) is simply the sum of FOV called on the four pairs of smaller sets. Combining solutions to the smaller problems thus very easily yields a solution to the bigger problem, and with some more work, you can show this suffices even for the Turing-style reductions we need.

So what do we prove? That the orthogonal vectors conjecture implies that any algorithm running in time m · n^{2−ε} is correct on m uniform instances with low probability — and here m is any polynomial in n; it doesn't matter which. This is a bit surprising, because batch univariate polynomial evaluation is something we've known how to do in quasi-linear time since the 70s.

Okay, let's go back to our recipe; we're about halfway done. Now I'll show you, roughly, how to delegate the evaluation of this polynomial. We basically run the sum-check protocol. The verifier has an instance (U, V) of the FOV-evaluation problem and sends it to the prover. The prover computes y = FOV(U, V), along with the coefficients of a uniquely defined univariate polynomial G_{U,V}, determined by U and V — I'm not going to tell you exactly what G_{U,V} is, but fine — and sends both back to the verifier. What does the verifier do now? He checks (1) that what the prover sent is in fact the uniquely defined G_{U,V}, and (2) that y is consistent with G_{U,V}, i.e., that y = FOV(U, V). This sounds like a lot of work for the verifier, but the key point is that G_{U,V} is very helpful here. It's a univariate polynomial of only (nearly) linear degree; its coefficients can be computed from U and V in essentially quadratic time; the verifier can check that it's the correct G_{U,V} in linear time, essentially via the Schwartz–Zippel lemma; and given this univariate polynomial, he can check property (2) very quickly, again in just linear time.

So this is starting to look like a proof-of-work scheme. Here are our algorithms: Gen just picks a random challenge — a vector of field elements; Prove is the two-step computation above; Verify is the two-step verification procedure. If you believe what I just said, it satisfies efficiency and completeness; and for hardness, if you had a prover that's too efficient, the verifier could simulate the whole interaction by himself and break the earlier theorems. A sketch of the prover and verifier follows.
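Here is a sketch of the resulting scheme, with G_{U,V} instantiated the natural way — my reconstruction, since the talk doesn't spell it out: interpolate each column l of U into a degree-(n−1) polynomial φ_l with φ_l(i) = u_{i,l}, and let G_{U,V}(x) = Σ_j Π_l (1 − φ_l(x)·v_{j,l}), so that FOV(U, V) = Σ_{i=1}^n G_{U,V}(i) and deg G ≤ d(n−1). The naive interpolation below is for clarity only; the real prover and verifier use fast batch evaluation to hit their quadratic and quasi-linear budgets.

```python
import random

def lagrange_eval(points, r, p):
    """Evaluate at r the unique polynomial through the (x, y) pairs, mod prime p."""
    total = 0
    for a, (xa, ya) in enumerate(points):
        num, den = 1, 1
        for b, (xb, _) in enumerate(points):
            if b != a:
                num = num * (r - xb) % p
                den = den * (xa - xb) % p
        total = (total + ya * num * pow(den, -1, p)) % p
    return total

def _g_at(U, V, x, p):
    """G_{U,V}(x) = sum_j prod_l (1 - phi_l(x) * V[j][l]), with phi_l(i) = U[i-1][l]."""
    n, d = len(U), len(U[0])
    phis = [lagrange_eval([(i + 1, U[i][l]) for i in range(n)], x, p)
            for l in range(d)]
    total = 0
    for v in V:
        term = 1
        for l in range(d):
            term = term * (1 - phis[l] * v[l]) % p
        total = (total + term) % p
    return total

def prove(U, V, p):
    """Prover: y = FOV(U, V) together with G_{U,V}, represented here by its
    values on x = 1..deg+1 where deg = d*(n-1)."""
    n, d = len(U), len(U[0])
    g_points = [(x, _g_at(U, V, x, p)) for x in range(1, d * (n - 1) + 2)]
    y = sum(gy for _, gy in g_points[:n]) % p
    return y, g_points

def verify(U, V, y, g_points, p):
    """Verifier checks: (1) the claimed polynomial matches the true G_{U,V} at a
    random point (a wrong degree-d(n-1) polynomial agrees there with probability
    <= d(n-1)/p, by Schwartz-Zippel); (2) y = FOV(U, V) = sum_{i=1..n} G(i)."""
    n = len(U)
    r = random.randrange(p)
    if lagrange_eval(g_points, r, p) != _g_at(U, V, r, p):
        return False
    return y == sum(gy for _, gy in g_points[:n]) % p
```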
Finally, I should note that the value y is not actually needed in the first place — everything goes through without it, which will be useful in a second.

So what did we just see? Assuming the orthogonal vectors conjecture, the recipe I described yields a proof-of-work scheme with quadratic work. Also, if you were paying attention, note that this proof-of-work scheme has unique proofs, and these proofs have a very simple algebraic structure: they just involve evaluating polynomials. As I mentioned before, though, for the harder polynomial problems we need interactive protocols; we're not sure how to make those non-interactive without additional assumptions.

So, how can we exploit this algebraic structure I was just talking about — the fact that everything is essentially just evaluating polynomials? One instantiation of this idea that we have in the paper is what we call zero-knowledge proofs of work. What does this mean exactly? It means the verifier learns nothing from the prover beyond the fact that he performed some work; he can't, say, take the proof and use it to get some other work done, essentially exploiting the prover's additional power. What does "learns nothing" mean, though? It means we can efficiently simulate interactions with an honest verifier. We have to be a bit careful about "efficiently" here, because all of the problems I've been describing are actually in P, so the normal notion of zero-knowledge seems trivial in some sense. Here, efficiency means you can simulate in quasi-linear time — so the verifier is really not learning anything.

And how would we do this, very roughly? Recall that in our proof-of-work scheme, all the verifier needs to do is evaluate this univariate polynomial and compare it — okay, I didn't tell you this explicitly, but he compares it to a value that he computes by himself, efficiently. The idea here is to assume DDH and use a homomorphic commitment scheme. The prover sends a public key and a commitment to the univariate polynomial. Then the verifier, instead of evaluating the polynomial in the clear, evaluates it homomorphically, commits to his own value using the public key, and then the two test the equality of these two commitments using, you know, all the standard techniques (a toy sketch of the homomorphic-evaluation step appears below). That's it — and we think this indicates there may be more applications of this sort of structure to cryptography.

I want to close with one big open question in this larger paradigm, one that we were initially hoping to solve and didn't quite manage: moderately hard one-way functions, a notion due, I believe, to Moni Naor. Can you show the existence of a moderately hard one-way function from assumptions similar to the ones I described today? And what is a moderately hard one-way function? It's a mouthful: a function f where, given x, you can compute f(x) in the forward direction in, say, quasi-linear time, but it should be basically impossible to invert in sub-quadratic time.
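To illustrate just the homomorphic-evaluation step, here is a toy Pedersen-style commitment sketch. Everything about it is illustrative: the group is tiny and insecure, the generators are arbitrary (in a real instantiation, h's discrete log with respect to g must be unknown), and the final equality test would really be done without opening, via standard zero-knowledge techniques rather than the assert shown here.

```python
import random

# Toy group: the order-Q subgroup of squares mod P, with Q = 1019, P = 2*Q + 1 = 2039
# (both prime). Real instantiations use cryptographic parameter sizes.
Q, P = 1019, 2039
G, H = 4, 9   # two squares mod P; toy generators of the subgroup

def commit(m, s):
    """Pedersen-style commitment Com(m; s) = G^m * H^s mod P, with m, s in Z_Q."""
    return pow(G, m % Q, P) * pow(H, s % Q, P) % P

def eval_committed_poly(coms, r):
    """Homomorphic evaluation at r: prod_k C_k^(r^k) is a commitment to g(r),
    where C_k commits to coefficient c_k of the univariate polynomial g."""
    acc, r_pow = 1, 1
    for c in coms:
        acc = acc * pow(c, r_pow, P) % P
        r_pow = r_pow * r % Q   # exponents live mod the group order Q
    return acc

# The prover commits to g's coefficients; the verifier evaluates under the hood.
coeffs = [3, 1, 4, 1, 5]
rands = [random.randrange(Q) for _ in coeffs]
coms = [commit(c, s) for c, s in zip(coeffs, rands)]
r = random.randrange(Q)
g_r = sum(c * pow(r, k, Q) for k, c in enumerate(coeffs)) % Q   # g(r) in the clear
s_r = sum(s * pow(r, k, Q) for k, s in enumerate(rands)) % Q    # combined randomness
assert eval_committed_poly(coms, r) == commit(g_r, s_r)
```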