We move to the last talk of the session, which is called Indistinguishable Proofs of Work or Knowledge, by Foteini Baldimtsi, Aggelos Kiayias, Thomas Zacharias, and Bingsheng Zhang. The talk is given by Thomas Zacharias. Hello. I will begin my talk with some motivation; let me get this out of the way first. So, the first time we came to discuss this concept was through a simple observation about standard proof of knowledge protocols. Just to remind you, a proof of knowledge is an interactive proof where a prover convinces a verifier of the validity of some statement. This is done in a way such that honest proofs always verify (this is the completeness property), and if a prover manages to convince us with some noticeable probability, then we actually have a mechanism to extract the witness. In most interesting constructions, this is done in a prover-private manner: you hide the coins of the prover, either via zero knowledge or via some relaxation like witness indistinguishability or some other variant. A very prominent example is the Schnorr identification scheme, which is essentially a proof of knowledge of a discrete logarithm. It runs in three moves, called the commitment a, the challenge c, and the response r, and it is what we call a Sigma protocol: an interactive proof, public coin, three moves, achieving special soundness (a strong proof of knowledge property) and zero knowledge against an honest verifier. So what we were thinking is: okay, we are convinced that the prover knows the witness, but how did the prover manage to do that? Did it do this efficiently because it had a priori knowledge of the witness? Or did it actually spend some superpolynomial effort to solve this specific challenge? So we started to look at it more generally, because this is a very special case.
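As an aside, the Schnorr scheme just mentioned is compact enough to sketch in full; the toy group parameters below are mine, chosen only for illustration and far too small to be secure:

```python
import secrets

# Toy Schnorr identification (a Sigma protocol). g generates the
# order-q subgroup of Z_p*; parameters are illustrative only.
p, q, g = 23, 11, 2
w = 7                        # prover's secret: the discrete log of h
h = pow(g, w, p)             # public statement: h = g^w mod p

# Move 1 (commitment): prover picks random r and sends a = g^r mod p
r = secrets.randbelow(q)
a = pow(g, r, p)
# Move 2 (challenge): verifier sends a random challenge c
c = secrets.randbelow(q)
# Move 3 (response): prover sends z = r + c*w mod q
z = (r + c * w) % q
# Verifier accepts iff g^z = a * h^c (mod p)
assert pow(g, z, p) == (a * pow(h, c, p)) % p
```

Special soundness is visible here: two accepting transcripts (a, c, z) and (a, c', z') with c ≠ c' let you solve for w as (z − z′)/(c − c′) mod q.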
Actually, in that example the knowledge challenge and the work challenge are exactly the same; but suppose you want to be convinced that a prover either can solve a problem or knows something. Where could this be useful? We ran into the seminal paper of Dwork and Naor on proofs of work, which were called pricing functions back then. It proposed how you could have a spam-reduction mechanism via these tools. So we have a mail receiver Bob, a mail sender Alice, and a mail server, and we want a spam-reduction mechanism. What happens is this: if Alice is an actual valid contact, then she runs the proof of knowledge branch to prove that she is a contact. And if someone is not a contact, like Eve, then the system forces the non-contact to run for some time, to spend some computational resources, in order to be approved. In this sense, someone who is actually a spammer and not a contact will be discouraged from such an attempt. The problem is that there is some privacy leakage here, because for this to work in the Dwork-Naor approach, the mail server gets to know Bob's contacts. A very nice way to advance this technique would be to do it so that the privacy of the contacts is preserved, so that this information is not leaked. You would use a tool with which senders can prove that either they know something that relates them to the receiver, or they have spent some amount of resources, and this is done in a privacy-preserving way, meaning that the prover's mode, which of the two paths the prover chose to follow, remains unknown.
Now let me just say that we model proofs of work via the concept of cryptographic puzzles: a verifier challenges the prover with a puzzle, the prover returns a solution, and if the verifier accepts, then he is somehow certain that the prover has spent an amount of work, given that we believe this problem is somehow hard. All of this brought us to the concept that we call proofs of work or knowledge, for brevity PoWorKs, where you prove that either you know a witness for a statement or you have performed some work, and you do it in an indistinguishable manner. In more detail, our contributions are the following. First, we formally define what a cryptographic puzzle system is, and then use this notion to define PoWorKs, which are defined with respect to some NP language and a fixed puzzle system. We provide an efficient three-move PoWorK construction and instantiate our puzzle systems in two different ways: one in the random oracle model, and one relying on complexity assumptions. Finally, to give some intuition for why this new class is useful, we have two real-world applications: first, how you can reduce spam in a privacy-preserving manner, as I described before, and second, how you could build a hybrid cryptocurrency system with enhanced liveness. We also provide a theoretical application, where you get a three-round concurrently simulatable argument of knowledge, a definition that comes from Pass's work. So, to begin with cryptographic puzzles. Informally, a cryptographic puzzle is something whose instances you can sample and generate fast and verify fast, but which must be hard to solve; maybe not intractable, but certainly with some parameterized hardness. We will also ask that the puzzles are what we call amortization resistant.
This means that being given a batch of puzzles does not provide you with a significant advantage over solving them one by one. For our constructions we require a special property, which we actually prove for our instantiations: the puzzles must be dense. What we mean by dense is that if you sample uniformly at random a string of the length of the puzzle encoding, then with high probability you will hit a puzzle. And by proof of work we do not restrict parallelizability, because we are not talking about a specific notion of time here; we are talking about what can generally be seen as computational resources. Now, a puzzle system, in a more formal approach, starts with three standard algorithms: a sampler, which takes a hardness parameter h and outputs a puzzle; a solver, which takes the parameter and the puzzle and outputs a solution; and a verifier, which takes the parameter and a pair of a puzzle and a solution and checks whether the solution is valid. For a dense puzzle we also require an extra algorithm that we call SampleSol, sample-with-solution, which outputs not just a puzzle but a pair of a puzzle and a solution. As for the properties: the easy part is that the completeness of the sampler and the correctness of SampleSol should hold with overwhelming probability, and the system should be efficiently sampleable. Where I want to stay a bit longer is what we define as hardness. We define hardness with respect to some scaling function g, meaning that if I provide the adversary with a puzzle and it answers with a solution, the probability that it managed to do so in time at most g of the solver's running time is negligible. Let me tell you what I mean. Assume, for example, that our solver is an algorithm that runs a brute-force search in 2^λ steps, with λ the security parameter.
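The four-algorithm interface just described can be written down as a small sketch; the names and the trivial toy instantiation in the completeness check are mine, not the paper's:

```python
from dataclasses import dataclass
from typing import Any, Callable, Tuple

# Sketch of the puzzle-system interface from the talk: Sample, Solve,
# Verify, plus SampleSol for dense puzzles. Hardness is not modeled
# here; this only captures the shape of the four algorithms.
@dataclass
class PuzzleSystem:
    sample: Callable[[int], Any]                  # hardness h -> puzzle
    solve: Callable[[int, Any], Any]              # (h, puzzle) -> solution
    verify: Callable[[int, Any, Any], bool]       # (h, puzzle, solution) -> ok?
    sample_sol: Callable[[int], Tuple[Any, Any]]  # h -> (puzzle, solution)

def completeness_ok(ps: PuzzleSystem, h: int) -> bool:
    """Both ways of obtaining a (puzzle, solution) pair must verify."""
    puz = ps.sample(h)
    if not ps.verify(h, puz, ps.solve(h, puz)):
        return False
    puz2, sol2 = ps.sample_sol(h)
    return ps.verify(h, puz2, sol2)
```

Density is then the extra requirement that the outputs of `sample` and `sample_sol` (first component) are distributed like uniform strings of the puzzle-encoding length.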
The best we could hope for is that g is the linear function, so there is no scaling loss, but this is not something we would expect. What we would expect is that if someone runs a brute-force search and spends only the square root of 2^λ steps, the probability that it finds a solution is negligible; so here g would be the square-root function. Generally, g is sublinear. For the density, and for the privacy we will require in our proofs, we want the sampling distributions of the two sampling algorithms to be indistinguishable. And we define amortization resistance with respect to a parameter k, which is the number of puzzles given as a batch, and a scaling function, which of course depends on how many puzzles you provide. Informally, this definition says that if I give you k puzzles, you have no advantage beyond that scaling compared to solving them one by one. For example, it could be that one puzzle takes 2^λ steps and k puzzles take no fewer than (1/k)·2^λ steps; so if k is polynomial, the batch is still hard. Having a formal definition of puzzles enables us to state formally what a PoWorK is. A PoWorK is an interactive proof between a prover and a verifier, and it is f-sound if it satisfies the following properties. First, completeness: an interaction where the prover runs either in the knowledge mode, that is, it has the witness, or in the work mode, having the code of the solver, should be accepted with overwhelming probability for valid statements. And I am sorry for this "RL" that pops up on the slide; it is not right-or-left, it is actually the witness relation. I have some compilation errors; whenever I save and export, it always gives me that, or something new.
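The g-hardness condition described above can be written out as a formula; the notation here is mine and may differ from the paper's:

```latex
% g-hardness (sketch): for every adversary A whose running time is at
% most g(t_{\mathsf{Solve}}(h)), where t_{\mathsf{Solve}}(h) is the
% running time of the honest solver,
\Pr\big[\, \mathsf{Verify}(h, \mathsf{puz}, s) = 1 \;:\;
    \mathsf{puz} \leftarrow \mathsf{Sample}(h),\;
    s \leftarrow A(h, \mathsf{puz}) \,\big] \le \mathsf{negl}(\lambda).
% Example from the talk: if Solve brute-forces in t = 2^\lambda steps
% and g(t) = \sqrt{t}, then any adversary limited to 2^{\lambda/2}
% steps finds a solution only with negligible probability.
```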
I mean, it is like the suitcase problem: you have to fit many things into a suitcase, you squeeze, and something new pops out every time. So this will keep happening, and I said, okay, I cannot deal with that. So whenever you see RL, it is the witness relation, okay. Now, soundness is defined, again, with respect to a scaling function f, which means that, over the coins of the sampling algorithm, if a prover manages to persuade us within time scaled down by f with respect to the running time of the solver, then it did so because it knows something; it did not do the work, it knows, so we can extract the witness via an extractor. This is f-soundness. And indistinguishability says that the view of the verifier is indistinguishable whatever the mode of the prover is, be it the knowledge mode or the work mode. This property directly implies standard witness indistinguishability, because every interaction using some witness is indistinguishable from a common reference point, which is the work mode. Okay, so now the construction. First, to give some intuition for why this notion makes sense: you can have a trivial four-move construction by having the verifier send a puzzle, and the prover either commit and provide a zero-knowledge proof of knowledge of the witness, or commit to a solution and provide a zero-knowledge proof that it knows a solution. So this can be done in four moves. It is much more interesting to do it in three. We actually have a compiler that takes as input a three-move special-sound HVZK protocol and a fixed puzzle system, and produces a three-move PoWorK. Just to remind you what a three-move special-sound HVZK protocol is: it is a protocol with respect to some NP language.
The prover again runs in commitment a, challenge c, and response r moves, and proves the validity of the statement, with the completeness property; the special soundness property, which means proof of knowledge in the sense that there is an extractor that, given two accepting transcripts with the same commitment, can extract a witness; and zero knowledge against an honest verifier. So how does the construction work? Keep in mind that, because we want indistinguishability, the two modes should appear similar, so watch the flow here. In the knowledge mode, the prover starts by running the first move of the underlying HVZK protocol: it produces a commitment a′ and sends it to the verifier, and the verifier sends the challenge c. And now is where density comes in: the prover samples a puzzle together with a solution, sets the challenge of the underlying protocol, c′, to be the XOR of the verifier's challenge and the sampled puzzle, and runs the third move of the underlying protocol on this c′. So the third move of the PoWorK consists of c′, the response r′, and the puzzle-solution pair. The verifier runs the following checks: first, that its challenge is indeed the XOR of c′ and the puzzle; that the transcript of the underlying protocol verifies; and of course that the solution is accepted. Now we go to the proof-of-work mode, which must look the same, for indistinguishability reasons. In this case we use the simulator given by the HVZK property of the underlying protocol: it simulates a valid transcript, and the prover sends a′ to the verifier. The verifier again responds with c, but what changes now is that the prover does not have the witness; it has to work. And how does it work? By computing the puzzle as the XOR of the given challenge and the simulated one; and because this value is uniform, and density holds, it must be a puzzle with high probability.
Okay, that is why we need density. Given this puzzle, the prover runs the solving algorithm, outputs a solution, sends it to the verifier, and the verifier runs exactly the same checks. And what security do we get here? Under some reasonable assumptions, which are very easy to satisfy (we require that the challenge and puzzle sampling distributions are statistically close, which holds for every distribution close to uniform, and that Solve is the slowest algorithm among the ones involved), we get the following: for this language, the underlying protocol, and the puzzle system we fix, if the puzzle is g-hard, then we get a PoWorK with soundness scaling that is a constant times g, and statistical indistinguishability. So what remains now is this: we have plenty of three-move special-sound HVZK protocols, and in order to plug something into our compiler we need dense puzzles. So let us build some. We provide two constructions: one based on the random oracle model, and one based on complexity assumptions. The first one is actually the first that would come to mind. What can I think of as a puzzle? Suppose I have a hash function, and I have a hardness parameter h. As the puzzle I give the last h bits, LSB_h, of the hash of a randomly picked x. The solution is a preimage x, and you are going to keep hashing until you find something valid. And of course the verifier checks that, if you take the last h bits of the hash of the solution, you actually get the challenge puzzle. So it is the first thing that would come to mind. And what it buys us is the following, for some meaningful parameters, namely when the hardness parameter h lies roughly between log² of the security parameter and λ/4.
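Putting the two modes side by side, here is a minimal end-to-end sketch of the compiler, instantiated with the toy Schnorr instance and a small hash puzzle; all parameters, names, and sizes here are mine, chosen only so the example runs quickly:

```python
import hashlib
import secrets

# Toy parameters, illustration only. g generates the order-q subgroup
# of Z_p*, and challenges/puzzles are N_BITS-bit strings.
p, q, g = 47, 23, 4
N_BITS = 4

def H(x: bytes) -> int:
    """Puzzle target: last N_BITS bits of SHA-256."""
    return int.from_bytes(hashlib.sha256(x).digest(), 'big') & ((1 << N_BITS) - 1)

def sample_sol():
    x = secrets.token_bytes(8)
    return H(x), x                         # (puzzle, solution)

def solve(puz):
    while True:                            # brute-force solver
        x = secrets.token_bytes(8)
        if H(x) == puz:
            return x

w = 5                                      # Schnorr witness
pk = pow(g, w, p)                          # public statement

def prove_knowledge(c):
    """Knowledge mode. (In the protocol a1 is sent before c arrives;
    the two prover steps are collapsed here for brevity.)"""
    r = secrets.randbelow(q)
    a1 = pow(g, r, p)                      # real first move
    puz, sol = sample_sol()
    c1 = c ^ puz                           # embedded sigma challenge
    z = (r + c1 * w) % q                   # real third move
    return a1, (c1, z, puz, sol)

def prove_work(c):
    """Work mode: simulate the sigma transcript, solve the forced puzzle."""
    c1 = secrets.randbelow(1 << N_BITS)    # simulated challenge
    z = secrets.randbelow(q)
    a1 = (pow(g, z, p) * pow(pk, -c1, p)) % p   # HVZK simulator: g^z * pk^(-c1)
    puz = c ^ c1                           # by density, this is a puzzle
    sol = solve(puz)
    return a1, (c1, z, puz, sol)

def verify(a1, c, msg):
    """Same three checks regardless of mode."""
    c1, z, puz, sol = msg
    return (c == c1 ^ puz
            and pow(g, z, p) == (a1 * pow(pk, c1, p)) % p
            and H(sol) == puz)
```

Both modes produce a transcript with an identical shape and the same distribution over (a1, c1, z, puz, sol), which is exactly the indistinguishability property described above.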
And for some constant c greater than 2, and some sufficiently large k, we get that in the random oracle model this has c-th-root soundness (that is the scaling we get, for every c greater than 2), and it is amortization resistant with respect to the identity scaling function; that is, we have no loss there, as long as the batch of puzzles is no larger than k. A more interesting construction is in the complexity-assumption setting. Here we build our construction on the hardness of one-way-function-related primitives and discrete logarithms. So we start with a universal one-way hash function. A universal one-way hash function is a one-way hash function with a specific property: an adversary that commits to an input before seeing the hash key cannot find a second input with the same evaluation. This is target collision resistance, and from it we build an extractor with a similar property, which can be seen as being of independent interest. Now, given this target-collision-resistant strong extractor and an arbitrary one-way function, we get, without going into details, a function that is a dense one-way function. What we mean by dense is that its output is close to uniform: not only is it one way, but it is very nicely distributed over its range. And now that we have all the tools, we instantiate this f with a well-known one-way function, exponentiation, so that inverting it is computing a discrete logarithm. Our puzzle is this one-way function applied to some parameters and some randomness, which is used for refreshing the puzzle, and the solution is required to have length equal to the hardness parameter. Again, I am not going to go into much more detail here.
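To make the discrete-log instantiation a bit more concrete, a stripped-down version might look like the following; this ignores the extractor and the refreshing randomness entirely, and the toy parameters are mine:

```python
import secrets

# Stripped-down discrete-log puzzle sketch: the puzzle is y = g^x mod p
# and the solution is x. The actual construction composes a TCR strong
# extractor with exponentiation and uses secure parameters; this toy
# version keeps only the one-way function. g generates the order-q
# subgroup of Z_p* (p = 2q + 1, both prime).
p, q, g = 1019, 509, 4

def sample_sol():
    """Dense sampling with a solution in hand: pick x, publish g^x."""
    x = secrets.randbelow(q)
    return pow(g, x, p), x

def solve(y):
    """The intended solver: brute-force discrete log, ~q steps."""
    for x in range(q):
        if pow(g, x, p) == y:
            return x

def verify(y, x):
    return pow(g, x, p) == y
```

Since g has order q, the discrete log is unique in Z_q, so `solve` always recovers exactly the exponent that `sample_sol` chose.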
The parameters regarding the security of this construction are similar to the previous one, but now we assume some reasonable hardness for the extractor and the discrete logarithm; let me just say that for these specific parameters, the hardness of the discrete log holds in the generic group model. You can take another assumption if you like, but we provide this instantiation because this is what Shoup's result tells us. I will continue with some applications, which are the last part of our contribution, to show why this new class makes sense. Our first application, coming back to the intro: we can now use a PoWorK, not completely directly but quite easily, and get what we wanted in the first place, a spam-reduction mechanism where the prover does not reveal its mode of proving; that is, a privacy-preserving spam-reduction mechanism. Another interesting application is how we could build a cryptocurrency with robustness, with an enhanced liveness property. How could this happen? Most blockchains use a decentralized public ledger that runs on proofs of work. There have been counter-arguments against this overwhelming use of resources, which keeps expanding, so there are newer constructions based on signatures, which can be seen as proofs of knowledge of a secret key attesting that you hold some stake. So what if we could build a hybrid? What do we mean by a hybrid? Assume you have a system that runs normally via the standard proof-of-work method, but something happens and most of the miners go offline. We would like the ledger to somehow remain live, to preserve this liveness property of blockchains.
So we could have something like a trapdoor, a backup, which could be an authority, a trusted body, that could use the proof-of-knowledge mode and issue blocks in case of such an emergency, and then step back once the emergency is over and revert to the proof-of-work setting. This does not mean it stops being essentially a decentralized proof-of-work approach; it is a proof of work with a backup, in order to enhance liveness. We can discuss whether this is interesting, but with a PoWorK you can do it in a way that hides whether the emergency happened, and so you can hide any possible impact on the economy that there would be if someone knew that blocks were being issued because the miners are down. This is not revealed. For the final application, we use Pass's results. Pass's results give some very interesting notions of simulatability, a general and meaningful relaxation of zero knowledge. We show that, under reasonable assumptions about the hardness of our puzzle (actually, every hard puzzle should satisfy them), our PoWorK construction is a straight-line simulatable protocol; and since this simulation lives in a class close to standard polynomial time, namely quasi-polynomial time, we can plug in Pass's results and get a three-round concurrently simulatable argument of knowledge, which shaves one round off the four-round construction for the same primitive that Pass had in the original paper. And some conclusions and future work. As we said, we define PoWorKs; we did it via defining puzzle systems; we gave instantiations and constructions; and we provided some applications. Some interesting future directions we can think of: why not have some alternative PoWorK constructions? Why not study the relations of this notion with other known complexity classes?
More real-world applications would give a further boost to why it is nice to use such a primitive, and new puzzle-system instantiations could be plugged in, so that we have flexibility in the setup setting on which we base our hardness arguments. And that will be the end of my talk. Thank you very much. Is there any question? [An audience member asks whether the spam application could instead be built from searchable encryption.] You mean, what would the proof-of-work method, the spam-reduction mechanism, be in that case? I do not have a direct answer. I am not saying that it cannot be done that way; I cannot assess the efficiency off the top of my head, and I am not saying it cannot be done another way. Maybe you can plug in searchable encryption and make it work, but this approach is somehow more straightforward; it was the natural extension. You can use proof of work to prevent spam, so it is one way to do it. And that sounded meaningful: in the seminal paper on proofs of work, this was the main motivation, to prevent people who do not actually belong to, let us say, your family, whatever that means, from doing things. So let us extend this nice motivation, which was the initiative for all the proof-of-work modeling in that paper, and do it in a privacy-preserving manner. Whether it would be faster with searchable encryption, as in a symmetric setting, I do not know; I know that searchable encryption is fast, but I do not know whether in a public-key setting it would be more efficient than what we do now. Again, I do not have a direct answer; I would have to check. Okay, let us thank Thomas again, and all the speakers of the session. Thank you.