I'll be talking for the most part about proofs of sequential work, and towards the end I will talk a bit about their reversibility and the added value it brings. We can start by talking about proofs of work. These are non-interactive proof systems at the end of which the verifier is convinced that the prover did some t computational steps. They're very famous these days, and there is a very simple instantiation: if you have a random oracle, you repeatedly try nonces until the hash starts with enough zeros. This construction is very simple, but it's highly parallelizable; you could do it entirely in parallel. If the verifier cares about the sequentiality of the work, then we arrive at the concept of proofs of sequential work. This is an interactive proof system at the end of which the verifier is convinced that the prover did t sequential steps, not merely t steps. The way to formalize this is to require that even a massively parallel adversary that tries to save some of the sequential work will fail with overwhelming probability.

Why do we care about proofs of sequential work? They have many applications, old and new. The motivation for our work was an application to cryptocurrencies, which I will sketch towards the end. In all of these applications, you use computational delay as a proxy for time in real life. That's the main idea underlying all of them.

If sequentiality were the only thing we cared about, under the semi-definition I gave, there would be a trivial solution. You get an x, you apply f to get x1, apply f again to get x2, and so on until you get xt, and you send that back. If f is a random oracle, this is highly sequential. Great. But the problem is that the prover is useless to the verifier: to be convinced, the verifier has to redo all the work. So obviously we care about something else, not only sequentiality; we need some gap in the work between the prover and the verifier.

While we're already on this slide, we can observe the following. If we have a random permutation f that is much faster to apply in the backward direction, then the verifier can do the verification in the backward direction: the prover computes in the forward direction, the verifier verifies in the backward direction. Depending on how large the gap between forward and backward evaluation of f is, we could have something practically relevant. An instantiation of this idea is the sloth function, a carefully defined square-root permutation over the multiplicative group of a finite field of size p. We don't care much about the technical details here, except that the forward direction is essentially computing square roots, while the backward direction is one squaring per step. The assumption is that computing a square root in such a group takes roughly log p sequential squarings. If that's the case, the verifier, working in the backward direction, gains a log p speedup. In practice log p would probably be a thousand, which is great, and they actually use this function to do interesting stuff. But this is not what we actually hope for. We hope for verification time that is polylogarithmic and still relevant in practice, something we can actually use and deploy.
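To make the two baselines just described concrete, here is a minimal sketch (not from the talk) of both the parallelizable nonce search and the trivial sequential hash chain. SHA-256 stands in for the random oracle, and all parameters are illustrative.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def pow_search(statement: bytes, zero_bits: int) -> int:
    """Proof of work: find a nonce so that H(statement || nonce) starts with
    zero_bits zeros. Embarrassingly parallel: nonces can be tried independently."""
    nonce = 0
    while True:
        digest = int.from_bytes(H(statement + nonce.to_bytes(8, "big")), "big")
        if digest >> (256 - zero_bits) == 0:
            return nonce
        nonce += 1

def hash_chain(x: bytes, t: int) -> bytes:
    """Trivial PoSW: x_{i+1} = H(x_i), inherently sequential in the random-oracle
    model, but the verifier must redo all t steps to check the result."""
    for _ in range(t):
        x = H(x)
    return x

print(pow_search(b"block", zero_bits=16))
print(hash_chain(b"statement", t=100000).hex())
```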
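And here is a toy version of the sloth-style forward/backward asymmetry, assuming a prime p with p ≡ 3 (mod 4), so that a square root is a single exponentiation costing about log p squarings while undoing it is one squaring. The real sloth function handles signs and quadratic residuosity carefully and uses a roughly thousand-bit prime; this sketch only shows where the log p gap comes from.

```python
# Toy sloth-style asymmetry. Assumes p is prime with p % 4 == 3, so that
# x^((p+1)/4) mod p is a square root of x whenever x is a quadratic residue.
p = 10007  # illustrative small prime; real instantiations use ~1000-bit primes

def forward(x: int, t: int) -> int:
    """Prover direction: t sequential square-root steps, each ~log p squarings."""
    for _ in range(t):
        x = pow(x, (p + 1) // 4, p)
    return x

def backward(y: int, t: int) -> int:
    """Verifier direction: t sequential squarings, ~log p times cheaper per step."""
    for _ in range(t):
        y = pow(y, 2, p)
    return y

y = forward(1234, t=1000)
# Squaring forgets the sign of the root, so we recover x0 or p - x0; the actual
# sloth function removes this ambiguity with an extra bit flip per round.
assert backward(y, t=1000) in (1234, p - 1234)
```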
Before we go to the practical things, somebody might say: well, if you already had a sequential function, like a hash chain, you could just compute the result and throw a SNARK on it. That's great. The proof is small, and the verifier can verify it quickly. But now the prover is not happy: it has to do extra work to generate this proof, and in its generality this is not that practical. It can be, depending on what kind of functions and what kind of SNARKs you use, but in its generality it's not clear that you can actually use it in practice. So we will not follow this approach; instead we will follow the approach of Mahmoody, Moran, and Vadhan. In this approach, the protocol is parameterized by a graph; I'll talk about what kind of graphs these are. The prover uses a random oracle and the statement to label this graph in the most natural way, where the label of a node is a function of the labels of its parents. You do the labeling of the graph, you Merkle-commit to it, and you send the commitment to the verifier. Then the verifier would like to see that you actually did the right thing and starts quizzing you on some labels. It may ask you: give me this one. You give the labels of its parents, so the verifier can check the correctness of the label, and you also give the Merkle-tree openings of everything you revealed. In this small example you're giving away almost the whole graph, but that's just for illustration. This can be verified quickly, and you repeat it many times until the verifier is convinced.

Well, great. But so far, where is the sequentiality of the work? There's no sequentiality here. This boils down to the kind of graphs you're using. In their instantiation, they use (e, d)-depth-robust graphs. It's not very important to know what they are, but they're very simple to explain, so we can still do it. These are graphs in which, if you remove any e nodes, the remaining induced graph still has a path of length d. There are actually very good such graphs, where e and d are a constant fraction of the size of the graph, which will also be t, and with logarithmic indegree. The problem with these graphs, in terms of applications, is that the prover needs prohibitively large space to compute the labels: if it's going to do it in t steps, it has to maintain a lot of labels in memory for almost all of the time, and that's not what we want.

But, and this is the question that was raised in the next paper, is that necessary? It turns out it's not, with a construction by Cohen and Pietrzak that uses different graphs. If you're going to commit anyway, let's start with the Merkle commitment and make a good graph out of it. So you start with a graph like this and add some extra, carefully crafted edges, whose role will become clear. That becomes your graph, and as before, you label it. But interestingly, the labeling itself is also a commitment: once you've labeled the sink, you're already committed to the whole graph. So you do these two things in one shot, elegantly and nicely. And in this case you also don't need to maintain large space, because the labeling can be done while keeping only logarithmically many labels in memory. Just as a sanity check, and I'm not going to give the proof for this, observe that there's a path that passes through all the nodes of the graph. That's by design: you added these extra edges.
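As a sketch of the labeling step shared by these graph-based constructions, here is the natural random-oracle labeling of a DAG: each label hashes the statement, the node's index, and its parents' labels. The toy graph and the encoding are mine for illustration, not the actual edge structure of either paper.

```python
import hashlib

def H(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for p in parts:
        h.update(len(p).to_bytes(4, "big") + p)  # length-prefixed, unambiguous encoding
    return h.digest()

def label_graph(x: bytes, parents: dict, n: int) -> dict:
    """Label nodes 0..n-1 in topological order: the label of a node is a hash of
    the statement x, the node's index, and the labels of its parents."""
    labels = {}
    for v in range(n):
        labels[v] = H(x, v.to_bytes(4, "big"), *(labels[u] for u in parents.get(v, [])))
    return labels

# Toy graph: a chain 0 -> 1 -> 2 -> 3 plus one extra edge 0 -> 3.
parents = {0: [], 1: [0], 2: [1], 3: [2, 0]}
labels = label_graph(b"statement", parents, n=4)
```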
So at least the sanity check passes: there's a long path. You instantiate this with a hash function and you're done. And as before, so this was the first phase, the commitment phase, the second phase is the challenge-response phase, done in the natural way. Good. So far, this is practical and nice.

So what are we doing here? This much is also enough background for our previous work; now let's talk about the current work. We want a construction that is as simple as a skip list, I will talk about it in a second, very simple to explain, and almost as efficient as the previous construction. On top of that, it has the nice feature, as you will see, that it gives an added value to these proofs of sequential work.

So let's start with it. Same approach, but here we look at the labeling in a different way. We have this skip list as part of the parameters of the proof system, and we use x to sample a bunch of permutations. Don't try to parse this; it's not necessary. You can think of these permutations as sitting in these boxes, and you think of the whole thing as a circuit: you feed an input here and compute the output there. You start with the all-zero input. Each wire here carries w bits. For this permutation you have three wires, each of w bits; you apply the permutation, you get the output, and so on. For this one, you apply it to the first wire and carry the next ones along. You push the zero input through, one step at a time, until you reach the end, and you define the final state as your label and your commitment, and you send it to the verifier.

So that's the first phase. The second phase is as before: you get some challenges and you need to open them. For the illustration here, assume the challenge is i = 5, which would be here. I think of a path that starts at the source, ends at the sink, and passes through node 5. This looks like a very long path, but it's actually logarithmic in the size of the skip list. What you do then is give the corresponding states, the states that were computed, back to the verifier, and the verifier checks their consistency in the most natural way. Observe that these are permutations, invertible permutations; they will be random permutations, we're in the random-permutation model here, so you can also verify in the backward direction. You take the last state, you invert it here, and you check at these blue nodes whether the input here is consistent with the output there, the input here with the output there, and so on. You check the consistency one step at a time until you reach the start. This path has logarithmic length, so you can do this efficiently. Repeat this many times and you gain conviction that the prover is doing the right thing and that there's a path which, as we will see, gives us a proof of sequential work.

Notice the space: when you're computing this, you only need to maintain, apart from the description of the parameters, just one state. From sigma_i you can go to sigma_{i+1}, and, interestingly, you can also go back from sigma_{i+1} to sigma_i. That's the reversibility, which is going to turn out to be useful.
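Here is a minimal sketch of this state-walk view, ignoring the actual skip-list wiring and wire widths: a few public permutations derived from x are applied sequentially to a single state, and because they are permutations the walk can be inverted step by step, which is the reversibility just mentioned. The round-robin schedule and tiny domain are illustrative stand-ins, not the construction's real parameters.

```python
import hashlib, random

def sample_permutation(x: str, i: int, domain: int):
    """Toy stand-in for a public random permutation pi_i derived from the statement
    x (in practice a fixed, keyed block cipher such as AES would play this role)."""
    rng = random.Random(hashlib.sha256(f"{x}|{i}".encode()).digest())
    fwd = list(range(domain))
    rng.shuffle(fwd)
    inv = [0] * domain
    for a, b in enumerate(fwd):
        inv[b] = a
    return fwd, inv

x, t, D = "statement", 1000, 1 << 16
perms = [sample_permutation(x, i, D) for i in range(4)]  # a few permutations, reused round-robin

# Forward: the prover keeps just one state and applies one permutation per step.
state = 0
for i in range(t):
    state = perms[i % len(perms)][0][state]

# Backward: since every step is an invertible permutation, the walk can be
# retraced from the final state; this is the reversibility used for verification.
s = state
for i in reversed(range(t)):
    s = perms[i % len(perms)][1][s]
assert s == 0
```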
The theorem statement that we prove is the following. The prover runs in two stages: the first stage is the commitment phase, and the second stage is the challenge-response phase. We say that if the prover in the first phase did only (1 - α)t sequential steps rather than t sequential steps, and in total made Q queries to the permutations, then it will make the verifier accept with at most the probability shown here. The first term comes from a collision event on these permutations. If that event did not happen, then the prover can succeed only on a (1 - α) fraction of the challenges, which corresponds to the sequential work it actually did.

How these proofs work, as a very high-level, hand-wavy sketch: you run the prover and observe its queries. Here I'm thinking of each of these nodes as a query, with X and Y as the input and output of the query; whether it was made in the forward direction or the backward direction, we don't care at the moment. Then you add an edge between two nodes if they are consistent, meaning the output of one is an input of the other; otherwise you don't. Obviously we do this because of how the verification actually works. Then you also prune this graph: you chop off all the nodes that are useless to the adversary, which won't affect its success probability. Here we're assuming that the bad event did not happen, that the adversary could not find collisions, so these nodes really are useless. Now observe that in what remains there is a path that goes through all the nodes of the graph, and since we apply random permutations, sequentiality follows. We also observe that the only challenges on which the adversary succeeds are the nodes of the black graph, the one that remained after pruning.

Oh, apparently I'm very fast. Okay, very good. So, in conclusion, let's compare what we've looked at. To verify sequentiality, the hash chain takes t steps: you redo the computation. The sloth function gains a log p factor, which in practice is great. The graph-based constructions, Mahmoody-Moran-Vadhan and Cohen-Pietrzak, take log t. The construction that I showed is actually log t as well, not log^2 t, but to be fair, the inputs to the queries here are larger than there; in the worst case an input could be of size log t, which is why, to compare this fairly to that, we add the extra log factor. The assumptions are random oracles, random permutations, and the sequentiality of repeated squaring. So all of this is great if you only care about sequentiality.

But in some of the applications you care about the correctness of the computation, and I'll sketch one in a second. If you care about correctness rather than only sequentiality, then all of a sudden these other schemes have worse parameters: you have to recompute the whole thing to be convinced that the output is correct, while the sloth function is still the best of these. Why is that? Just recall the graphs I was showing before. In all of them, if you've done the actual sequential work and you decide to cheat, say, on one node, then you will pass the sequentiality verification with overwhelming probability, but the output is not correct. So they're malleable, all of these are malleable, and that's why, to verify correctness, you need to actually redo the whole computation.

The question of correctness arose because we were motivated by an application to cryptocurrencies: in the Chia
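As a sketch of the bookkeeping in that proof, here is the consistency graph built from the prover's observed queries: each query is a node, and an edge is added when one query's output feeds another's input. The pair representation and the reachability-based pruning criterion are my simplified illustration of the argument, not the paper's actual reduction.

```python
def consistency_graph(queries):
    """queries: list of (input, output) pairs observed from the prover, regardless
    of whether each was asked in the forward or the backward direction."""
    return [(a, b)
            for a, (_, ya) in enumerate(queries)
            for b, (xb, _) in enumerate(queries)
            if a != b and ya == xb]

def prune(queries, edges, sink):
    """Keep only query nodes from which the sink is reachable; the rest are
    useless to the adversary (assuming no collisions were found)."""
    rev = {b: [] for b in range(len(queries))}
    for a, b in edges:
        rev[b].append(a)
    alive, stack = {sink}, [sink]
    while stack:
        for a in rev[stack.pop()]:
            if a not in alive:
                alive.add(a)
                stack.append(a)
    return alive

# Toy usage: three queries forming a chain 0 -> 1 -> 2, plus a dangling query.
qs = [("a", "b"), ("b", "c"), ("c", "d"), ("x", "y")]
es = consistency_graph(qs)      # [(0, 1), (1, 2)]
print(prune(qs, es, sink=2))    # {0, 1, 2}: query 3 is useless and gets chopped
```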
network, a cryptocurrency based on proofs of space, you want to retain the dynamics of Bitcoin, in terms of the blockchain, without assuming synchrony. When you need a new block, you would like some party to do this sequential computation that takes time, while everyone else just observes and verifies its correctness. So only one party does the computation, it's deterministic, and everybody can verify it quickly. That would be great, and this, on a high level, is how it will be used in Chia. So let's talk about correctness.

The best that we could do, given this picture, which will change drastically in a second, is to take the best of both worlds. We keep the sequentiality we have, but we would also like to have something like the sloth function and take the practical speedup it gives us, in a construction I call here the sloth skip list. This was actually the main design criterion behind moving from random oracles to permutations: we wanted to embed such a function in the computation so that we can gain a speedup. In this case the correctness verification is done in the backward direction, and it is much faster. You just plug it in, and it looks like this; you don't need to parse it, you really just plug these permutations into the lower level of the construction. Correctness is then verified in the backward direction, and you get a log factor, a thousand in practice, which is not bad. Still, in theory you would like this verification to be quick, polylogarithmic.

Subsequent and concurrent to this work, there's a line of beautiful work on verifiable delay functions. You can think of them as proofs of sequential work that have a unique output, so you verify sequentiality and correctness in one shot, and there are schemes where you can actually verify in constant time. There are caveats: proof generation takes a bit long, and so on. There are different proposals, which I won't cover, from different assumptions, and they add extra properties. It's a very interesting and exciting area. One major open problem in this area is post-quantum security. I think I will leave it at that. Thank you very much. Are there any questions?

In your list of previous works, you didn't mention number-theoretic solutions, which seem to be very efficient. There is an old suggestion of calculating 2^(2^t) modulo N, where the factorization of N is known to the verifier. It gives you an exponential gap and a very easy way to verify correctness, and it has all the nice properties, but it has to assume that factoring is difficult.

And, you know, it is also not public-coin, right, in the sense that for the verifier to verify the correctness... No, there is no public coin as far as I can see. You just calculate 2 to the 2 to the t for the time you want: the prover is just repeatedly squaring modulo N, and the verifier, knowing the factorization, can quickly calculate the result. Exactly, if the verifier knows the factorization. So if you care about a designated verifier, that's brilliant, yes. And actually these two works, Pietrzak and Wesolowski, take that beautiful construction and make it public, such that anybody, without knowing the trapdoor, can actually verify; these are two elegant and concurrent works. That doesn't mention this as an extra requirement; I was thinking about one prover and one verifier, and then I didn't see the difference. Yeah, true. Thank you.
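Since the question mentions it, here is a toy sketch of that number-theoretic delay (the classic time-lock puzzle): the prover does t sequential squarings to get 2^(2^t) mod N, while a designated verifier who knows the factorization shortcuts the exponent through phi(N). The tiny primes are purely illustrative.

```python
# Time-lock puzzle sketch: sequential squaring vs. trapdoor verification.
p, q = 10007, 10009          # toy primes; real puzzles use large RSA moduli
N, phi = p * q, (p - 1) * (q - 1)
t = 10**5

# Prover: t sequential squarings, believed to be inherently sequential.
y = 2
for _ in range(t):
    y = y * y % N            # after the loop, y == 2^(2^t) mod N

# Designated verifier: reduce the exponent modulo phi(N), then one exponentiation.
e = pow(2, t, phi)           # 2^t mod phi(N), computed fast by modular exponentiation
assert y == pow(2, e, N)     # Euler's theorem: valid since gcd(2, N) == 1
```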
I had a question: what fails if you take the Cohen-Pietrzak construction and just use the sloth permutation on the lowest level? Sorry, again? Why can't I just take the Cohen-Pietrzak construction and use a permutation at the lowest level, connecting the leaves of the tree, to get the same n over log n steps of verification but with the log t proof size?

So you want to put them here, connecting the leaves? Well, you're talking about a radically different construction. First of all, let's observe that this construction is not reversible at all: it uses a hash function and hashes things down, so you cannot hope to reverse it. You're saying, well, I don't care about that; then, as for that line, it's not clear how to do the proof in that construction at all, in that scenario. It's a very different construction, and by definition it doesn't work: this is just hashing down, you lose, you cannot reverse it. Any other questions?

Yeah, so we have to trust... so the permutations will be chosen with every proof by the prover? The permutations are random permutations, and invertible, so there's no secret information in them. But how is it important that they are random; how do we trust that they are random? Well, if you trust AES, for example, you could probably do that: in practice you would probably use something like AES and assume it's a random permutation. It's invertible, you don't care about trapdoors, and that's what you would actually use. And this x will have enough entropy to sample the different permutations. There are no trapdoors.

And when you try to make the verifier efficient: in the picture you showed, it looked like it's still linear in the length. Is there some compression going on? Because you showed these rows of operations; where does the verifier get the n over log n? You're talking about the correctness verification? Yes, when you verify the correctness. So it looks like it's still... I don't see where the dimension disappears, because you just add those rows; I don't see why it becomes n over log n rather than n. Because, by definition, you apply these permutations in the forward direction. As a prover, you cannot start from the end; you don't know the end. You start from the start, which you know, and you apply the square root once, and so on. If you assume that this operation takes log p sequential squarings, it's clear that the verifier in the end can go in the backward direction. How do you verify that something is a square root? You just square it; it's just one operation. So there's this log p gap. I mean, it's still just a constant factor, it's not, you know, like full sequentiality, but alright. Let's thank the speaker again.