Hello everyone, I'm Yu-Hsuan Huang. Today I'm going to talk about our work on the compressed-oracle technique and the post-quantum security of proofs of sequential work. This is joint work with Kai-Min Chung, Serge Fehr, myself, and Tai-Ning Liao. First we'll talk about the quantum random oracle model, and then give a brief summary of our results. We then discuss lazy sampling, a commonly used technique, in both the classical and quantum settings. Finally, we discuss our results in more detail.

The random oracle model is a common way to analyze classical cryptographic schemes that use a hash function. Here, the hash function is idealized as a uniformly sampled function that everyone has access to. Lifted to the quantum setting, everyone can make queries in superposition. In contrast to the sequential-query setting, we work in the parallel-query setting, in which multiple data points can be asked in a single query round. A typical example problem in the random oracle model is the zero-preimage problem, which asks to find an input that is mapped to zero. It is well studied and understood in both the classical and quantum settings; for example, when given parallel access to the random oracle, running Grover search in parallel is known to be optimal. Another example problem in the random oracle model is the hash-chain problem, which asks to find a sequence of inputs x_0, ..., x_q such that each one is mapped to the next. We refer to this as finding a q-chain; it is easy with q sequential queries, but expected to be hard with fewer than q sequential queries, even given parallel access to the random oracle. This is mainly because the data points asked in a single query round are non-adaptive, meaning they cannot depend on each other's query outputs. The hardness of the hash-chain problem is easy to show in the classical setting, but there was no quantum proof prior to our work. So, what have we done?
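As an aside, the easy direction of the q-chain problem is immediate: with q sequential queries, each fed the previous output, a chain falls out directly. A minimal sketch, using SHA-256 as an arbitrary stand-in for the random oracle:

```python
import hashlib

def H(x: bytes) -> bytes:
    """Stand-in for the random oracle (SHA-256, an arbitrary instantiation)."""
    return hashlib.sha256(x).digest()

def q_chain(x0: bytes, q: int) -> list[bytes]:
    """Build a q-chain x0, x1, ..., xq with x_{i+1} = H(x_i),
    using exactly q sequential queries."""
    chain = [x0]
    for _ in range(q):
        chain.append(H(chain[-1]))
    return chain

def is_chain(chain: list[bytes]) -> bool:
    """Check that each element hashes to the next one."""
    return all(H(chain[i]) == chain[i + 1] for i in range(len(chain) - 1))
```

Note that each query depends on the previous output, which is exactly what a single parallel query round cannot provide; this is the intuition behind the expected sequential hardness.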
We offer a useful framework for analyzing such problems in the parallel-query quantum random oracle model. Using our framework, one can prove quantum hardness of these kinds of problems using classical reasoning: it works by lifting a classical proof, if it is in a suitable form, into a quantum proof. For demonstration, we apply our framework to various example problems, simplifying existing proofs, such as the hardness of the zero-preimage problem, and also obtaining new bounds, such as for the collision-finding problem and the q-chain-finding problem. The main application of our framework is the first post-quantum security proof of the PoSW scheme, the proof of sequential work scheme constructed by Cohen and Pietrzak in 2018. Independently of and concurrently to our work, Blocki et al. also gave a post-quantum security proof of the PoSW scheme. However, their proof is tailored to this specific problem, whereas our framework is more generally applicable to various example problems. Also, understanding their proof requires a certain amount of quantum information science, while, taking our framework as given, verifying our proof is a matter of purely classical reasoning.

So next, let's talk about the lazy sampling technique. It is useful for analyzing hard problems in the random oracle model. Instead of sampling the entire random oracle function at the beginning, we use a database to simulate the random oracle. The database is initially empty, and whenever an entry is queried we sample fresh randomness for that entry. Formally, the database is a partial function, augmented with the empty value ⊥, and after q queries there are no more than q non-empty entries in the database.
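The lazy-sampling simulation just described can be sketched as follows. This is a minimal classical version: a Python dict plays the role of the partial function, with absent keys standing for the empty value ⊥.

```python
import secrets

class LazyOracle:
    """Classical lazy sampling of a random oracle H: {0,1}* -> {0,1}^n."""

    def __init__(self, n_bytes: int = 32):
        self.n_bytes = n_bytes
        self.db = {}  # partial function; missing keys represent ⊥

    def query(self, x: bytes) -> bytes:
        # Sample fresh randomness only when an entry is queried for the first time.
        if x not in self.db:
            self.db[x] = secrets.token_bytes(self.n_bytes)
        return self.db[x]
```

After q distinct queries the database holds at most q non-empty entries, and repeated queries are answered consistently, so the simulation is indistinguishable from a fully pre-sampled function.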
For analyzing the zero-preimage problem, an important observation is that if there is no zero in the database D_q, then the adversary is unlikely to output a zero-preimage either, because his best guess is some input that is not recorded in the database, and this succeeds with probability no more than an exponentially small error bound. Formally, we can write this down as the following probability bound.

Putting this into the quantum setting gives a way to understand Zhandry's compressed-oracle technique: now the database is in a quantum state, and whenever an entry is queried, we essentially apply the compressed oracle to the quantum state of the database. Formally, the state of the database is now a superposition of partial functions with no more than q non-empty entries after q queries. A similar observation applies here: if there is no zero in the database D_q, obtained by measuring the database state after q queries, then the adversary is unlikely to output a zero-preimage either, except that the error bound is slightly different. Although this simulation argument is not obvious, it is a way to understand Zhandry's compressed-oracle technique. Notice that we have now reduced the probability of the adversary finding a zero-preimage to the probability of the database having a zero in some of its entries. But how do we bound this desired probability?

Next, let's see the classical analysis of the zero-preimage problem, keeping in mind that our goal is to eventually lift it into a quantum proof using our framework. We have two observations. First, we observe that if, after q sequential queries, the database acquires a zero in some of its entries, then the zero must occur within one of the q sequential queries.
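The two bounds mentioned above (shown on the slides rather than spoken) plausibly take the following shape, writing ZERO(D_q) for the event that the database contains a zero entry and assuming range {0,1}^n. This is a hedged reconstruction from the surrounding discussion, not a quote from the paper:

```latex
% Classical: guessing an unrecorded input succeeds with probability 2^{-n}
\Pr\bigl[A \text{ outputs } x \text{ with } H(x)=0\bigr]
  \;\le\; \Pr[\mathrm{ZERO}(D_q)] \;+\; 2^{-n}.

% Quantum analogue (Zhandry's compressed-oracle lemma): the slightly
% different error bound enters at the amplitude level
\sqrt{\Pr\bigl[A \text{ outputs } x \text{ with } H(x)=0\bigr]}
  \;\le\; \sqrt{\Pr[\mathrm{ZERO}(D_q)]} \;+\; \sqrt{2^{-n}}.
```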
We can therefore bound the desired probability by summing up what we call, in bracket notation, the transition capacities, which are simply the maximal probabilities of the database shifting from not having a zero in some of its entries to having one. Second, we observe that if after one parallel query the database acquires a zero in some of its entries, then that zero must occur within one of the queried entries. Putting this all together, we can now bound the desired probability of the database having a zero. For the second observation, we use the terminology that the transition of the database, from not having a zero in some of its entries to having one, is strongly recognized by the local property ZERO. We refer to the database property ZERO as local because it depends only on one entry of the queried database.

Put at a higher level, we obtain the following recipe. First, we decompose the desired probability about the database into a sum of transition capacities. Second, we bound the transition capacities by the probabilities of the local properties that recognize the transition. In our framework, we use pretty much the same recipe, except that we are now in the quantum setting, so there is a different definition of transition capacities and correspondingly adjusted formulas to bound these transition capacities and probabilities. Similarly, we first decompose the desired probability into a sum of transition capacities. Second, we recycle probability bounds from the classical analysis and plug them into our formulas to obtain bounds for the quantum transition capacities. For a weaker notion of recognizability we also provide correspondingly different formulas, but the point is that it is still just a matter of recycling probability bounds from the classical analysis and plugging them into our formulas to obtain quantum bounds.

Now, back to the zero-preimage example. We can finally use our recipe to lift our classical analysis of the zero-preimage problem into a quantum analysis.
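Schematically, with constants and lower-order terms suppressed, the two-step recipe for the zero-preimage example can be written as follows. This is a hedged sketch, assuming k parallel queries per round and range {0,1}^n; the bracket notation stands for the transition capacities:

```latex
% Step 1: decompose into transition capacities, one per query round
\Pr[\mathrm{ZERO}(D_q)]
  \;\le\; \sum_{i=1}^{q} [\![\neg\mathrm{ZERO} \to \mathrm{ZERO}]\!]_i

% Step 2 (classical): each capacity is bounded via the local property
% ZERO holding at one of the k freshly queried entries
[\![\neg\mathrm{ZERO} \to \mathrm{ZERO}]\!]_i \;\le\; k \cdot 2^{-n}
\quad\Longrightarrow\quad
\Pr[\mathrm{ZERO}(D_q)] \;\le\; q\,k\,2^{-n}.

% Step 2 (quantum): the recycled classical probability enters under a
% square root, giving a bound at the amplitude level
\sqrt{\Pr[\mathrm{ZERO}(D_q)]}
  \;\lesssim\; \sum_{i=1}^{q} \sqrt{k\,2^{-n}} \;=\; q\sqrt{k\,2^{-n}}
\quad\Longrightarrow\quad
\Pr[\mathrm{ZERO}(D_q)] \;\lesssim\; q^{2}\,k\,2^{-n}.
```

The final q²k/2ⁿ scaling is consistent with the success probability of running Grover search in parallel, matching the optimality mentioned earlier.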
So first we recycle the local properties and probabilities from the classical analysis, and then we plug the probabilities into our formulas to obtain quantum bounds for the transition capacities. Summing these transition capacities together, we obtain a square-root probability bound for the database having a zero in some of its entries. Finally, by Zhandry's compressed-oracle lemma, we obtain a bound on the probability of the adversary finding a zero-preimage, with only an exponentially small error term. The punchline here is that we don't need to understand the definition of the transition capacities; we can simply lift the classical proof into a quantum proof using our framework.

Using the same recipe, we obtain several additional results, including a better bound for the collision-finding problem and a new bound for the q-chain-finding problem. It is also worth mentioning that our improved bound for the collision-finding problem is in fact sharp, in the sense that we can parallelize the BHT collision-finding algorithm, and its success probability meets our asymptotic upper bound.

The main application of our framework is that we prove the post-quantum security of the non-interactive variant of the PoSW scheme, the proof of sequential work scheme constructed by Cohen and Pietrzak in 2018. A proof of sequential work is a cryptographic primitive that is interesting in the context of blockchains: a prover interacts with a verifier, and we want to force the prover to do a lot of sequential computational work in order to convince the verifier, while keeping the verification process logarithmically fast. At the bottom level of the PoSW scheme constructed by Cohen and Pietrzak is the so-called PoSW graph, which is simply a Merkle tree with additional red edges, as you can see in the following figures. We essentially force the prover to compute a labelling of each vertex of the PoSW graph, which requires a lot of sequential computational work.
For each vertex, its label is computed as the hash of the labels of its incoming vertices. For internal vertices, this is just like computing the labelling of a Merkle tree. For leaf vertices, we compute the label from the hash of the labels of the vertices connected by the red edges. For example, if we want to compute the label L11, we need to compute the hash of the labels L0 and L10, as these come from the red edges connected to L11.

So next, let's see the PoSW scheme. We describe the interactive variant here, but it can also be made non-interactive via the Fiat-Shamir transform, which is the target we analyze in our paper. First, the prover computes the entire labelling of the PoSW graph and sends the root label to the verifier. Second, the verifier challenges the prover to open several random leaves of the PoSW graph. Correspondingly, the prover responds with the authentication paths of the challenged leaves. At the end, the verifier only needs to check the consistency of the opened labels along the authentication paths.

For the analysis, the intuition is as follows. Since we have collision and preimage resistance, once the root label is sent to the verifier, the entire labelling of the tree is fixed. If there are too many cheating leaves in the PoSW graph, then the prover is easily caught by the opening process of the Merkle tree. But if there are only a few cheating leaves, then there is a long hash chain going through most of the vertices of the PoSW graph. For example, in the following figure, if the prover cheats on the two red vertices, then the green hash chain goes through the rest of the vertices. Therefore, by the previously mentioned hash-chain bound, this requires a lot of sequential computational work. As we can see, this analysis needs to consider intertwined core problems: the collision-finding problem, the preimage-finding problem, and the hash-chain-finding problem.
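The Merkle part of the protocol above can be sketched as follows. This is a deliberately simplified version: a plain complete binary Merkle tree, ignoring the additional red edges of the PoSW graph, with SHA-256 standing in for the random oracle:

```python
import hashlib

def H(*parts: bytes) -> bytes:
    """Stand-in hash (SHA-256 over the concatenation)."""
    return hashlib.sha256(b"".join(parts)).digest()

def build_tree(leaves: list[bytes]) -> list[list[bytes]]:
    """Label a complete binary tree bottom-up: each internal label is the
    hash of its children's labels. Returns all levels, levels[0] = leaves,
    levels[-1] = [root]. Assumes len(leaves) is a power of two."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[i], prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def open_leaf(levels: list[list[bytes]], index: int) -> list[bytes]:
    """Authentication path for a leaf: the sibling label at every level."""
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])  # sibling at this level
        index //= 2
    return path

def verify(root: bytes, leaf: bytes, index: int, path: list[bytes]) -> bool:
    """Recompute the root from an opened leaf and its authentication path."""
    label = leaf
    for sibling in path:
        label = H(label, sibling) if index % 2 == 0 else H(sibling, label)
        index //= 2
    return label == root
```

The verifier's work per opened leaf is logarithmic in the number of leaves, which is the sense in which verification is logarithmically fast.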
To deal with this mix of problems, the situation is more complicated, and we cannot simply apply the previous recipe. We need rules to decompose these complicated transition capacities, about the intertwined core problems, into simpler forms. That's why we give a calculus for transition capacities. It includes some basic rules to manipulate the capacities: for example, the capacities are symmetric; when encountering a union of properties, we have something like a quantum union bound, and also a lower bound for it; and when we encounter an intersection of database properties, we have a bound for that as well. These are relatively intuitive, and we also have more involved calculus rules for manipulating the transition capacities. The point here is that this allows us to work with the transition capacities on an abstract level, without understanding their definition. By means of these calculus rules, we can decompose the transition capacity that captures the security of the previously mentioned PoSW scheme into simpler forms, and from there apply the previously mentioned recipe for the rest of our analysis.

Finally, let's recap our contributions. We offer a useful framework that, whenever applicable, helps us prove query complexity bounds in the parallel-query quantum random oracle model. It works by purely classical reasoning that lifts classical proofs into quantum proofs. For demonstration, we applied our framework to various example problems, recovering known results and also finding new bounds. We encourage the audience to refer to our paper for more details, and thanks for listening.