Hello everyone. My name is Yifan Song. Today, I'm honored to introduce our recent work, "Correlated-Source Extractors and Cryptography with Correlated Random Tapes," co-authored with Professor Vipul Goyal. Randomness is crucial for cryptography: several previous results show that without randomness, basic tasks such as zero-knowledge, encryption, and so on are impossible to realize. So in this work, we would like to get a better understanding of the extent to which randomness is necessary. To be more concrete, we consider the following question: suppose a party uses correlated random tapes in multiple executions of a cryptographic algorithm. Can security still be preserved? This question is motivated by, for example, a defective random number generator which outputs correlated tapes under multiple invocations. The well-known line of research on resettable security, where a party uses the same random tape across multiple executions, can be seen as a special case of our general problem; examples include resettable zero-knowledge, resettably secure computation, and so on. In this work, we initiate a systematic study of the above question. As an example, let us first take a close look at correlated-tape zero-knowledge. We model correlations among the random tapes by considering an adversary which may have limited control over the random tapes of an honest party. To be more concrete, a malicious verifier is allowed to specify t tampering functions a_1 to a_t. The random number generator in the first execution is replaced by a_1, in the second execution by a_2, and so on. After that, a string x is sampled uniformly. This x can be seen as the original random tape, which is unknown to either the prover or the verifier. In the first execution, the prover uses a_1(x) as its random tape, a_2(x) in the second execution, and so on. The prover is stateless, while the verifier is stateful across all executions.
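The experiment just described can be sketched in code. This is only an illustrative model of the game, assuming hypothetical names (`run_experiment`, `execute`); the tape length `N` is arbitrary.

```python
import secrets

# Sketch of the correlated-tape experiment: the adversary fixes t tampering
# functions a_1..a_t in advance; a single uniform string x is sampled, hidden
# from everyone; execution i runs the honest algorithm with random tape a_i(x).

N = 16  # length of the original random tape x, in bytes (illustrative)

def run_experiment(tamperings, execute):
    """tamperings: list of functions bytes -> bytes, chosen by the adversary.
    execute: the honest party's algorithm, taking (index, random_tape)."""
    x = secrets.token_bytes(N)          # the original tape; never revealed
    transcripts = []
    for i, a in enumerate(tamperings):
        tape_i = a(x)                   # execution i uses the correlated tape a_i(x)
        transcripts.append(execute(i, tape_i))
    return transcripts

# If every a_i is the identity, all executions share one tape: this is
# exactly the resettable setting mentioned above.
identity = lambda x: x
flip_first_bit = lambda x: bytes([x[0] ^ 1]) + x[1:]
```

For example, running with tamperings `[identity, identity, flip_first_bit]` gives two executions on the same tape and a third on a tape that differs in one bit.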
We point out that using a_1(x) to a_t(x) is a very natural way to model t correlated strings. One can see that if the verifier chooses all t tampering functions to be the identity function, then this becomes resettable zero-knowledge. However, correlated-tape zero-knowledge is impossible to realize even if t is 1 and a_1(x) is guaranteed to have enough min-entropy; this follows from the work of Dodis and his co-authors. Therefore, we consider using a small random seed in the shared random string model, which is a weaker model than the CRS model. In our work, we further show that if the tampering functions can depend on the seed, then the task is still impossible. Therefore, we require that the tampering functions be independent of the seed. Other notions, like correlated-tape multi-party computation and correlated-tape secure encryption, can be defined in an analogous way. The central object in our work is a new notion of randomness extractors, which we call correlated-source extractors. Very informally, a seeded correlated-source extractor, on input a seed S and a source X, produces a uniform output that is independent of the outputs generated with the same seed S but with tampered sources a_i(X). We require that every tampering function a_i should not output the same string as its input. One may think of it as a dual notion of non-malleable extractors: for non-malleable extractors, there is only one source but multiple tampered seeds. Non-malleable extractors were introduced in 2009 by Dodis and Wichs. They have played an important role in cryptography and complexity, for example in privacy amplification, in designing two-source extractors, and in designing non-malleable codes. In our work, we define correlated-source extractors and another notion, weak correlated-source extractors. The entropy requirement of both notions is a polynomial k, which takes the number of executions t, the length of the output m, and the seed length d as input.
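To fix the interface of a seeded extractor Ext(seed, source) → m bits, here is a toy example in the spirit of the Leftover Hash Lemma, using a pairwise-independent hash h_{a,b}(x) = ((a·x + b) mod p) truncated to m bits. This is only an illustration of the interface; it is NOT a correlated-source extractor, and an ordinary seeded extractor like this one gives no guarantee at all against tampered sources that share the same seed.

```python
# Toy seeded extractor via a pairwise-independent hash family (illustrative
# only; the names ext, P are this sketch's own, not from the paper).

P = (1 << 127) - 1  # a Mersenne prime; the field size for the hash family

def ext(seed, x, m):
    """seed = (a, b) with 0 <= a, b < P; x an integer source; returns m bits.
    By the Leftover Hash Lemma, if x has enough min-entropy, the output is
    statistically close to uniform for a uniformly chosen seed."""
    a, b = seed
    return ((a * x + b) % P) & ((1 << m) - 1)
```

The correlated-source extractor defined above strengthens this interface: the output must stay uniform even jointly with outputs computed under the same seed on tampered versions of the source.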
The error epsilon is a negligible function of the security parameter kappa. The difference between these two notions is the requirement on the seed length. For correlated-source extractors, the seed length depends only on the security parameter, which means the seed length is independent of the number of executions. For weak correlated-source extractors, the seed length can grow with the number of executions. We would like to point out the connection between weak correlated-source extractors and two-source non-malleable extractors, where the latter notion was introduced in 2014 by Cheraghchi and Guruswami and was first constructed in 2016 by Chattopadhyay, Goyal, and Li. It is actually an even stronger notion, because the adversary is allowed to tamper with both sources separately. Two-source non-malleable extractors imply the existence of weak correlated-source extractors, by treating the second source Y as the seed, with no tampering on the second source. However, the length of the second source grows with the number of executions, which means they do not imply the existence of correlated-source extractors. Our result gives an explicit construction of a correlated-source extractor with the following parameters. We simply set the seed length to be the security parameter. Recall that k is the min-entropy requirement, epsilon is the error, t is the number of executions, and m is the length of the output. We also give an existential result for correlated-source extractors. We note that the min-entropy requirement is almost a necessary condition: imagine that all tampering functions are chosen to be different permutations. In this case, every output should be uniform and independent of the others, so the original source must have at least t times m bits of min-entropy. Now let us first see how this new notion can help us construct correlated-tape zero-knowledge. We first require that for every tampering function a_i, a_i(x) should have enough min-entropy.
For now, assume the additional constraint that every two tampering functions a_i and a_j never output the same string on any input. Then the prover can simply apply a correlated-source extractor to its random tape and the seed, and use the result as a new random tape in this execution. The property of correlated-source extractors guarantees that the prover uses independent random tapes in different executions, and therefore security is preserved. To remove the second constraint, we rely on techniques from resettable zero-knowledge. In general, resettable zero-knowledge allows us to handle the case where the prover uses the same random tape across multiple executions, while correlated-source extractors allow us to handle the case where each random tape differs from every other one. Therefore, we can combine these two notions to handle all possible tampering functions. However, there is a subtle leakage issue with this approach. Imagine that some tampering function a_i outputs the same string as a_1 with probability one half. Then learning whether these two executions use the same random tape leaks further information about x to the adversary. Fortunately, this amount of leakage can be upper bounded. To show security, we simply leak to the adversary the information about which tampering functions output the same string. We define the pattern of x to be a vector (s_1, ..., s_t), where each element is an integer between 1 and t, such that s_i = s_j if and only if the random tape in the i-th execution is the same as that in the j-th execution. Now, the number of patterns is bounded by t^t, which means leaking the pattern reveals at most log(t^t) = t log t bits of x. Given the pattern, every two tampered random tapes are either always the same, which can be handled by resettable zero-knowledge, or always different, which can be handled by correlated-source extractors.
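The pattern and its leakage bound can be made concrete with a small sketch. The function names here (`pattern`, `pattern_leakage_bits`) are this sketch's own, chosen for illustration.

```python
from math import log2

# For a fixed x, the pattern of x records which tampering functions agree on
# x: s_i = s_j iff a_i(x) = a_j(x). Since there are at most t^t patterns,
# revealing the pattern leaks at most log2(t^t) = t * log2(t) bits about x.

def pattern(tamperings, x):
    """Return (s_1..s_t), where s_i is the (1-based) index of the first
    distinct value seen among a_1(x)..a_t(x) that equals a_i(x)."""
    values, s = [], []
    for a in tamperings:
        v = a(x)
        if v not in values:
            values.append(v)
        s.append(values.index(v) + 1)
    return tuple(s)

def pattern_leakage_bits(t):
    """Upper bound on the bits of x revealed by leaking the pattern."""
    return t * log2(t)
```

For example, with tamperings `[identity, identity, x + 1]` on any x, the pattern is `(1, 1, 2)`: the first two executions share a tape and the third is always different.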
Therefore, our final construction of correlated-tape zero-knowledge is the following. In each execution, the prover first applies a correlated-source extractor to its random tape and the seed, then uses the result as a new random tape in this execution and invokes a resettable zero-knowledge protocol with the verifier. Now we give an overview of our extractor construction. There are three steps. First, we generate an advice string, which will guide the extraction process later. Then we break the original source x into 2L limited-correlated parts. The resulting sources are paired up, and our extraction process proceeds from the first pair (x_1, x_2) to the last pair (x_{2L-1}, x_{2L}). In the first step, the advice is generated using the source and a fresh piece of the seed. It satisfies that, with high probability, this advice is different from the advice of every tampered source. This idea is not new and has been widely used in the construction of non-malleable extractors. Then, to break the source into several limited-correlated parts, we use a strong seeded extractor with a fresh seed each time; the same is done for each tampered source. Now we have 2L times (t+1) sources in total. They are paired up in the following manner: each column of sources is denoted by a set chi. We note that the sources in different sets are generated using different and independent seeds. Therefore, for every j, x_j is uniform even given all the sources except those in the same set as x_j. Now our extraction process proceeds from the first pair to the last pair. In the j-th iteration, x_{2j-1} and x_{2j} are used. Depending on the j-th bit of the advice, one of the two sources is chosen. Then we apply an extractor to the chosen source and the result of the last iteration to get w_j. Finally, we apply another extractor to w_j and a fresh piece of the seed; z_j is the result of this iteration. A general picture of our extraction process is the following.
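The iteration schedule just described can be sketched as follows. This is a hedged sketch of the control flow only: the real construction uses strong seeded extractors with carefully chosen entropy parameters, while `ext` here is a hash-based placeholder, and the names (`extract`, `advice_bits`, `fresh_seeds`) are this sketch's own.

```python
import hashlib

# Sketch of the alternating-extraction schedule: in iteration j, the j-th
# advice bit selects one source of the pair (x_{2j-1}, x_{2j}); a first-level
# extraction uses the previous result z_{j-1}, and a second-level extraction
# uses a fresh piece of the seed. z_0 comes from the seed; z_L is the output.

def ext(source: bytes, seed: bytes) -> bytes:
    # Placeholder for a strong seeded extractor (NOT a real extractor).
    return hashlib.sha256(source + b"|" + seed).digest()

def extract(advice_bits, pairs, z0, fresh_seeds):
    """advice_bits: L bits; pairs: L pairs (x_{2j-1}, x_{2j});
    z0: initial value from the seed; fresh_seeds: seed pieces y_1..y_L."""
    z = z0
    for bit, (left, right), y in zip(advice_bits, pairs, fresh_seeds):
        chosen = right if bit else left   # advice bit j selects one source
        w = ext(chosen, z)                # first level: extract with z_{j-1}
        z = ext(w, y)                     # second level: fresh seed piece y_j
    return z
```

Because a tampered execution with a differing j-th advice bit selects a source from a different set, the two chains diverge at iteration j, which is exactly the point where the analysis breaks the correlation.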
The first value, z_0, comes from the seed, because there is no iteration 0. And the final result, z_L, will be the output of the extractor. Now, to show that our construction is indeed a correlated-source extractor, it is sufficient to show the following two properties. First, if the j-th bit of the advice is different from that of a tampered one, then we should be able to break the correlation between them in the j-th iteration. Second, once we break the correlation, we should be able to make sure this independence remains until the end of the extraction process. We first point out two important facts about extractors. For two sources X and X', if X still has enough min-entropy given X', then the result of extracting from X using a seed Y is independent of the result of extracting from X' using the same seed Y. To see this, we may first fix the second source X'; in this case X still has enough min-entropy, so the result of extracting from X using the seed Y is independent of the seed Y, and therefore also of the second source X' and the second result. Next, for two sources X and X', if X itself has enough min-entropy, then the result of extracting from X using a seed Y_1 is independent of the result of extracting from X' using another seed Y_2. This is because we may first fix the second result, and in this case X still has enough min-entropy to use an extractor with a uniform seed. Now, for the first property, we compare the two extraction processes in the j-th iteration. We note that in the second level of extraction, they use the same seed piece y_j. Therefore, if we can show that W_j still has enough min-entropy given the tampered W_j, then we are done. The tampered W_j is determined by two parts: a tampered source and the tampered Z_{j-1}. We set the length of z to be much shorter than the length of w, which means fixing z only fixes a small amount of w; therefore, it is okay to fix the tampered Z_{j-1}. As for the tampered source, note that the j-th bits of the advice strings are different, which means these two sources come from different sets.
So we may first fix the tampered source in the beginning, and it will not influence the extraction process of the original one. Now for the second property: by induction, Z_{j'-1} and the tampered Z_{j'-1} are independent. Therefore, by the second fact about extractors, the result W_{j'} is independent of the tampered W_{j'}, and also Z_{j'} is independent of the tampered Z_{j'}. Finally, we would like to point out two future directions. One direction is to discover more applications of correlated-source extractors; we believe the correlated-source extractor is a very natural notion and will have many other applications. The other direction is to construct a correlated-source extractor that matches our existential results. That's all. Thank you.

Yeah, so one of our goals is to construct a correlated-source extractor such that the seed length does not grow with the number of executions. But for two-source non-malleable extractors, both sources grow with the number of executions. So in our application, we only need a very short seed, which is independent of the number of executions. But if we used two-source non-malleable extractors, then we would need to know the number of executions in the beginning to generate the seed.

Have you tried to use a correlated-source extractor to construct two-source non-malleable extractors? The other direction?

No, we didn't consider this case. For correlated-source extractors, a big advantage in the design is that we can break the seed into several parts and each part is still uniform. But for two-source non-malleable extractors, that is not the case.

Okay, thank you. So, let's thank the speaker again.