Hello, my name is Lars Schlieper, and I will present to you the paper "Quantum Period Finding is Compression Robust". This is joint work with Alexander May.

Today, only a few hundred physical qubits are available, which also have quite high noise rates and are therefore not capable of executing most of the quantum algorithms known today. Moreover, we do not have access to a single logical qubit, in other words a noise-free qubit, which would require some thousand physical qubits to create, and on which most of our theoretical quantum algorithms in the cryptographic field are built. However, many companies such as IBM, IonQ, and SQC have announced plans to build, within the next ten years, quantum computers with a few million physical qubits, that is, quantum computers that can simulate a few hundred logical qubits.

Our goal in this work is to reduce the number of logical qubits required for the class of period-finding algorithms, in order to use the few logical qubits available in the future more efficiently. This would mean that quantum computers become a threat to security earlier and with fewer qubits, and it would reduce the time we have to prepare.

For this, we primarily study Simon's problem, which is defined as the problem of finding the period s, given access to a 2-to-1 function f with f(x ⊕ s) = f(x) for all x. This problem is classically hard to solve and requires at least 2^(n/2) queries to f. More precisely, for a function that behaves like a random function apart from having the period s, we would have to find a collision in f to compute s. Quantumly, on the other hand, this problem can be solved efficiently in polynomial time, with roughly n queries to f and 2n qubits, using Simon's algorithm. This algorithm uses quantum access to f in the circuit below to sample, uniformly at random, vectors y that are orthogonal to s. After collecting n − 1 linearly independent vectors y, the algorithm computes s from the collected y via Gaussian elimination. Thereby it holds that, in expectation, about n + 1 randomly sampled y are sufficient to collect n − 1 linearly independent vectors. The underlying circuit requires n qubits to represent the input of f, which also hold the sampled vector at the end, and additionally n qubits to hold the value f(x). An example distribution of the y for n = 3 and s = 001 can be seen here on the right. Note that the vector 000 contains no information about s; however, the probability of the zero vector decreases with increasing n. (A small classical simulation of this sampling and recovery step is sketched below.)

An application of Simon's algorithm is an attack on the famous Even-Mansour cipher. The Even-Mansour cipher is defined over a key k and a public permutation P, where the encryption of a message m is computed as Enc(m) = k ⊕ P(k ⊕ m). It is classically provably secure, in the sense that any attack requires at least 2^(n/2) queries to the cipher to break it with constant probability. Using Simon's algorithm with quantum access to the cipher, however, the key can be recovered in polynomial time. For this, a function f is defined as f(x) = Enc(x) ⊕ P(x). It is easy to see that the secret key k is a period of f, and so that the key can be found efficiently via Simon's algorithm.
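To make the sampling and post-processing step concrete, here is a small classical simulation. This is only an illustrative sketch, not code from the paper: it computes the exact measurement distribution of Simon's circuit for a hypothetical toy 2-to-1 function f with n = 3 and period s = 001, checks that only y orthogonal to s appear, and recovers s from sampled y.

```python
import random
import numpy as np

def simon_distribution(f, n):
    """Exact measurement distribution of Simon's circuit on n input qubits:
    Pr[y] = 2^(-2n) * sum_z |sum_{x: f(x) = z} (-1)^{<x,y>}|^2."""
    N = 1 << n
    buckets = {}                            # group inputs by their image under f
    for x in range(N):
        buckets.setdefault(f(x), []).append(x)
    pr = np.zeros(N)
    for y in range(N):
        for xs in buckets.values():
            amp = sum((-1) ** bin(x & y).count("1") for x in xs)
            pr[y] += amp * amp
    return pr / N**2

# Hypothetical toy instance: n = 3, s = 001, a random 2-to-1 f with period s.
n, s = 3, 0b001
perm = list(range(1 << n)); random.Random(1).shuffle(perm)
f = lambda x: perm[min(x, x ^ s)]

pr = simon_distribution(f, n)
# Only vectors y orthogonal to s are measurable (uniform over them here).
assert all(pr[y] == 0 or bin(y & s).count("1") % 2 == 0 for y in range(1 << n))

# Sample some y and recover s as the common kernel of the sampled vectors.
ys = np.random.choice(1 << n, size=8, p=pr)
cands = [t for t in range(1, 1 << n)
         if all(bin(int(y) & t).count("1") % 2 == 0 for y in ys)]
print([format(t, "03b") for t in cands])    # with high probability just ['001']
# (For large n one would of course use Gaussian elimination over GF(2)
#  instead of this brute-force kernel search.)
```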
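The period property behind the Even-Mansour attack can likewise be checked in a few lines. A minimal sanity check, assuming a random permutation as the public permutation P and a hypothetical 4-bit toy key:

```python
import random

N = 1 << 4                                        # toy block size: 4-bit messages
P = list(range(N)); random.Random(0).shuffle(P)   # public random permutation
k = 0b1011                                        # hypothetical secret key

def enc(m):                 # Even-Mansour: Enc(m) = k xor P(k xor m)
    return k ^ P[k ^ m]

def f(x):                   # Simon function: f(x) = Enc(x) xor P(x)
    return enc(x) ^ P[x]

# f(x xor k) = k xor P(x) xor P(x xor k) = f(x), so the key k is a period of f.
assert all(f(x) == f(x ^ k) for x in range(N))
print("k is a period of f")
```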
Let us now come to our compression technique via hash functions. The starting point of our study was the question whether all n bits of f are really necessary, or whether the number of qubits representing f(x) can be reduced; in other words, whether f(x) can be compressed. Previous approaches to reducing the number of qubits required for such algorithms focused only on reducing the number of input qubits, such as the Mosca-Ekert approach of recycling a single qubit for the input of Simon's algorithm in special cases.

Our main observation when compressing f was that hashing preserves the collisions of f, which means that in Simon's circuit we still only measure vectors y that are orthogonal to s. On the other hand, however, hashing introduces additional collisions that affect the distribution of the sampled y, in particular a bias towards the zero vector. In worst-case scenarios, these collisions could introduce new periods, or, for a constant hash function, could shift all probability to the zero vector. To counteract the possibility that a hash function introduces too many bad new collisions and shifts too much probability mass away from a complete subspace, we sample our y with respect to several different hash functions. The intuition behind this concept can be seen here on the right: the green areas represent the vectors that are orthogonal to all vectors measured with a specific hash function. We see that the zero vector and the vector s are contained in all of these subspaces. By combining the vectors measured with the different hash functions, we can compute the intersection of the different subspaces, which contains only the zero vector and s. For our proof, we use a family of universal hash functions, such as, for example, the scalar products modulo 2, that is, h_a(u) = <a, u> mod 2. It is worth noting that we conjecture that the necessity of multiple hash functions is just a proof artifact, and that in most cases a single hash function is sufficient; or, in other words, that with high probability each of the subspaces already contains only the zero vector and s.

Our hashed algorithm is thereby almost the same as the original algorithm. Again, we use a similar circuit to collect the y orthogonal to s. The only difference is that the embedding of f is, in each iteration, replaced by the embedding of a differently hashed version of f. We have shown in our paper that we still only measure y that are orthogonal to s, and that the algorithm still works with a constant factor of overhead. Additionally, we provide some examples in our paper of how such hashed embeddings can be implemented, requiring only roughly twice the depth of the unhashed version.

Let us consider the distribution of the y for a fixed hash function, compared to the original distribution. Such a distribution might look like this, for example, for our n = 3, s = 001 case. Here we already see the bias towards the zero vector, and also that some y are no longer measurable. On the positive side, however, we see again that the probability of measuring a y that is not orthogonal to s remains zero. Looking at the distribution graphs for other hash functions, we further see that, for some hash functions, the probability of some orthogonal y may even be greater than before, or that, as mentioned before, for some hash functions whole subspaces are no longer measurable. Which vectors are affected, however, depends on the hash function. Taking the average distribution over all our hash functions onto one bit, we see that the probability of measuring the zero vector increases only to roughly one half, and that the probability of every other orthogonal vector decreases only by a factor of two. We also prove this observation in our paper; the small simulation below illustrates it for our toy case.
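The following sketch continues the earlier toy example (again purely illustrative, reusing simon_distribution, and not code from the paper): it compresses f to a single output bit with the scalar-product hashes h_a(u) = <a, u> mod 2, verifies that each hashed version still only yields y orthogonal to s, and compares the distribution averaged over all nonzero a with the original one.

```python
import random
import numpy as np

def hashed(f, a):
    """Compress f's n-bit output to one bit via h_a(u) = <a, u> mod 2."""
    return lambda x: bin(f(x) & a).count("1") % 2

n, s = 3, 0b001                                  # same toy instance as above
perm = list(range(1 << n)); random.Random(1).shuffle(perm)
f = lambda x: perm[min(x, x ^ s)]

N = 1 << n
orig = simon_distribution(f, n)                  # from the earlier sketch
avg = np.zeros(N)
for a in range(1, N):                            # all nonzero one-bit hashes
    pr_a = simon_distribution(hashed(f, a), n)
    # hashing preserves the period: still only y orthogonal to s
    assert all(pr_a[y] == 0 or bin(y & s).count("1") % 2 == 0 for y in range(N))
    avg += pr_a / (N - 1)

print("Pr[y = 000]:", avg[0])                    # grows to roughly one half
for y in range(1, N):
    if orig[y] > 0:                              # every other orthogonal y is
        print(f"y = {y:03b}: {orig[y]:.3f} -> {avg[y]:.3f}")  # roughly halved
```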
Let us summarize our results. Even with access only to a hashed embedding of f, we sample only y that are orthogonal to s. Furthermore, on average over the choice of the hash function, the distribution of the y is preserved, except for a constant bias towards the zero vector, which means that we can use the same post-processing as before, after sorting out the zero vectors, which do not provide any information about s anyway. Furthermore, we expect to require only a constant factor more measurements with the hash approach, and in many cases only a doubling of the circuit depth to implement the embedding of the hashed version of f. Summarized in one sentence: we require only a small additional effort to solve the same problem with almost only half the qubits. To put this in some perspective, in the Even-Mansour setting, instead of 2n qubits we require only n + 1 qubits, while the number of required measurements increases from n + 1 to 2n + 2. We would like to stress again that we conjecture that in most cases a single projection onto a single bit should be sufficient.

Furthermore, our hash technique can be combined with other techniques, such as the Grover-meets-Simon approach or the offline Simon technique, which also eliminates our reliance on implementations of embeddings of hashed versions of the function and greatly simplifies the application of our hash technique. We show this in our paper in more detail. To give a rough outline of this combination: to use our hash technique in this context, we only require a simple implementation of the hash function used, which can be realized for our example family of hash functions with a single multi-controlled Toffoli gate. Additionally, we have shown that our hash technique is not limited to Simon's algorithm, but can also be applied to more general period-finding algorithms, like Shor's. These results have also already been used for an attack on a polynomial MAC; the reference can be found in our paper.

To summarize our results in the light of security: quantum computers may already be a greater threat to cryptography with fewer logical qubits, and so earlier than expected. Thank you for your attention.