Hi, welcome to this talk on scalable pseudorandom quantum states. My name is Omri Shmueli, and this is joint work with Zvika Brakerski. This work focuses on the efficient generation of random quantum states. It is a basic fact in quantum information theory that efficiently generating a truly random quantum state is impossible. This follows because the space of all n-qubit quantum states is infinite, and even if we discretize it to any negligible precision, the number of points is super-exponential, so we have no hope of efficiently sampling a truly random quantum state. One known solution to this problem is to aim for a more modest goal: sampling from a distribution such that, for a bounded number of copies of the output state, it is indistinguishable from the same number of copies of a truly random quantum state. And indeed this modest approach has results. One kind of strategy is the quantum state design: a (usually efficiently sampled) distribution where, for an a priori known number of copies T, T copies of an output state of this distribution and T copies of a truly random quantum state are statistically indistinguishable, which means that the trace distance is negligible. Another, more recent, kind of strategy is the pseudorandom quantum state (PRS): again an efficiently sampled distribution, such that this time, for any polynomial number of copies T, T copies of the output state and T copies of a truly random quantum state are computationally indistinguishable. This time the indistinguishability holds only against polynomial-time quantum distinguishers. Now let's define exactly what pseudorandom quantum states are. This notion was first defined by Ji, Liu and Song at CRYPTO 2018.
The PRS generator is a quantum algorithm G. It gets two inputs: one is a parameter denoting the number of qubits in the output state, and the other is a classical key. The security guarantee is the following: for any polynomial, if we consider the output state of the generator on a uniformly random classical key, then polynomially many copies of this state and polynomially many copies of a truly random quantum state are indistinguishable to polynomial-time quantum distinguishers. PRS generators have a few applications: one is the simulation of thermalized quantum states, and another is quantum money. And PRS generators are known to exist based on the existence of post-quantum one-way functions. Now, one property that is present in all PRS generators is that what we think of as the security parameter is the same number as the size of the output quantum state. Let's be more precise about this, and recall for a moment the security definition of a PRS generator. What we see in red is what we think of as the security parameter: the number that determines how many copies we let the adversary have, and also how hard it is going to be for the adversary, given this number of copies, to distinguish between the output state of the PRS and a truly random quantum state. What we see in blue is the size of the output state of the generator. As we see, this is the same number. What this essentially means, from an operational point of view, is that the more pseudorandom we want the state to be, that is, the harder we want it to be to distinguish this state from a truly random quantum state, the more quantum memory we have to use: the state size grows. The question we are going to try to answer in this work is: can we create a small, highly pseudorandom state? Can we make the security parameter and the state size independent of each other?
So for this goal of maintaining the independence between the security parameter and the state size n, we define a scalable PRS generator. This is again an efficient quantum algorithm, but this time the security guarantee is a bit different. The output state of the generator is, for any polynomial number of copies, indistinguishable from an n-qubit truly random quantum state, but the security parameter determines how many copies the adversary gets and also how hard it is going to be to distinguish those copies from the same number of copies of an n-qubit totally random quantum state. A few words on previous works. The lack of independence between the security parameter and the state size is something fundamental, because previous PRS generators output a state which is essentially a uniform superposition: the amplitudes are uniform, meaning that if you look at the vector that describes the quantum state, all of the absolute values are the same, 2^(-n/2), and the phases are pseudorandom. And if the hardness of distinguishing between the PRS output and the totally random quantum state only grows with the state size, it is natural to ask what happens at the opposite extreme, at very small output sizes, for example the smallest output size, one qubit. What is the performance for one qubit? These kinds of distributions, uniform superpositions with pseudorandom or even completely random phases, actually cover only a ring on the Bloch sphere, and this is the reason that they are very, very easy to distinguish from a totally random one-qubit state, which covers the entire Bloch sphere. So, as we understand, in order to get a scalable PRS generator we cannot just randomize the phases; we also need to randomize the amplitudes of the states that we output. So, on to our results.
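Before moving on, the one-qubit failure mode just described is easy to check numerically. The following toy sketch (my own illustration, not part of the construction) samples single-qubit states of the form (|0⟩ + e^(iθ)|1⟩)/√2 with random phases and verifies that their Bloch z-coordinate is always 0, so they all lie on the equatorial ring of the Bloch sphere, while Haar-random qubit states spread over the whole sphere.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_phase_state():
    """Uniform superposition with a random phase: (|0> + e^{i theta}|1>)/sqrt(2)."""
    theta = rng.uniform(0, 2 * np.pi)
    return np.array([1, np.exp(1j * theta)]) / np.sqrt(2)

def haar_random_state():
    """Haar-random qubit state: a normalized complex Gaussian vector."""
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

def bloch_z(psi):
    """Bloch-sphere z-coordinate: <Z> = |<0|psi>|^2 - |<1|psi>|^2."""
    return abs(psi[0]) ** 2 - abs(psi[1]) ** 2

# Random-phase states are stuck on the equator (z = 0) ...
phase_zs = [bloch_z(random_phase_state()) for _ in range(1000)]
assert max(abs(z) for z in phase_zs) < 1e-12

# ... while Haar-random states cover the whole range of z.
haar_zs = [bloch_z(haar_random_state()) for _ in range(1000)]
assert max(abs(z) for z in haar_zs) > 0.9
```

A one-copy distinguisher can therefore simply measure in the computational basis many times: random-phase states always give outcome probabilities (1/2, 1/2), while a Haar-random qubit usually does not.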
So we define and explore this notion of scalability in the efficient generation of random quantum states, and we show a framework for constructing scalable random quantum states. More formally, we show two theorems. The first is that, given the existence of post-quantum one-way functions, scalable PRS generators exist. In particular, we use the same computational assumptions needed for non-scalable PRS generators. Second, and unconditionally, we show that there exist scalable T-design generators whose complexity is improved from polynomial in n, the security parameter, and T, to polynomial in n, the security parameter, and log T, for any polynomial T. So we improve the efficiency of existing T-design generators and make them scalable, and we show the existence of scalable PRS generators. In order to show these two theorems, we use the same paradigm as previous work, which is to first construct what is called an asymptotically random state (ARS) generator. So let's first define what an asymptotically random state is. This is essentially like a pseudorandom quantum state, but where the indistinguishability is statistical: for any polynomial number of copies, the output state of an ARS generator is statistically indistinguishable from the same number of copies of a truly random n-qubit state. The ARS paradigm, which appears in both previous works, is the following; this is how we construct both PRS generators and T-design generators. The main technical point is constructing an efficient quantum algorithm that, when given quantum oracle access to a random classical function F, generates an ARS, which means that if we sample F at random, then the output of the generator is statistically indistinguishable from a truly random quantum state for any polynomial number of copies.
And then, once we have this ARS generator, we can take F and swap it with a quantum-secure pseudorandom function (PRF) in order to get a PRS. And if we want to get a T-design generator out of this ARS generator, we swap F with a 2T-wise independent function. So essentially, this is the way we can view ARS generators as the centerpiece of the efficient generation of random quantum states, and this is going to be our focus in the technical part of this work. We follow this ARS paradigm, but we construct a scalable ARS generator, which means that this efficient quantum algorithm now gets an additional parameter, the security parameter, and this security parameter determines how hard it is going to be to distinguish the output of the generator from a truly random quantum state, and also how many copies we let the adversary have. Accordingly, our main technical lemma shows that there exists a scalable ARS generator where the trace distance between the following two distributions is bounded by T / e^λ: the first is T copies of the output state of the scalable ARS generator, and the second is T copies of a truly random n-qubit state. One important thing about this bound is that, of course, it is independent of n, which implies that only the security parameter controls how hard it is to distinguish from a truly random quantum state. We also improve the bound in terms of the dependence on T, because we get a linear dependence on T compared to a T² dependence in previous works. We now describe our construction of a scalable asymptotically random state generator. To be concrete about what we want: we want a polynomial-time quantum algorithm that gets n, the number of qubits, and λ, the security parameter.
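The derandomization step mentioned above, swapping the truly random F for a 2T-wise independent function, can be illustrated classically. A standard way (my own hedged sketch, not code from the paper) to get a k-wise independent family is a uniformly random polynomial of degree at most k−1 over a prime field:

```python
import random

def make_kwise_independent(k, p=(1 << 61) - 1, seed=0):
    """A k-wise independent function family: a random polynomial of
    degree <= k - 1 over the prime field F_p, evaluated at the input.
    (2^61 - 1 is a Mersenne prime; the seed stands in for the key.)"""
    rng = random.Random(seed)
    coeffs = [rng.randrange(p) for _ in range(k)]

    def f(x):
        # Horner evaluation of the polynomial at x, mod p.
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % p
        return acc

    return f

# In the ARS paradigm: swap the truly random F for this 2T-wise
# independent f to get a T-design generator, or for a quantum-secure
# PRF to get a PRS generator.
t = 4
f = make_kwise_independent(2 * t)
outputs = [f(x) for x in range(10)]
assert all(0 <= y < (1 << 61) - 1 for y in outputs)
```

The point of the template is that the ARS analysis is done once against a truly random F, and the two instantiations inherit statistical (2T-wise independence) or computational (PRF) security respectively.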
It gets quantum oracle access to a classical random function f from n bits to poly(n, λ) bits, and it outputs an n-qubit state ψ. The guarantee is that when f is random, T copies of this state are statistically indistinguishable from T copies of a truly random n-qubit quantum state. One thing that is going to help with intuition is maybe the most basic fact in quantum information: the space of n-qubit quantum states corresponds to the unit sphere in complex space of 2^n dimensions. Our approach is going to be very natural and in accordance with this fact: we are going to attempt to sample a unit vector on the sphere, just inside a quantum state, and the security parameter is only going to increase the density of our samples on this sphere of all n-qubit quantum states. The outline for implementing this approach is the following. Given quantum oracle access to this classical function f, we want to transform it into oracle access to a different function that describes a spherically symmetric vector v, where by oracle access to the vector we mean the following: this is a vector in 2^n dimensions, it has 2^n entries, and on input x the oracle outputs the entry that corresponds to the string x. And this vector has to be spherically symmetric when f is random. The second part of our algorithm is to efficiently generate the quantum state that corresponds to the spherically symmetric vector v. So let's start with the first part. We want oracle access to a spherically symmetric vector, and we have oracle access to a random classical function f. One thing that would solve this first problem is a classical circuit g that translates f locally, which means that g(f(x)) is going to be the entry v_x of this vector v, such that when f is random, applying this transformation g to each of the coordinates of f yields a spherically symmetric vector.
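Here is a minimal classical sketch of this local translation, entirely my own illustration: a keyed hash stands in for the random function f, and the Box–Muller transform stands in for the Gaussian sampler S that is introduced next. Each coordinate v_x = S(f(x)) is derived deterministically from f(x), treating f's output as the random tape, so one oracle query gives one coordinate without ever writing the full 2^n-entry vector down.

```python
import hashlib
import math

def f(x, key=b"toy-key", out_bytes=16):
    """Stand-in for the random classical function f: n bits -> poly(n, lambda)
    bits. A keyed hash is only a toy here; the construction assumes f is
    truly random (and later a PRF or a 2T-wise independent function)."""
    return hashlib.sha256(key + x.to_bytes(8, "big")).digest()[:out_bytes]

def S(randomness):
    """Gaussian sampler: turns a random tape into one standard normal
    sample via the Box-Muller transform."""
    u1 = (int.from_bytes(randomness[:8], "big") + 1) / (2**64 + 1)  # in (0, 1)
    u2 = int.from_bytes(randomness[8:16], "big") / 2**64            # in [0, 1)
    return math.sqrt(-2 * math.log(u1)) * math.cos(2 * math.pi * u2)

def v(x):
    """Oracle access to the vector: v_x = S(f(x))."""
    return S(f(x))

# Each query gives one coordinate of a (conceptually 2^n-dimensional)
# Gaussian vector, computed locally from f(x).
coords = [v(x) for x in range(8)]
assert len(coords) == 8 and all(isinstance(c, float) for c in coords)
```

The quantum point is that because g = S is a local, per-coordinate classical circuit, quantum oracle access to f immediately gives quantum oracle access to v in superposition.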
To identify what this circuit g needs to be, we recall a basic property of the multivariate Gaussian distribution: it is spherically symmetric. This means that if we take a vector with many coordinates and sample each coordinate from a Gaussian distribution independently, this vector is going to be spherically symmetric: the normalization of this vector is going to be a uniformly random unit vector. This is, of course, a very well-known property of the multivariate Gaussian distribution, and it is very useful in algorithms for sampling random unit vectors, because all of the coordinates can be sampled in parallel, independently of each other, and then only the norm of the vector needs to be computed, which can also be done in parallel. And while this property is usually used for the parallel generation of classical unit vectors, we use it in the quantum setting for efficient generation. So this classical circuit g is going to be a classical sampler circuit for the Gaussian distribution, which means we are going to think of the output of f as the randomness, the random tape, for a classical sampling algorithm, and this algorithm is going to be a sampler for the Gaussian distribution, which we denote by S. So v_x, the x-th coordinate of our vector v, is going to be S applied to the randomness f(x). One technical detail to recall is that S has to be efficient, in particular a polynomial-size classical circuit, so it cannot really sample from the continuous Gaussian distribution; it has to sample from a discrete Gaussian distribution. We are going to leave this technical detail for the full version of the paper, and in this presentation we assume that S samples from the continuous Gaussian distribution. OK, so it seems we have solved our first problem: we have quantum oracle access to a spherically symmetric vector v.
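The spherical-symmetry property is easy to sanity-check numerically. This toy check (my own, with a modest dimension standing in for 2^n) draws i.i.d. complex Gaussian coordinates, as could be done in parallel, computes the single coupling quantity (the norm), and normalizes:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 1 << 10  # stands in for 2^n

# Step 1: every coordinate is an independent complex Gaussian sample --
# these could all be drawn in parallel, one per coordinate.
v = rng.normal(size=dim) + 1j * rng.normal(size=dim)

# Step 2: only the norm couples the coordinates; normalizing yields a
# uniformly random unit vector on the complex sphere.
u = v / np.linalg.norm(v)
assert abs(np.linalg.norm(u) - 1.0) < 1e-12

# The norm itself concentrates sharply: E[||v||^2] = 2 * dim for complex
# Gaussians, so ||v|| is within a few percent of sqrt(2 * dim).
assert abs(np.linalg.norm(v) / np.sqrt(2 * dim) - 1.0) < 0.1
```

Spherical symmetry here means that applying any fixed unitary to v leaves its distribution unchanged, which is exactly why the normalized vector is uniform on the sphere.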
And now we want to generate the quantum state that corresponds to the normalization of v. For the general task of getting oracle access to the description of a quantum state and then generating that quantum state, we do not have efficient algorithms. But of course, we do not have the general case here; we have a restricted case where our vectors are Gaussian vectors, and we are going to find properties of the multivariate Gaussian distribution that are helpful for the efficient generation of the quantum state that corresponds to v. For this task, we first recall a quantum algorithm for the generation of quantum states called quantum rejection sampling, the quantum analogue of classical rejection sampling. The setting is as follows. We have one quantum state α and we want to transform it into another quantum state β. In addition to having α, we have quantum oracle access to a classical transformation circuit f. Now, this circuit f outputs the ratio β_x / α_x times a number 1/D, for some number D which is an upper bound on M, where M is the maximal ratio over all coordinates between β and α. If we have this circuit f, which outputs (1/D) · β_x / α_x, we can generate β with probability 1/D². The important intuition behind quantum rejection sampling is this: if we have a state α where all of the coordinates are roughly the same as those of another quantum state β, and we also know the ratios at all of the coordinates via some classical circuit f, then we can generate β with good probability.
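To see where the 1/D² comes from, here is a classical check of the amplitude bookkeeping (my own toy numerics, real amplitudes for simplicity): if f(x) = (1/D) · β_x / α_x with D ≥ M = max_x β_x / α_x, then |f(x)| ≤ 1, so f(x) can be realized as a controlled rotation, and the total acceptance probability is Σ_x |α_x · f(x)|² = Σ_x |β_x|² / D² = 1/D², with the post-selected state being exactly β.

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 64

# A target state beta and an easy-to-prepare state alpha (uniform).
beta = np.abs(rng.normal(size=dim))
beta /= np.linalg.norm(beta)
alpha = np.full(dim, 1 / np.sqrt(dim))

# D must upper-bound the maximal coordinate ratio M = max_x beta_x / alpha_x.
M = np.max(beta / alpha)
D = 1.1 * M

# The transformation circuit outputs f(x) = (1/D) * beta_x / alpha_x,
# which is <= 1 in magnitude, so it can be realized as a rotation angle.
f = beta / (D * alpha)
assert np.all(np.abs(f) <= 1)

# Acceptance probability of quantum rejection sampling: sum |alpha_x f(x)|^2.
p_accept = np.sum(np.abs(alpha * f) ** 2)
assert abs(p_accept - 1 / D**2) < 1e-12

# Conditioned on accepting, the prepared state is exactly beta.
post = alpha * f
post /= np.linalg.norm(post)
assert np.allclose(post, beta)
```

The closer α already is to β coordinate-wise, the smaller M (and hence D) can be, and the higher the success probability, which is exactly the intuition stated above.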
Okay, so let's apply quantum rejection sampling to our setting. We know the target vector we are aiming for: our β is the normalized v; we want to generate the quantum state that corresponds to the normalization of v. Now, we have β, but we do not yet have a suitable α. α needs to be a quantum state that we know how to efficiently generate from scratch, without using quantum rejection sampling. And even given α, we also need to know the circuit f that outputs the ratio between β and α times the bound 1/D. Recall that we do not even have β yet: we have only the unnormalized version of β. We do not have the normalized v, because computing the norm of v requires knowing all of the coordinates of v, which seems to take exponential time. So, what properties of the Gaussian distribution are going to be useful for the efficient quantum generation of the normalized state? Again, these are very basic properties; we just observe that they are useful for the efficient generation of the quantum states that correspond to Gaussian vectors. The first property is that if we take a random Gaussian vector of 2^n coordinates, then with overwhelming probability, probability 1 - e^(-λ), all coordinates are bounded by √(n + λ). This is true by a union bound, because the probability for each coordinate to exceed this relatively small bound is tiny even compared to the number of coordinates, which is 2^n. What this means, and this sentence is meant only on the intuitive level, is that all of the amplitudes of this vector v are roughly the same when compared to the norm: the difference between any pair of coordinates is negligible compared to the huge norm of this vector.
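A quick numerical illustration of this union-bound property (toy numerics, with my own choice of n and λ): a standard Gaussian exceeds √(n + λ) with probability roughly e^(-(n+λ)/2) per coordinate, so even multiplied by the 2^n coordinates the maximum stays comfortably below the bound, and every coordinate is tiny relative to the norm.

```python
import numpy as np

n, lam = 12, 20          # hypothetical parameters for illustration
dim = 1 << n             # 2^n = 4096 coordinates
rng = np.random.default_rng(3)

v = rng.normal(size=dim)
bound = np.sqrt(n + lam)  # sqrt(32) ~ 5.66

# Union bound: P[any |v_x| > bound] <= 2^n * P[|g| > bound], which is
# tiny because the per-coordinate Gaussian tail beats the 2^n factor.
assert np.max(np.abs(v)) < bound

# Every coordinate is negligible relative to the norm (~ sqrt(2^n)),
# so the vector "looks like" a uniform superposition.
assert np.max(np.abs(v)) / np.linalg.norm(v) < 0.1
```

For these parameters the empirical maximum is typically below 4, while the bound is about 5.66 and the norm is about 64, matching the intuition that all amplitudes are roughly equal once compared to the norm.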
This means, intuitively again, that this vector really resembles a vector where all of the amplitudes are the same, and a vector where all of the amplitudes are the same is simply the uniform superposition. What is fantastic about the uniform superposition is that it is a quantum state we already know how to efficiently generate from scratch. So we take α to be the uniform superposition, the |+⟩^⊗n state. And with α being the uniform superposition, we can take our classical transformation circuit f to be the x-th entry of v times 1/√(n + λ). Why did we choose these exact parameters? Observe the ratio β/α: β is now the normalized v, and α is simply the uniform superposition. Because of the first property, all of the coordinates of v are bounded by √(n + λ), which means our D is going to be √(2^n) · √(n + λ) divided by the norm of v, and by choosing these parameters we guarantee that f(x) is indeed (1/D) · β_x / α_x. This is all great, and all our parameters are fixed, but recall that the success probability of quantum rejection sampling is 1/D². Essentially, for this probability to be noticeable, we need D to be polynomial, and this comes down to the norm of the vector v being very big, on the order of √(2^n). This is where another property of the Gaussian distribution kicks in to help us: a Gaussian vector of 2^n dimensions has norm at least √(2^n) / 2 with overwhelming probability, 1 - e^(-2^n).
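Putting the two properties together numerically (again a toy check with my own concrete n and λ): with ‖v‖ ≥ √(2^n)/2, the bound D = √(2^n) · √(n + λ) / ‖v‖ is at most 2√(n + λ), so the success probability 1/D² is at least 1/(4(n + λ)), which is noticeable (inverse-polynomial in n and λ).

```python
import numpy as np

n, lam = 12, 20
dim = 1 << n
rng = np.random.default_rng(4)

v = rng.normal(size=dim)
norm = np.linalg.norm(v)  # concentrates near sqrt(2^n)

# Property 2: the norm is at least sqrt(2^n) / 2 with probability
# 1 - e^(-2^n) (overwhelming once n is large enough).
assert norm >= np.sqrt(dim) / 2

# D = sqrt(2^n) * sqrt(n + lam) / ||v||  <=  2 * sqrt(n + lam).
D = np.sqrt(dim) * np.sqrt(n + lam) / norm
assert D <= 2 * np.sqrt(n + lam)

# So one rejection-sampling attempt succeeds with probability
# 1/D^2 >= 1/(4(n + lam)): noticeable, so polynomially many retries
# succeed with overwhelming probability.
assert 1 / D**2 >= 1 / (4 * (n + lam))
```

For these parameters D comes out around √(n + λ) ≈ 5.7, so a single attempt succeeds with probability around 1/32, and a few hundred independent retries succeed except with negligible probability.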
And so we know that our vector is, with overwhelming probability, going to be large enough, which means that D is going to be small enough, and the rejection-sampling success probability is going to be noticeable. And because we can regenerate α on every attempt, we can try polynomially many times and, with overwhelming probability, in polynomial time successfully generate the quantum state that corresponds to v. Now, one small technical problem that I somewhat hid under the rug is that the concentration bound that tells us that the vector v has a large enough norm kicks in only for large enough n. We want our construction to work even for n = 1, and for n = 1 the probability 1 - e^(-2^n) is not large enough; it is just a constant probability. Our solution is that this concentration bound on the multivariate Gaussian distribution is so strong, with 2^n in the exponent, that it already kicks in when n is at least log λ, and then we have probability 1 - e^(-λ), which is overwhelming in the security parameter, which is essentially what we want. And in the extreme case where n is smaller than the log of the security parameter, we can just sample the entire vector, because we can run in time 2^n, which is polynomial in λ: we explicitly sample the entire classical description of this n-qubit state and generate it, inefficiently in n but efficiently in λ.
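The small-n fallback can be sketched directly (a toy sketch with hypothetical parameters of my choosing): when n < log λ, the whole 2^n-entry vector fits in poly(λ) memory, so we can write down its classical description explicitly, normalize it, and prepare the state coordinate by coordinate.

```python
import numpy as np

def explicit_random_state(n, rng):
    """Fallback for n < log2(lambda): sample the full 2^n-dimensional
    complex Gaussian vector and normalize it. Runs in time 2^n, which
    is poly(lambda) in this regime."""
    v = rng.normal(size=1 << n) + 1j * rng.normal(size=1 << n)
    return v / np.linalg.norm(v)

rng = np.random.default_rng(5)
lam = 256            # security parameter (hypothetical)
n = 3                # n < log2(lam) = 8, so 2^n = 8 entries is cheap

psi = explicit_random_state(n, rng)
assert psi.shape == (8,)
assert abs(np.linalg.norm(psi) - 1.0) < 1e-12
```

Normalizing a full i.i.d. complex Gaussian vector gives a state that is exactly Haar-distributed, so in this regime the output is not merely close to random; the 2^n-time state preparation is the only "inefficient in n" step, and it is still efficient in λ.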
To conclude, this is our algorithm for the scalable ARS generator. For the regular case, where λ is bounded by 2^n, our first step is to transform oracle access to f into oracle access to a spherically symmetric vector, and then to a unit vector; our second step is to iterate polynomially many times, attempting quantum rejection sampling, which we know is going to succeed by the concentration properties of the multivariate Gaussian distribution, and thereby generate the quantum state that corresponds to the spherically symmetric vector v. And for the restricted case of the security parameter being huge compared to the size of the output state, we know that we can explicitly sample the entire classical description of the quantum state that we want to output, and generate it inefficiently in the state size but efficiently in the security parameter. This concludes our talk. Thank you very much for listening.