So today, I want to tell you about the Cayley path and quantum supremacy, a term that we've been hearing a lot in the news recently. Right now we have quantum computers that do not have error correction, but the question is: are these near-term quantum computers capable of performing certain computational tasks that are practically impossible for any classical computer, be it a supercomputer or what have you? If there is that exponential separation, there is, from a theoretical perspective, a lot of interest, because it would be the first instance where we show that the so-called extended Church-Turing thesis is false. Moreover, there have recently been experiments, like the 53-qubit one that Google did, and there's been a lot of activity around showing this. But a mathematical proof was missing, so today I want to take a really decisive step in that direction.

So the problem statement is the following. What is the quantum supremacy conjecture? You have a quantum circuit, with time going to the right on the horizontal axis; that's how quantum computation works. There's an architecture of a circuit: you have a set of qubits, shown vertically, and the placeholders are one-qubit or two-qubit gates, as in almost any quantum computation. It's a placeholder, a blueprint for what the circuit would be, so it defines an architecture. It's not really a circuit; it's the architecture of the circuit. Once the gates are specified, the so-called local gates, or local unitaries, this instantiates a circuit that can be made in the lab. So C_1 to C_m are the local gates, and you write the circuit as you can see here. We say the circuit is Haar-distributed with respect to the architecture, denoted H(A), if every one of the local gates is drawn randomly and uniformly from the space of all unitaries; that distribution is called the Haar measure.
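As an aside, "drawn randomly and uniformly from the space of all unitaries" can be made concrete numerically. A minimal sketch (my own illustration, not from the talk): the standard recipe samples a complex Ginibre matrix and takes a phase-corrected QR decomposition, which yields a Haar-distributed unitary.

```python
import numpy as np

def haar_unitary(dim, rng=None):
    """Sample a dim x dim unitary from the Haar measure.

    Standard recipe: QR-decompose a complex Ginibre (Gaussian) matrix
    and fix the phases of R's diagonal so the result is uniform.
    """
    rng = np.random.default_rng(rng)
    z = (rng.standard_normal((dim, dim))
         + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    # Absorb the diagonal phases of R into Q's columns; without this
    # correction, plain QR output is not Haar-distributed.
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

# A two-qubit local gate, the typical case in these architectures,
# is the dim = 4 instance.
U = haar_unitary(4, rng=0)
```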
And like every other quantum computational task, you start from the all-zeros state, you run the circuit, you make a measurement, and you interpret the result. The task that's believed to be hard is this: classically sampling from the output distribution of such a random circuit is very hard for classical computers. The terminology is #P-hard; #P is a complexity class that's supposed to be formidable for classical computation. Let's unpack the problem statement one more level. It's sufficient to show that estimating the probability p_0(C) is hard, where p_0(C) is the probability of starting from the all-zeros state, running the random circuit drawn from H(A), and measuring all zeros at the output. If one can show that estimating this probability is hard, we're halfway there. What we need is something a little bit more: not only that this point is hard, but that the whole epsilon-neighborhood around it is hard. And that epsilon has a very particular value that I'm tactfully masking right now, because I don't want to introduce too much terminology. But there's a very specific number epsilon for which one has to show that the whole interval centered around p_0(C) is #P-hard. Today we do exactly that. This proof was missing; there were some attempts. This problem has been open for over a decade, and we will show that not only is this point hard, we prove an epsilon-neighborhood around it is hard. But the epsilon we can obtain falls slightly short of what is required to prove the quantum supremacy conjecture, which is still an outstanding problem and has been for some time, and a very interesting one from a theoretical perspective. But this pushes the frontier as close as we have it so far. Now, how do we do it?
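To make the quantity p_0(C) = |⟨0…0|C|0…0⟩|² concrete, here is a toy statevector simulation (my own illustration, not the talk's code): build a small brickwork circuit out of Haar-random two-qubit gates and read off the all-zeros output probability directly. The brickwork layout is just one convenient choice of architecture.

```python
import numpy as np

def haar_unitary(dim, rng):
    """Haar-random unitary via phase-corrected QR of a Ginibre matrix."""
    z = (rng.standard_normal((dim, dim))
         + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def apply_two_qubit(state, gate, i, n):
    """Apply a 4x4 gate on adjacent qubits (i, i+1) of an n-qubit state."""
    psi = state.reshape(2**i, 4, 2**(n - i - 2))
    return np.einsum('ab,xby->xay', gate, psi).reshape(-1)

def p0_random_circuit(n, depth, seed=0):
    """p_0(C) = |<0...0|C|0...0>|^2 for a brickwork random circuit."""
    rng = np.random.default_rng(seed)
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0                             # the all-zeros input state
    for layer in range(depth):
        start = layer % 2                      # alternate even/odd pairings
        for i in range(start, n - 1, 2):
            state = apply_two_qubit(state, haar_unitary(4, rng), i, n)
    return abs(state[0])**2
```

The conjecture is about the cost of this computation scaling exponentially in n for a classical machine; the simulation above stores the full 2^n-dimensional statevector, so it only works for small n.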
Well, the idea of the proof is that there exists some circuit C, a deterministic circuit whose local gates are fully specified, with some architecture, for which it has been proved in the past that computing p_0(C) is indeed hard. But I emphasize that this particular circuit, or this set of circuits, is by no means random. The idea of the proof is that we deform this known circuit. Every one of its local gates, and I'm denoting one of the local gates by C_k, you multiply by a random unitary H_k. That product C_k H_k is, by the definition of the Haar measure, itself Haar-random. And we multiply that by the Cayley function. Now I want to introduce the Cayley function. So f(θ h_k), shown at the bottom, being (1 − iθ h_k)/(1 + iθ h_k) for h_k Hermitian, is the Cayley function. At θ = 0 it is the identity matrix, as can be verified, and at θ = 1, by construction, it implements H_k†; it enacts the inverse of H_k. So at θ = 0 the deformed gate is the fully random C_k H_k, and at θ = 1 it is C_k H_k H_k† = C_k, recovering the known hard circuit. As a result, the full circuit C(θ), made of the m gates, will be a rational function of degree mN over mN, where I remind you that N, the dimension of a local gate, is at most four, namely two or four, and m is the total number of gates. I remind you that all the deformations are done locally, so we respect the architecture of the circuit, and the Cayley path is applied to every one of the gates. Once I take the absolute value squared, the degree just doubles, so it becomes 2mN over 2mN. And the idea that makes this work, and it's a beautiful idea in my opinion, is that since this is a rational function of low degree, I prove, using some mathematical ingredients, the following. First of all, we know that we can run the quantum circuit many times.
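The Cayley function can be checked numerically. A small sketch (my own, under the assumption that h_k is obtained from H_k† by the inverse Cayley transform, which is how the endpoint condition f(1 · h_k) = H_k† can be arranged): the path stays unitary for every θ, equals the identity at θ = 0, and equals H† at θ = 1.

```python
import numpy as np

def haar_unitary(dim, rng):
    """Haar-random unitary via phase-corrected QR of a Ginibre matrix."""
    z = (rng.standard_normal((dim, dim))
         + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def cayley(theta, h):
    """f(theta*h) = (I - i*theta*h)(I + i*theta*h)^{-1}; unitary for Hermitian h."""
    eye = np.eye(h.shape[0])
    return (eye - 1j * theta * h) @ np.linalg.inv(eye + 1j * theta * h)

def hermitian_generator(U):
    """Inverse Cayley transform: the Hermitian h with f(h) = U
    (well-defined when -1 is not an eigenvalue of U, which holds
    with probability one for a Haar-random U)."""
    eye = np.eye(U.shape[0])
    return 1j * (U - eye) @ np.linalg.inv(U + eye)

# Sample a Haar gate H and build the path that undoes it at theta = 1.
rng = np.random.default_rng(1)
H = haar_unitary(4, rng)
h = hermitian_generator(H.conj().T)   # chosen so that f(1*h) = H^dagger
```

Each matrix entry of cayley(theta, h) is a rational function of θ of degree at most (N, N) with N = 4 here, which is the degree-counting step the proof relies on.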
And therefore we can learn the rational function near θ = 0, which is the random circuit, very efficiently; we're just running the circuit. Now suppose we could learn this p_0(θ): suppose we had a machine, some classical algorithm, some clever thing that we might have in the future, that can learn p_0(θ) near θ = 0. Well, if we learn that efficiently, we can just plug in θ = 1 and thereby evaluate a #P-hard problem classically. But that would be a contradiction, unless #P is easy, and we don't believe #P is easy. Therefore these average-case probabilities are supposed to be very hard. Okay, that proves the #P-hardness. It's a reduction from the worst case to the average case of a #P-hard problem. Specifically, we show that there exists an architecture A such that computing p_0(C), that particular point, is #P-hard with some high probability, three quarters plus some advantage; this three quarters can easily be improved to larger fractions. The point is that it's greater than one half. Moreover, like I told you, we prove some robustness: not only is that point p_0(C) hard, there's an epsilon-neighborhood around it that is also hard. So there's some epsilon-neighborhood defined here: m is the number of gates again, and delta is how close to zero you're going to evaluate things, which for the Google-type circuits becomes roughly 2^(-n^3). I like to emphasize that this may seem like a very small number, but actually the epsilon you need is 2^(-n) up to polynomial factors. So we need a little bit of distance. These probability amplitudes are indeed very small; the probabilities are 2^(-n) on average.
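The engine of the reduction, learning a low-degree rational function from samples away from θ = 1 and then evaluating it at θ = 1, can be illustrated in miniature. A toy sketch (my own, using a plain linear solve on exact samples rather than the robust Berlekamp-Welch-style procedure the actual proof generalizes): fit the numerator and denominator coefficients, then extrapolate.

```python
import numpy as np

def fit_rational(thetas, values, d):
    """Fit r(t) = P(t)/Q(t) with deg P = deg Q = d and Q(0) = 1
    from 2d+1 exact samples, by solving the linear system
    P(t_i) - r_i * Q(t_i) = 0 for the unknown coefficients."""
    A, b = [], []
    for t, r in zip(thetas, values):
        row_p = [t**j for j in range(d + 1)]           # coefficients of P
        row_q = [-r * t**j for j in range(1, d + 1)]   # coefficients of Q
        A.append(row_p + row_q)
        b.append(r)
    coeffs = np.linalg.solve(np.array(A), np.array(b))
    p = coeffs[:d + 1]
    q = np.concatenate([[1.0], coeffs[d + 1:]])
    return p, q

def eval_rational(p, q, t):
    powers = t ** np.arange(len(p))
    return (p @ powers) / (q @ powers)

# Hypothetical low-degree rational function standing in for p_0(theta).
true_r = lambda t: (1 - 2*t + 3*t**2) / (1 + 0.5*t + 0.25*t**2)
thetas = np.linspace(0.02, 0.3, 5)          # samples away from theta = 1
p, q = fit_rational(thetas, true_r(thetas), d=2)
```

The actual proof has to cope with noisy samples and an exponentially larger degree, which is exactly why a robust, generalized Berlekamp-Welch procedure is needed; this sketch only shows the noiseless extrapolation idea.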
So you want to be able to estimate them within a small neighborhood, but the exponent we have is larger than n to the first power; it's n to a slightly higher power. But it's all quantified and quantifiable, which is great. So, the technical ingredients in summary: we have a Cayley-function-based path that stays entirely in the unitary group. There is no Taylor-series expansion or truncation as an approximation. It is a fully unitary path that can interpolate between random and deterministic circuits. We use it to prove the average-case exact #P-hardness of random circuit sampling. To do so, I had to generalize the so-called Berlekamp-Welch algorithm to determining rational functions from samples. There is quantifiable robustness, and the proofs are very simple and comparable to experiment, which hopefully will help settle this conjecture. There are some open questions. For example, can we actually prove the quantum supremacy conjecture? Is the quantum supremacy conjecture even true for constant-depth circuits? I have some doubts based on some of my previous work and the work I'm citing here. I also believe this Cayley path may be useful for other tasks, such as quantum computing by interpolation or maybe cryptography. And can we prove, for example, that the Cayley path is optimal in a certain sense, or find other proofs that do not use the Cayley path? With that, let me thank you. Thanks.