OK, we have David Hedler, who will tell us about problem-size-independent angles for Grover-driven QAOA. Please start.

Thanks to the organizers for accepting my talk. Today I'm going to present some work of mine on finding optimal angles for QAOA in a way that is independent of the size of the problems we're solving. To do this, I use a somewhat unconventional driver known as the Grover driver. To summarize this presentation: there is a version of QAOA, Grover-driven QAOA, which uses a Grover driver rather than a regular driver. This driver takes into account much less structure than a regular driver would, and as a result we can calculate the performance of this version of QAOA independently of the size of the problem we are solving. We can therefore optimize this performance to obtain angles that are optimal for the average case of certain problems, or in the infinite-problem-size limit. This would facilitate a few-shot implementation of QAOA: the optimal angles are already known, so we can simply sample good solutions to the problems we want to solve from a quantum computer.

A brief recap of QAOA. We prepare a state on a quantum computer consisting of p layers of operators, where each layer involves a problem Hamiltonian and a driver Hamiltonian, as we see in adiabatic quantum computing. Unlike adiabatic quantum computing, though, the two Hamiltonians are applied alternately, never simultaneously, and the p layers are controlled by 2p parameters.
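As a rough illustration of the state being prepared, here is a minimal statevector sketch of p alternating (problem, Grover-driver) layers. The function name and array conventions are my own illustrative choices, not from the talk:

```python
import numpy as np

def grover_qaoa_state(costs, gammas, betas):
    """Apply p alternating (problem, Grover-driver) layers to the uniform state.

    costs  : length-2^n array, the diagonal of the problem Hamiltonian
    gammas : problem-layer angles, one per layer
    betas  : driver-layer angles, one per layer
    """
    N = len(costs)
    state = np.ones(N, dtype=complex) / np.sqrt(N)  # |s>, uniform superposition
    for gamma, beta in zip(gammas, betas):
        state = np.exp(-1j * gamma * costs) * state  # e^{-i gamma H_C}, diagonal
        # Grover driver H_D = |s><s|, so e^{-i beta H_D} = I + (e^{-i beta} - 1)|s><s|;
        # projecting onto |s> reduces to the mean amplitude, hence the sum / N below.
        state = state + (np.exp(-1j * beta) - 1) * state.sum() / N
    return state
```

Both layer operators are unitary, so the state stays normalized; with zero layers the function just returns the uniform superposition.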
The idea is that if we have a strategy for finding good parameters, then sampling from the state produced by this circuit will hopefully produce good solutions to combinatorial optimization problems. And of course, adiabatic quantum computing provides us with a set of angles such that an infinitely deep QAOA circuit should always find the ground states of the problem we're solving. Conventionally, we would use single-qubit X drivers, as in adiabatic quantum computing. In this talk, however, I'm talking about the Grover driver, which drives transitions between all possible pairs of states. If you use the single-qubit X driver, you are biasing toward structure in your problem with respect to flipping bits. This means the arrangement of your objective function, the energy landscape of your spin glass or combinatorial optimization problem, matters, and you have a Hamming-distance metric for how different two states of your system are. The Grover driver, by contrast, does not bias toward any structure in your problem; there is no distance metric for how far apart two states are in the space of possible states. Therefore only the overall distribution of the values of your cost function affects the expectation value of your problem Hamiltonian, which is what we want to optimize in QAOA. To caveat this: the Grover driver is harder to compile on a device than single-qubit X drivers, though since QAOA is gate-based this is not quite as fatal as it would be in adiabatic quantum computing. It is still not a very NISQ-friendly thing to be doing. Because the Grover driver only sees the overall distribution of the values of our cost function, we can model our problems as random variables. So if you have a cost function C(z), we can model evaluating it as sampling from a continuous random variable.
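To make the random-variable picture concrete, one can sample the costs of uniformly random spin configurations and histogram them; the toy SK-style couplings here are my own assumption for illustration, not a model from the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_sk_costs(n, n_samples):
    """Sample costs C(z) = sum_{i<j} J_ij z_i z_j of uniformly random
    configurations z in {-1,+1}^n of a toy SK-style model."""
    J = np.triu(rng.normal(size=(n, n)) / np.sqrt(n), 1)  # upper-triangular couplings
    z = rng.choice([-1.0, 1.0], size=(n_samples, n))      # random spin configurations
    return np.einsum("si,ij,sj->s", z, J, z)              # one cost per sample
```

The empirical distribution of these costs is exactly the object the Grover driver "sees"; for large n it is well approximated by a continuous PDF.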
So if I pick a random string, its cost can be modeled as a sample from a random variable, and this cost function is then described by a probability density function. If we can find a continuous PDF describing the likelihood of a random string of our optimization problem taking a certain cost value, this is equivalent to taking the limit of infinitely sized problems, assuming we get the scaling right so that the overall distribution stays static with increasing problem size. And if we know a PDF describing the distribution of our cost function, we can derive an analytical formula for the expected value of our problem Hamiltonian under the Grover-QAOA state. To illustrate this, the plot on the left-hand side shows a 15-qubit SK model; the dots are individual configurations of that model and their associated energies. Sorting these from low to high gives the figure in the middle, a quantile function, which can be inverted to obtain a cumulative distribution function; differentiating that gives a PDF describing the likelihood of sampling a certain solution quality. For p = 1, single-layer QAOA, you can derive with pen and paper the mean solution quality you would sample from a problem given its PDF, and we obtain the formula shown here. It depends on C-bar, the mean cost of the problem, which is very simple to calculate; on B, which is shorthand for e^{iβ} − 1; and on the characteristic function of the PDF describing our problem, evaluated at γ. The characteristic function is simply the Fourier transform of the PDF, so if we know the PDF the characteristic function is fairly easy to obtain.
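Since the characteristic function is just the Fourier transform of the PDF, it can also be obtained numerically when no closed form is known. A small sketch, where the grid bounds and function names are illustrative assumptions on my part:

```python
import numpy as np

def characteristic_function(pdf, t, lo, hi, n=20001):
    """phi(t) = E[e^{i t C}] = integral of pdf(c) * exp(i t c) dc over [lo, hi],
    approximated by a Riemann sum on an evenly spaced grid."""
    c = np.linspace(lo, hi, n)
    dc = c[1] - c[0]
    return np.sum(pdf(c) * np.exp(1j * t * c)) * dc

# Sanity check against a known pair: a standard normal PDF has phi(t) = e^{-t^2 / 2}.
normal_pdf = lambda c: np.exp(-c**2 / 2) / np.sqrt(2 * np.pi)
```

For distributions with rapidly decaying tails the truncated integral converges quickly, so a modest grid already reproduces the closed-form value.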
For p = 2, we can obtain a similar expression, but it is large enough that we need computer algebra. The dependencies are the same: it depends on the parameters of the QAOA circuit and on the characteristic function, so it is easy to evaluate at any point in our parameter space. For arbitrary depth, we can derive an expression with 2^{2p} terms, where p is the depth of the QAOA circuit, the number of layers applied; since there are 2p parameters, that is 2 to the power of the number of parameters. Importantly, this formula has no dependence on the size of the problem we're solving; we just need to know the characteristic function. There isn't enough time to explain all the notation here, but the 2^{2p} terms can be evaluated with a Python script, and the result depends only on the characteristic function of our problem.

So what does this look like? We look at number partitioning, because this is a problem where we can analytically derive the overall PDF describing the problem. In number partitioning, you start with some values, and the objective is to split these values into two sets such that the sums of the elements in the two sets are equally large. We start with, say, uniformly distributed values, and from these we obtain the cost function of our problem. From this cost function, we can derive that the problem has a chi-square-distributed PDF. Fourier transforming this PDF gives a characteristic function of the form shown, which can simply be substituted into one of the formulae on the previous slides. We get a fairly complicated expression for the mean solution quality of the states we sample from our QAOA state, and we can numerically minimize it to obtain optimal parameters that are independent of the problem size.
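To make the number-partitioning setup concrete, here is a small sketch that enumerates all sign assignments and computes the squared residues. For uniformly distributed values, the signed sum over a random assignment is approximately normal, so the squared residue, scaled by its variance, is approximately chi-square distributed with one degree of freedom, matching the claim above. Names and conventions are my own:

```python
import numpy as np

def partition_costs(values):
    """Enumerate all 2^n assignments s in {-1,+1}^n and return the squared
    residues C(s) = (sum_i s_i * a_i)^2, i.e. the squared imbalance of
    the two partitions."""
    values = np.asarray(values, dtype=float)
    n = len(values)
    bits = (np.arange(2**n)[:, None] >> np.arange(n)) & 1  # binary expansion
    signs = 2 * bits - 1                                   # map {0,1} -> {-1,+1}
    return (signs @ values) ** 2
```

For a perfectly partitionable instance such as [1, 2, 3] (since 1 + 2 = 3), the minimum cost is zero, and the worst assignment puts everything on one side.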
On the surface plots: the left-hand side shows a finite-size instance of number partitioning, plotting the expectation value of our QAOA state at different points in the parameter space. As the number of qubits increases from left to right, the landscapes converge to a terminal landscape, which is the landscape derived analytically from the PDF of the problem. The optimal parameters derived on the previous slide correspond to the darkest region in the right-hand plot. So for number partitioning, we can minimize the large formula on the bottom, which is the expected value of the solutions we might sample from the QAOA state, and we obtain these curves of optimal parameters at different depths. The dot on the far left of this plot marks the depth-1 optimal parameters for this number partitioning problem at infinite size, and the lines represent the higher-depth, and therefore higher-parameter-count, versions. Likewise, as we increase the depth of our QAOA circuit, the solution quality improves. In this context a lower expectation of our problem Hamiltonian is better, because we are looking at the residue left when the sums of the values in the two partitions are subtracted.

The main caveats to this work: we probably don't know the infinite-size PDFs for most interesting problems, but I imagine this is a solvable issue, and we could at least approximate them for most problems. We also need sufficiently large problem sizes for this approach to work well; but from some of the plots we've seen, and from an extra slide which also shows this, 12 qubits, that is 12 bits in our problem, is already quite large in this context. Finally, it's hard to compile a Grover driver; this isn't totally fatal, but it's probably not a very NISQ-friendly thing to be doing.
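The numerical minimization of the analytic expectation mentioned above can be sketched as a search over the angles; because a Grover driver tends to give smooth landscapes, even a brute-force grid works at low depth. The `toy_expectation` below is only a smooth hypothetical stand-in, not the actual formula from the talk:

```python
import numpy as np

def toy_expectation(gamma, beta):
    # Hypothetical smooth surrogate for the analytic expectation; replace with
    # the characteristic-function-based formula for a real problem.
    return np.cos(gamma) * np.sin(beta) + 0.1 * gamma**2

def grid_optimise(expectation, resolution=400):
    """Brute-force grid search over [0, 2*pi]^2 for depth-1 angles."""
    gamma = np.linspace(0.0, 2.0 * np.pi, resolution)
    beta = np.linspace(0.0, 2.0 * np.pi, resolution)
    G, B = np.meshgrid(gamma, beta, indexing="ij")
    vals = expectation(G, B)
    i, j = np.unravel_index(np.argmin(vals), vals.shape)
    return gamma[i], beta[j], vals[i, j]
```

At higher depth the parameter space grows and a local optimizer seeded from the lower-depth optimum would be the natural refinement of this sketch.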
The main problem with this work, I think, is that if you went to the effort of compiling a Grover driver on a device, you would probably sooner compile your objective function into a threshold function, which evaluates to one if you satisfy a condition and zero otherwise. Then you are essentially running Grover's algorithm, and this would probably perform better than QAOA on all these problems. Still, if you use an approach like this, where statistical properties of a problem are used to find optimal parameters, you don't need to use your quantum computer very much: you only need to sample from your final states. The main value I see is that it is probably possible to extend this method to more complicated drivers using more complicated statistical information about your problem, and this work is ongoing. When I initially started this work, my objective was really to determine whether Grover drivers perform better or worse than single-qubit X drivers, as in the work of Edward Farhi and collaborators, who published a paper looking at the performance of QAOA on infinite-size SK models. I haven't considered SK models with this work yet, but I could use this method to compare the performance of Grover drivers, which see no structure, to the usual drivers, which should hopefully exploit structure, on infinite-size problems. I have a quick extra slide: it shows that the optimal parameters for finite instances at p = 1 and p = 2 get closer and closer to the terminal, infinite-size values as we increase the problem size. Great, and I think that's my time. Does anyone have any questions?

I have a question from the internet. What would be the purpose of Grover-driven QAOA, which gives you an approximate solution, if p is one or some small number of iterations?
Why would you use that instead of just implementing Grover's algorithm, which gives you an exact solution?

Well, I think the answer is that I'm interested in this from a theoretical perspective. I don't think it would be a useful algorithm to run on a quantum computer, but I hope it leads on to more interesting work, which could be more useful, because we can find optimal angles for more relevant drivers.

OK, can I ask my second question then? You're probably aware of the paper by Bittel and Kliesch showing that finding optimal β and γ is an NP-hard problem, so it makes me wonder what the catch is. Is it because they're not using Grover drivers? Is it because you would eventually need an exponential number of samples to properly estimate the mean of this distribution? I'm wondering about the relationship between the quality of the estimate of the mean of this distribution and how that converges to good-quality solutions. Is that an approximate convergence itself?

OK, so in these plots here I am optimizing a formula whose size is exponential in the number of parameters, so I would expect that as you increase the depth, it becomes a hard problem to optimize these parameters. I wonder whether the NP-hardness you mention applies when you're actually dealing with samples from the quantum computer?

I don't remember exactly, but in that paper it's NP-hard in both n and p. That is, I guess, probably assuming the conventional method with conventional drivers for finding β and γ, and I wondered if that was the difference, because your approach doesn't look to be exponential time. It's certainly not exponential in the number of qubits; it's probably, yes, exponential in the depth.

Yeah.

So there's a mismatch there, and it would be interesting, unless you've got a polynomial-time algorithm for an NP-hard problem, in which case congratulations.
Let me be the first to congratulate you.

All right, thank you.

How sensitive is this to your parameters? It doesn't look like the solution quality is that sensitive to how you've set your betas and gammas.

I think the energy landscape, when you use a Grover driver, does tend to be very smooth. So I think the optimization task for a Grover driver is significantly easier than it would be for a more complicated driver.

We don't have enough time, so we have to move to the next talk. Let's thank the speaker.