The next speaker is Dr. França. The title is Limitations of Noisy Quantum Annealers. Please start. So let me start by thanking the organizers for this invitation. It's been a really nice workshop. So I'm going to talk about, as the title says, limitations of noisy quantum annealers, and it's going to be mostly based on results I have with Raul Garcia-Patron at the University of Edinburgh. So, as in essentially all of the talks we had so far, I'm going to be interested in finding the ground state of some Hamiltonian that encodes the solution to a problem I care about. And in this talk, I will mostly focus on classical combinatorial optimization problems, but the sort of techniques we developed can also be applied to more general ground state finding problems. And as in, I don't know, already 20 talks I've seen, we're going to look at the Ising model as our main example of what we care about. And our goal will be to try to find the ground state of such a model on a noisy quantum annealing machine, and to try to rigorously understand how the noise affects the device's ability to find good solutions or the ground state. Now in the first part of the talk, I'm going to consider a very, very simple noise model, the simplest imaginable, namely we're going to assume that every qubit on the device is affected by depolarizing noise with some rate. So every qubit is affected by this quantum channel. And for instance, this sort of noise can arise if you have control errors in your annealer, but later I'm also going to talk about other relevant noise models, for instance amplitude damping or dephasing. But for now, let's just keep things as simple as possible, even though amplitude damping and dephasing are of course also a bit of a toy model, just to illustrate what we can do. And for now, I'm also just going to look at the expectation value of the outputs of this noisy annealer. So we're not going to consider what the probability of finding the ground state is or something like that; we just want to look at the expected energy of the output. But later, we're also going to look at concentration inequalities that will let us say, oh, the probability of finding a state with this energy is this and this much. And those results are based on some other work I have with Cambyse Rouzé, Giacomo De Palma, and Milad Marvian. But for now, we're going to be in the simple setting, just to illustrate things. So in the ideal case, we would like our annealer to implement this time-dependent Hamiltonian evolution. But we are going to assume, again, that we have some Lindbladian term that is going to model the noise in our system. And we cannot really tune this Lindbladian term. Maybe we can change the evolution time for the Hamiltonian, but this noise is there, unfortunately. And again, for now, we only consider this L_t, this Lindbladian term, to be depolarizing noise on every qubit. So it's not global depolarizing noise; it's really that every qubit is affected by depolarizing noise with the same rate, which I'm going to denote by r. And as the time goes to infinity, of course, this system will converge to a trivial state: it will just go to the maximally mixed state. So we expect that under noise, there should be a limitation on how long we can do the annealing before our system just goes to this trivial state and then things become more or less useless.
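To fix notation (a schematic reconstruction of the model described here, not necessarily the exact convention used on the slides), the noisy annealer can be modelled by a Lindblad master equation with the time-dependent annealing Hamiltonian plus independent depolarizing noise acting on every qubit at rate r:

\[
\frac{\mathrm{d}\rho_t}{\mathrm{d}t} \;=\; -i\,[H(t),\rho_t] \;+\; r\sum_{i=1}^{n}\Bigl(\operatorname{tr}_i(\rho_t)\otimes\tfrac{\mathbb{I}_i}{2}\;-\;\rho_t\Bigr),
\]

where tr_i(rho) tensored with I_i/2 denotes replacing qubit i by the maximally mixed state. The Hamiltonian part can be tuned (the schedule and the total time T), the dissipative part cannot, and on its own the dissipative part drives any state to the maximally mixed state on n qubits.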
But our goal here will be to quantify as precisely as possible when this will happen. And as before, I'll extend this later to more general noise models. Again, we want to compare this quantum annealer under noise against efficient classical algorithms. This will be our first goal. So we know that in general, finding the ground states of these models is NP-hard, but we also have some pretty good approximation algorithms that provably run in polynomial time. And we just want to compare the performance of those with the output of the noisy annealer for some annealing time T and some given noise rate. So this will be one of our goals. But the second goal will be to compare the performance of the noisy annealer with some heuristic algorithm of your choice. So maybe you don't have a proof that this algorithm is very good; you just see in practice that it's running well, and you want to be able to compare it to some given quantum annealing device with some noise parameter. The important point here is that we don't want to actually have to simulate this noisy quantum evolution. We all know that simulating noisy quantum systems is a very hard task. But it doesn't follow that, just because you cannot simulate this device on your classical machine, it's going to output better solutions to the problem you care about. So we want to be able to say whether you'll find an advantage or not without actually having to simulate this complicated quantum system. So we first show, as one would expect intuitively, that under this simple depolarizing noise model, if you go to annealing times which are roughly one over the noise rate, and you compare the outputs of this noisy quantum device to efficient Gibbs sampling, so high-temperature Gibbs sampling, then the quantum annealer will not give you any substantial advantage. So as expected, if your annealing time is large enough that you can feel the noise, then simple classical algorithms will match or outperform it. But, and I apologize once again, I don't know what's going on with my computer, the slide transitions are a bit strange. But as I said, we also care about heuristic algorithms, and what we are able to show is that no matter what annealing schedule you're using or what evolution you're implementing, if you compare to good heuristic algorithms, you're going to lose the advantage at a time that is, again, proportional to one over the noise rate. But the constant in front can also be very small, like 0.1 or 0.05. So even if you have a small density of errors, the advantage against good heuristic algorithms is already lost. Okay. And again, the nice part is that we don't have to actually simulate big devices. We can even go beyond what is currently available and still draw similar conclusions. Now, how do we actually show this? What is the intuition behind the technique? So we start with the following picture. Imagine your state is initialized in the all-zero state; in annealing it is usually the all-plus state, but let's take zero here, it doesn't make a big difference. And ideally, we want to follow the orange path over here, which is going to drive us from the zero state to this sigma-infinity point here, the ground state of the model we care about. However, as I mentioned before, under depolarizing noise, your system will be driven instead towards the maximally mixed state as the evolution time increases. So it won't be able to follow this orange path.
It will follow this black path to the interior, where you have sigma zero corresponding to the maximally mixed state. And what we need to do to derive our results is to quantify how fast this happens in relative entropy, or the KL divergence, as some of you might be more familiar with that term. What we're then able to do is, at each time, assign an inverse temperature to the state such that the Gibbs state at that inverse temperature has a similar performance when compared to the output of this noisy quantum computation. So at each step, we know there is this much noise in the system, and based on this fact, we can assign an inverse temperature to a Gibbs state that will have a similar performance. And after that, we know that for most of the models you care about, for inverse temperatures that are small enough, below some critical value, you can actually sample from the Gibbs state efficiently on classical computers. So we know that at some point we'll enter the region of efficient classical simulation, which is denoted in red over here. So all we need to do is to estimate when this will happen. And putting things together: as I mentioned before, if the noise rate is this r over here, then the inverse temperature you have to assign to your Gibbs state to have a similar performance as whatever is going on on the noisy quantum device will decrease exponentially in time. And you have the promise that the output of the annealer will satisfy a bound like the one down here. So below we have the expectation value of the energy of the output of the annealer. And we want to find the ground state, so we want this to be as small as possible. What this is saying is that the Gibbs state at this inverse temperature over here, with its exponential decay in time, will have an energy comparable to that of the output of the annealer. So again, as we want to find the minimizer, if the energy of the Gibbs state is lower, it's better for the Gibbs state. So that's the sort of result we can show. And because of that, as I said before, usually for most models there is some critical inverse temperature beta_c such that if this beta is below it, then we are in the regime where you can obtain similar energies just by sampling classically. And we see here that this will happen at a time that roughly just depends on 1 over r, so it has no dependency on system size. Whereas most of the previous results had some sort of dependency on system size, here we see that at a constant time, depending only on the noise rate, classical algorithms will have a similar performance. And I should also stress that this is independent of whatever evolution you're actually implementing. This is just a feature of the noise. Now just a quick discussion of the technical tools we use. We use the fact that the relative entropy contracts uniformly under this sort of noise. So it doesn't matter what input state you have or what sort of annealing path you have; as long as you have this noise model, the relative entropy will contract exponentially in time. And from that, you can use a variation of the maximum entropy principle to assign this inverse temperature to the state of the annealer. And then to get this critical temperature, you just take your favorite result from the literature that tells you that there is such a critical temperature and apply it to the model and the estimates you care about.
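Schematically (my reconstruction of the two ingredients just described, with the decay rate written as 2r to match the factor quoted later in the talk; the precise constants come from the entropy contraction inequalities used in the paper), the argument combines a uniform contraction of the relative entropy towards the maximally mixed state with an effective inverse temperature assigned to the annealer's state:

\[
D\!\left(\rho_t\,\middle\|\,\tfrac{\mathbb{I}}{2^n}\right)\;\le\; e^{-2rt}\,D\!\left(\rho_0\,\middle\|\,\tfrac{\mathbb{I}}{2^n}\right)\;\le\; n\log 2\,\; e^{-2rt},
\qquad
\operatorname{tr}\!\bigl(H\,\rho_t\bigr)\;\gtrsim\;\operatorname{tr}\!\bigl(H\,\sigma_{\beta(t)}\bigr)\quad\text{with}\quad \beta(t)\;\approx\; c\, e^{-rt},
\]

where sigma_beta is the Gibbs state of the problem Hamiltonian H. Once beta(t) drops below the critical inverse temperature beta_c at which Gibbs sampling becomes classically easy, which happens at a time of order 1/r independent of the system size, the noisy annealer can be matched by a classical sampler.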
But as we all know, if you actually want to solve a practical problem, you're not going to do it through high-temperature Gibbs sampling. So it doesn't make a lot of sense to only ask when annealing loses its advantage against high-temperature Gibbs sampling, because nobody is going to do that. We want to be able to compare it to your favorite efficient classical algorithm, or rather, heuristic algorithm. And what can our technique do there? So let's say you run your favorite algorithm for your favorite problem and you obtain some energy E_C. After that, you want to know: if you had access to a large quantum annealer with a given noise rate, would you actually expect a significantly smaller energy for the output if you were annealing for a given time? And maybe we're even talking about devices that are not available, so this problem instance is larger than what can be solved on current devices, and we are in the regime where you cannot simulate such a machine classically. Even then, you want to have some estimate of whether you should expect to obtain any advantage through quantum methods. And the way we do this is very similar to what I discussed before. We are, again, in this picture where we are collapsing to the maximally mixed state, which is sort of trivial. But if we can quantify how fast this happens in relative entropy, then it's easy to obtain lower bounds on the energy of the output. And in particular, if this lower bound tells you that there is a separation relative to the output you observed from the classical algorithm, then you know that you cannot really expect the quantum annealer to give you any sort of advantage. So in this way, without having to actually simulate the quantum device, you're able to say, oh, actually, I don't expect it to be better than my favorite heuristic algorithm. And to get this, we use the variational characterization of the relative entropy, which looks like this. It says that the energy of the output of the annealer is lower bounded (and again, we want to find minima, so we only care about lower bounds) by a term involving the log of the partition function, minus this relative entropy term. And if you let beta go very close to zero, then this first term, minus the log partition function plus n log 2, all times beta to the minus one, will converge to zero, which is just the energy of the maximally mixed state. But if you let beta be very small, the second term, the relative entropy divided by beta, will explode. So you have to see how these two things interplay. And the important point is that this formula can be evaluated efficiently for a small range of beta, because we also know that for beta in some temperature range, you can approximately evaluate the log partition function. So you can actually evaluate this bound for a range of beta and get lower bounds, as long as you can also control this relative entropy term over here. And as I said before, using these relative entropy convergence techniques, you can bound that second term. So by plugging in an estimate for this last term and evaluating the log partition function for some range of beta, you can then get your lower bound on the energy of the output. In a moment I'll show you an example of how this performs in practice.
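As a minimal numerical sketch of how such a variational lower bound could be evaluated on a toy instance: this is not the authors' code, and the contraction estimate (relative entropy to the maximally mixed state bounded by n log(2) times e to the minus 2rT), the couplings, the noise rate and the stand-in heuristic energy are all assumptions made for illustration. For a realistic instance one would replace the brute-force enumeration by an efficient estimator of log Z(beta) valid in the allowed temperature range.

import itertools
import numpy as np

# Toy Ising instance: H(x) = sum over edges (i,j) of J_ij * x_i * x_j, x_i in {-1, +1}.
# All numbers below (couplings, noise rate, times, heuristic energy) are made up.
n = 10
rng = np.random.default_rng(0)
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
J = {e: rng.choice([-1.0, 1.0]) for e in edges}

def energy(x):
    return sum(J[(i, j)] * x[i] * x[j] for (i, j) in edges)

# Exact enumeration of all 2^n energies (fine for n = 10, hopeless beyond ~30 spins).
energies = np.array([energy(x) for x in itertools.product([-1, 1], repeat=n)])

def lower_bound(r, T, betas):
    """Variational lower bound on the expected output energy of an annealer
    affected by single-qubit depolarizing noise at rate r, run for time T,
    maximized over the trial inverse temperatures in `betas`."""
    # Assumed contraction of the relative entropy towards the maximally mixed state.
    rel_ent = n * np.log(2) * np.exp(-2.0 * r * T)
    best = -np.inf
    for beta in betas:
        log_Z = np.log(np.sum(np.exp(-beta * energies)))
        # tr(H rho_T) >= (n*log(2) - log Z(beta) - D(rho_T || I/2^n)) / beta
        best = max(best, (n * np.log(2) - log_Z - rel_ent) / beta)
    return best

betas = np.linspace(1e-3, 2.0, 400)
E_heuristic = 0.9 * energies.min()  # stand-in for the energy a heuristic found
for T in (1.0, 5.0, 20.0):
    lb = lower_bound(r=0.1, T=T, betas=betas)
    verdict = "no quantum advantage possible" if lb > E_heuristic else "inconclusive"
    print(f"T = {T:5.1f}: annealer energy >= {lb:7.2f}  ({verdict})")

If the printed lower bound exceeds the energy the heuristic already found, the noisy annealer cannot beat that heuristic in expectation, and no simulation of the quantum dynamics is required to conclude this.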
So I should say, of course, that all of the techniques we developed also work for the circuit model, not only annealing, and here I'm actually comparing to the results of QAOA on the Sycamore device. And because these instances are very small, I evaluated the variational formula I had before for a large range of beta, even outside the critical temperature range. And as a noise model, I took depolarizing noise with the average gate fidelity stated in the paper. So let me explain the plot. Unfortunately, I don't think you can see my mouse; as I mentioned, I'm having some technical trouble, but let me try to explain it anyway. So these are the results for QAOA on the SK model. I'm plotting the ratio between the ground state energy and the energy found. So the orange line at one would be the ground state. The gray line is what you expect to find with, say, an SDP relaxation. And the green line is what they expected to see if there was no noise in their system. And the yellow dots are the energies they actually observed. So we see the disparity between the green line and the yellow dots, which shows the effect of the noise. And the black dots are what our bound predicts, or gives as a bound on the energy they could observe, given these noise levels and circuit depths and so on. And we see that as the number of qubits increases and the noise starts to dominate, we are not too far off. So at least in this small example, we actually obtain reasonable results from our bounds. And yeah, I showed you these two different ways of bounding how well the noisy annealer can perform. The first one was by comparing it to some Gibbs state, and the second one was by evaluating this variational formula. And what is the difference between them? The first one is nice because it lets you do some analytical analysis of what to expect. You can rigorously show that the amount of noise you can tolerate is independent of system size, and things like that. But it comes at the expense of slightly weaker bounds. And the second one has the advantage of being tight. Actually, if you let beta go to infinity there, what you get is just the ground state energy. So this will always be a tight bound if you're allowed to evaluate it for all the betas you want. But you need to numerically evaluate it in practice to get something, so it's less useful for analytical calculations. However, it's nice that you can compare it to any algorithm you like. You can evaluate this formula, compare it to whatever you get from your classical algorithm, and then draw conclusions. Okay, so now, as I said, I only talked about very simple noise models, the simplest you can imagine: single-qubit depolarizing noise. And we want to go a bit beyond that. What we'll always need is that this noise is driving you to some unique fixed point, so that there is a full-rank state such that you have this exponential decay of the relative entropy. And inequalities like that have been shown for a variety of noise models, but I won't get into that. And in particular, there is one very popular noise model for which this is not true, namely dephasing, because pure dephasing doesn't have a unique fixed point, right? Any diagonal, that is classical, state will be left invariant by dephasing. So that's something we are working on.
And we, again, will just try to quantify how fast this noisy annealing computation goes to this fixed point, or to somewhere close to that fixed point. But this is a bit more tricky in relative entropy, because it does not satisfy, say, the triangle inequality and doesn't have other nice simple properties of distance measures. However, we're still able to show the following inequality. On the left-hand side below, we have the relative entropy between the output of this noisy quantum annealer and this fixed point of the noise. And we see two terms. The first one is what we had before, just this exponential decay to the fixed point. And the second one is something that depends on the commutator between the Hamiltonian evolution and this fixed point. But notice that there is an e to the minus alpha times t minus tau factor there, where t is the largest time you go to. So whatever happens at the beginning of the computation doesn't matter too much. The only thing you need for the second term to decay to zero is that at late times of the computation, the fixed point of the noise and the Hamiltonian approximately commute. Then this term over there will go to zero. So let me give you an example where this actually holds. Let's just assume for simplicity that our fixed point is something like what I have over there, a product of e to the minus beta Z_i terms for some value of beta. And we actually want to optimize classical problems, right? Then at the end of the annealing, the Hamiltonian will be close to diagonal, because we are optimizing a classical problem. So it will commute with this fixed point. So for classical optimization problems, we are indeed in the regime where this second term over here converges to zero. But there are some caveats, okay? So for instance, although we can show that this will also still go to zero for more general noise models, this technique does not work very well when the fixed point is very pure. For instance, if you have pure dephasing, these things will also blow up. And so to make things a bit more concrete, let's assume that we again have just single-qubit noise and we have a combination of three sorts of noise sources. The first is amplitude damping, which drives you to the zero state. Then dephasing, which just kills the off-diagonal entries. And these sorts of control errors, which lead to depolarizing-like noise. So this is still a toy model, but at least one step closer to something realistic. And you can show that if you have this combination of noise models, you'll have a fixed point which is again of this simple product form, with the inverse temperature depending on the ratio between the amplitude damping and the control errors. The larger the amplitude damping, the purer it will be, and so on. And so putting all these assumptions together and assuming a linear annealing path, what we then get from our result is that the relative entropy between the output of the annealer and this trivial fixed point is bounded by the formula down there. So let's go a bit deeper into what this is telling us. First we have this term that is just an exponential decay; it's essentially the same as we had before in the toy model of depolarizing noise. But we have this remainder term over here that has a different behavior. So first of all, again, everything explodes if you let gamma go to infinity, where we have very pure fixed points.
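As a schematic summary of the inequality and the fixed point just described (my paraphrase; the precise norm in the second term and the constants are in the paper and are left generic here), for a noise generator with a unique full-rank fixed point sigma and decay rate alpha one gets something of the form

\[
D\bigl(\rho_T\,\|\,\sigma\bigr)\;\le\; e^{-\alpha T}\,D\bigl(\rho_0\,\|\,\sigma\bigr)\;+\;C\int_0^{T} e^{-\alpha (T-\tau)}\,\bigl\|\bigl[H(\tau),\,\sigma\bigr]\bigr\|\,\mathrm{d}\tau,
\qquad
\sigma\;=\;\bigotimes_{i=1}^{n}\frac{e^{\gamma Z_i}}{2\cosh\gamma},
\]

where the product state on the right is the fixed point for the combination of amplitude damping, dephasing and depolarizing-like control errors, with gamma growing with the ratio of the amplitude-damping rate to the control-error rate, so that stronger amplitude damping gives a purer fixed point.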
And we now have a polynomial decay in time, with a prefactor that is r squared. So this relative entropy decreases a bit more slowly than in the case where the system was just driven to the maximally mixed state. Now, a very natural question to ask is: is this just a consequence of my stupidity, or lack of better tools, or whatever? Or is there actually a fundamental difference between different noise models? Is it actually easier to deal with noise models like amplitude damping than with depolarizing noise? And there are some results in the literature indicating that this might actually be the case. For instance, there's this quantum refrigerator construction by Ben-Or, Gottesman and Hassidim where they show, okay, it's in the circuit model, that if you have depolarizing noise and you don't have access to fresh qubits, then there is no threshold theorem. But if you have amplitude damping noise, then even without access to fresh qubits, you can perform arbitrarily long quantum computations. So I think it's an interesting question whether this is fundamental or just lack of creativity or better tools. And then we can easily generalize the bounds I discussed before to these other fixed points: the variational characterization still holds, but now with a slightly modified partition function, so you have to account for the fact that your noise is going to bias you. The point is that we now add an additional term to the partition function that makes sure the lower bound is also biased towards more of the outputs being zero than when we just had the uniform noise. And just to show you the effect that different noise models have on the bound, here we have these three curves. So r3 is again the control errors; at least for our bounds, those are the very bad ones. And we see that if r1, which is the amplitude damping rate, is non-negligible, then this decay will be very slow. Whereas if it's negligible compared to the control error rates, then this will be a very fast decay. Okay, so it is possible to extend what we did before (sorry, was there a question?), we can extend what we did before to these other noise models, but at the expense of poorer bounds. And right now we are also generalizing it to dephasing, the problem again being that it doesn't contract uniformly, but we are working on getting similar results for pure dephasing noise. And in the last few minutes I have available, I would just like to talk a bit about concentration bounds. So as I said before, so far we only talked about the expectation value. But we could be in the very extreme case where with, let's say, 99% probability we output complete garbage and with 1% probability we output the right solution. The expectation value would be very bad, but this would actually be an amazing solver, right? So we want to analyze the actual probability of observing good outcomes. And we derived bounds similar to what I was showing you before for the expectation value, but now as concentration statements in probability, not just in expectation. I won't get into the details of how we prove it, I'll just give you a quick overview. But the message is the same: as long as the annealing time is large enough that you can feel the noise, the probability of observing good, non-trivial solutions is exponentially small in system size.
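As a schematic version of the kind of statement meant here (my reconstruction with generic constants c and c'; the example described next gives the actual statement), for an Ising Hamiltonian on a graph of maximum degree Delta and single-qubit depolarizing noise at rate r, the energy E_out of the measured output string after annealing for time T satisfies something of the form

\[
\Pr\Bigl[\,E_{\mathrm{out}}\;\le\;\operatorname{tr}\!\bigl(H\,\tfrac{\mathbb{I}}{2^n}\bigr)\;-\;\bigl(\epsilon + c\,e^{-2rT}\bigr)\,n\,\Delta\,\Bigr]\;\le\;\exp\bigl(-c'\,\epsilon^{2}\,n\bigr),
\]

that is, once e to the minus 2rT is small, output strings with energies substantially below the maximally mixed average are exponentially unlikely to be observed.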
So I will go back to the baby model of depolarizing noise, but the main technical difference is that whereas before I was quantifying convergence in relative entropy, to get concentration you now have to look at the so-called sandwiched Rényi divergences and quantify convergence there. But similar techniques exist to control this sort of divergence decay under noise. And I just want to give you an example of the sort of concentration inequality we get. So let's say we have single-qubit depolarizing noise with rate r, and H is, let's say, an Ising model on a graph with maximum degree Delta. Then we can show that the probability that the outcome of the device has an energy that deviates significantly from the typical one, by a margin controlled by a small constant epsilon plus a term that decays exponentially in the time, is exponentially small in system size. So this means that as long as the time, big T, is big enough for this e to the minus 2rT term to give some contraction, you will already be exponentially concentrated around, let's say, not so interesting output strings. And so in particular, if the time is of order r to the minus one, you'll essentially never see the ground state. We have some other results in this paper I don't have time to talk about, but they essentially show that the popular error mitigation techniques we have right now do not really change the big picture. And we also showed that there is no exponential advantage under noise if the noise rate does not scale with system size, because, by the concentration inequality I showed you before, you would have to sample an exponential number of times to see non-trivial strings at those annealing times. And I hope I also convinced you that these relative entropy convergence methods give you a nice way to rigorously analyze the performance under noise. And unfortunately we see that if you have a small error density, then it's very unlikely that you're going to outperform good heuristic classical algorithms. But there are still some important open problems. As I mentioned before, it would be nice to extend this to more noise models, and to see whether there is actually a difference between amplitude damping, control errors and so on, or whether it's just an artifact of poor bounds. Also, in all of the results I showed you, I assumed that this contraction of the relative entropy is independent of the state. In practice, if you're looking at interesting, let's say highly entangled, states, this contraction can actually be much worse, meaning the system would converge to trivial states even faster than predicted here. So that's also something we're currently investigating. And it would of course be nice to see what the effect of some primitive error correction methods would be, what sort of effect they have and how much is needed, and to do these sorts of analyses, because this would be kind of easy to incorporate into our framework and could be instructive. And I only showed you this benchmark on the Google device. Of course it would also be very nice to run it on other available devices and see whether we find agreement, or whether whatever my technique is doing is completely off, or just to see to what extent it can be applied to other devices. And with that, I'd like to thank you for your attention. Thanks, that was really interesting. So I have a few comments.
The first one is that I'm glad that you went beyond depolarizing, because depolarizing noise is not particularly relevant for quantum annealing, but amplitude damping certainly is. The more significant comment is that a model where the Lindbladian is time independent is not a good model. Yes, and you have a paper on this, right? Well, many people have papers on this, but the Lindbladian is definitely time dependent in a way that I think matters for your bounds. So it would be very interesting to see what happens if you take that into account. And thirdly, the error correction. There is a body of work on error correction for quantum annealing, in particular for these Markovian models. And we know that there are methods, introducing for example stabilizer codes, subsystem or subspace codes, which grow the gap. The gap somehow didn't appear at all in your discussion. So it would be interesting to see whether these methods (well, it's not error mitigation in the sense that you talked about, but error suppression methods) change the results. Yes, yes. So thanks for the comments. Indeed, I forgot to mention this. I think it would be highly relevant to also adapt the results to time-dependent Lindbladians; I think this shouldn't be too much work. And yeah, maybe we can address this shortly in the future. And just on the error correction front, indeed, I guess you would have to see how much entropy these methods allow you to save, and things like that. And as you said, if you have some explicit control over the gap, that could probably also be taken into account. That's also something we want to investigate in the near future. Thanks. I have a question about your concentration bound. Yes. Would you say a few words about the technique you used to get it? Because usually people, or at least I, are familiar with the Lévy concentration lemma, but that would give you a different dependence on epsilon. You have an epsilon as opposed to an epsilon squared, so this is a much better result. So I'd like to hear about that. Oh, maybe there was a typo. Let me check. So let me just go back. Did I get an error here? Let me just check. So the probability should be of constant order if I let epsilon be of order one over the square root of n. So if I choose epsilon... indeed, sorry, that's a typo, it should be epsilon squared. Sorry about that. No, no, no. But the technique is as follows. You show this sort of concentration for the fixed point of the noise. And then you show that if you combine the concentration for the fixed point with a relative entropy bound, you are able to transfer the concentration bound from the fixed point to states that are close in relative entropy. So that's more or less how it works. Really interesting talk. So here, and conveniently the right slide is up, when you talk about finding the ground state being exponentially unlikely, and this is basically treated as a no-go: is that actually appropriate for optimization? Because you kind of know you're going to have to run exponentially long anyway, unless something very surprising happens in complexity theory. So is it justified to immediately rule out these cases where it's exponentially unlikely as having any advantage, or could you say, well, maybe there's still something there, you just do it many times? Sure. So notice that there is this dependency on T here. The longer the annealing time, the more you'll be concentrated around more and more trivial states.
And then at some point you'll be in the zone where classical algorithms are provably better than whatever you're doing. So in order to compare not to the ground state but to states you can reach with efficient classical methods, you would just have to go to larger T's, and then you would be exponentially concentrated in the zone outside of those strings. Does that make sense? Okay. So for moderate values of T, you still might be in this regime, but it could still sort of be okay. Yeah. Yeah. Okay, cool. Actually, I'm a little bit confused about the idea that with longer annealing time you always, at some point, lose the advantage. Because if you have a long anneal time with an annealer which has some finite temperature, then the interaction with the bath, at least at the later stages of the anneal, would, it appears, move the state towards the Gibbs state corresponding to the temperature of the bath. And if the temperature of the bath is sufficiently low that sampling the Gibbs state at that temperature is itself a non-trivial and useful task, then it looks like even very long anneal times at a finite interaction with the bath are useful. Can you comment on that? I can comment on that. I think this is best illustrated, let me go back here to this formula. So let's say that this parameter gamma here corresponds to the temperature of the fixed point. If you are assuming that the fixed point, the thermal state you're going to, is a very low temperature state, then this will blow up. So these prefactors, like the hyperbolic cosine of gamma and, more importantly, the hyperbolic sine, will blow up. So this bound will be sort of trivial unless you go to very large times. So this convergence will be sort of slow if your fixed point itself is very pure. But what you're saying then is: okay, it will take me a long time and I will converge to this state, which might actually be difficult to sample from, but the energy will always just be comparable to sampling from that. So I'm not sure that's useful for actually solving an optimization problem. Does that answer your question? So if the final Hamiltonian is the classical Hamiltonian in the annealing process, and the gap between the ground state and the first excited state is significantly larger than the temperature, then we actually converge, in the limit when the annealing time goes to infinity, to a state with very high probability of being the ground state. No, because of the noise, right? The noise will drive you to the fixed point of the noise, not to the solution. I mean, this is what this equation is telling us. So that's actually confusing, right? Because the quantum annealer implements some Hamiltonian, right? Yes. The Hamiltonian has a ground state, which is the state you're interested in. If the relevant interaction is with the bath, it's reasonable that if the temperature of the bath is small, then the state converges to the Gibbs state of that Hamiltonian, no? Well, okay, so maybe the confusion is coming from the fact that the sort of noise model I'm considering does not model the situation you're describing well. But if you accept my noise model, then this is not what happens; you're just going to converge to some sort of trivial state. Yeah, I think so. I think my confusion is that it seems this noise model is not accurate in at least some... I should say, for instance, control errors, right?
They are there, and control errors will just inject entropy, right? So this effect by itself will already be present, and they will always be there, right? Okay, so let's... Okay, the last question. Thanks for the very interesting talk. So I actually don't know if I misread it somewhere, but I'm a bit confused. In the final outlook slide, you had this point about looking into error correction and how it affects all the analysis that you do, but before that you had... No, no, no, it was actually earlier. You said that error mitigation does not change the general picture of your results. Yes, I think it was over here. I mean, it just said error mitigation does not change the picture, but I can go to the slide anyway, although it's not very informative. Yeah, sure. What were you considering here as error mitigation? How did you draw the distinction and how did you make this analysis? Yes, exactly. So what we show in our paper is the following way of quantifying that error mitigation is not very useful. First of all, we distinguish between two forms of error mitigation, and there's a wild zoo of error mitigation techniques out there, and it's very difficult to even properly define what it is and what error correction is. So maybe what you have in mind is not what I have in mind, but we distinguish between two sorts of error mitigation protocols. The first ones are those that allow you to extract expectation values of a noiseless computation from noisy ones, and what we show is that the overhead in the number of samples you would need to get a decent estimate is exponential. And another class of protocols we analyzed are these, I think they're called virtual cooling protocols. Essentially the idea is that you take many copies of your quantum state and you do some sort of generalized swap test to prepare something like the state rho to some power, and we show that the probability of such a procedure succeeding is exponentially small. So these are some of the things we considered, but again, as there is this wide zoo of protocols, there might still be something we didn't consider. There are comments in the chat. Oh, should I open them? Sorry, my computer is terrible. I cannot... Oh, I see. Okay, thanks. So I should actually say that you should look at the published version. The arXiv version is outdated, and because of the embargo, we cannot update it, so I recommend looking at the published version, not the arXiv version. Maybe the comments in the chat haven't been answered. Oh, they've been answered. So let's thank the speaker again.