of setting up this wonderful school and workshop on quantum science and technologies, and unsurprisingly this is what this talk will be about, yet in a very specific flavour. This will not be a review talk; there will be one particular question in the focus of all this, yet one that seems pretty much at the heart of the matter, which is: how can we hope for quantum devices to eventually provide some speedup over classical computers? So what is at stake here is how we can hope for quantum devices to, in one way or the other, computationally outperform classical supercomputers.

This picture here we have seen maybe rather too many than too few times, but what is undeniably true is that at some point the field needs to deliver strong evidence that quantum machines can do something we cannot do otherwise by computational means. That will surely, undoubtedly be true for the anticipated device of a universal quantum computer, which could solve some NP problems in polynomial time, so you could even check the correctness of the computation at the end of the day. That is great, and there has been all this progress in recent years and months towards realizing universal quantum computers; alas, we do not quite have them yet. What we do have, in contrast, are quantum devices, analog simulator systems: systems that mimic other quantum systems, that probe interesting physics, that maybe just simulate themselves, but systems where a very large degree of control can be reached and where, with a kind of top-down approach, one can have control over a very large number of degrees of freedom. These systems do exist. So what ultimately is their computational power, what are they doing? This seems much less clear, in fact, although it seems a crucial question. So, analog simulators: they are presumably not BQP-complete, they are not fully fledged quantum computers, but what is their precise computational power after all? Then, error correction, let alone fault tolerance, seems to be out of scope, that is out of the ballpark here — but is this a bug or a feature? I mean, do errors necessarily accumulate and make everything noisy and wobbly, or is there some hope for these guys to be stable in one way or the other? While I am not promising to present any kind of fully fledged answer to these questions, I do think they are crucial if you want to make the point that realistic devices, ones we may have now or in the near future, can promise a speedup over classical machines in one way or the other. I am also not saying that quantum simulation is all about speedups of that type, but if that is not available, then the entire field will be challenged, to say the least.

So let us assume that we have a good day and, anticipating that current simulators should be able to solve problems inaccessible to classical computers, we go into some lab and perform a simulation where we have good evidence that we could not have done it otherwise. So we have this sophisticated cold-atoms experiment, we do a computation, and the outcome of the simulation is, say, five. Great — is this correct? Well, how do you know? It is a hard problem, right? These are not NP problems; there is no witness that would easily check the correctness of the outcome. So how would we know we have done the right thing in an analog simulation? That shows part of the irony of this question, and the whole talk is a bit of a high-level joke, in the sense that the superior computational power of quantum simulators on the one hand and
the certification of the correctness of a simulation, in the absence of it being an NP problem, on the other hand seem to be intertwined in a highly intricate and somewhat ironic fashion. This will be the guiding question for the rest of this talk; again, no fully fledged answers, but we should definitely ask these kinds of questions. Okay, that sets the stage.

So, analog quantum simulators. The presumably most advanced simulators of that kind are systems based on cold atoms in optical lattices. They are kind of artificial condensed-matter lattice systems, formed by counter-propagating laser light, where the atoms sit in the potential minima of this artificial lattice. It is an interesting topic in its own right to talk about the implications of that type of architecture, and we might hear more about that in the subsequent talk, right? Yet for the present purposes of this talk, because we want to keep the focus, it is maybe good enough to say that such systems allow one to probe local Hamiltonian problems to remarkable accuracy for a large number of particles. In this context I like to cite Ian Walmsley, who, when presenting some optical protocol, once got a question from the audience — some smart ass asked, "oh, but can you do this as an asymptotic protocol?" — and Ian, with this nice Oxford accent, said, "look, I'm an experimentalist, we are not asymptotic people". Having said that, this is pretty much asymptotic: something like 10^4 or 10^5 particles are well approachable in this setting. One can look at ground-state properties of phases, at quenches — you quench a system out of equilibrium and look at the evolution in time — at slow evolutions like Kibble-Zurek type settings, at driven systems like Floquet-type driven systems; these are all accessible with that type of architecture, under extraordinarily precise conditions and in a coherent fashion, for 10,000 or even 100,000s of particles.

A plot that I like to show in this context is this one here, which relates to equilibration and thermalization in quantum many-body systems. In this experiment, this quantum simulation, what was prepared was an initial state of 0, 1, 0, 1, 0, 1 — a charge-density-wave initial state, if you want — and then this state was suddenly and quickly quenched to a fully translation-invariant Bose-Hubbard many-body Hamiltonian, and the evolution was monitored as a function of time. This is how it looks if you plot the particle density on the even sites as a function of time: initially there are no particles whatsoever on those sites, then there is some non-equilibrium dynamics, until for long times one gets, obviously, half a particle per site.
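For orientation — this is my own gloss on the setup rather than anything shown on a slide — the Hamiltonian behind such a quench is of the standard Bose-Hubbard form, and the monitored quantity is the sublattice density; the couplings J and U below are generic, not the specific experimental parameters:

```latex
% Standard Bose-Hubbard form; J, U and the sublattice labelling are generic choices.
\begin{align}
  H &= -J \sum_{\langle i,j \rangle} \bigl( b_i^\dagger b_j^{\phantom{\dagger}} + \mathrm{h.c.} \bigr)
       + \frac{U}{2} \sum_i n_i (n_i - 1), \\
  |\psi(0)\rangle &= |1,0,1,0,\dots\rangle, \qquad
  n_{\mathrm{even}}(t) = \frac{2}{N}\sum_{i\,\mathrm{even}} \langle \psi(t)| n_i |\psi(t)\rangle
  \;\xrightarrow{\;t\to\infty\;}\; \tfrac{1}{2}.
\end{align}
```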
There is a lot to say about this, but not today. For the present purposes it is only interesting to emphasize that this picture shows not only the quantum simulation, under very precisely controlled conditions in Immanuel Bloch's lab in Garching, as a function of time, but also, in blue, the re-simulation on a classical computer under the same conditions. That is not a fit, it is a re-simulation under the same conditions, and not only that, but it used the best algorithm available for that type of problem at the time, a matrix-product-state based algorithm, run on the fastest computer the German taxpayer could afford at the supercomputing centre, and it cost about five weeks of runtime per plot for six-thousand-by-six-thousand matrices, that is, a bond dimension of six thousand in the MPS. That is pretty much the upper limit of a publishable result, it seems fair to say — five weeks, roughly. And this is nice, because these algorithms have very strong predictive power: not only can you say something, but for what they can say one could in principle even give error bars, putting together Trotter errors and the MPS approximation error. Yet they only work for short times. For longer and longer times there will be a blow-up of the entanglement entropy, and there is no way one can faithfully represent the state anymore as an MPS. So there is a barrier — I will indicate the simple counting behind this in a formula in a moment — and no algorithm of that type can go further. But that creates a very interesting situation, in that the quantum simulation just keeps running — why would it care what we can do classically on our computers, right? So you can ask questions based on the data beyond the reach of the classical simulation, which is only used to build trust in the correctness of the quantum simulation. That is a baby step, but it is a step in this direction, in that there is scope to perform this. Long story short: short times can be efficiently approximated classically, while long times cannot.

Then, in a similar mindset, we looked at dynamical phase transitions, dynamical settings beyond the Kibble-Zurek setting, where one-dimensional settings could be completely re-simulated in all glory, in detail, with all error bars and so on, but 2D not yet — and the experiment in 2D is a very small modification of the 1D experiment, just changing the confinement. So again there is a realm of classical simulatability and a realm beyond it. Or in the many-body localization context, a question I am much interested in these days: again, 1D systems can be simulated, 2D not. This is sometimes underappreciated, but it is important to stress that even with present technology there are quantum simulators that outperform state-of-the-art simulations on classical supercomputers running the best algorithms to date, which is a very promising and good insight. So in this sense we are already there, in that there is evidence that present-technology machines can at least solve some problems — problems addressing questions interesting in theoretical physics, like equilibration times and so on — that cannot be kept track of on classical supercomputers.

Having said that — and I am actually happy to take feedback on this talk; I will keep it rather well in time to have some space for questions and discussion at the end — I can play devil's advocate and say: good, that is a nice baby step, but there could be clever simulation methods; who are we to think that we cannot simulate that system? I get, what, one email per month or so from people who re-simulate that plot that I showed earlier — it kind of provokes people to try, and that is a very interesting endeavour. However, there are a couple of fallacies one can fall into. One: it is not a fit — the first thing is that it is about functional dependence. It is not about reproducing one plot; you have to have the knobs of the initial preparation and reconstruct the functional dependence of this whole family of plots given the parameters that you can feed in, which is a much heavier problem. Second: it is about predictive power. It is not that you look at the plot in retrospect and say "oh, this must have been this"; it is about being able to predict this curve with this error bar. So even if some uncontrolled approximation may give rise to a similar picture, that does not mean that you could have predicted it with the same level of accuracy.
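Coming back to why the honest MPS re-simulation hits a wall at longer times: the standard counting argument, in my own gloss (the linear growth rate v is generic for global quenches, not a number from the experiment), is

```latex
% Entanglement entropy across any cut is bounded by the MPS bond dimension chi;
% a global quench generically makes it grow linearly in time.
\begin{equation}
  S(t) \,\le\, \log_2 \chi ,
  \qquad
  S(t) \,\sim\, v\,t
  \;\;\Longrightarrow\;\;
  \chi(t) \,\gtrsim\, 2^{\,v t},
\end{equation}
```

so the classical cost of that algorithm class blows up exponentially in the simulated time, while the quantum simulator simply keeps running.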
So there are a couple of fallacies that one could fall into. Having said that, there is no proof whatsoever that you cannot ultimately simulate these things on classical supercomputers. To be safe against that, you want to have an argument, a proof, based on notions of computational complexity, in the same way as when we say quantum computers are more powerful we mean there is a computational complexity claim that they can solve BQP problems outside BPP. So to be sure, we want to prove the hardness of the task, and we want to identify a feasible task, some intermediate problem that lies outside of BPP — sounds like a communist party, but it is a complexity class: classical probabilistic polynomial-time algorithms — but is not BQP-hard, so it cannot run an arbitrary quantum algorithm, because these devices are just not quantum computers. That is what is at stake: a feasible scheme that you can realize in the lab, yet that does something hard and interesting, not accessible otherwise. We have to deliver this.

Okay, so super-polynomial computational speedups. One candidate of this type is the famous boson sampling. It is set up to define some problem with strong evidence for a super-polynomial computational speedup — what is sometimes called quantum computational supremacy, not the nicest of all words, but it says what it says: some evidence for a speedup. This boson sampling problem is a very simple problem of this type. It is a beautiful setup, where you take n bosons in m optical modes, put them through a linear-optical multiport, and at the end of the day you make a photon detection, detecting the number of photons per mode. Let us do the experiment: 1, 0, 1, 0 — good, let us do it again: 1, 1, 0 — whatever. It is a quantum Galton board, it is a random thing; it is like a very expensive random number generator. Having said that, it is a very interesting random number generator, because the distribution is quite uniform, but not quite: there are intricate tails in this distribution, and it is so intricate that you cannot sample from this distribution on a classical computer, as has been shown under reasonable assumptions. This can be realized as a linear-optical multiport, as we have already seen in other talks this week. So this is the claim: you cannot classically sample from a distribution close in L1 norm to the ideal one, if the scaling of the number of photons and modes is right and the unitary is chosen in a Haar-random fashion. That is very exciting, because it is an experiment, it is very easy to do; it does not solve the most practical of all tasks, but it solves a problem that is inaccessible to classical computers. That is of course a very exciting premise, and it has motivated leading experimentalists to provide a number of beautiful proof-of-principle experiments of this kind, based on either integrated optical circuits or bulk optics, realizing small instances of such a machine, and that is surely a very important way forward.
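To recall where the intricacy of this distribution comes from — this is the standard boson sampling relation, not something specific to this talk — each output probability is governed by a matrix permanent, a quantity that is #P-hard to compute in general:

```latex
% Standard boson sampling output law; U is the m x m mode unitary, S = (s_1,...,s_m)
% the detected photon pattern, and A_S the n x n submatrix built from the first n
% columns of U with rows repeated according to S.
\begin{equation}
  \Pr[S] \;=\; \frac{\bigl|\operatorname{Perm}(A_S)\bigr|^{2}}{s_1!\,s_2!\cdots s_m!},
\end{equation}
```

which is where the evidence for the hardness of sampling ultimately comes from.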
Now, is this functioning? I can look at the state preparation and ask: can you verify the correctness of the state that is being prepared? Maybe I cut this short for reasons of time. I am just saying that we spent a lot of time on finding tools to verify the correctness of states based on measurements — not achieving tomographic knowledge, but asking: has the right state been prepared? And here I would say: this works for Gaussian states; for low photon-number states it is not quite possible to verify the correctness of this thing in that way. But there is a stronger and more ironic statement that is true at the same time, which is this one: take the boson sampler and remove the boson sampler. What I mean is — do not read this slide, it looks technical — the thing is, if you have a boson sampling quantum device, there is a slightly larger classical circuit with the property that you cannot distinguish the output of the one from the output of the other with polynomially many samples. Think about what that means: I claim I have a boson sampler, but I have just programmed my phone, and you have no way of falsifying that I have just programmed my phone. Polemically speaking, it is like I claim I have a supercar, and you look at it and it looks like a Volkswagen Polo — "no, no, it's a supercar", and you cannot even disprove it; you drive it, it accelerates like a Polo, it brakes like a Polo, it feels like a Polo, you look at it as a whole and it is like a Polo — and well, operationally it is indistinguishable from one, right? This is not a contradiction, but it shows that there is an interesting twist and irony between this super-polynomial power and the verification of it in the absence of having an NP problem. So again, by no means am I critical of that type of endeavour — in fact I am encouraging it — but it does show that there is an interesting twist, something still to be clarified here. And after all, that is not so surprising. One could say that in order to be able to verify a quantum simulation, one needs to be able to efficiently simulate it: if you can do it, you can check it; if you cannot, how can you check? These are not NP problems. That is a very commonly stated sentiment.

There are other examples of that type: IQP circuits have been discussed, random universal circuits, Ising-type interactions — these are alternative ways of achieving such a promise, with nice properties. Some of them have a hardness proof under L1 errors in an additive fashion, and that is not a detail: you want to be able to prove hardness under at least realistic additive errors, because no apparatus will ever be error-free, so in the absence of error correction you have to have that property. Yet these schemes are all extremely hard to implement, to say the least, and hard to scale up with present technology: either you need an arbitrary gate choice — which is fine, but then why not build a quantum computer — or they are translationally invariant, which is good, but with a periodicity of, say, 56: in the best previously known scheme, one unit cell is 56 qubits that you need to control, and only then is it periodic. You say good, but if the unit cell is 56 qubits, which is much larger than the best universal quantum computer we have to date, then that is a bit pushing it; I think that is a fair statement.
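Just to fix what a hardness proof "under L1 errors in an additive fashion" asserts — this is the generic template of such statements in my own notation, with p_D the device's ideal output distribution and ε a small constant, not a quote from the slides:

```latex
% Template of an additive-error (total-variation) hardness statement.
\begin{equation}
  \text{no classical poly-time sampler outputs } q \text{ with }\;
  \| q - p_{D} \|_{1} \;=\; \sum_{x} \bigl| q(x) - p_{D}(x) \bigr| \;\le\; \epsilon
\end{equation}
```

unless the polynomial hierarchy collapses — the point being that a constant total-variation error is exactly the kind of error a realistic, non-error-corrected device makes, so the hardness claim survives it.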
So there is more work to be done, which brings me to the last part of this talk already: can we think of feasible quantum simulators providing a speedup? We want to find problems with strong evidence for a super-polynomial speedup, but we want to combine the best of both worlds and bring speedups closer to experiment — not periodicities of 56 or intractable scalings, but something more reasonable, more realistic for present experiments, or even close to ones that are already there, yet still with that promise of a speedup. This is the point of the last bit of the talk, and then we can discuss.

Good. So what we want — or what I want, and this is not an unjustifiable desire — is to look at a Hamiltonian quench architecture that is reminiscent of the type of non-equilibrium quench quantum simulator settings that people are anyway interested in, and that I am also interested in, as I mentioned at the beginning, for probing stuff. Then you want a low periodicity of the interaction Hamiltonian: so please not 56, but one, maybe two, nearest neighbour — that is what you have in labs and in nature; you do not have finely tuned 56-qubit interactions (those are still local, but only in a mathematical sense). And you want hardness proofs with an L1-norm error bound under reasonable assumptions. There are a couple of architectures we have conceived, but I will only look at one, to leave time for discussion, which is this one here. To my understanding it is a very simple prescription — I have no imagination to make it simpler, though I am happy to get insights into that.

So what is it? It is on a square lattice — think of a cold-atomic architecture, a square lattice — and you have an initial state with all the spins, qubits, whatever you want to call them, in a product state, no correlations, chosen in a mildly random fashion. There is some randomness involved: each site is either in the zero state or in some tilted state, but that is all it is. You can think of this as the ground state of a disordered optical lattice — that has already been done in Immanuel Bloch's and Uli Schneider's labs — or you can think of it in another way; I just need a product initial state with some randomness, which is a highly reasonable initial state. Then one quenches for a finite time under a plain vanilla Ising Hamiltonian — no long range, no 56-body terms, a nearest-neighbour Ising Hamiltonian, which is presumably the easiest Hamiltonian that nature has in store; I think that is a fair statement. So it is just a normal Ising Hamiltonian, classical if you want, and you evolve under it for unit time. This is not only conceivable but has been realized in the lab, in one of the first experiments done in optical lattices — which did not really catch on so much, which surprises me — this cold-collision experiment, by these people when they were still together, was one of the earliest interesting cold-atom many-body experiments. So this part can be considered done. And the last step is that you just measure — no adaptation, no tuning, no whatever — you just measure out all the qubits in the X basis, one sampling measurement in the X basis. That is also not out of the way, although it might be the technologically most challenging aspect: there has been lots of work on single-site addressing in optical lattice experiments, which is a very interesting technique in many settings, yet one has to map the internal degree of freedom onto a particle density, I suppose, so this might be some issue — but by no means is it out of the way. I talked to Immanuel, and he says it is fine: not super easy, but also not difficult. So that is the scheme: a product initial state, a unit-time quench, and you measure.
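As a toy illustration of that prescription — a brute-force statevector sketch for a lattice small enough that exact simulation is trivial; the lattice size, the coupling pattern and the angle of the "tilted" state below are illustrative choices of mine, not the parameters of the actual proposal:

```python
# Minimal brute-force sketch of the quench-and-measure scheme described above,
# for a tiny lattice where exact statevector simulation is easy.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
Lx, Ly = 2, 3                       # tiny square lattice (the real scheme scales this up)
n = Lx * Ly
dim = 2 ** n

I2 = np.eye(2)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(site_ops):
    """Tensor product of single-site operators given as {site: 2x2 matrix}."""
    out = np.array([[1.0 + 0j]])
    for s in range(n):
        out = np.kron(out, site_ops.get(s, I2))
    return out

# Nearest-neighbour Ising Hamiltonian: sum of Z_i Z_j over lattice edges
H = np.zeros((dim, dim), dtype=complex)
for x in range(Lx):
    for y in range(Ly):
        s = x * Ly + y
        if x + 1 < Lx:
            H += op_on({s: Z, (x + 1) * Ly + y: Z})
        if y + 1 < Ly:
            H += op_on({s: Z, x * Ly + (y + 1): Z})

# Mildly random product initial state: each qubit is |0> or a fixed "tilted" state
zero = np.array([1, 0], dtype=complex)
tilted = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)], dtype=complex)
psi = np.array([1.0 + 0j])
for s in range(n):
    psi = np.kron(psi, zero if rng.integers(2) == 0 else tilted)

# Quench: evolve for unit time under the Ising Hamiltonian
psi_t = expm(-1j * H) @ psi

# Measure every qubit in the X basis: rotate with Hadamards, then sample bit strings
Hada = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi_x = op_on({s: Hada for s in range(n)}) @ psi_t
probs = np.abs(psi_x) ** 2
samples = rng.choice(dim, size=5, p=probs / probs.sum())
print([format(int(s), f"0{n}b") for s in samples])
```

The whole point of the scheme, of course, is that at experimentally relevant sizes neither this brute-force route nor, under the stated conjectures, any cleverer classical one reproduces these samples.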
Okay, so that is the scheme. Now the argument is that this is solving our problem: assuming three plausible complexity-theoretic conjectures that I will mention, one cannot sample from that distribution on a classical computer — though you can do the sampling in the experiment. So the statement is that this is maybe not the most practical of all problems, but it produces a problem that you cannot sample from classically. And the argument behind that makes use of the fact that it is #P-hard to approximate the output probabilities of this all-X measurement to constant relative error. I have some technical slides — I will maybe shrink them a little bit, but I am perfectly happy to provide answers — and let me hint at the techniques involved, because the setting is very simple (no adaption, no circuits, no gates, no nothing), but of course the underlying argument is long and winding and goes through all kinds of quantum information techniques. In particular, the setting is matched to a non-adaptive measurement-based quantum computing scheme, which is then mapped onto a certain type of random circuit involving controlled-Z, Hadamard and T gates — that is what is under the hood, it is a random circuit under the hood. The universality of the computation in the post-selected sense is crucial for this argument, and the entire mathematical hardship of the argument is to apply Stockmeyer's algorithm and to relate the hardness of approximating individual outcome probabilities to the hardness of the approximate sampling problem. The whole thing circles around a proof by contradiction: one would end up with an FBPP^NP algorithm for solving #P-hard problems, which would result in the collapse of the polynomial hierarchy to the third level. This comes together with certain arguments involving the anti-concentration of probability distributions, which I will say something about in a second.

Maybe I should highlight the assumptions going in here. All these arguments have some assumptions, but we should be aware that an assumption in computer science means something less severe than in physics. In physics we say the assumption is mean field — a very strong assumption. In computer science I say that the polynomial hierarchy is infinite — an extremely weak assumption; it basically says that P is not NP. Fine, it is an assumption, but if that is not true, then why are we here, right? Why is there internet security? Then there is an average-case complexity assumption, namely that a finite fraction of these instances is as hard to sample as the worst case. Strictly speaking that is also an assumption, but let me remind you that it is the same kind of assumption you make when you rely on the security of your internet banking, because — and people should be aware of this — factoring is an NP problem, of course, but by no means is it understood in what regime the hard instances lie, unlike for lattice problems, say, where tools like LLL give a much clearer picture. So it is not so easy: there is an average-case complexity assumption behind internet banking, behind HTTPS, whatever that is; so this is, strictly speaking, an assumption. And then there is an anti-concentration bound, which is a property of a distribution; here we have very strong numerical evidence for it, and in a mild variant of this problem we have a rigorous proof of it — interesting method development in its own right, to close the different loopholes in this type of argument. If you are interested, ask me about it; there is methodological work on rigorous statements about random circuits involved here.
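Putting the pieces of the argument together schematically — this is my own compressed rendering of the standard shape of these proofs, not a quote from the slides:

```latex
% Schematic chain behind the proof by contradiction: a classical sampler plus
% Stockmeyer's approximate counting would estimate output probabilities within
% FBPP^NP; with the #P-hardness of that task this squeezes P^#P into the hierarchy.
\begin{equation}
  \text{classical sampler for } p_{D}
  \;\overset{\text{Stockmeyer}}{\Longrightarrow}\;
  \text{estimating } p_{D}(x) \in \mathrm{FBPP}^{\mathrm{NP}}
  \;\overset{\#\mathrm{P}\text{-hardness}}{\Longrightarrow}\;
  \mathrm{P}^{\#\mathrm{P}} \subseteq \mathrm{BPP}^{\mathrm{NP}} \subseteq \Sigma_{3}^{\mathrm{p}},
\end{equation}
```

which, since the polynomial hierarchy sits inside P^{#P} by Toda's theorem, would collapse it to the third level, contradicting the first assumption; anti-concentration is what lets the argument tolerate the additive rather than relative error of a realistic sampler.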
But under all these assumptions, one can go into the lab, perform this experiment under reasonable technological assumptions, using techniques that are not only conceivable but already available with present or past technology, and the quantum simulation is intractable on classical computers. So that was my main message. There is one add-on, which is: fine, but how about the irony? I was almost making this whole point about the irony, right — it was supposed to be a high-level joke. So again, how would we know that it is correct? It is not an NP problem, so again the same burden — well, "burden" is maybe too strong a word, this should not sound negative — the same intertwinement could be there. But here now there is this interesting twist, which is that one can, with of the order of n many measurements, verify the correctness of the prepared state in L1 norm, and even of the distribution in L1 norm, by making measurements that are very similar to the ones that you would make anyway. And that is not just building trust in the computation, or some evidence of some kind: it really bounds the right quantity, the L1 norm of the distribution — the very distribution that is used in the hardness argument. That is extremely interesting: you can go into the lab, you can make measurements, and ultimately either the red light goes on, saying "we have to try harder, do another run, make a better measurement", or the green light goes on — and if the green light goes on, it is not just evidence or trust; you have verified that this computation as such is doing the right thing, which is a nice feature, right? So you can verify the correctness of this.

So what is happening here? Oh — stop, stop, let me go back. This was the prejudice: that in order to be able to verify a quantum simulation, one needs to be able to simulate it efficiently. But this is not quite true. There are some settings, delicate settings, in which a trustworthy quantum simulator can be verified: it is not an NP problem, but you can verify the correctness even if the classical simulation is beyond reach. What is coming out I cannot predict — you have to go to the lab, you have to do the quantum simulation — but you can verify the correctness; it is one bit, "correct", that you add to the simulation. It is an interesting observation that being able to prove correctness is something weaker than making the prediction; that is logically interesting, I find — you do not have to find it interesting.

Okay, now I am really at the end of my talk. The whole talk was about the question in what sense it can be feasible to realize quantum devices that outperform classical machines while not being fully fledged quantum computers; that is something we have to deliver soon, to make the point for this field that we can do something interesting now. The positive note is that there is hope that feasible quantum simulators can show a super-polynomial speedup in solving some problem. It is not fault tolerant, but that is a feature, not a bug: we do not want to need fault tolerance — of course you would want fault tolerance, but if it is not available, then we should find ways of getting around it. It is error resilient in the sense of additive errors, and it is certifiable: if the error levels are small enough, the green light goes on, or the red light if there is too much noise — it is error detecting, if you want. So one can efficiently assess the correctness even if the simulator exhibits quantum computational supremacy. It is philosophically also interesting that you can verify something you cannot predict — being able to say "this statement is right" without being able to produce it in advance — that is kind of an interesting twist.
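To connect the certification claim to the L1 quantity appearing in the hardness statement — this is the standard chain of bounds, in my own notation, with ρ the prepared state and ψ the ideal one:

```latex
% A fidelity witness bound translates, via the trace distance, into an L1 bound
% on the outcome distribution of any subsequent measurement.
\begin{equation}
  \langle \psi | \rho | \psi \rangle \;\ge\; 1 - \epsilon
  \;\;\Longrightarrow\;\;
  \| p_{\rho} - p_{\psi} \|_{1}
  \;\le\; 2\, D_{\mathrm{tr}}\!\bigl(\rho, |\psi\rangle\!\langle\psi|\bigr)
  \;\le\; 2\sqrt{\epsilon},
\end{equation}
```

so a sufficiently good fidelity estimate certifies exactly the L1 closeness of the sampled distribution that the hardness argument is phrased in.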
So that is all I had to say, except for an outlook: this is great, but to be fair it is not the most interesting of all schemes from a physical perspective. Quantum simulation is about lots of things; it is also about probing interesting quantities. What we presented is already a baby step in that direction — it is a disordered Hamiltonian quenched in a many-body sense, so it already has the right flavour of a proper quantum simulation — but we want to be more practical, link it even more to MBL, to actual non-equilibrium problems that people are anyway interested in; an important way forward is to connect it more to physically important schemes. Then robustness: how can we think of quantum simulators as being intrinsically robust, in one way or the other, so that errors do not accumulate too much? And finally, readout, readout, readout: there is room for more techniques, tomographic tools and improved readout methods — something we are working on ourselves — to get new windows into that type of system. So, quantum simulators: I have made the point of how closely intertwined the computational speedup is with the verification question. This is sometimes underappreciated — it is the lame duck in this game — but I think certification and verification is at the heart of the matter if we want to progress. Which is the perfect moment to stop my talk, and I thank you very much for your attention.