Thanks, and thanks for the kind invitation, and thanks to Jay and David for all the effort and sweat of setting up this wonderful conference on approximate quantum computing. Unsurprisingly, this is what this talk will be about, as we look at steps towards quantum advantages of quantum simulators. Now, obviously I'm a bit embarrassed about Rob teasing me yesterday concerning talks whose titles start with "towards". Well, here it is really meant in the sense that while it may seem a long way for experimentalists to actually achieve such quantum advantages in quantum simulators, it may be less far off, more around the corner, than one might actually think.

So there is one question in the focus of this talk, which was also much in the focus of yesterday's talks and will even more so be in the focus of today's: when, and in what precise sense, can we hope for quantum devices doing approximate quantum computing to provide some kind of computational speedup over classical computers? So we are extending the theme of Mick yesterday, and also of Robin and Aram and others.

Now, the heart of the matter is that this has become much less of an academic question, simply because it has become more likely that one can come up with a positive answer to achieving such a quantum advantage, not least due to the fact that some protagonists have decided to actually build such devices. There has been a lot of funding, in the private and the public sector, for actually building such quantum devices; the picture shown here is the European ramification of this, creating lots of hype and also much interest, mostly for good reasons I would say.

At the right-hand side of this diagram we see the fully fledged universal quantum computer, and for those, since we know that they can solve some NP problems in polynomial time, the question of a speedup can be considered settled. There have also been enormous efforts to actually build such devices, and I am humbled to be in the epicenter of one of these endeavors of realizing a 20 to 50 qubit superconducting quantum computer; I'm much excited to learn more about this later today and tomorrow. Exciting as this is, it seems fair to say that we still have quite a long way to go towards a fully fledged, Shor-class, fault-tolerant quantum computer.

What we do have at present, as of today, are quantum simulators, and that is the focus of this talk: systems that allow for high levels of control and precision, but not quite enough to achieve a universal quantum computer. One thing, though: they are large, so their asymptotic limit is in a way built in. They are the dirty and the ugly brothers of quantum computers, if you want. But then new questions pop up for these analog quantum simulators. Citing this gentleman here: they in a way simulate themselves, right? They are surely not BQP-complete, so what is their precise computational power? Then, error correction, let alone fault tolerance, is unavailable. So is this a bug or a feature?
I mean, is this just drowned by noise, or is there some hope for some robustness of quantum simulators, or at least some spatial or temporal window where one can hope to do something interesting along the way? So at best we can hope for a notion of approximate quantum computing, and that is at the heart of the matter here. But the good news is that we do have these dudes in the lab.

So let's assume that one good day I go into the lab — well, that was a joke, somebody else goes into the lab — and is convinced of doing a quantum simulation that is interesting. Clearly, quantum simulators should solve problems that are physically interesting but also inaccessible to classical computers, so what you see should relate in one way or the other to a problem that is computationally hard in a precise sense. So let's assume this gentleman or this woman does something and performs a measurement at the end of the day, and the answer is 5. Is this correct? Well, how will we know? It's a hard problem, and these are not NP problems, so we cannot efficiently check the correctness of the quantum simulation. So how do we know we have done the right thing? We will see — and we will meditate on this throughout the talk — that the question of a superior computational performance on the one hand, and of certification on the other, are intertwined, almost two sides of the same coin. Here it will come in the flavor of achieving a reasonable and testable quantum advantage in a quantum simulation; that is what is at stake here.

Good. Okay, so let's get going on analog quantum simulators. The most advanced architecture for that type of endeavor is cold atoms in optical lattices. There is a lot to say about that, which I will not do today — it would be a talk in its own right; ask me about it if interested. For the present purposes, I think it's good enough to say that these are very large scale lattice simulators. I'd like to cite Ian in this context, who gave a beautiful talk on some optical protocol, and then some guy in the audience asked, oh, but can you make this an asymptotic protocol? And Ian said: we are experimentalists, we are not asymptotic people.
But well, this is not quite true here; this is kind of asymptotic in the sense that you have good control over 10^4 to 10^5 sites and particles in such an optical lattice system. You can probe ground state problems, like topological order in a ground state. You can do sudden quenches and look at the time evolution generated by a local many-body Hamiltonian for longer and longer times. You can look at slow evolutions reminiscent of adiabatic quantum computing, and so on — if interested, ask me about it.

There is a lot to say about this, but I will only say one thing, a picture that I like to show in this context and have shown before, in the context of the foundations of statistical mechanics and how equilibration and thermalization come about — I think you will say more about this in your talk. It is a complicated story, again ask me, but roughly speaking: you have control over a one-dimensional system of about a hundred sites, as an order of magnitude, and you can prepare it in a 1, 0, 1, 0, 1, 0, ... charge-density-wave initial state. Then you suddenly quench the system to a fully interacting many-body Hamiltonian and monitor the number of particles on the odd sites as a function of time. Initially there are no odd particles whatsoever, then there is a complicated dynamics, and for long times it will eventually equilibrate. That's great, and this is a many-body simulation if you want. But this picture not only shows the quantum simulation in the lab; it also shows — not a fit, but a classical re-simulation of the same problem on a classical supercomputer, shown as the blue curve — and it seems fair to say that the agreement is very good.

For our purposes, that is not just some classical simulation: it was done using the best algorithm for that type of problem at the time, based on matrix product states. It takes about five weeks of runtime per plot for a PhD student, and it was run at a national supercomputing centre, on the fastest computer in Germany — which is a pretty large economy, after all. So this is kind of the upper limit of a publishable numerical result, if you want, and the agreement seems good. The interesting feature is that for short times you can do this, even in a rigorous manner if you have to, so you really know what's going on — except that at some point you reach a barrier and you can no longer simulate, because the entanglement growth is too much and you can no longer faithfully represent the state at hand with a matrix product state. But that generates the interesting situation that the experiment just keeps running — why would nature care what we can efficiently capture with our classical computers? — and you can ask questions about the problem with the quantum simulation, where the classical simulation is only used in order to build trust in the correctness of the quantum simulation. It's an interesting state of affairs. To cut a long story short: short times can be efficiently simulated, long times not, and that's an interesting tension if you want.
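To make the protocol just described concrete, here is a minimal toy sketch — not the actual Bose-Hubbard experiment or the matrix-product-state code used for the published comparison, but a tiny hard-core-boson chain with assumed parameter values, evolved by brute force — of a charge-density-wave quench where the odd-site occupation is monitored over time.

```python
# Toy stand-in for the quench described above (assumed model and parameters,
# not the published Bose-Hubbard simulation): a small chain of hard-core bosons,
# prepared in the |1,0,1,0,...> charge-density-wave state, quenched and evolved,
# while the occupation of the odd sites is monitored.
import numpy as np
from scipy.sparse import identity, kron, csr_matrix
from scipy.sparse.linalg import expm_multiply

L = 10                       # number of sites (tiny, so exact evolution is feasible)
J, U = 1.0, 0.5              # hopping and nearest-neighbour interaction (assumed values)

# local basis per site: index 0 = occupied, index 1 = empty
sp = csr_matrix(np.array([[0, 1], [0, 0]], dtype=complex))   # b^dagger (creation)
sm = sp.T.conj()                                              # b (annihilation)
n_op = sp @ sm                                                # number operator
I2 = identity(2, format="csr", dtype=complex)

def site_op(op, i):
    """Embed a single-site operator at site i into the full 2^L-dimensional space."""
    ops = [I2] * L
    ops[i] = op
    out = ops[0]
    for o in ops[1:]:
        out = kron(out, o, format="csr")
    return out

# H = -J sum_i (b_i^dag b_{i+1} + h.c.) + U sum_i n_i n_{i+1}
H = csr_matrix((2**L, 2**L), dtype=complex)
for i in range(L - 1):
    H = H - J * (site_op(sp, i) @ site_op(sm, i + 1) + site_op(sm, i) @ site_op(sp, i + 1))
    H = H + U * site_op(n_op, i) @ site_op(n_op, i + 1)

# |1,0,1,0,...> product state: even sites occupied, odd sites empty
psi = np.array([1.0 + 0j])
for i in range(L):
    psi = np.kron(psi, np.array([1, 0], dtype=complex) if i % 2 == 0
                  else np.array([0, 1], dtype=complex))

# total occupation of the odd sites (starts at zero, relaxes under the dynamics)
N_odd = csr_matrix((2**L, 2**L), dtype=complex)
for i in range(1, L, 2):
    N_odd = N_odd + site_op(n_op, i)

for t in np.linspace(0.0, 5.0, 11):
    psi_t = expm_multiply(-1j * H * t, psi)
    print(f"t = {t:4.1f}   <N_odd> = {np.vdot(psi_t, N_odd @ psi_t).real:.3f}")
```

The matrix-product-state calculations mentioned in the talk do much better than this brute-force approach at short times, which is exactly the tension discussed: the exact state vector blows up in memory, and matrix product states break down once the entanglement has grown too much.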
There are related settings of such a kind, say when you probe Kibble-Zurek type mechanisms or many-body localization — I think we will hear more about this later, and again, if interested, ask me about it — where you have the situation that in 1D you can really hammer down the problem completely numerically, with all glory and error bars: you can just simulate it fully and see what's going on in the lab. Yet 2D is completely out of reach. But the experimental step from 1D to 2D is a very small one; it just requires some tuning of the confinement. So again, 1D systems can be done in order to check the correctness of the simulation, but then you go to 2D and enter a realm that you can no longer keep track of.

This is sometimes underappreciated, but it is important to stress: already there are existing quantum simulators that in a way outperform state-of-the-art simulations on classical supercomputers, in the sense that they outperform the best performance using the best algorithms to date. This is great — a nice baby step in the right direction, and I'm convinced it is true. But then one can play devil's advocate. Most physicists are already happy with that type of explanation, I would like to add, but you can also play devil's advocate and say: ah, there could be a better simulation method for the same type of problem, and how am I to say that this cannot be true? In fact, I often get emails from people who have re-simulated this with another method, which is extremely exciting — it's a nice challenge of sorts, and that is very interesting.

Let's be careful, however, not to fall into one of two fallacies. One: it's not about reproducing just one plot, but about reproducing a full functional dependence. You have an array of knobs you can turn, and you want to faithfully approximate this whole array of plots in an efficient fashion, which is a more demanding constraint. And two, it's about predictive power: you don't want to just generate a plot that you then compare with other plots; you want to make a plot and say, up to that accuracy, I know what's going on, at least in the classical simulation. That could be a fair viewpoint. But I'm not saying that there cannot be a classical algorithm that does this — it's a nice challenge and one should do these things; there could be clever methods of doing the same thing classically as well. Which is why, to be sure, we would like to prove hardness of the problem and identify a feasible task that lies outside BPP. It need not be BQP-hard — it may be some intermediate problem that is not so interesting in itself, or whatever — but it should be some problem that for good reasons would be computationally hard on a classical computer. So we are in the realm of super-polynomial computational speedups, kind of building on the themes of Mick, Robin, Aram and others from yesterday. The aim is not to solve the world's problems — that would also be good, but not for today — but to find some problem, no matter how interesting in itself, for which there is strong evidence of a super-polynomial computational speedup; that endeavour is a kind of milestone.
This was formerly known under the concept of quantum computational supremacy, the infamous s-word. That term has fallen a bit into disfavor and I no longer often use it, but it is clear what is meant: it's the quantum advantage — you want to do something for which you have good evidence that you are solving a problem you could not solve otherwise.

Now, one of the most cited problems along these lines is the famous boson sampling problem, which goes back to work by Scott Aaronson and Alex Arkhipov. It is a very simple prescription, at least on paper: you have n bosons, like photons, in m optical modes, which are sent through a linear optical multi-port governed by a Haar-random unitary U that makes a mode transformation on these bosonic modes, and at the end of the day you measure the particle numbers — say 1, 0, 1, 0; let's do it again, 1, 1, 0, 0; whatever, it's random, like a quantum Galton board. It produces some distribution that actually looks pretty uniform — it's just some distribution — but the heart of the matter is that the distribution is so intricate that you cannot do the same sampling on a classical machine. In fact, sampling from a distribution that is close, up to an additive error in the L1 norm (the total variation distance), to the true boson sampling distribution is computationally hard: it would lead to a collapse of the polynomial hierarchy to the third level, with high probability, if the unitary U is chosen from the Haar measure and m, the number of modes, increases sufficiently fast with the number of photons in the system. That was exciting to many linear optics people, who — at least proof-of-principle wise — ran into the lab and generated this, because it is an exciting setting in which to think of a quantum advantage in an architecture that doesn't require a fully fledged quantum computer, just a simple sampling prescription along these lines.

Now, how can we be sure that we have done the right thing? Can we check that the state prepared at the end is the state we think we have? This can be nicely formulated in terms of a membership problem, or a weak membership problem, or some variant thereof. It is a bit like direct fidelity estimation, where you would do measurements, and you can almost do it: for a fixed photon number and an arbitrary number of modes, you can, with physically realistic measurements, certify that you are close in fidelity, or in trace norm, to the right state. But a fixed photon number is not quite enough, because for boson sampling you need to scale up the boson number to have an interesting problem. So: good, but not good enough. Actually, what is even more interesting is that if you just look at the black-box verification setting, the following is true: if you have a quantum circuit, there is always a slightly larger, efficiently operating classical circuit whose output cannot be distinguished from the quantum distribution from polynomially many samples alone. So this doesn't take anything away from boson sampling.
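To make the prescription concrete, here is a minimal brute-force sketch of boson sampling for tiny photon numbers — a Haar-random mode unitary, output probabilities given by matrix permanents, restricted for simplicity to collision-free outcomes. The sizes and the collision-free restriction are illustrative assumptions for this sketch, not part of the actual hardness statement, and everything here is exponentially expensive by construction.

```python
# Brute-force illustration of the boson sampling prescription (toy sizes only):
# n photons in m modes, a Haar-random mode transformation U, and output
# probabilities of collision-free patterns given by |Per(U_{S,T})|^2.
import numpy as np
from itertools import combinations, permutations

rng = np.random.default_rng(7)
n, m = 3, 8                      # photons and modes (illustrative, tiny values)

# Haar-random unitary via QR decomposition of a complex Gaussian matrix
Z = (rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))) / np.sqrt(2)
Q, R = np.linalg.qr(Z)
U = Q @ np.diag(np.diag(R) / np.abs(np.diag(R)))   # phase fix to get the Haar measure

def permanent(A):
    """Brute-force permanent over all permutations -- fine for the tiny n used here."""
    d = A.shape[0]
    return sum(np.prod([A[i, sigma[i]] for i in range(d)])
               for sigma in permutations(range(d)))

inp = tuple(range(n))            # photons enter in the first n modes: |1,...,1,0,...,0>

# probability of each collision-free output pattern via the permanent of a submatrix
outputs = list(combinations(range(m), n))
probs = np.array([abs(permanent(U[np.ix_(inp, out)]))**2 for out in outputs])
probs /= probs.sum()             # renormalise over the collision-free sector only

# draw a few samples from this (restricted) boson sampling distribution
for idx in rng.choice(len(outputs), size=5, p=probs):
    pattern = ["1" if k in outputs[idx] else "0" for k in range(m)]
    print("sample:", "".join(pattern))
```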
I mean, it's an extremely interesting setting, but it generates a kind of ironic twist to the story: you have a quantum device delivering a superior performance, yet there is a slightly more complicated classical prescription that operationally looks the same as the quantum prescription. You have a quantum supremacy machine, and it is not operationally distinguishable from a classically efficiently working machine. That is an interesting twist to the story, and it shows how important it is to think about verification schemes. Well, it is what it is; there is no need to be very surprised about it. Putting it sloppily, you might say: sure, that's not too surprising, because in order to be able to verify a quantum simulation one needs to be able to efficiently simulate it — if you can, that's great; if you cannot, then you cannot, and you can make measurements to build trust in the correctness of the simulation or so, but ultimately there is no efficient verification scheme that settles this. That is a commonly stated sentiment.

Now, along these lines — and I think Nick also gave a nice overview of these ideas yesterday — there has been a lot of effort in elaborating on schemes that address such intermediate problems, beyond boson sampling. We heard about IQP circuits, then variants in terms of random universal circuits — the famous Google endeavour of realizing quantum advantages in superconducting qubit machines, as we also nicely heard yesterday. There are schemes based on Ising-type interactions, which are extremely interesting and closest to what I will say in the rest of the talk. However, to be fair, that involves a setting which is periodic but whose unit cell contains 56 qubits before the system repeats itself. This is extremely interesting and very nice and paradigmatic, but it's not very practical to think of a machine that has a period of 56 qubits to start with — paradigmatically, of course, it is very exciting.

So the good news about these settings is that they are provably classically hard with additive L1 errors, under reasonable assumptions that we will come to later in this talk. At the same time, it seems fair to say — at least if we are thinking of the large-scale, quantum-simulation-type settings we have in mind here — that they are very hard to scale up with present technology. That is surely the case for boson sampling, where things like mode matching will most likely eat you up if you go to larger systems. Then, some schemes require arbitrary gate choices, which is good, but can be, depending on the architecture, realistic or extremely demanding; and it is also very close to a universal quantum computer after all — if you have that, you can also build a quantum computer. In the type of setting we have in mind here, it seems totally out of the question to think of a fully gate-based local quantum circuit that you build up, so that is not very realistic. And then, periodic Hamiltonians are natural to think of in optical lattice systems —
they are standing waves made from counter-propagating laser light — but if you think of a period of 56 in the lattice, then experimentalists will frown and say: okay, it's paradigmatic and very interesting, but it's not a thing you can actually realize in any realistic lab in our lifetime. Still, it is very interesting.

Now, what I will say in the rest of this talk is to what extent it is possible to think of feasible quantum simulators showing a speedup that combine the benefits of both worlds and bring the speedups closer to experiment, precisely with the type of many-body architecture in mind that I alluded to at the beginning of the talk. Mick had this nice picture of a space-time trade-off: here we allow for slightly more space, but bring the time down in an extreme fashion, as I will elaborate on in a second.

So what do we want? We had visited a desiderata list in Shelby's talk yesterday, which was very nice — I'm also writing a paper with her that has these boxes ticked. What we want to achieve here is, first, a Hamiltonian quench architecture that is very similar to the actual quantum simulations that probe physically interesting many-body problems — a reasonable type of architecture in this setting. Then, we do want a periodic setting, but not with periods of 56 or something large, and we want Hamiltonians which are not long-range, but nearest-neighbour or at best next-nearest-neighbour — the kind of thing you can reasonably hope to implement in a realistic setting of that type. And we want a hardness proof with L1-norm errors under reasonable assumptions. That is the desiderata list, and that is what we elaborate on in the rest of this talk. The point I am making — which is also the last and key point of this talk — is that yes, you can do this; yes we can. There are schemes of such a type that live up to these expectations, and we have a couple of them, but for reasons of time — and out of respect for the next speaker and the coffee break — I will only hint at one; I'm very happy to discuss more in any break you wish. Some are periodic, some have a bit of randomness, some are completely translation-invariant; there are different flavours, but all are kind of simple, if you want. So let's look at this one. What is it?
You have a square lattice, a d-by-d lattice, and you prepare the system initially in a product state. This has some randomness, but only in whether each site is in |0⟩ or in a slightly tilted state — a simple initial product state. There are also schemes without randomness, but this one has a bit of randomness, fine. To emphasize: this is reminiscent of ground states of disordered optical lattices, something that generates a lot of interest in the condensed matter and cold atoms community. This has been done recently, for instance in Immanuel Bloch's lab, preparing such a ground state of a disordered model, and I think we will continue along these lines in a later talk on many-body localization. So this is not completely out of the way; it can reasonably be done in labs as of today.

The next step is a unit-time evolution under a Hamiltonian — not a 56-periodic Hamiltonian or whatnot, but a plain vanilla nearest-neighbour Ising Hamiltonian as you know it from high school. So: a nearest-neighbour Ising Hamiltonian, evolved for one unit of time, with the unit accounted for in the coupling strength of the Hamiltonian. Now, this is not only feasible in optical lattices — in fact it was done a long time ago, in one of the first experiments in optical lattices, using hyperfine levels and controlled collisions; that basically implements an Ising Hamiltonian in an optical lattice. Interestingly, this idea even predated the idea of the cluster state and measurement-based computing; in fact those were born out of it — people first thought, oh, that's an interesting state, can you do something with it, and then the cluster state and measurement-based computing emerged out of that endeavour. So my point is that this is not only realistic, it is something that has been done long ago in the lab: such a quench for unit time.

And the last bit: you just measure the thing, measure it out — you measure all qubits in the X basis, you do a sampling measurement in the X basis. This is also not unrealistic in such an architecture: there is single-site addressing, again something creating much interest in this community. It seems fair to say that this is the most challenging step, because you really need single-site resolution, it has to be a good measurement, and so on; this is possible within limits. So if there is some hesitation, it is here — this may not be completely around the corner. But to be fair, you can do single-site-addressed measurements in realistic large-scale optical lattice experiments.

So this is the setting: a product state, one unit of time — like a unit-depth circuit — and you measure the thing out. That is the scheme. Now, maybe I'm lacking imagination, but I find it difficult to think of a simpler scheme than preparing a state, evolving for one unit of time, and measuring the state out.
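Here is a small statevector sketch of the scheme just described — random product-state preparation, unit-time nearest-neighbour Ising evolution, all-qubit X-basis readout — on a toy lattice. The specific tilt angle, coupling value and lattice size are assumed for illustration; they are not the parameters of the actual proposal.

```python
# Toy statevector sketch of the quench-and-measure scheme (illustrative parameters,
# tiny lattice; the real proposal concerns large 2D lattices).
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
Lx, Ly = 2, 3                                   # tiny lattice, N = 6 qubits
N = Lx * Ly
sites = {(x, y): x * Ly + y for x, y in product(range(Lx), range(Ly))}
edges = [(sites[(x, y)], sites[(x + 1, y)]) for x in range(Lx - 1) for y in range(Ly)] + \
        [(sites[(x, y)], sites[(x, y + 1)]) for x in range(Lx) for y in range(Ly - 1)]

# 1) random product input: each qubit is |0> or a slightly "tilted" state (assumed angle)
zero = np.array([1, 0], dtype=complex)
tilt = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)], dtype=complex)
psi = np.array([1.0 + 0j])
for _ in range(N):
    psi = np.kron(psi, zero if rng.random() < 0.5 else tilt)

# 2) unit-time evolution under H = sum_<i,j> J Z_i Z_j: the Ising evolution is diagonal
#    in the computational basis, so we apply the phases directly to each basis state.
J = np.pi / 4                                   # coupling times evolution time (assumed)
phases = np.zeros(2**N)
for b in range(2**N):
    bits = [(b >> (N - 1 - q)) & 1 for q in range(N)]
    z = [1 - 2 * bit for bit in bits]           # Z eigenvalues +-1
    phases[b] = sum(J * z[i] * z[j] for i, j in edges)
psi = np.exp(-1j * phases) * psi

# 3) measure every qubit in the X basis: rotate with Hadamards, then sample bit strings
H1 = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Hall = np.array([1.0 + 0j])
for _ in range(N):
    Hall = np.kron(Hall, H1)
p = np.abs(Hall @ psi)**2
p /= p.sum()
for b in rng.choice(2**N, size=5, p=p):
    print("X-basis sample:", format(int(b), f"0{N}b"))
```

Because the Ising evolution is diagonal, all the quantumness of the output distribution comes from the interplay of the tilted product input, the entangling phases, and the final basis rotation — which is exactly why the circuit can stay at unit depth.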
So this is what it is, and the statement is that this produces a scheme showing a quantum advantage in the same sense as IQP circuits and boson sampling and so on: assuming three highly plausible complexity-theoretic assumptions, or conjectures, a classical computer cannot efficiently sample from the output distribution of our scheme up to a constant error in the L1 distance, unless one accepts a collapse of the polynomial hierarchy. So it is a similar kind of statement, but again: it is a unit-depth circuit, it is kind of realistic in experiment, and there is an added feature that I will highlight at the very end of the talk, which is a nice feature to have in such a setting.

Now, to leave some time for meditation at the end — we are 26 minutes into the talk — I will only say a little about the type of argument; there is also the coffee break coming up. The logic of the argument is maybe not so surprising to the experts; it's a long and winding story in detail, because you have to go through things, but the overall logic is not so hard to capture. Ultimately, the whole hardness of approximating the outcome distribution in the worst case comes from it being #P-hard to approximate the output distribution of an all-X measurement on the circuit up to a constant relative error. The overall argument is the following. The physical scheme you implement makes no use of any gates or adaption or post-processing or whatever — you just quench and make a sampling measurement — but this setting can be related to a measurement-based scheme on a cluster state with non-adaptive measurements in the XY plane, which in turn can be related to a random circuit involving T gates, Z gates, controlled-Z gates and Hadamard gates, which is post-selected universal. That alone is good enough to show, up to multiplicative errors in the total variation distance, that if you could classically sample from it you would get a collapse of the polynomial hierarchy. Yet such a small multiplicative error would be very demanding to achieve, so this can be beefed up to a more sensible additive error in the total variation distance by making use of the logic of Stockmeyer's argument. That argument takes as input a classical algorithm sampling from the output distribution of a unitary U′ which is close, up to an additive error, to the true circuit you would want to realize, together with a binary string playing the role of the actual output of the sampling that you get at the end of the day; and what it produces is the approximate probability of getting precisely that outcome from this approximate circuit, up to a multiplicative error. Now, this approximate probability for the approximate circuit can be related to the true probability of the true circuit by making use of the Markov inequality and a property of the output distribution called anti-concentration; if you combine these in a good fashion, you see that you get a hardness argument for an additive error in the L1 distance for the full sampling setting. So ultimately, if you could classically sample from that type of distribution, you would again get a collapse of the polynomial hierarchy.
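Schematically, the additive-to-multiplicative step just sketched can be written as follows; the constants here are illustrative only, not the precise statements of the actual proofs.

```latex
% Schematic of the additive-to-multiplicative step (illustrative constants).
% Suppose a classical sampler outputs q close to the true distribution p, and p
% anti-concentrates:
\[
  \|p - q\|_1 \le \epsilon,
  \qquad
  \Pr_x\!\left[\, p(x) \ge \tfrac{\alpha}{2^n} \,\right] \ge \gamma
  \quad \text{(anti-concentration)} .
\]
% Markov's inequality on the uniform average of |p(x) - q(x)| gives, for all but
% a \delta-fraction of outcomes x,
\[
  |p(x) - q(x)| \;\le\; \frac{\epsilon}{\delta\, 2^n} .
\]
% Stockmeyer counting applied to the sampler estimates q(x), within the polynomial
% hierarchy, up to relative error 1/\mathrm{poly}(n); on the anti-concentrated
% outcomes this yields p(x) up to relative error
\[
  O\!\left( \frac{\epsilon}{\delta\,\alpha} \right) + \frac{1}{\mathrm{poly}(n)} ,
\]
% which, if such relative-error approximation of p(x) is \#P-hard on average
% (the average-case conjecture), collapses the polynomial hierarchy.
```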
So what are the conjectures going into the game here? The first conjecture is that the polynomial hierarchy is infinite. That is an assumption, clearly, but it is a plausible assumption from a complexity-theoretic perspective; at the lowest order it amounts to assuming that P is not NP — also an assumption, but well, that is what you assume. What you also assume here is an average-case complexity statement: that for a sufficiently large fraction of the instances it is as hard to approximate the output probabilities of the measurement as it is in the worst case. That is an average-case hardness assumption, which is not uncommon in these schemes, but strictly speaking it is an assumption. And finally there is the anti-concentration property of the output distribution — which is actually less about anti-concentration than about large-deviation bounds, or concentration, but that is what it is called. It is a property of the distribution that, for this specific example, is not proven, but there is overwhelming numerical evidence that it holds, because the distribution appears to be Porter-Thomas distributed and would hence anti-concentrate. What we do have are rigorous proofs for random circuits, making use of epsilon-approximate unitary designs, where we can show that anti-concentration is true for nearest-neighbour random circuits; this produces examples of schemes of that type, though not precisely the scheme I have just mentioned. I am just saying: we are working on getting rigorous anti-concentration bounds in several flavours — and others are working on that as well — but for the specific setting I just mentioned, the evidence is numerical, yet overwhelming in a certain sense.

So, to cut a long story short, this is a setting that is highly plausible in that type of system: it is a unit-depth circuit, you can kind of realize it in the lab, and yet the quantum simulation — this sampling experiment — is intractable for classical computers. We have a quantum advantage setting that in this sense is highly simplified and made much more plausible. That's great, but there is one more property that I would like to stress as I come to the end of my talk — 31 minutes;
I want to stay nicely on time. It is one desirable feature that this scheme has, and I am not aware of this feature being present in any other known scheme — which is still a good thing to have. It is the following. You can do this experiment, but you can also go into the lab and perform measurements that are quite similar to the actual measurements you do for the sampling, such that with of the order of n many measurements you get outcomes from which you obtain not merely some trust value — it is not just that the measurements give you trust that the simulation has been right, or some convincing argument that things are going well — rather, the measurements directly bound the L1-norm distance to the actual distribution that enters the hardness proofs. So you make measurements — it is not error correction, because there is no error correction here — and if you are lucky, the green light goes on and you say: oh, that's great, it's correct, and I can use these outcomes of my sampling experiment, and I have done it. Of course, you can be unlucky, and then you make the measurements and the light is red: too bad, there is too much noise in the system, and there is nothing you can do about it. But it is a verifiable setting in the sense that when the green light goes on, it is not just about building trust or being happy about things; you are really verifying the very same property that goes into the hardness proofs, which is a nice feature, because it certifies the setting.

This is a very nice feature to have because it kind of resolves the ironic twist I was mentioning earlier. There is this somewhat sloppily formulated common prejudice that in order to be able to verify a quantum simulation, one needs to be able to efficiently simulate it. Well, this is not quite true, in the sense that one can think of trustworthy quantum simulators: they do something — not terribly interesting in itself, this is an advantage-type experiment — but something that you cannot keep track of on a classical computer, and yet you can efficiently check, from a few measurements, whether they have done the right thing, and from that you can infer that your result is right, provided the noise levels are small enough. That is interesting: you can check whether it is correct, but you cannot predict the outcome of the measurement — for that you need to go into the lab, you need to actually do it. But you can efficiently check whether the outcome is right, and that is a nice feature: it is possible that you can do something quantum, have an advantage, and still efficiently certify that you have done the right thing. Good.
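The simple inequality behind this green-light test — written here generically, not with the exact constants of the actual certification protocol — is a parent-Hamiltonian fidelity witness: if the target state is the unique ground state, at energy zero and with gap Δ, of a frustration-free Hamiltonian whose few-body terms can be measured, then the measured energy bounds the fidelity and hence the total-variation distance of any sampled distribution.

```latex
% Sketch with generic symbols, not the exact constants of the actual protocol.
% Let |\psi\rangle be the unique ground state, at energy zero and with spectral
% gap \Delta, of a frustration-free Hamiltonian H = \sum_i h_i with h_i \ge 0,
% whose few-body terms h_i can each be measured. For the prepared lab state \rho,
\[
  1 - \langle \psi | \rho | \psi \rangle \;\le\; \frac{\operatorname{tr}(\rho H)}{\Delta},
\]
% so estimating the local energies \operatorname{tr}(\rho\, h_i) bounds the fidelity.
% By Fuchs--van de Graaf, the trace distance, and therefore the total-variation
% distance of the outcome distribution of any measurement (in particular the
% X-basis sampling measurement), obeys
\[
  \tfrac{1}{2}\bigl\| \rho - |\psi\rangle\langle\psi| \bigr\|_1
  \;\le\; \sqrt{\,1 - \langle \psi | \rho | \psi \rangle\,}
  \;\le\; \sqrt{\frac{\operatorname{tr}(\rho H)}{\Delta}} .
\]
```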
I think that is a perfect moment to summarize and come to the meditation part — yeah, it's good timing. So what have we looked at? We have meditated in this talk on the question of whether there is any hope for feasible quantum simulators with a super-polynomial speedup — simulators reminiscent of what physics people in cold-atom settings would call a quantum simulator — and whether these realistic systems, which actually exist with present or past technology, are good enough to show super-polynomial speedups in the sense we have been discussing at this conference. And the answer is: well, yes, in a good sense.

These are not fault-tolerant, but that is not a bug, it is a feature. You don't want to require fault tolerance, because full fault tolerance is not plausible in that type of architecture; but the scheme is kind of error-detecting in a certain way. You can make a measurement, you can build trust, and if the green light goes on — well, if you are very honest, you will never publish a dishonest result; maybe you publish a red light and say, oh, I haven't gotten there, that's as good as it gets, but that is my outcome, here is my result, it is what it is. Or you have a green light, and then you can really show what you have done, and it is a certified quantum advantage experiment. That is a nice feature to have: one can efficiently assess the correctness of a supremacy — advantage-type — experiment in this sense, even if the simulator exhibits quantum computational speedups. This is great.

Open questions — well, there are loads. It's nice, but it is still very paradigmatic. From a physics perspective, when you talk to the Immanuel Blochs of the world, they will say: oh, that's great, but we want to see this brought even closer to actual experiments on disordered Hamiltonians. Ironically, it is a baby step in that direction, in the sense that you can also take the randomness and put it into the Hamiltonian — you can read this as a quench from a ground state of a gapped local phase to a disordered model and look at the time evolution — but it is a bit artificial. You want to bring it closer to something people are really physically interested in in the labs, so that they say, oh, that's physically interesting, and still you have some computational advantage. That links to one of the questions Robin was flashing in his talk: speaking more computer-science, you want to be closer to structured problems, to special-purpose problems, annealing problems, optimization problems that are interesting, and bring it more into that realm — not just having some speedup, but having a speedup while at least solving some semi-interesting, or even interesting, problem at the end of the day.

Then, what is the robustness of quantum simulators — are they doomed to failure in the long run? Well, not so clear. Is there error correction? No. But is there some sort of step in that direction? Yes — ask me about it if interested. You can at least do some basic kind of approximate error correction; in a way it is not error correction, because there is no code, it is a state preparation, but you can do something to bring you closer in total variation distance to what you want. So what is the scope of approximate error correction in this context? That links to themes that David has also put up on the web page for this workshop, and I think it is an important question to ask. And then there is the space-time trade-off — that beautiful picture that Mick showed in his talk, with the timeline and the number of qubits. Maybe the main message of this talk is that it is great to think of large systems, but size is not everything: it is not about numbers alone, it is also about the type of control and the flexibility you have in time and space. And here the cute thing is that by allowing a bit more spatial overhead, you can bring the time overhead down to a unit-time, unit-depth circuit, and you can still think of these quantum advantage settings. And with this, I thank you very much for your attention.
I'm looking forward to the questions you might have.

So we're going to take questions. I mean, you seem to be saying that you can do a supremacy experiment in which you can verify, even when you are in the realm where you cannot do a simulation. You asserted that, but you didn't tell us how you do it. How do you verify? How can you verify the correctness in a situation where you claim you cannot sample? I mean, that was the key point of your talk.

Very good, thanks for the question. So the point here is the following: what you can do is make a measurement and from that verify the trace-norm closeness to the prepared state — the actual state you would want to have prepared — and from that you can infer that if you do a sampling measurement, the distribution you get is also, up to that epsilon, close in total variation distance. It is not a black-box verification; for that, the same arguments I hinted at earlier would still hold — just looking at samples would not do the job. But what you can do is make measurements and then infer that you are close in trace norm to the right state. And then you ask, well, how do you do it? It is extremely simple — embarrassingly simple — and the point is not that doing it is hard; the interesting aspect is that you can shape the scheme such that a very simple measurement scheme is possible. And I'll tell you what it is: you can see the state you have prepared basically as the ground state of a frustration-free Hamiltonian, and then you just measure terms, right — you measure Hamiltonian terms, a bit like for a stabilizer state. You make measurements, and if you get the right outcome sufficiently many times, you can infer that you are close in trace-norm distance, right? Because it is frustration-free: you make measurements, and if the outcomes are right, you can infer that you are close to the right state. So this is not very difficult or elaborate or deep in any way; what is kind of neat about it is that you can tailor the scheme so that the ultimate state you prepare can be seen as the ground state of a frustration-free Hamiltonian, and then you can make these pretty stupid measurements. But thank you for the question.

Could you go back to the plot from the Trotzky paper?

Oh, yeah — you want to talk about this? Very good. Sorry, yeah, please start your question, and in the meantime I'll find the slide; I can listen — I'm a man, I'm not good at multitasking, but this much I can do.

Okay, yeah. So here, the x-axis, that's basically the hopping time, exactly, right? And you're measuring a local observable.

Yeah, well, it's globally local — it's the number of particles on the odd sites of the full many-body system — but morally it's a local quantity, yeah.

Yeah. So, you know, I can just measure the particles in the system and do it again with, you know, J'. Because you're only evolving for, like, two hopping times before it equilibrates, and you're measuring a local observable, what I can do is just simulate that site — and it's a 1D system — I can just simulate that site and some small neighbourhood.

Right, yeah, sure.

And that has to be enough; there is no...

No, of course, there's no approximation there — oh no, this is true. I mean, okay, this is an interesting question.
Maybe to translate that a little bit, what he says is this: forget about all the details — you measure a local quantity, and you have dynamics under a local Hamiltonian. We know that there is a sound-cone or light-cone type dynamics, a Lieb-Robinson cone (the generic form of the bound is written out after this exchange), which means that information will propagate, up to exponentially small tails, with a velocity — a sound velocity — bounded in terms of the lattice graph and the operator norm of the Hamiltonian terms. Now, this can be made precise — well, not for this Hamiltonian, which is Bose-Hubbard, but morally it can. So that means that in order to predict this quantity, it is perfectly right that you can look at a finite system that grows only linearly in time. Of course, the Hilbert space will still grow exponentially, but never mind: for a fixed time you can approximate this. So in this sense it is exponentially heavy in time, while in space it is, formally speaking, efficient, although the prefactor grows unfavourably in time. But it is true that one can in principle propagate this for a finite time, truncate at a finite size, and make that type of prediction, right. The algorithms that I mentioned have the same kind of moral flavour, in that they are exponentially costly in time and asymptotically good in space, so to say, but they are still more favourable than this brute-force method. So it would seem better than this — although, strictly speaking, at the end of the day this is still a constant time, so at some point it is about prefactors: we could have a supercomputer that is just good enough to do this on some finite system. That is not yet available, but if you push it, at some point this will happen. So you are absolutely right, this could be done; it is just not possible at the moment. That's why, as I said, one wants to work harder and think further.

But there is one more thing I would like to say, which is: one should be aware that if you have a product initial state and evolve it under a local Hamiltonian, this is in principle BQP-complete — even with a translationally invariant Hamiltonian; we were citing this just in the coffee break earlier. So in principle, such a quench experiment is a quantum computer. Of course, that requires a rather elaborate and funny and complicated Hamiltonian, but I am just saying that there is lots of room for this to be really hard in a quantifiable sense. But you are still right that ultimately, at some point, you could take a big enough system, crunch through it, and do it.

I mean, I actually did it, and I only needed like 12 sites, and I can evolve for as long as I want, and I get those curves.

Yeah — we were just discussing this. Then it is also about errors and predictive power, but this is exactly what we want to discuss; indeed, it is a nice challenge. And you are not the only one: there are also things like dynamical mean-field theory that come kind of close — it misses some aspects, but it is a nice challenge. As I said, it is a baby step; it is good, and the challenge is good to have. Ultimately you will not be able to make a hard claim with that type of setting — that is why I had this disclaimer, that a devil's advocate could say there could be clever simulation methods. Only then: let's talk about clever simulation methods. It is still good to have this challenge.
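For reference, here is the generic form of the bound invoked above, for bounded local interactions; the constants depend on the lattice and the Hamiltonian, and, as said in the discussion, the Bose-Hubbard case needs extra care.

```latex
% Generic Lieb-Robinson bound for bounded local interactions. For observables
% A and B supported on regions a distance d apart, evolving under a local H,
\[
  \bigl\| [\, A(t),\, B \,] \bigr\|
  \;\le\; C\, \|A\|\, \|B\|\; e^{-\mu\, \bigl( d - v_{\mathrm{LR}} |t| \bigr)} ,
\]
% so outside the effective cone d > v_LR |t| the influence is exponentially small,
% and a local observable at a fixed short time can be computed from a patch of
% the system whose size grows only linearly with t.
```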
Okay, so due to time constraints, we're going to move on to the next talk. Let's thank our speaker again.