Thank you all for coming, and thank you to the organizers and to IBM for putting on what seems to be a fantastic workshop so far. Without further ado, I'd like to launch into my technical content. If you noticed on the slide that I just jumped past, the title of my talk is "Toward protecting something from something," and I asked Jens Eisert just before my talk, "Have you ever given a talk where the title started with the word 'toward'?" He said yeah. I said, "So what does that mean?" He said, "It means you have no results." Now, I'm not going to agree with that per se, but I do want to start with a couple of things. An apologia is different from an apology: it is a formal written defense of one's opinions or conduct. I hope that my conduct will not require defense, but my opinions — well. It has been said that physics majors may be annoying sometimes, but there is nothing more obnoxious than a quantum information scientist first encountering a new subject. If you haven't seen this xkcd cartoon before, it is the most wonderful thing about being a physicist. My wife is an economist; she has it blown up to poster size on her office wall. So I am going to express opinions and conjectures on the subject of analog simulation while knowing shockingly little about it. I think they're interesting; I aspire, as Steve Flammia has said before, to be wrong in an interesting way, and in fact I hope that you will come up to me afterward, and over the next two days, and explain to me in painstaking detail exactly why and how I am wrong, because that will be very productive — for me, at least.

This is my version of the same slide that essentially every speaker is giving: near-future qubits, yada yada yada. You can read this; I want to focus on a couple of distinguishing points. I am interested in how we can find some quantum utility in the near term, which is different from quantum supremacy or quantum advantage.
I want to actually do something that — well, I work for a DOE national lab — something that increases US national competitiveness, or something like that. In other words: where's the money? Tragically, neither Forrelation nor IQP nor boson sampling appears to have any money attached to it. Well, they do have money flowing into them, but the money is not coming out yet. So what do we do? One approach to dealing with the fact that noise is the dominant aspect of near-term quantum computing is to look harder and harder for new digital algorithms — quantum circuits that require fewer gates, because we just can't do very many gates with relatively few noisy qubits. Another approach is to temporarily — not permanently, but temporarily — abandon the digital circuit framework and look at a different paradigm: to look at analog dynamics and try to get computational advantage out of noisy analog dynamics. The point of this talk is to explore whether this idea has any legs. We would like to know, so that we can decide whether to invest more resources in trying to squeeze some utility out of it.

Those of you who were here four years ago might remember a talk that I gave at that workshop, where I think the main impact on people was that I appeared to care a great deal about British versus American spelling. "Analog," I think we all know, basically means "not digital"; more precisely, it means that there's something continuous, not discrete. So analog computing is computing with continuous variables, and it got a very dirty name back in the 1970s, because people hyped the heck out of it and said you could solve NP-complete problems in unit time, and then some smart people more or less proved that that was all hogwash, because things were noisy. There's been a bit of an overreaction to that, in that people in the computer science community often write off analog of any kind as being obviously useless. The real statement is that it is obviously outperformed by digital. But if you don't happen to have fully functional digital computers of a given type floating around, then perhaps analog computers of that type could be useful.

There are several quantum computing paradigms that are analog in one sense or another: for example, the variational quantum eigensolver involves continuously variable parameters that you tweak to optimize over, and adiabatic quantum computing also has continuous parameters in it. I want to talk about the sort of analog that is — if you'll forgive my obsession with spelling — both "analog" and "analogue," in that there are continuous variables, but, more importantly, there's an analogy between the computer and the physical system that I'm trying to simulate, and that analogy is strong. There is always an analogy between the simulator and the simulated in any model of computation, because otherwise, what does it mean to simulate? But I'm talking about systems where that analogy is also a topological isomorphism, more or less. In other words, if I have a system that I am simulating, and I map it to a physical system — a computer — that I use to simulate it, then nearby states map to nearby states, where the definition of closeness is to be determined later. In contrast, a digital simulator almost always maps nearby states to far-apart states of the computer. It has to do so, because when you encode real numbers in binary — or really in any radix — then you have carries, which means that there is a long Hamming distance between real values that are actually quite close. And this goes the other way: there are close-together states of the computer that correspond to far-apart states of the system in question. If you've ever seen a corrupted JPEG, then you know what consequences that has: flipping a single bit in a JPEG can cause the entire bottom half of the picture to turn into colored snow. A very small jump in the computer's memory led to a tremendous change in the output.

So this is all about point-set topology — about a notion of closeness according to some metric. Analogy, in the way that I'm using it, preserves this sort of adjacency with respect to a metric defined by how easy it is to get from one state to another — how natural it is. When we have physical systems in the laboratory, there are certain operations we can do to those systems, and those tend to be more or less the same operations that nature — the environment — can do to them. So there's a notion of closeness of states given by the ease of hopping from one to the other, according to the control that we have and the control that, unfortunately, nature exerts on the system. An analog simulator is one where the mapping between the simulator and the system I want to simulate preserves that notion of closeness: if it is easy for the system to make a transition in reality, then it is easy for me, as the controller, to generate that transition, and also easy for the environment to generate it — but hard for the environment to generate big jumps.

To summarize: the advantages of an analog simulation are that it is potentially error-resistant, in that typical errors will not map to catastrophic failures in the state of the system that you're simulating, and that you potentially minimize control overhead — the things that you want to make your system do are the things that you can do easily. The disadvantages are, first of all, that due to its analog nature this is almost certainly incompatible with quantum error correction. So you get a sort of head start, but the turbocharger that we hope to apply to digital computing, to blast it ahead to quantum utility, probably just won't work with analog simulations. And there's no obvious way to encode non-physics problems — although the idea of finding computational problems that you can map to phase transitions is very interesting. I don't mean there's no way, just that there's no obvious way. In other words, the problems we're talking about here are basically: what is the behavior of a real, or real-ish — realistic, maybe — quantum physical system under realistic conditions?

As I said at the very beginning, my goal — our goal at Sandia, in spooling up a research program — is to figure out whether this can be useful, and the first step in doing that is to make a long list of the most important questions, the showstoppers if you will, that determine whether this is just a curiosity that we could write some PRLs about — in which case we'll write the PRLs and then bail — or something that really is worthwhile and should have money invested in it. I want to pause here, because as I said this is a workshop talk and I'd like to spur discussion. Here's a partial list of the questions that we worry about when we have meetings about this, and then I'll tell you which very few of these I'm actually going to address in the rest of the talk. First question: assuming we're talking about simulating physical systems, what questions about those systems do we want to address with an analog simulator? Do we want to find out what color they are? How much they weigh? Whether they superconduct? What their ground-state energy is? There's a myriad of questions, and some of the ones I just gave are silly, but among the ones that are not obviously silly, it's not immediately obvious which the right questions to ask are. In particular, we'd like to ask the questions that are hard for classical algorithms, because the ones that are not hard for classical algorithms — well, solve those with the really good classical computers that we have. If there are such problems, what makes those problems classically hard?
Why is a property classically hard? One of the things that I get a lot of flak for, from the very good classical computational scientists I work with on a daily basis, is the tendency of the quantum community to say, "Look, we have a quantum algorithm for this problem," while completely ignoring the fact that there is a classical algorithm that solves the problem well enough. Sometimes this is a very nuanced back-and-forth, as Eddie was talking about earlier today; other times it's just a matter of me not reading the literature. So: what are the classically hard problems? What makes them classically hard? And of the classically hard ones, which can this very limited, faulty analog simulator answer reliably and efficiently? We'd also like to know which problems analog simulators fail on, and why — what makes those problems quantumly inaccessible, or at least inaccessible to quantum analog devices. And what's the computational model? For classical computers we have Turing machines; for quantum digital computers we have the circuit model. One of the things this lets us do is specify an exponentially large ensemble of input instances, which is critical for defining hardness — because if you don't have an exponentially large library of questions you could ask, variations of your problem, then there's a cheat in theoretical computer science where you just take a bunch of time up front to build a lookup table for all of the relatively few instances, and then you can solve them very easily. The fact that we don't seem to have a reliable computational model that everybody agrees on, for what an abstract quantum simulator would look like, makes it difficult to prove hardness results. And then some more technical questions. If we have a simulator, what is it simulating?
In the community there's a lot of discussion that presumes that to simulate a system means to construct its ground state. There's also some literature that presumes that simulating a system means reproducing its dynamics. There are complex and deep relationships between these, but sometimes the two halves of the room talk past each other a bit. Are we talking about simulating spins, bosons, fermions? Are we building simulators out of spins, bosons, fermions? These seem to have different properties. What kind of Hamiltonians are we looking at — lattice models, quantum chemistry, anything else? I can't think of anything else off the top of my head, but I would love it if there were something else to simulate. Do we want to focus on models that really occur in the real world, or models that could only happen in another universe, because we don't have access to them in the real world? There are arguments for both. And then, starting to get to things that I really care about: how do we characterize, verify, and validate the behavior of an analog simulator, other than just running it on your favorite Ising model and confirming that you get roughly the right results? How do noise and errors impact performance, and what can we do, in terms of engineering and control, to mitigate the effects of that noise? These last ones are the questions that I want to focus on. Hopefully some of the others resonated with you; I'm going to move on and not talk about them at all, but they would make great conversation over dinner or beer later on. To motivate the rest of my talk — and I'm about to get technical here — I want to start with a very simple model of noise, because, as with everything else about analog simulation, this isn't obvious and people debate about it. What does it mean to say that there's noise in my analog simulator? How should I model that noise?
I'm not going to try to give the answer to that question, because we would fight about it — there are many different ideas. Let me just start with one that's at least plausible: the same model we usually use for digital quantum computers — I think it's one of Ken Brown's favorite things — independent single-qubit depolarization on each qubit in the simulator. Depolarization does this to the Bloch sphere: it shrinks it uniformly toward the center. So if I have a bunch of qubits in my simulator, the model I'm going to go with is that they all just decay, Markovianly, toward the maximally mixed state — maybe slowly, at a rate of 10^-3 or 10^-4, one of those happy numbers that Ken was putting up earlier. The trouble is, when they get there, you're done. This is called thermalization to infinite temperature, because the maximally mixed state is, uniquely, the infinite-temperature thermal state for every Hamiltonian. If I give you a system in the maximally mixed state, that's infinite temperature. I don't have any questions that I want to ask about the infinite-temperature behavior of any physical system, and I don't think you do either — it's typically very boring and random. The questions we want to ask are about the low-temperature behavior of these physical systems. So if this model is even remotely right, then we need to fix something, so that we don't have this thermalization to infinite temperature. And I am now going to beg a question — but I admit it. I mentioned a minute ago that there's a question of what we want our simulator to simulate.
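To make the worry concrete, here is a minimal numpy sketch (my own illustration, not the speaker's; the rate is one of the "happy numbers" mentioned above) of independent depolarization driving a single qubit to the maximally mixed state:

```python
import numpy as np

def depolarize(rho, p):
    """Single-qubit depolarizing channel: rho -> (1 - p) rho + p I/2."""
    return (1 - p) * rho + p * np.eye(2) / 2

# Start in the pure state |0><0| and apply the channel repeatedly
# at a per-step error rate of 1e-3.
rho = np.array([[1.0, 0.0], [0.0, 0.0]])
p = 1e-3
for _ in range(10_000):
    rho = depolarize(rho, p)

# After ~1/p steps, all structure -- and any answer we hoped to
# read out -- has decayed toward I/2 (infinite temperature).
print(rho)
```

The decay is exponential at rate p per step, so on a timescale of a few thousand steps the simulator's state is indistinguishable from infinite temperature, independent of the Hamiltonian.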
Well, I am not going to consider simulators that attempt to prepare ground states or thermal states. I am going to consider dynamical simulations, because I'm that kind of guy. I want to be able to start it up and see what happens — put an electron in on this side and watch it come flying out, resistance-free, on that side, or whatever the dynamics do. That's only part of the world, but I've only got fifteen minutes to talk, so we're going to have to talk about a narrow part of the world. The core of the project that we started at Sandia was to explore to what degree this problem — "geez, if I wait long enough, my simulator is going to thermalize to infinite temperature" — could be mitigated using what we gave the fancy name of active logical cooling. We started out by giving it a nice simple name, "cooling," and then we realized that people misunderstood us. If you just say "cooling," which is a nice, humble way of describing this, people think you mean submerging the system in liquid nitrogen. So we had to point out that this is active cooling: we're actually going to go in there and try to do something clever to cool it down, rather than just put it in a block of ice. And then people in the ion-trap world that we talked to assumed that we meant something like putting in a cryostat to cool down the environment — the motional modes — and we had to say no: unlike in a regular digital computer, we're actually going to cool the qubits. Normally you think of a quantum computation as proceeding via unitaries, and the last thing you want to do is go in there and intentionally screw that up. Well, in this case the environment is already screwing up our qubits.
It's heating them up, so we're going to go in and screw them up right back, by pushing them, if you will, down toward their ground state — because that's pretty much what we want to simulate. So the question here is how well this could work. I'm actually going to take a ninety-degree turn in just a minute, but I want to give you a brief survey of what's in the background of the questions we ask in this talk. The idea is that for an analog simulation — unlike a general, arbitrary unitary algorithm — the ground manifold is special, and not just the ground state itself. We want to see dynamics that occur in the low-energy space, because that's what happens with physical systems: they're cold, but they're not locked into their ground state. They have excitations above it; they transition through excited states, et cetera. So we want to allow the system to wander around in the low-energy manifold, approximately unitarily, but we don't want it to transition way up high. The environment is putting noise and energy into our system; we want to suck that energy out via cooling, and we assume here that we only have access to a limited set of operations — basically what you get with a quantum computer, so local operations. We want local operations that produce cooling. Here's a toy example. I've got a two-qubit Ising model, and I want to cool it down — to force it into its degenerate ground manifold, spanned by |00⟩ and |11⟩. So I have a ZZ Hamiltonian coupling my two simulator qubits, and then I bring in a third qubit that couples to each of them by an XX Hamiltonian. This was for an ion-trap system.
So I optically pump the third qubit — I basically jam it into its ground state — and I allow this XX coupling to weakly carry energy excitations out of the simulated system and into this heat pipe, where I then blast them out into the environment. This is just a toy, first-year-grad-school model of how you could do this kind of cooling: allow approximately unitary dynamics to go on inside a low-energy manifold, while preventing the system from getting far out of it. Now, in a minute I'm going to come back to this technical thread — I hope that whets your appetite a little bit — and I'm going to start my technical bit by comparing this kind of cooling to quantum error correction. But before I do, since I am going to kind of dive in, I want to explain where I'm going with this. I'm really not going to analyze how to do cooling, or exactly what it does, in detail. I want to ask a different question: let's suppose this worked. What could we hope for? Is this a problem that is actually worth solving? Because if I am constantly cooling my system, I'm inducing errors. I am not preserving unitarity. The environment is hurting me, and what I'm doing is slapping the system down, so its dynamics are necessarily going to involve transitions that I really wish didn't happen — decoherence. I'm going to have a limited coherence time. This is not quantum error correction; I can't expect the same amazing, groundbreaking results that we get from quantum error correction. So the question is: what could I expect from this? What does success look like? And does that notion of success allow me any quantum utility, any computational ability? And at a technical level, what is this going to be? I did say this was a "toward" talk, right?
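As a sanity check on the toy model just described, here is a rough numpy simulation — my own sketch, with made-up illustrative couplings (and slightly asymmetric XX strengths, to avoid an accidental dark state that the perfectly symmetric version has) — of a two-qubit ferromagnetic Ising system cooled by a repeatedly reset ancilla:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# System: H_sys = -Z1 Z2, degenerate ground manifold {|00>, |11>}.
H_sys = -kron3(Z, Z, I2)
# Ancilla ("heat pipe") splitting tuned to the system's excitation gap of 2.
H_anc = 2 * kron3(I2, I2, (I2 - Z) / 2)
# Weak XX couplings from each system qubit to the ancilla (g1 != g2).
g1, g2 = 0.10, 0.15
H = H_sys + H_anc + g1 * kron3(X, I2, X) + g2 * kron3(I2, X, X)

# Exact evolution for one cooling cycle of duration T.
T = 5.0
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * T)) @ evecs.conj().T

# Start the system in the excited state |01>, ancilla pumped to |0>.
ket = np.zeros(8, dtype=complex)
ket[0b010] = 1.0
rho = np.outer(ket, ket.conj())

def reset_ancilla(rho):
    """Optical pumping: trace out the ancilla, re-prepare it in |0>."""
    rho4 = rho.reshape(4, 2, 4, 2)
    sys_rho = rho4[:, 0, :, 0] + rho4[:, 1, :, 1]  # partial trace
    anc0 = np.zeros((2, 2), dtype=complex)
    anc0[0, 0] = 1.0
    return np.kron(sys_rho, anc0)

energy = lambda r: np.real(np.trace(H_sys @ r))
print("initial system energy:", energy(rho))   # +1 (excited)
for _ in range(20):
    rho = reset_ancilla(U @ rho @ U.conj().T)
print("final system energy:", energy(rho))     # well below 0
```

Each cycle resonantly swaps one excitation into the ancilla, and the reset dumps it irreversibly into the environment, so the system energy ratchets down toward the ground manifold at -1 while back-heating stays off-resonant and small.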
I'm not going to give you any theorems, but I hope to present a step toward understanding what approximately preserved information looks like in quantum systems, and also a step toward understanding where analog quantum simulators could provide computational speedup. Okay — back to the technical bits. Haul out your math notebooks and take notes. I said I was going to talk about active logical cooling and how it is similar to, and different from, error correction. If you don't know: quantum error correction is basically just cooling. However, it's cooling with respect to a very strange, implicit, stroboscopic Hamiltonian. The idea of quantum error correction is that you have a code subspace that you want to stay in — I haven't shown the superposition states here; this is actually a classical code — and the codewords are surrounded by onion-like layers of correctable error states. If the environment causes a transition up from zero-logical to one of the correctable states, I come in, measure the syndromes, and apply a correction that pushes me back down into zero-logical. So error correction is just very weird cooling, and in that sense QEC and active logical cooling are the same thing — except I'm not going to assume that we have those resources. I'm not going to do weird cooling; I'm going to do normal cooling, the kind you do in your backyard. The reason is that quantum error correction is very powerful, but it requires one of these digital encodings that breaks topology, and it's not compatible with analog simulation because of that. So what am I going to try to do with active logical cooling? Error correction has, as its goal, preserving quantum information more or less perfectly. What if we don't try to do that — what could we try to do?
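The "error correction is weird cooling" picture can be made concrete with the classical three-bit repetition code. This sketch (mine, not the speaker's) treats majority-vote correction as a stroboscopic push back down onto the codewords:

```python
def correct(bits):
    """Majority vote: project a noisy 3-bit word back onto the
    codewords 000 / 111 -- the 'ground states' of this code."""
    return [round(sum(bits) / 3)] * 3

logical = [1, 1, 1]   # encode logical 1 as 111

# Any single 'heating' event (one bit flip) is cooled back down:
for i in range(3):
    noisy = list(logical)
    noisy[i] ^= 1
    assert correct(noisy) == logical

# But two simultaneous flips cross the barrier and land in the wrong
# valley: the correction now cools toward the wrong codeword.
print(correct([0, 0, 1]))   # [0, 0, 0]
```

The correctable single-flip states are the first onion layer around each codeword; the stroboscopic "cooling" is reliable only while the environment doesn't hop two layers at once.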
First, let me tell you a little about what perfectly preserved information looks like. If you have a quantum dynamical process — a CP map, your noise plus your error correction — then there's a nice theorem that the states, or the information, that is perfectly preserved by that map has to have a very rigid structure: it has to form an associative algebra, a C*-algebra. These are finite-dimensional C*-algebras, so I wouldn't want to overly dignify them with the term — mathematicians would be upset with me. This implies a bunch of rigid structure theorems that are pretty cool, but what it boils down to is that if I have a perfectly preserved operator, then every power of it, and every product of perfectly preserved operators, must also be perfectly preserved. So, for instance, if the diagonal elements of a density matrix pass through the noise unchanged, and the first off-diagonal elements pass through the noise unchanged, then it follows that the entire density matrix must pass through the noise unchanged: you must have the identity channel. And it follows from that that you can always draw a sort of kite diagram like this, corresponding to the block shape of the set of all density matrices that are perfectly preserved by the process. Roughly speaking, within each block I have a Hilbert space — a subspace in which quantum information, including both amplitudes and coherences, is preserved. Coherences between different blocks are not preserved, but the blocks themselves correspond to preserved classical information: the which-block information is classical. So this really is as rich as it gets for perfectly preserved information: any process that preserves information perfectly preserves some hybrid quantum-classical code of this form. And so, if I don't ask for that, the question I want to ask is: what can I get? What could approximately — pretty well — preserved information, with a nod to the inventors of the pretty good measurement, look like? I am not going to go into this in any detail at all, partly because it gets a little complicated and partly because I don't know the answer. I want to give you one example that's really instructive, and I'm going to spend the rest of the talk hammering on that example, more or less. If you want to prove things like this, the central tool is the operator Schwarz inequality — if I hadn't blown so much time on hot air earlier in the talk, I might discuss it briefly. It's pretty much at the core of all of this; it goes back at least to Kadison in 1959, and you can do all sorts of fun stuff with it — Mendo and Choi did most of the fun stuff with it. A nice example of how to saturate this inequality is the weak dephasing channel. What I've drawn here is a channel that takes a density matrix and maps it to its Hadamard product — its element-wise product — with a matrix that looks like this: take every element of the density matrix and multiply it by the corresponding element of this matrix. The diagonal elements, in some basis that I have picked out, are perfectly preserved — they get ones — but the coherences off the diagonal get suppressed by some factor. So this is a dephasing channel, but, by construction, the coherences between adjacent states only get knocked down a little bit.
This is a weak dephasing channel. It's pretty easy to actually build — well, on paper, and possibly even in the laboratory. What I want to focus on here is that it is possible — I state this without proof — to suppress the first off-diagonal coherences by 1%, the second off-diagonals by 4%, and the third off-diagonals by 9%. Do you see the pattern? The suppression scales quadratically with the distance between the states. This is an operation that can be created, and it's interesting because it really screws with the intuitions from that algebra theorem I cited on the previous slide. With perfect preservation, as soon as I know that the first off-diagonals are perfectly preserved, everything locks into place. But if the first off-diagonals are only 99% preserved, then the second and third off-diagonals can die off surprisingly rapidly. Okay, a brief break here. About ten years ago, Scott Aaronson wrote a fascinating blog post where he discovered, and railed against, how normal people think. He points out that in logic, as most scientists and mathematicians know, if A implies B, and B implies C, and C implies D, and so on, then A implies Z. But the way most people seem to react is that if you give them a twenty-six-step logical argument, where each step implies the next, they just stop believing you around step three. You get to Z and say, "Therefore Z," and they say, "Nah." So I want to argue that information preservation is like logic. Logic says that if A implies B and B implies C, then A implies C. Well, if the (0,1) off-diagonal element is preserved, and the (1,2) element is preserved, then the (0,2) element is preserved as well. But in the presence of noise, these chains of coherence become unreliable, and they can drop off like this. And what's remarkable is that there is a quantum effect here.
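Here is a small numpy sketch of such a channel. The construction is my own choice: a Gaussian Hadamard mask, which is one way — not necessarily the speaker's — to get quadratic suppression while keeping the map completely positive (the mask must be a positive semidefinite matrix):

```python
import numpy as np

D = 8
gamma = 0.01
idx = np.arange(D)

# Hadamard (element-wise) mask: M[i, j] = exp(-gamma (i - j)^2).
# A Gaussian kernel is positive semidefinite, so rho -> M * rho
# (element-wise product) is a legitimate completely positive channel.
M = np.exp(-gamma * (idx[:, None] - idx[None, :]) ** 2)
assert np.linalg.eigvalsh(M).min() > -1e-8   # PSD up to rounding

# Suppression of the k-th off-diagonal: 1 - exp(-gamma k^2) ~ gamma k^2.
for k in (1, 2, 3):
    print(k, 1 - np.exp(-gamma * k ** 2))
# Roughly 1%, 4%, 9%: quadratic in the distance |i - j|,
# while the diagonal (k = 0) is preserved exactly.
```

Applying `M * rho` element-wise to any density matrix then reproduces the pattern from the slide: diagonals untouched, nearest coherences barely dented, distant coherences dying quadratically faster.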
Classically — in logic — you would expect that these could drop off at most linearly, but in a quantum system they can quite easily drop off quadratically, which is kind of surprising. And this has what I think are some interesting consequences. I'd like to sprint to the finish line of my talk by telling you about these consequences, and how they suggest a possible loophole by which analog simulators, even in the presence of weak noise, could actually do something useful. So, the implications of this weak dephasing channel: well, they're kind of depressing if your goal is to extend complementarity to approximately preserved information, or to have nice, rigid structure theorems. But, on the other hand, if you're willing to stop worrying and love the weirdness, then there seems to be something genuinely quantum going on here that could leave room for interesting physics. There's some math behind this, but I'm not going to go there. I just want to give you another toy example. Take that decoherence channel I just described — the weak dephasing — and apply it to a D-dimensional system that's just a discretization of a free particle moving along a line.
So here is my line: the zero state, the one state, the two state, up to the D-minus-one state. What I'd like to do with this system is create a fairly narrow wave packet and put a twist on it in phase, so that it propagates down the line. Quantum mechanics 101: if it's decohering — dephasing in position — that corresponds to jitter in momentum, and so this thing propagates ballistically but picks up a little extra broadening. But here's the cool thing about this particular channel: if I choose my decay rate just right, I can set it up so that the decay of coherence between the zero state and the D-minus-one state is instantaneous — after a single time step, all coherence between widely separated states is completely lost — while the rate of decoherence between adjacent states is 1/D². In the limit of large D, what this means is that not only can this particle propagate ballistically, maintaining the internal phase coherence that's critical for remembering what its velocity is across the lattice; it can actually bounce back and forth across the lattice — travel from this side to that side and back — over a timescale on which there is no possibility of preserving coherence between this location and that location. So, in summary, the shortest, pithiest comment here is that there's a huge range in what you would call the coherence time of this system: depending on which coherence you look at, they have wildly different lifetimes. Okay — so why is this interesting?
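A rough numpy simulation of this regime — my own parameters, chosen purely for illustration, with the dephasing implemented as a Gaussian Hadamard mask — shows a tight-binding wave packet with a phase twist still moving ballistically while end-to-end coherence is dead:

```python
import numpy as np

D = 40
# Tight-binding "free particle": nearest-neighbor hopping,
# dispersion E(k) = -2 cos k, group velocity 2 sin k.
H = -(np.eye(D, k=1) + np.eye(D, k=-1)).astype(complex)
evals, evecs = np.linalg.eigh(H)
dt = 0.2
U = evecs @ np.diag(np.exp(-1j * evals * dt)) @ evecs.conj().T

# Gaussian packet at x0 = 10 with momentum kick k = pi/2 (velocity ~2).
x = np.arange(D)
psi = np.exp(-(x - 10.0) ** 2 / (2 * 3.0 ** 2)) * np.exp(1j * np.pi / 2 * x)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

# Weak dephasing mask: adjacent coherences barely touched,
# distant coherences nuked (suppression quadratic in |i - j|).
gamma = 0.002
M = np.exp(-gamma * (x[:, None] - x[None, :]) ** 2)

mean_x = lambda r: np.real(np.trace(np.diag(x) @ r))
x_start = mean_x(rho)
for _ in range(30):
    rho = M * (U @ rho @ U.conj().T)   # unitary step, then dephasing

print("packet moved:", x_start, "->", mean_x(rho))
print("end-to-end coherence |rho[0, D-1]|:", abs(rho[0, -1]))
```

The packet's center advances by roughly (velocity × time) ≈ 12 sites with only mild extra broadening, even though coherence between the ends of the lattice is utterly gone after the very first step — exactly the "many coherence times at once" point.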
Because between the zero state and the D-minus-one state there's no coherence — there can be no coherence; it gets nuked immediately — but there can be quantum-enhanced propagation, because ballistic propagation in this system is a quantum effect that requires phase coherence. Otherwise, if the system can't remember its phases at all, it just propagates diffusively. So this packet can carry its internal phase along with it well enough to do something quantum. Okay — this would be a cool idea if I hadn't gotten scooped by roughly a hundred years. Go back and read Bohr and Ehrenfest: this is semiclassical mechanics, right? So I'm calling it semiclassical information. Roughly speaking, a semiclassical code supports a big separation of timescales between local and global coherence. And in addition to defining this on — going back a slide — this 1D lattice, we can define this sort of thing on any graph that has a distance on it. So you could have a high-dimensional graph on which wave packets propagate coherently, but where there's no possibility of preserving superposition or entanglement across the entire graph. So what am I going to do with this? I'm going to suggest that analog quantum simulators might be doing something that I call semiclassical computation. And I did a Google search — the most scientifically rigorous thing you can do, of course. "Quantum computing" gets approximately five hundred thousand hits. "Classical computing" gets only twenty-eight thousand hits, so we're doing a good job. "Semiclassical computing" gets twenty-two hits, and I exhaustively searched them: there's actually one publication that uses this term, from earlier this year. So this does seem to be a late-breaking idea. Semiclassical physics is a hundred years old — but semiclassical computation? What do I mean by that? I mean that I have to stop talking shortly.
I mean that there's a possibility for a computational speedup, if you're clever, even when the coherence timescale or the correlation length is strictly limited but you have a big system. Cooling here is just supposed to keep the simulator in the right phase, to keep it from jumping out into a totally different phase. Can you do anything interesting here? Here's the candidate example we're exploring right now, and forgive me for going a little bit over my time here.

Imagine first a series of potential wells with barriers. I'm going to tilt this thing so that in the absence of the barriers my wave packet would go from one end to the other, but then I put the barriers up so that classically the wave packet is prevented from moving: it gets stuck. But the barriers are constructed so that I can tunnel through them. Now if I then decohere this, it goes back to being classical. All I need to tunnel, though, is local coherence and short-time coherence, so I can get a quantum effect without long-time coherence and without long-distance coherence: I can just tunnel through barriers one at a time.

And I want to take that system and plaster it, in spirit, onto some horrible high-dimensional graph representing, for example, a molecule, whose configuration space can be represented as a horribly high-dimensional graph. The idea here is that if I have many wells separated by tunnelable barriers, then tunneling is a quantum phenomenon that is fast and local, and therefore, even with a certain amount of suppression of coherence that prevents large-scale coherence, you can still get quantum effects.

And I'll wind up by saying this is not a standard quantum speedup, because look, we've known ever since the type-2 quantum computers debacle back in around 2000 that if you have this kind of limited coherence, this can be simulated on a classical parallel machine, right? A bunch of small quantum computers with classical networking is not interesting. Except that
only says it can be simulated. The catch here is that, like most strong-coupling problems, there's a basis in which the system is easy; we just don't know the basis. So this is all half-baked, but there's an argument here that analog simulators could be useful because they would be able to find, using local quantum effects only, the right basis in which the problem would be solvable. And that could apply to problems that are hard for classical computers not because of intrinsic entanglement or something, but simply because it is too darn hard for us to find the right basis in which to solve those problems, which is a statement that applies to most strong-coupling problems, including chemistry and materials. And if this works, which it probably won't, but hey, workshop talk, I called dibs on semiclassical computing. Thank you very much.

So, thank you very much. If you start talking about molecules and you start doing chemistry, then I'm just wondering: what do you think about the point that you've got fermions, but you're simulating them with qubits? Can you, with this approximate scheme and then local unitaries, get around, you know, Jordan-Wigner strings and so on?

That is definitely something that should be on my long list of questions that I'm not addressing here. It's a great question and I don't feel qualified to answer it, but my feeling is that if you want to do analog simulation of electrons, you probably need to use electrons. And then I guess the question will be, well, what's the naive, natural noise model for a system of electrons?
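Going back to the washboard-potential picture from the talk, that tunneling through a single barrier needs only local, short-time phase coherence, here is a minimal two-well sketch. This is my own toy model; the Hamiltonian and the coupling J are illustrative assumptions, not taken from the talk.

```python
import numpy as np

# Minimal two-well tunneling sketch (my own toy model; the Hamiltonian and
# the coupling J are illustrative assumptions).  Only the relative phase
# between the two wells enters, so short-range, short-time coherence is
# all the transfer needs.

J = 1.0                                     # tunnel coupling through the barrier
H = np.array([[0.0, -J], [-J, 0.0]])        # two degenerate wells coupled by J
psi0 = np.array([1.0, 0.0], dtype=complex)  # particle starts in the left well

def evolve(psi, t):
    """Exact propagator exp(-i H t) applied via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ (np.exp(-1j * w * t) * (V.conj().T @ psi))

t_half = np.pi / (2 * J)                    # half a tunneling (Rabi) period
p_right = abs(evolve(psi0, t_half)[1]) ** 2
print(p_right)   # 1.0 (up to float error): the particle has tunneled across
```

Strong, continuous dephasing between the two wells would suppress this coherent transfer, which is the sense in which the tunneling is a genuinely quantum, but purely local, resource.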
But doing Jordan-Wigner or any of the other transformations, these are sort of fundamentally digital in that they break this topology. So if you want to simulate fermions, do it with fermions.

Yeah, another question. So this is sort of related to the last question, I suppose. You can imagine simulating fermions in first quantization, and furthermore, let's say you were looking at a 1D system: you could use Gray codes for the positions, and in a 2D system you could use space-filling curves or something like that. So in that case, would this sort of meet your criteria for analog simulation? Because you would have the property that, say, one bit flip doesn't move the fermion very far.

I understood 30% of your question, so I'm going to answer that part, and then you can tell me why I'm wrong later. So Gray codes are fascinating because they do preserve the topology, but nobody has ever figured out a way to do addition in Gray codes other than transforming back to binary. So my take on Gray codes is that they kind of look like an analog mapping, but we don't know how to do operations in them. And once you get to the fermions bit, I am woefully ignorant about actually dealing with fermions, so I'm going to duck that one totally.

So my question is, what are the prospects for actually doing chemical reactions, so nuclear dynamics, quantum wave packets? This is wildly speculative, deep theoretical stuff.

That's a great question, but I am the wrong person to ask, and this talk is the wrong context to ask it in. I would ask the guy that just asked me that question.

Okay, another question.

Yeah, can you contrast this active logical cooling with heat-bath algorithmic cooling?
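As an aside on the Gray-code exchange above, here is a concrete sketch using the standard binary-reflected Gray code (the helper names are mine): adjacent positions differ in exactly one bit, which is the topology-preserving property the answer mentions, while arithmetic goes through a decode back to binary, which is the limitation it mentions.

```python
# Binary-reflected Gray code: position n maps to n ^ (n >> 1), so moving one
# lattice site flips exactly one bit (the topology-preserving property).
# Arithmetic, though, is done by decoding back to binary, which is the
# limitation raised in the answer.  Helper names are mine.

def gray(n: int) -> int:
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

D = 16
codes = [gray(n) for n in range(D)]
hamming = [bin(codes[i] ^ codes[i + 1]).count("1") for i in range(D - 1)]

print(all(h == 1 for h in hamming))                          # True
print(all(gray_to_binary(gray(n)) == n for n in range(D)))   # True
```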
Right, so first I would need to ask you to explain, in about another paragraph, precisely which form of heat-bath algorithmic cooling you mean, but let me take a stab at it, and then you can tell me why I'm wrong later. One way or another, these are going to be forms of algorithmic cooling. But traditional algorithmic cooling is a non-local thing that you do with a data-compression circuit, basically. A much better analog for this would be something local, maybe RG-type correctors for the toric code.

So then it's a similar idea, but with a locality constraint, is what you're saying?

Maybe, yeah; they're overlapping Venn diagrams.

Yes. With some of the ideas, like your washboard potential, you can easily see how to just do local tunneling, but it would take a long time for thermal annealing. There is sort of a hope that, because of the difference between tunneling and thermal annealing, you could get some quantum advantage. But I wonder if there's some reason for thinking that, if the tunneling involves a relatively small number of degrees of freedom, it would be easy to figure out an artificial Hamiltonian under which thermal annealing would simulate the tunneling, and that this is sort of a barrier to hoping for much from tunneling without having a lot of coherence in the quantum computer.
I think I agree with that, and I'll restate it: if there's a route to speedup here, it's not because a classical computer couldn't simulate this; it's because writing the algorithm for the classical computer would require a very smart person to figure out the structure in the problem. So these would be speedups that are intrinsically heuristic: you can throw your analog simulator at this problem instead of hiring a very smart person to find the structure. At best, it's a smart-person replacer.

By a bunch of atoms in a lattice or something?

So first of all, my 2D graphs were meant to be spherical cows indicating high-dimensional graphs. And no, I don't. But I will say that one of the things I would be delighted to see come out of this project would be a particular kind of quantum cooperation, where by being a gadfly I prod classical-algorithms people, and in particular classical chemistry people, to figure out how to solve the strong-coupling problem on classical computers. I would be completely happy with that, and so would the nation, I think. I think it's entirely possible. The best outcome would be that we either find out that the analog simulators are good for something, or we find a good classical algorithm for solving strong-coupling chemistry problems.

Okay, let's thank Robin again.