So I'm going to be talking about quantum algorithms for Hamiltonian simulation. It seems like today is the simulating-physical-systems day, and how my talk is going to differ from the others is that I'm going to consider this problem very abstractly. I'm not going to focus on any specific physical system, like a condensed matter system or quantum chemistry or whatever; I'm just going to think of it as a Hamiltonian whose time dynamics we want to simulate. One of the advantages of doing it this way is that when you throw away all these details, it's often easier to design algorithms, because you can clearly focus on what the problems are. And what I think of as the main benefit of thinking about this problem more abstractly is that it doesn't scare away computer scientists from working on it.

What I'm going to focus on in this talk is recent results and open problems. There's been a lot of great progress in Hamiltonian simulation algorithms over the last couple of years, since 2013 or so, and it's been kind of hard to keep up, even for some of us who work in the area. So I thought it would be nice to overview what's happened over the last couple of years, give you some idea of how these techniques work, and highlight what's still left to be done.

Okay, just so we're all on the same page, let me describe the general problem. We're talking about the problem of simulating a physical system; there's a classical example on the slide. The general problem is: I tell you what a system looks like right now, and you have to tell me what the system is going to look like after some time t.
So I give you an initial state, I tell you how the system evolves in time, and you have to figure out the final state of the system. A classical example would be: you drop a ball, I give you the initial position and velocity, and you've got to figure out the position and velocity after some time t.

The quantum version of this problem, the one we're going to focus on in this talk, is the Hamiltonian simulation problem, which very abstractly is just this: you're given a Hamiltonian H, which is a 2^n × 2^n Hermitian matrix, where n is the number of qubits in your system. You're given a time parameter t and an error parameter ε, so you don't have to get the answer exactly right; it's okay if you're off by ε. And you have to find a quantum circuit that implements the operator e^{-iHt}. If you don't know what that operator is, it just describes the time evolution of your system under the Hamiltonian H: if your starting state is |ψ⟩, after time t the state will be transformed to e^{-iHt}|ψ⟩.

In this abstract version of the problem you don't even need to understand where this Hamiltonian is coming from, or who gives it to you; it's just some Hermitian matrix. A special case that's interesting for applications, and relevant in physics, is the local Hamiltonian simulation problem, which is the same problem, but now I'm going to tell you something more about the Hamiltonian: it's local. Formally, it's a k-local Hamiltonian, which means you can write the matrix H as a sum of matrices H_j, where each H_j acts on only k qubits at a time. Remember, this Hamiltonian is on n qubits, which is large, and you want to think of k as a constant, let's say 3. So it's a sum of terms, and each of those terms acts on only 3 qubits at a time.
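As a sanity check on the definitions, here is a small classical sketch in numpy (a toy, not a quantum algorithm; the specific terms are my own illustrative choice): it builds a 2-qubit, 2-local Hamiltonian as a sum of few-qubit terms and applies e^{-iHt} to a state by direct diagonalization.

```python
import numpy as np

# Pauli matrices: building blocks for a toy 2-qubit, 2-local Hamiltonian
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

# H = Z(x)I + I(x)Z + X(x)X -- a sum of terms, each acting on at most 2 qubits
H = np.kron(Z, I2) + np.kron(I2, Z) + np.kron(X, X)

# e^{-iHt} via the eigendecomposition H = V diag(lam) V^dagger
t = 1.3
lam, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * lam * t)) @ V.conj().T

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0            # initial state |00>
psi_t = U @ psi0         # state after time t

print(np.linalg.norm(psi_t))   # e^{-iHt} is unitary, so the norm stays 1
```

Of course, this classical approach costs time exponential in n; the whole point of the quantum algorithms below is to avoid ever writing down the 2^n × 2^n matrix.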
So this is a very nice description of a Hamiltonian, and it captures a very special class of Hamiltonians; you can't do this for all Hamiltonians, of course, there are just way too many of them. We're narrowing down to a special class we're interested in, because these Hamiltonians actually come up in practice.

And why should we study this problem? Maybe everybody already knows the answer, but I'll tell you anyway. This was the original application of quantum computers, back when Feynman proposed them. That's maybe not a great reason to study it, just because Feynman thought it was cool, but more interestingly, a fraction of the world's computational power is devoted to solving this problem on classical machines today. It's still one of the best applications of quantum computing. You know, when people ask us what we're going to do when we have these large quantum computers in the future: maybe we'll factor numbers, but it's not clear what application that will have, whereas this is something we can really do, and it's going to be useful and interesting.

Finally, this problem, when you phrase it appropriately, will almost certainly never have a classical polynomial-time algorithm, because it's complete for the complexity class BQP, which captures the set of problems that are efficiently solvable on a quantum computer. In other words, if there were a classical polynomial-time algorithm for this problem, then every problem that can be solved by a quantum computer could be solved efficiently on a classical computer. So, for example, you'd have a polynomial-time classical factoring algorithm, and we don't think that's going to happen. So we're safe; we shouldn't worry about classical computers scooping us on this problem in all generality. Okay, and before I move on to talking about this problem.
Let me just explain the difference between the simulation problem and the problem of computing ground state energies, because these are problems that are often confused; people often come to me talking about the second problem when I'm talking about the first. This talk is about the first problem, the simulation problem, as I described on the last slide.

Big picture, the difference between these two problems is this. The first is the problem of simulating the time dynamics of a system; it's the problem of predicting the behavior of a system. Abstractly speaking, this problem is usually easy, because if you have enough resources to simply mimic the system you're trying to simulate, you can always reproduce its behavior. For example, if you just want to compute the output of a Boolean circuit on some given input x, you can always do this: take the input, compute the output of every gate individually, and so on until the end of the circuit, and now you know the output of the circuit on this input.

Computing the ground state energy, on the other hand, is more like a global optimization problem. You're asking about some kind of global property of the system, and these questions tend to be really hard: NP-hard or QMA-hard and so on. The classical analogue is asking: I give you a Boolean circuit, and is there any input that makes the circuit output one? You see, it's a global optimization problem; you have to search over all possible inputs. So the problem on the right is usually very hard, while the problem on the left is usually easy, provided your computer has the resources to simulate the system. Classical versions of the simulation problem will be in P, and quantum versions should be in BQP; that's what you'd expect. Okay, I just wanted to make that quick note.
So there's no confusion about which problem we're talking about.

Okay, let's come back to Hamiltonian simulation, specifically the local Hamiltonian simulation problem; as I explained before, we're focusing on the specific class of Hamiltonians that's interesting, k-local Hamiltonians. Here's something you can observe when you look at this matrix H. As I was saying, you can think of this matrix H abstractly; it's an exponentially large matrix, 2^n × 2^n. What you'll notice is that if the Hamiltonian you're considering is local, then the matrix is sparse. What I mean by sparse is that even though the matrix has 2^n possible entries in a row, at most d of them are nonzero, where d is something like linear in the number of terms you're summing over. This Hamiltonian is a sum of m terms, so d is like order m; we're thinking of k as a constant, so it doesn't matter that there's a 2^k factor in there. These matrices are extremely sparse, because you want to think of the number of terms in the Hamiltonian as being polynomial in n. For example, if each term is 3-local, you can have at most something like n³ terms in your Hamiltonian. So it's a matrix of exponential size, but with very few nonzero entries, and furthermore these nonzero entries appear at structured locations.
It's a structured sparse matrix. It's not just any old sparse matrix with d nonzero entries in a given row or column: you can actually figure out where the nonzero entries are. And once you make this observation, it's easy to generalize this problem to the problem of simulating any Hamiltonian that's sparse. We throw away the local structure; all we care about is the fact that the Hamiltonian is sparse. Formally, I'll define the problem as: you're given a d-sparse Hamiltonian, where d refers to the maximum number of nonzero entries in any row or column. And how are you given this Hamiltonian? Via some kind of oracle: a classical algorithm or circuit, some efficient procedure that, when you feed it x and y, tells you the x-th nonzero entry in row y. So this is a completely abstract way of capturing the Hamiltonian simulation problem for this very general class of Hamiltonians, sparse Hamiltonians, which captures local Hamiltonians and also other things that are not local, and which is a class of Hamiltonians we'd like to simulate on a quantum computer.

So, sparse Hamiltonian simulation. Before I start talking about what we know about this problem: why did I generalize to sparse Hamiltonians? One reason is that all the known simulation algorithms for local Hamiltonians readily generalize to sparse Hamiltonians, in the sense that you just look at what the algorithm is doing and you realize it would also work for sparse Hamiltonians. So why not study the more general problem? It gives you more power; you're able to express a richer class of systems. So, you know, why not study this richer class of systems?
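To make the oracle model concrete, here's a toy classical sketch. The interface and the tridiagonal example matrix are my own illustrative choices, not from the talk: an oracle that, given a row y and an index x, returns the column and value of the x-th nonzero entry of a 3-sparse Hermitian matrix.

```python
import numpy as np

n = 16  # dimension of the toy matrix (think 2^qubits, tiny here)

def oracle(y, x):
    """Column and value of the x-th nonzero entry in row y (hypothetical interface).

    The matrix is a tridiagonal "hopping" Hamiltonian, so each row has at
    most d = 3 nonzero entries at predictable (structured) locations.
    """
    cols = [c for c in (y - 1, y, y + 1) if 0 <= c < n]
    c = cols[x]
    return c, (2.0 if c == y else -1.0)

def row_degree(y):
    """How many nonzero entries row y has (at most 3)."""
    return 3 - (y == 0) - (y == n - 1)

# Rebuild the dense matrix from oracle calls, just to sanity-check the model
H = np.zeros((n, n))
for y in range(n):
    for x in range(row_degree(y)):
        c, v = oracle(y, x)
        H[y, c] = v

print(np.allclose(H, H.T))                      # Hermitian (real symmetric here)
print(max(np.count_nonzero(row) for row in H))  # sparsity: at most 3 per row
```

The point of the query model is that an algorithm only ever calls `oracle`; it never touches the exponentially large matrix itself.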
More interestingly, I've been told that these kinds of Hamiltonians actually do arise for physical systems, in particular Hamiltonians that are sums of products of Pauli operators. Such a term isn't a local operator, because it can act with a Pauli operator on every single qubit. These kinds of Hamiltonians arise in fermionic systems when you apply the Jordan-Wigner transformation, for example. So that's another reason you might care: these Hamiltonians might actually arise in modeling physical systems.

Perhaps the most interesting reason, or one of the more interesting things from my perspective, is that this has algorithmic applications. Hamiltonian simulation is such a rich and general technique that you can use it as a subroutine in quantum algorithms for solving other problems: evaluating game trees and NAND trees; solving linear systems of equations, which is the HHL algorithm; solving the glued-trees problem, which was one of the first examples of an exponential speedup by a continuous-time quantum walk. What's common to all of these algorithms is that they all use Hamiltonian simulation as a subroutine, and all of them crucially use the fact that the Hamiltonian is sparse, and usually it's not local.
So we need this generalization to sparse Hamiltonians. And finally, one of the reasons I really like thinking about this general problem of sparse Hamiltonian simulation is that you can now model it as a query complexity problem. What that means is you count the number of uses of this oracle, the one that's telling you the nonzero entries of your Hamiltonian. Once you map a computational problem to a query complexity problem, where you count the number of times you use some kind of oracle, it becomes possible to even prove lower bounds. In general we, and by "we" I mean complexity theorists, are terrible at proving concrete gate lower bounds for anything; it's really hard to prove that kind of thing in general. But once you go to this model of query complexity, at least we can prove lower bounds. For proving upper bounds, it also abstracts away a lot of the details: you can forget about specific implementation issues and architecture issues and all that, and just focus on the core of the problem.

Okay, so that's kind of why I think this problem is interesting and why it's nice to model it this way. And I think a lot of other people agree, because there's been a tremendous amount of work on sparse Hamiltonian simulation, and all these works generally consider this general case of simulating the time dynamics of a sparse Hamiltonian.

Okay, that concludes the intro part of my talk. Let me get to this claim I made, that there's been a lot of interesting work. I want to give you a rough idea of what's going on in this field: what are the new algorithms we've been developing, and what do they roughly look like?
I mean, I won't have time to actually go into the details of any one algorithm, but I'll try to give you a high-level perspective of how these algorithms are designed. All the known Hamiltonian simulation algorithms, whether for sparse or local Hamiltonians, essentially fit into two paradigms; there are just two classes of Hamiltonian simulation algorithms. I call these the divide-and-conquer algorithms and the quantum walk algorithms (I hope that's readable), and I'm going to explain the high-level idea behind both.

So how do you solve the Hamiltonian simulation problem using divide and conquer? Let me start with that. In the divide-and-conquer paradigm, as the name probably suggests, we're going to divide the problem into smaller pieces, and what that means is you decompose the Hamiltonian H into a sum of easier Hamiltonians. What does "easier" mean? Easier just means that you're able to solve the Hamiltonian simulation problem for that Hamiltonian. For example, if your Hamiltonian is already local, then you've been given a decomposition into a sum of local terms, and each of those pieces H_j acts on, let's say, three qubits at a time. You're able to solve the Hamiltonian simulation problem for a three-qubit Hamiltonian because it's just three qubits; it's some circuit on a constant number of qubits, so you can just write down a unitary that does it. So this is an example of a situation where you've broken up the problem into small problems you can solve.

More generally, for sparse Hamiltonians, there's an algorithm that can decompose an arbitrary sparse Hamiltonian into a sum of order d² simpler Hamiltonians, where "simpler" means 1-sparse, and 1-sparse just means every row or column has at most one nonzero entry.
That's another class of very simple Hamiltonians that we can simulate. The thing to notice here is that if your local Hamiltonian is d-sparse, the decomposition you're given already breaks it into order d pieces, whereas for general sparse Hamiltonians we can only break it into d² pieces; that's the best algorithm we have. This is also one of the downsides of the divide-and-conquer approach: it causes all the known algorithms in this paradigm to have at least d² complexity, while the optimal complexity is more like order d. So for all the algorithms I'll sketch in this divide-and-conquer paradigm, the scaling with the sparsity parameter will always be at least d²; they won't be optimal, but they have other benefits. And in particular, for local Hamiltonians you don't face this d² issue, so maybe it doesn't matter if the Hamiltonian you're interested in is local.

Okay, so that's just step one. A divide-and-conquer algorithm traditionally has two steps: the dividing part, and then the conquering, getting the final solution to the problem you actually wanted to solve. So it's great that you can simulate these individual pieces; that's fine, you can solve the problem on these smaller instances. But what do you do with these smaller solutions?
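One natural answer, previewed here numerically before the formal discussion: simulate each piece for a short time and repeat. This is a toy check of a first-order product formula, with three non-commuting 2-qubit pieces of my own choosing; it's a classical sketch of the math, not a quantum circuit.

```python
import numpy as np

def expi(M, s):
    """e^{-i M s} for Hermitian M, via eigendecomposition."""
    lam, V = np.linalg.eigh(M)
    return V @ np.diag(np.exp(-1j * lam * s)) @ V.conj().T

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

# Three pieces A, B, C (they don't all commute) and the target e^{-i(A+B+C)t}
A, B, C = np.kron(Z, I2), np.kron(I2, Z), np.kron(X, X)
t = 1.0
exact = expi(A + B + C, t)

errs = []
for r in (1, 10, 100):
    # simulate A, then B, then C, each for time t/r ... and repeat r times
    step = expi(A, t / r) @ expi(B, t / r) @ expi(C, t / r)
    approx = np.linalg.matrix_power(step, r)
    errs.append(np.linalg.norm(approx - exact, 2))

print(errs)   # the error shrinks as r grows
```

For this first-order formula the error decreases roughly like 1/r; the higher-order formulas mentioned below do better.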
And that's step two, which is to recombine these Hamiltonians. The simplest method of recombining is very well known in the community, and it's known by many different names; I'm just going to call it product formulas, to avoid having three people's names up there, but they're often called Trotter formulas, Lie-Trotter formulas, or Suzuki formulas. Anyway, they're formulas of the following type: they express the exponential of a sum of Hamiltonians as a product of their exponentials. Let's say your Hamiltonian is broken up into three pieces A, B, and C, and you want to simulate e^{-i(A+B+C)t}. The formula says that, roughly, what you can do instead is simulate A for time t/r, then B for time t/r, then C for time t/r, and repeat this process r times. If you choose r large enough, the errors will be small. Of course, you have to compute what the errors are and so on to formally prove a theorem, but this idea works great, especially when your Hamiltonian is local, and it was the basis of all Hamiltonian simulation algorithms until, let's say, five or ten years ago.

More recently, in 2013 and 2016, new techniques were developed for doing this recombination step, which I'm not going to explain at this point, but I'll get back to them in two slides. The first technique I like to call linear combination of unitaries plus oblivious amplitude amplification, which I'll sometimes abbreviate LCU plus OAA, and the second technique is called quantum signal processing. These techniques are in general better than product formulas: they give you better recombination methods and better algorithms in terms of query complexity.

Okay, so this is an overview of the first paradigm: divide and conquer. Break the Hamiltonian into pieces, solve the individual pieces, and recombine them using any one of these three techniques. Okay, the other thing you can do is use the second
paradigm, which I call the quantum walk algorithms paradigm. This one is probably easiest to explain with an example; I won't explain formally what's going on. What you do is come up with a quantum walk. It doesn't matter what this is or how it's constructed; it's just some operator W that you can implement as a quantum circuit. So forget about the details of how you come up with this W; it's just some unitary that you know how to perform on your quantum computer. What's interesting about W is that it's diagonal in the same basis as H, but its eigenvalues are off from what you would like them to be.

So let's write down H, the Hamiltonian we're trying to simulate, in the basis in which it's diagonal, its eigenbasis; it has some eigenvalues λ₁, λ₂, and so on. We want to solve the Hamiltonian simulation problem, which means we want to implement the unitary e^{-iH}; let's say t = 1, so forget about time. This unitary is also diagonal in the basis in which the Hamiltonian is diagonal, and the eigenvalues get mapped accordingly: λ₁ gets mapped to e^{-iλ₁}. That's your target unitary; this is the one you want to implement.

And the quantum walk gives you this other unitary W which (I'm shoving some details under the rug, it doesn't really matter) is essentially diagonal in the same basis, except the eigenvalues are off from the ones you would like. What you would like for the first eigenvalue is e^{-iλ₁}, but what you actually get is e^{-i arcsin(λ₁)}. You also have to worry about normalizing the Hamiltonian correctly so that the arcsine function is defined, and so on, but let's forget about those details. The thing to remember is just that the eigenvalues aren't what you want them to be; they're different.
It would be great if that were just e^{-iλ₁}: then you would be done, and the walk operator itself would implement the unitary you're trying to perform. But it doesn't. Okay, so obviously step two is going to be using this walk operator to get the unitary that you want. But let me point out one or two downsides of this approach. The great thing is that this unitary W, the quantum walk, can be implemented using very few queries to our oracle, just a constant number of queries; that's great. The downside is that it needs 2n qubits to implement. Recall that your system is on n qubits, so the best algorithm would use exactly n qubits, the size you need to represent the input and output. So this is a downside: it uses n additional qubits, n ancilla qubits. The other downside is that it needs you to compute trigonometric functions in superposition: you need to come up with quantum circuits that compute things like arcsine and arccosine in superposition. This is not something you can do as an offline computation on a classical computer. In theory you can compute trigonometric functions relatively straightforwardly.
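To see concretely where the trigonometric functions come in, here's a purely classical toy of the phase mismatch, taking the arcsine relation above at face value and suppressing all the quantum-walk details: the walk's eigenphases are arcsin(λ_j), and "fixing" them amounts to reading off a phase θ and replacing it with sin(θ).

```python
import numpy as np

# Eigenvalues of a normalized Hamiltonian (|lambda| <= 1 so arcsin is defined)
lam = np.array([-0.9, -0.3, 0.2, 0.7])

# What the walk gives (written in its eigenbasis): phases arcsin(lambda_j) ...
walk = np.diag(np.exp(-1j * np.arcsin(lam)))

# ... and what we actually want: phases lambda_j, i.e. e^{-iH} with t = 1
target = np.diag(np.exp(-1j * lam))

# The correction: extract each eigenphase theta, map it to sin(theta)
theta = -np.angle(np.diag(walk))
fixed = np.diag(np.exp(-1j * np.sin(theta)))

print(np.allclose(fixed, target))
```

Classically this correction is a one-liner; on a quantum computer, the phase-estimation approach below has to apply this sine map coherently, in superposition over eigenstates, which is where the expensive trigonometric circuits come from.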
I mean, there are linear-time algorithms for doing so, but in practice this is really hard and the circuits are terrible. So if you're worried about actually implementing this, it's not so great.

Okay, so that's step one: I've given you an operator that almost does what you want, but it doesn't do exactly what you want, and you need to fix this. Step two is fixing the wrong phases, and again there are three different ways to do it. The first way that was discovered uses the phase estimation algorithm, which is kind of beautifully tailored for doing exactly this sort of thing: with phase estimation you can just read off the phases, figure out what the phases should have been, and correct them. So that's really nice and straightforwardly gives you an algorithm for doing this. Again, the other two ways are the two approaches I mentioned on the previous slide: linear combination of unitaries plus oblivious amplitude amplification, and quantum signal processing. So as before, there are three different ways to fix this issue as well.

Okay, so in both of these paradigms, what you saw is that in step two there are two ways of implementing it that are not phase estimation or product formulas, namely these two methods of LCU plus OAA, as I've called it, and quantum signal processing. And I wanted to take just a minute to give you a high-level idea of what's going on. These are somewhat involved techniques and I won't be able to give you any details, but maybe you can take away one slide about what these techniques do. These are the real innovations of the last four or five years in Hamiltonian simulation; these are the techniques that have brought down the query complexity significantly from what we knew before.

Okay, so the first technique, LCU plus OAA, you can just think of as allowing you to do a very simple operation. What it allows you to do is: if the unitary you're
interested in implementing is called U, and you write it as a sum of unitaries V_i with some coefficients a_i, then this technique gives you a method of performing U if you have a way of performing the V_i. In other words, if the V_i's are easy to implement on your quantum computer, maybe they have low query complexity or low gate complexity, then you now have a method of performing U. So if you can write the unitary you're interested in as a linear combination of known, easy-to-implement unitaries, then you're good to go. That's kind of the one-sentence summary of what this technique does. And in all of these approaches, what you do is express the unitary you're interested in, which is e^{-iHt}, as a linear combination of easier unitaries: in the divide-and-conquer approach you express it in terms of the pieces you've broken it up into; in the quantum walk approach you express it as powers of the quantum walk.

Okay, that's the first technique. And quantum signal processing solves this very interesting and general question. You have access to a unitary R(θ), which is that diagonal unitary over there, for some unknown θ. So you don't know what θ is; someone just gives you a circuit that implements R(θ), but you can't look at the circuit and figure out what θ is. And the question is: what kinds of unitaries can you now perform using R(θ) and
other gates of your choice? The other gates you use are of course independent of θ, because you don't know what θ is. So for example, we can perform this gate, and that's because it's just R(θ) squared: you just apply the gate twice, so you can do that. Then you might ask whether we can perform this gate over here, which looks kind of like R(θ) but isn't the same gate. And if you think about it for a bit, you'll see that yes, you can: all you do is apply the Pauli X gate followed by R(θ), and you'll get this gate over here. Then you can ask about this other gate over here, and you can figure it out; it actually turns out it's just a product of these two gates over here, so you can still do it. But more generally, if I give you some gate whose entries are all functions of θ, can you or can you not realize it as a product of R(θ)'s and θ-independent gates? And if you can, how many of them do you need? This is the general question answered by quantum signal processing, and it gives a full if-and-only-if characterization of when this is possible.

Okay, so that's my half-slide summary of what quantum signal processing is. I hope that was beneficial, but I won't go into more details of how these things work. On the next slide, what I can do is tell you what these new algorithms are. If you remember, I said there are two paradigms, and in each of those paradigms there are three different ways of fixing the issue, whatever the problem was, and that gives you six different Hamiltonian simulation algorithms. These are all put up here on this slide. The first column says which paradigm the algorithm uses, divide and conquer or quantum walks, and here's the complexity in terms of the three parameters.
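Both techniques can be illustrated with toy numerics. For LCU, a minimal example of my own (not the decomposition the real algorithms use): e^{-iθX} is a linear combination of the two easy unitaries I and X. For quantum signal processing, the two gate examples above, under an assumed convention R(θ) = diag(e^{iθ}, e^{-iθ}); the slide's convention may differ.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli X: a theta-independent gate
theta = 0.7

# --- LCU toy: U = e^{-i theta X} = cos(theta)*I + (-i sin(theta))*X ---
lam, V = np.linalg.eigh(X)
U = V @ np.diag(np.exp(-1j * lam * theta)) @ V.conj().T   # target unitary
lcu = np.cos(theta) * I2 + (-1j * np.sin(theta)) * X      # linear combination
ok_lcu = np.allclose(U, lcu)

# --- QSP toys, with R(theta) available only as a black box ---
def R(t):
    return np.diag([np.exp(1j * t), np.exp(-1j * t)])

# doubled phases: just apply R(theta) twice
ok_double = np.allclose(R(theta) @ R(theta), R(2 * theta))

# the off-diagonal look-alike: Pauli X first, then R(theta)
expected = np.array([[0.0, np.exp(1j * theta)],
                     [np.exp(-1j * theta), 0.0]])
ok_flip = np.allclose(R(theta) @ X, expected)

print(ok_lcu, ok_double, ok_flip)
```

Note the circuits only ever use R(θ) as a whole and gates that don't depend on θ, which is exactly the constraint in the QSP question.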
We care about the sparsity of the Hamiltonian, the time you want to simulate the Hamiltonian for, and the error parameter, and we want as low a complexity as possible in all three of these parameters. Until 2013 or so, the best algorithms were just the first two on this slide. As you can see, the one based on divide and conquer has a scaling in d that's more than d²; as I said, that's an inherent limitation of the divide-and-conquer technique. The second algorithm is better, except that its dependence on ε is worse: it has a 1/√ε factor, whereas the other one has an arbitrarily small power of 1/ε, namely (1/ε)^{1/2k}, where you can take k to be a very large number. And in 2013 we discovered the first algorithms where the dependence on ε is only logarithmic. This is really great: if you want your output to be precise to ε, you only have to spend log(1/ε) resources.
This is the kind of complexity you want from algorithms. For example, if you're trying to compute some number like π to precision ε, you want your algorithm to scale like log(1/ε), not like 1/ε. So this was the first time we had Hamiltonian simulation algorithms that were extremely accurate. This one, as you can see, still has the d² dependence, because it's a divide-and-conquer algorithm. Later there was an algorithm based on quantum walks that fixes that and gets complexity linear in sparsity, linear in time, and logarithmic in 1/ε. And what's nice is that a lower bound was also shown: you cannot do better than d·t, and you also cannot do better than log(1/ε)/log log(1/ε). So it looks like that algorithm is pretty tight; except it's not, because the lower bound is a sum of two things and the upper bound is a product of two things, and the product can be bigger. So there was some room for improvement, and last year and this year there were two new algorithms, based on this quantum signal processing paradigm, that improve the query complexities; and now, as you see, they get this sum dependence.

It looks pretty good; it looks like we've essentially reached the lower bound, except it's off by a log log term. So let me focus on that for a second and get rid of all the extraneous details. We have this lower bound that says you need d·t + log(1/ε)/log log(1/ε) complexity, whereas this algorithm achieves something that's a little worse. Except it's not, really: that's only the cleanest expression I could fit in that cell. The complexity of the algorithm is actually order Q, where Q is the smallest integer satisfying this equation. So I don't have a closed-form expression for this.
I don't know how to express it nicely in big-O notation using just d, t, and 1/ε, but this is the complexity: it makes order Q queries, where Q is the smallest integer satisfying that equation. What's perhaps even more interesting is that the lower bound matches this. If you go look at these lower bounds, do a little more work, combine the two different lower bound techniques, and work through the math, you get exactly the same equation: you need Ω(Q) queries, where Q satisfies this exact same equation. So this very nasty function of Q, which I don't even know a closed-form expression for, is the right answer for the query complexity of sparse Hamiltonian simulation. This question is now answered, if all you care about is the query complexity of this problem. So that's really nice, and it's also nice that the answer is not something simple that looks like this; it's some complicated thing, and that really is the right answer. It's not an artifact of our analysis techniques being weak or anything; this is just the right answer.

Okay, so that's kind of what's been going on in the last couple of years. I have about 15 minutes left, including questions, so let me just say what's left. It might seem at this point that the story is concluded, that we've solved everything and there's nothing interesting left in query complexity.
We should now focus on the actual Hamiltonians you care about for your desired application. But there are a couple of problems that I think are interesting and open, so let me focus on them for the next couple of minutes.

Okay, so the first one: let's get back to what I was just talking about. The conclusion was that quantum signal processing has optimal query complexity, given by this complicated function. Okay, whatever, that's great. The downside of using this algorithm, as you can see, is that it's a quantum-walk-based algorithm, and when I introduced those I said they had two downsides. The first is that you need at least n additional qubits. If you're trying to simulate a system of size n, what you'd like is for your simulation algorithm to also use something like n qubits; maybe it can use some ancilla qubits, but not too many, maybe log n or polylog n, that would be great. Using n additional qubits is not as desirable. The other, perhaps more serious issue is that it needs this computation of trigonometric functions in superposition. These are really hard to implement as quantum circuits, and they blow up the gate counts by a lot.

So what I think is still a very interesting open problem is what I'm calling best-of-both-worlds sparse Hamiltonian simulation: an algorithm that is optimal in query complexity, so it makes the minimum possible number of uses of the oracle, and uses a small number of ancilla qubits (polylog is probably fine), and also has "no trig functions". That last one isn't really a precise, formal mathematical statement, but what I mean is just that it shouldn't be doing exotic gates.
It should have reasonable gate complexity. This problem is still open. There might be many ways to attack it, but I think the best way is probably still to go back to the beginning, to the divide-and-conquer approach that divides your Hamiltonian into d^2 easy Hamiltonians, and break it up into only d easy Hamiltonians instead. Maybe your notion of "easy" has to change or something, but I think this is perhaps one of the most interesting questions remaining in Hamiltonian simulation.

Another interesting question, slightly different from the ones we've just been talking about, concerns the table I presented. The more careful among you may have noticed that there's something about the norm of the Hamiltonian in it. All of these complexities seem not to mention the norm of the Hamiltonian, but they have to, because the operator you're trying to implement, e^{-iHt}, is invariant under scaling the Hamiltonian up and scaling the time down: you can multiply H by a factor of a thousand and divide t by a factor of a thousand, and Ht remains the same. So if the complexity depended only on d, t, and epsilon, you could push all of the time dependence into the norm of the Hamiltonian. You therefore have to put some upper bound on some norm of the Hamiltonian; it doesn't matter which one. All of these algorithms traditionally bound the max norm, which is just the largest entry of the Hamiltonian; by saying the max norm is one, we're saying all the nonzero entries of the Hamiltonian have absolute value at most one. The choice of norm is not pinned down by this scale-invariance argument, which says only that you have to use some norm, and there are good reasons to study the max norm.
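The scale-invariance point above is easy to check numerically. Here is a minimal sketch (numpy/scipy, with a small random Hermitian standing in for the Hamiltonian; the example is mine, not from the talk):

```python
import numpy as np
from scipy.linalg import expm

# e^{-iHt} depends only on the product H*t: scaling H up by c and t down
# by c leaves the evolution operator unchanged. H here is a small random
# Hermitian stand-in for an actual Hamiltonian.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

t, c = 0.7, 1000.0
U1 = expm(-1j * H * t)
U2 = expm(-1j * (c * H) * (t / c))

# The two evolutions agree up to floating-point rounding.
print(np.linalg.norm(U1 - U2, ord=2))
```

Any complexity bound phrased only in terms of t and epsilon would be defeated by exactly this rescaling, which is why some norm bound on H has to enter.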
But another norm is very interesting; in fact, if I were just talking about Hamiltonians and their norms, people would first think of the spectral norm. So what can we do when we know that the spectral norm of the Hamiltonian is at most one? In particular, this implies that the largest entry of the Hamiltonian is at most one, as you can check. I think this is an interesting open problem for Hamiltonian simulation, because we don't have tight upper and lower bounds. What we know is that there are two different algorithms that scale like either d^{2/3} or d^{3/4}; one has better scaling in d, the other has better scaling in epsilon. And there's a very easy lower bound of sqrt(d) that you can show, and I think sqrt(d) is the right answer. So one of the interesting remaining open problems is: what can you do when you have a bound on the spectral norm? Since I think sqrt(d) is the right answer, I'll phrase it as: find an algorithm with sqrt(d) dependence on the sparsity.

Let me motivate why you would care about this; there are at least two interesting reasons. One is the problem called black-box simulation of unitaries. It's essentially the same as the Hamiltonian simulation problem, but it's the more natural problem from a discrete, computer-science perspective. The problem is the following: I have a unitary in mind that I want to implement on my quantum computer. There's no Hamiltonian, no time evolution; there's just a unitary in my mind, and it's sparse, so it's in some sense easy to describe: I have a way of computing all the nonzero entries of this unitary, some efficient description of it. How quickly can you implement it? This problem is exactly reducible to the problem of simulating Hamiltonians with norm at most one.
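This reduction can be sketched numerically with the standard dilation trick, taking H = [[0, U], [U†, 0]]; the specific construction below is my illustration, not something spelled out in the talk:

```python
import numpy as np
from scipy.linalg import expm

# Dilation of a unitary U into a Hermitian H = [[0, U], [U^dag, 0]].
# H squares to the identity, so its spectral norm is 1, and evolving
# under H for time pi/2 applies U (up to a global factor of -i) while
# flipping an ancilla:  e^{-i H pi/2} (|1> (x) |psi>) = -i |0> (x) U|psi>.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)                          # a random 4x4 unitary

Z4 = np.zeros((4, 4))
H = np.block([[Z4, U], [U.conj().T, Z4]])
assert np.isclose(np.linalg.norm(H, ord=2), 1.0)  # spectral norm is 1

psi = np.zeros(4, dtype=complex)
psi[0] = 1.0
state = np.kron([0.0, 1.0], psi)                # |1> (x) |psi>
out = expm(-1j * H * np.pi / 2) @ state

expected = np.kron([1.0, 0.0], -1j * (U @ psi))  # -i |0> (x) U|psi>
print(np.linalg.norm(out - expected))
```

Note that H has essentially the same sparsity as U, so a sqrt(d) algorithm for spectral-norm-bounded sparse Hamiltonian simulation would immediately give a sqrt(d) black-box implementation of sparse unitaries.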
The norm condition comes from the fact that unitaries have norm one. So this, I think, is a fundamental problem that's still open, and the same gap exists for it: we don't know where the answer lies between d^{2/3} and sqrt(d).

Another interesting problem, maybe with applications in mind, is the quantum linear systems problem. This is the HHL algorithm, if you've heard of it; I won't explain what it does. I'll just say that the complexity of the best versions of these linear-system-solving algorithms scales linearly in the sparsity of the matrix involved. You're trying to solve a linear system of equations Ax = b, where A is a d-sparse matrix, and the complexity scales linearly in the sparsity d, linearly in kappa, the condition number of the matrix, and logarithmically in 1/epsilon. This is great, except that if you improve Hamiltonian simulation with bounded spectral norm to order sqrt(d), I think you can massage the solution into a linear systems solver with better query complexity; it should go down to sqrt(d). Another reason I think sqrt(d) is the right answer is that in some very recent work with Aram Harrow, we were able to prove a lower bound that goes like sqrt(d) times kappa times log(1/epsilon). The lower bound giving you sqrt(d) doesn't necessarily mean that's the right answer, but I think that's probably the right expression; the lower bound is closer to the truth, and we just need to improve the upper bounds. One way of doing that is by going through this Hamiltonian simulation with bounded spectral norm.

Okay, and finally, let me mention one other open problem.
This one is not about query complexity; it's more tangible, about the total number of gates needed to simulate the time dynamics of a system, and it's related to the gate complexity of a very simple Hamiltonian. People over the years have asked me this question many times. Consider the simplest possible Hamiltonian you can think of: a one-dimensional line of qubits. It's a 2-local Hamiltonian in which only nearest neighbors interact, so it's just a sum of terms: a term for qubits 1 and 2, a term for 2 and 3, a term for 3 and 4, and so on. It's a very simple Hamiltonian with n terms on n qubits, but all the algorithms I talked about today have gate complexity something like n^2 t, or worse; the best of them go like that. It seems like if you want to simulate the system for constant time, you shouldn't need n^2 gates. Admittedly, nothing much is really happening in constant time with this Hamiltonian, but it's reasonable to expect a linear number of gates: you need to touch all of the qubits, and all the qubits need to evolve with time. But n^2 sounds like a lot, and these circuits will have depth n. It seems like you shouldn't really need depth n to simulate the time dynamics of such a simple-looking Hamiltonian, but I don't know any better way of doing it, and I think this is an interesting open problem: what is the gate complexity? If I had to conjecture, I'd say it's something like order n t, but I don't know. If you have any ideas, please let me know. Okay, those are the open problems I wanted to talk about.

Finally, let me just spend 30 seconds advertising something that my group at Microsoft is working on: a new quantum programming language. It's going to be great; it's high level.
This is code in this new programming language for the quantum teleportation circuit. It's going to let you write software the way we theorists design algorithms, calling subroutines and so on: call Grover search as a subroutine, or call a Hamiltonian simulation subroutine, because that's how we think about algorithm design. It's going to be available soon; it'll be free to download, it integrates with Visual Studio, and you can simulate it on a laptop, that kind of thing. I think it's going to be great, and you can find more information on this website. All right, thanks. Questions?

[Audience]: In practical applications of these algorithms, one thing I've run into in the past is that for the product formulas, because they often end up having a spectral norm in the bound, it's often possible, if you're interested in simulating only certain states, to tighten the bounds substantially. For instance, if you're starting in a state in a fixed-particle-number manifold and all the terms in the Hamiltonian conserve particle number, then you really don't need to be concerned with the eigenvalues of states in a different particle-number manifold, and this often means you can just look at the spectral norm of one block of the Hamiltonian, which is often much smaller, sometimes asymptotically smaller. However, with the LCU approaches and a lot of the quantum walk approaches, you have this induced one-norm of the Hamiltonian that enters as a normalization factor, and it doesn't seem like it's possible to take advantage of those symmetries in those algorithms. So I'm wondering whether you think this is a fundamental property of those algorithms, or whether it's in principle something that could be improved in the future.

[Robin]: Yeah, that's a great question.
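The block-structure point in this question can be illustrated with a toy example; the block-diagonal Hamiltonian below is my construction, chosen only to make the norms visible:

```python
import numpy as np
from scipy.linalg import expm

# A Hamiltonian that conserves a quantum number is block diagonal in the
# corresponding sectors. The blocks here are random Hermitians (a toy
# construction); one sector is scaled up to have a much larger norm.
rng = np.random.default_rng(2)

def random_hermitian(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

B_low = random_hermitian(4)            # the sector our initial states occupy
B_high = 100.0 * random_hermitian(4)   # a high-norm sector we never enter
Z4 = np.zeros((4, 4))
H = np.block([[B_low, Z4], [Z4, B_high]])

# States starting in the low sector stay there, evolving under B_low alone.
psi = np.zeros(4, dtype=complex)
psi[0] = 1.0
full = expm(-1j * H * 1.0) @ np.concatenate([psi, np.zeros(4)])
block = expm(-1j * B_low * 1.0) @ psi

print(np.linalg.norm(H, ord=2), np.linalg.norm(B_low, ord=2))
print(np.linalg.norm(full[:4] - block), np.linalg.norm(full[4:]))
```

A product-formula error bound applied inside the low sector only involves the norm of that block, while LCU-style normalization factors are set by the full decomposition of H, which is the asymmetry the questioner is pointing at.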
Let me just rephrase the question in my words for everyone. The question is basically: suppose your Hamiltonian breaks up into subspaces, with high energy in one subspace and low energy in another, and all your input states lie in the low-energy subspace, so your simulation is always going to stay there. Can we use this fact to reduce the complexity? It doesn't look like the approaches based on LCU or quantum signal processing do that, and I'm not aware of any way of doing it, but I think it's a great question. Now that you mention it, it's an excellent question that I should have included in my open problems. Thanks.

[Audience]: With regard to the roughly 2n qubits required: it looks like the Szegedy-type quantum walks are the main reason, right? They use two copies of the vertex register. So I'm just wondering about Belovs's work, which simplifies this a little bit. I don't know if you've thought of looking into that.

[Robin]: Does Belovs's work reduce the number of qubits, meaning the size of the registers you need? I'm not sure. Which specific work of Belovs are you referring to?

[Audience]: I can't remember the name of the paper, but he improves two things, I think. One is that Szegedy-type quantum walks require you to start in the quantum state corresponding to the stationary state of the Markov chain; Belovs kind of gets rid of that part. And you also don't have to know the graph in advance. But I'm just wondering, because I think the number of qubits also gets improved: the walk takes place on the graph plus a few more vertices, not on a graph of twice the size.

[Robin]: Yeah, I'm not sure; I'll look at that work. It seems possible.
I mean, you don't need to store both vertices: all you need to store is one vertex and a register telling you which neighbor, and that's only a register of size log d. But I haven't seen an implementation that actually makes that work for this class of walks. I'll look into Belovs's work, though.

[Audience]: Hey Robin, great talk, thanks a lot. One comment I wanted to make: another problem I've always seen is that the way the error ends up scaling is fundamentally different between the product formula approximations and all of the post-Trotter approaches. Say you have a Hamiltonian that consists of commuting terms, but your oracle doesn't necessarily tell you that those terms commute. With a Trotterized decomposition the error is zero, but with the other methods you have to pay a logarithmic cost in order to make the error small. So I think that figuring out whether there are methods that can interpolate between the two is an interesting question.

[Robin]: Right, yeah, that's a great question. If I can rephrase it: none of the techniques other than the product formula approach actually scales better when you know that all the terms of the Hamiltonian commute. If all the terms commute, you can just simulate each of them individually, and you'll get optimal scaling; that's clearly the best thing you can do when everything commutes, since you can think of the terms as independent problems. But other than the Trotter formulas, where you can take advantage of this fact, none of the other methods has this feature. So that's an interesting question: can these newer techniques exploit the fact that things get easier when terms commute, and degrade gracefully back to this trivial solution?
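The commuting-terms observation rephrased here is easy to verify numerically. In the sketch below (my choice of example) all terms are diagonal ZZ couplings on a line, so they commute and the first-order product formula is exact:

```python
import numpy as np
from scipy.linalg import expm

# All ZZ terms on a line are diagonal, hence mutually commuting, so the
# first-order product formula reproduces e^{-iHt} with zero Trotter error.
n, t = 4, 0.9
Z, I2 = np.diag([1.0, -1.0]), np.eye(2)

def zz(j):
    """Z_j Z_{j+1} on n qubits."""
    ops = [I2] * n
    ops[j], ops[j + 1] = Z, Z
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

terms = [zz(j) for j in range(n - 1)]
H = sum(terms)

product = np.eye(2 ** n, dtype=complex)
for Hj in terms:
    product = product @ expm(-1j * Hj * t)

# Zero error up to floating point: no Trotter steps needed at all.
print(np.linalg.norm(product - expm(-1j * H * t), ord=2))
```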
[Robin]: Yeah, I think it's a great question, but I don't think we know the answer to that. So, one more question?

[Audience]: Hi, great talk. I want to ask about the query bounds you showed. They're great results, obviously, but clearly they're asymptotic, and especially for early quantum computers, constants matter. And secondly, the implementation cost is just as important and in some cases can be larger than the query complexity. So I was wondering whether you have any sense of when these more sophisticated algorithms actually give, say, gate-count advantages.

[Robin]: Sorry, could you repeat the last sentence?

[Audience]: At what kind of scale, say a hundred qubits or a thousand, or for what particular problem, do these more sophisticated methods actually give lower gate counts? I'm asking about the constants, basically.

[Robin]: Okay, that's a great question: what are the actual constants, and how do they scale for a specific Hamiltonian? I'm going to answer that question with a 45-minute talk; that's the next talk, so it will answer your question, and I'll let Andrew do that. So let's move on to the next talk, and let's thank our speaker.