So I'm going to be talking about quantum linear algebra over the course of the week. I haven't totally pinned down what I'll cover, but it will be in that area, and I'll basically be following the lecture notes, which I believe I just put on my website. For the first lecture, and probably the first few lectures, I'm going to be talking about quantum singular value transformation, or QSVT. This is a framework for doing quantum linear algebra, and it's a useful tool for quantum algorithms in general, because it's proven valuable in a variety of settings. And by the way, feel free to ask questions at any time.

The motivation for quantum singular value transformation comes from Hamiltonian simulation. Here the problem is that we have some Hamiltonian H, which is a matrix, but in particular it's a linear combination of terms, H = Σ_a α_a E_a, and what we'll use about these terms is that they are products of Pauli matrices: each E_a looks like P_1 ⊗ ⋯ ⊗ P_n, where the P_j are Pauli matrices. So H is a big object, but you can specify it with a small description. The reason you might study these things is that they appear commonly in physics, when people study many-body systems. For example, if you look at the Ising model, which is one of the standard models people study when they want to understand many-body systems, you can imagine putting a qubit on each site of a lattice, with local interaction terms between the sites: one such term could act as the identity on most of the qubits and couple a pair of adjacent qubits. A linear combination of such terms gives the interactions that govern the system.

The goal of the Hamiltonian simulation problem is to implement e^{-iHt} for some time t. You might recognize e^{-iHt} from the Schrödinger equation: we want to evolve a quantum system under the dynamics defined by the Hamiltonian. Early algorithms for this basically proceeded via the Trotter approximation. The idea is that you can approximate the full evolution by evolving the individual terms: e^{-iHt} ≈ (e^{-iα_1 E_1 t/c} e^{-iα_2 E_2 t/c} ⋯)^c. You evolve with respect to E_1 for a short time, then E_2 for a short time, and so on, and the approximation is good when c is large, so you just take c large enough. All of those individual evolutions are of Pauli products, so they're easier to implement, and you can use this to perform the unitary you want. The issue is that there are some inherent limitations if you try to apply this directly. I'm not an expert in Trotter approximations, but naively, if you want an ε-approximation, you might need c to be as large as poly(1/ε).
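Since we'll only gesture at Trotterization, here is a minimal NumPy sketch of the first-order Trotter product formula for a toy two-qubit Hamiltonian. The specific terms and coefficients are my own choices for illustration, not anything from the lecture; the point is just that the error shrinks as c grows (roughly like t²/c for the first-order formula).

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def evolve(H, t):
    """e^{-iHt} for Hermitian H, via the eigendecomposition."""
    evals, evecs = np.linalg.eigh(H)
    return evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

# Toy two-qubit Hamiltonian: a sum of two Pauli-product terms
E1 = np.kron(Z, Z)            # an interaction term between the two qubits
E2 = 0.5 * np.kron(X, I2)     # an on-site term (coefficient folded in)
H = E1 + E2

t, c = 1.0, 100               # total time, number of Trotter steps
step = evolve(E1, t / c) @ evolve(E2, t / c)
trotter = np.linalg.matrix_power(step, c)

err = np.linalg.norm(trotter - evolve(H, t), 2)
print(err)                    # first-order Trotter error, shrinks roughly like t^2 / c
```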
And if you try to apply this naively, you're going to ruin your gate complexity. There's a whole line of works on this problem that eventually achieved the optimal time complexity, and the approach ended up being the QSVT that I mentioned before. So let me sketch how you do this with QSVT. It basically proceeds by defining something called a block encoding. You can think of a block encoding as a quantum circuit with certain properties associated with the matrix it encodes. The second thing these algorithms observe is that you can implement a block encoding of your Hamiltonian H. Then you can use properties of block encodings to take your block encoding of H and turn it into a block encoding of p(H), for a polynomial p of your choice. Finally, you choose p in the right way so that p(H) approximates e^{-iHt}. That's the basic idea. And a block encoding is something you can apply to a quantum state, which is how these things get used. The rest of the lecture is just me explaining this. Any questions so far?

First, let me define this notion of a block encoding. There are a lot of different definitions in the literature; this is the one I'll use. Take a matrix A; it doesn't have to be Hermitian or even square, it can be rectangular. We say that U is a block encoding of A if U is a quantum circuit (so it's a unitary; I'm really referring to the unitary the circuit implements) such that A sits in the top-left corner of U. If you wrote U out as a matrix, it would look like

U = [ A  *
      *  * ],

where the other entries don't matter for us. Another way to write the same thing is B_L† U B_R = A, where, with A being r × c, B_L is the matrix formed by the first r columns of the identity and B_R is the same thing with the first c columns of the identity. It's the same statement, just written in a more linear-algebraic way. Finally, for the purposes of this talk I'm going to assume everything is a power of two, so we can write everything in terms of qubits, and then the condition says you can index into your unitary U as

(⟨0|^{⊗ a_L} ⊗ I) U (|0⟩^{⊗ a_R} ⊗ I) = A.

This rightmost form has an operational reading: I have a quantum state whose dimension matches the columns of A, I attach a_R ancilla qubits in the |0⟩ state and apply U, then I measure a_L qubits and post-select on the outcome being all zeros. If the measurement is all zeros, then, in effect, I applied A. That's how you should think of it. Great.
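To make the definition concrete, here's a minimal NumPy sketch of one standard way to put a square matrix with spectral norm at most one into the top-left corner of a unitary. This particular completion (a unitary dilation) is my own choice for illustration, not something from the lecture.

```python
import numpy as np

def psd_sqrt(M):
    """Matrix square root of a PSD Hermitian matrix, via eigendecomposition."""
    evals, evecs = np.linalg.eigh(M)
    return evecs @ np.diag(np.sqrt(np.clip(evals, 0, None))) @ evecs.conj().T

def dilate(A):
    """A standard unitary completion: A sits in the top-left block."""
    d = A.shape[0]
    I = np.eye(d)
    return np.block([
        [A,                             psd_sqrt(I - A @ A.conj().T)],
        [psd_sqrt(I - A.conj().T @ A), -A.conj().T],
    ])

A = np.array([[0.5, 0.2], [0.0, -0.3]])        # any A with spectral norm <= 1
U = dilate(A)
assert np.allclose(U @ U.conj().T, np.eye(4))  # U is unitary...
assert np.allclose(U[:2, :2], A)               # ...and block-encodes A
```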
There will be times, though I don't know if I'll get to it, where we say things are Q-block-encodings. What this means is that the quantum circuit has Q many gates, or O(Q). So I may include the gate complexity in the notion too.

Okay. So I've defined this block encoding; what is it good for? What's the point? Something you can observe immediately is that if U is a quantum circuit with Q gates, then U is a Q-block-encoding of itself, just because U is in the top-left corner of U. And in the same way that a quantum circuit maps |ψ⟩ to U|ψ⟩, a block encoding will have the property that we can convert copies of |ψ⟩ into copies of (the normalized) A|ψ⟩. The thing here is that A doesn't have to be unitary, so this is more general: it genuinely generalizes the notion of a unitary circuit. (To the question: yes, I'm saying a quantum circuit is a block encoding of the unitary matrix it implements.) Any other questions? Yes, there are restrictions on A: only certain kinds of A can be put in a block encoding. One simple thing you can say is that if A is in a block encoding, then the spectral norm of A is bounded by one. This is just because U has spectral norm one, and if A is a submatrix of U, then A has to be bounded by one in spectral norm. But apart from that, I don't think there are any restrictions.

Right. So, to formalize what I was saying before about applying A: if U is a block encoding of A, then there is a quantum circuit that takes a quantum state |ψ⟩, of dimension matching A (just take the dimensions so that everything matches), to the quantum state A|ψ⟩/‖A|ψ⟩‖, normalized to norm one, and this succeeds with probability ‖A|ψ⟩‖². How do you do this? The circuit is pretty simple; it's basically what I said before. You take your |ψ⟩, you add your ancilla qubits (this should be a_R of them, matching the right-hand side of the definition, but the bookkeeping doesn't matter too much), you apply U, you measure, and you post-select on the measured qubits being all zeros. (Technically you're measuring a_L many qubits at the end; I won't dwell on this and will just pretend everything is square.) The reason this works is that the post-selected output of this circuit is precisely the expression from the definition, with a |ψ⟩ on the outside: it's (⟨0|^{⊗ a_L} ⊗ I) U (|0⟩^{⊗ a_R} ⊗ |ψ⟩), which equals A|ψ⟩, because I start with |ψ⟩, I add a bunch of qubits, I apply U, and then I measure and post-select. And the probability that I see the all-zeros outcome is ‖A|ψ⟩‖². This is just what the quantum circuit does; it might not be immediate if you haven't seen this before, but it's a small computation.
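In fact, here's that small computation as a NumPy sketch. Since any unitary block-encodes its own top-left corner, I just take a random 4×4 unitary as the block encoding (one ancilla qubit, one system qubit); the random construction and the test state are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(M)                   # a random unitary
A = U[:2, :2]                            # U block-encodes its own top-left block

psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
full = np.kron(np.array([1, 0]), psi)    # |0>|psi>: ancilla prepared in |0>
out = U @ full                           # apply the block encoding

branch = out[:2]                         # the ancilla = 0 branch: (<0| (x) I) U (|0> (x) |psi>)
p_success = np.linalg.norm(branch) ** 2  # probability of measuring 0 on the ancilla

assert np.allclose(branch, A @ psi)                        # the branch is exactly A|psi>
assert np.isclose(p_success, np.linalg.norm(A @ psi) ** 2) # success prob = ||A|psi>||^2
print(branch / np.linalg.norm(branch))                     # the post-selected state
```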
And so the thing to note here is that your probability of success depends on how big A is. If A were unitary, so it had spectral norm one, then ‖A|ψ⟩‖ would always be one and you would always succeed, with probability one. But if A becomes very small, then you're almost never succeeding, and you're almost never performing the linear-algebraic operation that presumably you wanted to do with your block encoding. So the thing to keep in mind is that the size of A matters here. What we'll see is that the various ways of getting block encodings typically give you A rescaled down by something, and that rescaling shows up in the runtime later. Essentially, the scale of A becomes the complexity of the algorithm. Okay. Great.

So now I've explained the basic thing you can do with a block encoding. Now you might be asking: how do I actually get one? (Question: do we know which matrices can be block-encoded?) I don't know much about this, but I think generically, if you give me a matrix whose spectral norm is bounded, I can put it in a block encoding; making that circuit efficient is a question of unitary synthesis that I won't really get into. The thing that I do know is that given A, you can always block-encode A rescaled by its Frobenius norm, but obviously rescaling by the spectral norm is better. (On the success probability:) if you knew, for example, that you were going to apply your matrix to something in the poorly conditioned part, then you could amplify that poorly conditioned piece. But in general, I'm pretty sure you can do quadratically better than this probability, via amplitude amplification. Okay, cool.

So, how do you get block encodings? I'm going to present a couple of properties of block encodings, which I'm calling extensibility properties. The first is that if you have block encodings of A and B, then you can get a block encoding of the product of A and B, assuming all your dimensions work out. Similarly, if you have block encodings of A and B, you can get a block encoding of a convex combination, say c_0 A + c_1 B, again if the dimensions of A and B match up. As for the complexity: if Q_A and Q_B are the complexities of your block encodings of A and B, then in both cases the resulting block encoding has complexity Q_A + Q_B.
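Before the circuit-level proofs below, here is a numeric sanity check of both properties as a NumPy sketch. The qubit bookkeeping (one ancilla register per encoding for the product, in the order a1, a2, system, giving BA in that convention) and the choice of the one-qubit unitary X are my own conventions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(d):
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return np.linalg.qr(M)[0]

# U block-encodes A, V block-encodes B (one-qubit system, one ancilla qubit each)
U, V = random_unitary(4), random_unitary(4)
A, B = U[:2, :2], V[:2, :2]

# Property 1 (product): apply U on (a1, sys), then V on (a2, sys); qubit order (a1, a2, sys)
u, v = U.reshape(2, 2, 2, 2), V.reshape(2, 2, 2, 2)
I2 = np.eye(2)
Ut = np.einsum('pqrs,kl->pkqrls', u, I2).reshape(8, 8)   # U on (a1, sys), identity on a2
Vt = np.einsum('pr,kqls->pkqrls', I2, v).reshape(8, 8)   # V on (a2, sys), identity on a1
W = Vt @ Ut                                              # U first, then V
assert np.allclose(W[:2, :2], B @ A)                     # both ancillas at 0: block-encodes BA

# Property 2 (LCU): one extra control qubit; pick X with |X00|^2 = c0 and |X10|^2 = c1
c0, c1 = 0.7, 0.3
Xg = np.array([[np.sqrt(c0), -np.sqrt(c1)],
               [np.sqrt(c1),  np.sqrt(c0)]])
select = np.block([[U, np.zeros((4, 4))],                # U when the control qubit is |0>
                   [np.zeros((4, 4)), V]])               # V when it is |1>
W2 = np.kron(Xg.conj().T, np.eye(4)) @ select @ np.kron(Xg, np.eye(4))
assert np.allclose(W2[:2, :2], c0 * A + c1 * B)          # block-encodes the convex combination
```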
Using these, we can actually get a block encoding of H. Returning to our Hamiltonian simulation problem: we have H as a linear combination of terms E_a, and these E_a are Pauli products, so they are unitaries that we can implement efficiently; in particular, each is a block encoding of itself. Since H is a linear combination of the E_a, we can use the second property iteratively to get a block encoding of a rescaled Hamiltonian, H/C. The value you need C to be is C = Σ_a |α_a|, the sum of the magnitudes of the coefficients, and if you look at Hamiltonian simulation runtimes, this quantity appears in them. Okay. So we've got the first part: a block encoding of our Hamiltonian.

Now I'll give sketches of how you prove these extensibility properties. First, multiplication. I'll just describe the circuit. Imagine applying it to a state |ψ⟩, with a bunch of ancilla qubits attached, where U is a block encoding of A and V is a block encoding of B. What we do is apply U on the system together with the first ancilla register, and then apply V on the system together with the second ancilla register. The way I've set it up, this is going to be my block encoding of B times A. The way you can see this is that there are basically two post-selected circuits going on here, the application of U and then the application of V. They correspond to the circuit I described above: if you restrict to the submatrix where all of your auxiliary qubits are zero, then, since it's just a composition of the two circuits, you get A first and then B, so it's B times A. Does that make sense?

As for addition, or rather linear combinations: the technique used here is called LCU, for linear combination of unitaries. It works for more general linear combinations, but I'm going to show it for two terms. First, a padding trick: if your block encodings U and V are different sizes, you can always pad one with the identity until they match, because if U is a block encoding of A, then I ⊗ U is also a block encoding of A. So assume U and V are the same size. Then what I do is attach one more qubit, apply some one-qubit unitary to it, which I'll call X, then apply U and V each controlled on this qubit, and finally apply X†. In the circuit diagram, an open circle means the gate is controlled on the qubit being zero, and a filled circle means it's controlled on the qubit being one. Okay, so why does this give you what you want? I'll write it in terms of block matrices. The controlled-V is controlled on the qubit being one, so it's the block matrix [[I, 0], [0, V]]: V when the qubit is one, the identity when it's zero. Similarly, U is applied only when the qubit is zero, so it's [[U, 0], [0, I]]. And the product of these two is the block-diagonal matrix diag(U, V).
Then what the X, X† pair around this is doing is the following. Conjugating by X on the control qubit, the whole circuit implements (X† ⊗ I) · diag(U, V) · (X ⊗ I). So that's my expression, and I want to see what it is a block encoding of, so I look at the top-left block. If I compute it correctly, the top-left block is |X_{00}|² U + |X_{10}|² V. Now, X is a unitary, so these two weights, |X_{00}|² and |X_{10}|², sum to one: they're the squared entries of a column of a unitary. So I get a convex combination of U and V, and because I'm getting a convex combination of U and V, I'm getting the same convex combination of the matrices they block-encode. So this gives me linear combinations with nonnegative coefficients. If I want something more general, I can modify these controls to carry a phase, and this allows me to get arbitrary complex combinations, as long as the magnitudes of the coefficients sum to one. Okay, any questions about these? (Question about the block-matrix picture for the product construction:) you'd want to draw a four-by-four block matrix, with U in one of the blocks, and it gets a little complicated because you have to make the dimensions work out. The LCU construction is nicer because you're just attaching a single qubit, so it stays a two-by-two block matrix. Any other questions? Okay, sweet.

Now you might imagine: I've just shown you how to get linear combinations, and I've shown you how to get products. So if I have a block encoding of A, maybe I can get block encodings of polynomials of A, just by iterating these tools: multiply by A, add something, multiply by A again, and so on. And this sort of works. First let me define what "polynomials of A" means, because it doesn't quite make sense for non-square matrices. Some quick definitions. If we're looking at a matrix A that's Hermitian, then we can write it in terms of its eigendecomposition, A = Σ_i λ_i v_i v_i†, with eigenvalues λ_i and eigenvectors v_i, and then we can apply any function f to the matrix by applying it to the eigenvalues: f(A) = Σ_i f(λ_i) v_i v_i†. The thing to note here is that if f is a polynomial, this matches what you get if you just plug A into the polynomial. For example, if p(x) = x² + 2x + 1, then defining p(A) = A² + 2A + I is equivalent to the definition you get by applying p to the eigenvalues.
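As a quick numeric check of this last claim, here's a small sketch (the particular random symmetric matrix is an assumption of the sketch): applying p to the eigenvalues of a Hermitian A agrees with plugging A into the polynomial.

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(3, 3))
A = (M + M.T) / 2                      # a Hermitian (here, real symmetric) matrix

def apply_func(A, f):
    """f(A) for Hermitian A: apply f to the eigenvalues."""
    evals, evecs = np.linalg.eigh(A)
    return evecs @ np.diag(f(evals)) @ evecs.T

# p(x) = x^2 + 2x + 1, applied to the eigenvalues...
via_eigs = apply_func(A, lambda x: x**2 + 2 * x + 1)
# ...agrees with just plugging A into the polynomial
via_matrix = A @ A + 2 * A + np.eye(3)
assert np.allclose(via_eigs, via_matrix)
```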
So what happens when your matrix isn't square, or isn't Hermitian? For a general A we still have a singular value decomposition, A = Σ_i σ_i u_i v_i†, and we can define something called the singular value transformation of A. If the function f is odd, we apply it to the singular values: f^{SV}(A) = Σ_i f(σ_i) u_i v_i†. When f is even, we also apply it to the singular values, but the difference is that we change the left factor so that it's a v instead of a u: f^{SV}(A) = Σ_i f(σ_i) v_i v_i†. The reason I'm defining it this way is so that it makes sense with respect to polynomials. For example, if p(x) = x³ + x, then p^{SV}(A), with the superscript SV denoting that this is a singular value transformation, is just the same thing as A A† A + A. And if you look at the even polynomial p(x) = x² + 1, then p^{SV}(A) = A† A + I. So I'm just defining the singular value transformation to be consistent with plugging A into polynomials.

Right. Okay. Now that I've defined what these polynomials are, we can formalize what we mean by being able to do arbitrary polynomials. Take a degree-d polynomial p, possibly with complex coefficients. We say p is achievable if we have some way to map quantum circuits to quantum circuits, taking a block encoding of A to a block encoding of the singular value transformation p^{SV}(A). Basically: you give me a circuit that block-encodes A, and I give you back a circuit that block-encodes p^{SV}(A). (I guess you also need to specify that the new block encoding is efficient in terms of the degree.) What the extensibility properties show is that x, x², x³, and so on, all these monomials, are achievable. For example, to implement A† A, we use that we have a block encoding U of A, and that U†, the inverse circuit, is a block encoding of A†, and we multiply them together. Consequently, using our linear-combination trick, any polynomial that's a linear combination of monomials, Σ_k a_k x^k, is achievable, provided that Σ_k |a_k| ≤ 1: I can just take linear combinations of the achievable monomials x^k, which the extensibility properties give me. Moreover, I'm pretty sure this is essentially all you can get from chaining these two properties. And the thing to notice is that this is not all the polynomials we could hope for. One example of a polynomial we'd like to apply is the Chebyshev polynomial, which I'm going to talk about in a couple of lectures. Chebyshev polynomials are, for example, the thing you use if you're doing amplitude amplification, like oblivious amplitude amplification. And if you look at their coefficients, the magnitudes are greater than one, so the condition Σ_k |a_k| ≤ 1 is never going to hold. So you really do need something more.
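Before the theorem that fixes this, here's a quick numeric sanity check of the singular value transformation definitions from above, as a sketch (the random rectangular matrix and the normalization are my own choices): the odd polynomial x³ + x and the even polynomial x² + 1 match their SVD-based transformations.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 2))            # rectangular, so no eigendecomposition
A /= np.linalg.norm(A, 2)              # normalize the spectral norm (matches the setting;
                                       # not needed for the identities below)

def svt_odd(A, f):
    """Odd SVT: f(A) = sum_i f(sigma_i) |u_i><v_i|."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(f(s)) @ Vh

def svt_even(A, f):
    """Even SVT: f(A) = sum_i f(sigma_i) |v_i><v_i|."""
    _, s, Vh = np.linalg.svd(A, full_matrices=False)
    return Vh.conj().T @ np.diag(f(s)) @ Vh

# p(x) = x^3 + x should coincide with A A† A + A
assert np.allclose(svt_odd(A, lambda x: x**3 + x), A @ A.conj().T @ A + A)
# p(x) = x^2 + 1 should coincide with A† A + I
assert np.allclose(svt_even(A, lambda x: x**2 + 1), A.conj().T @ A + np.eye(2))
```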
And there is a nice theorem here, sort of the fundamental theorem of block encodings, which basically says that all polynomials are achievable: every polynomial that you could hope to be achievable is in fact achievable. If you have a polynomial p that's even or odd, and |p(x)| ≤ 1 for all x between minus one and one, then p is achievable. This is a theorem that I'm hopefully going to prove next time. And here's why this is basically all you could hope for. The property we need is that if A has spectral norm bounded by one, then p^{SV}(A) also has spectral norm bounded by one. In order for that criterion to hold, we need p(x) to be between minus one and one whenever x is between minus one and one, because the singular values always lie in [0, 1] and they need to be mapped to something in [-1, 1]; and since p is even or odd, restricting to [0, 1] rather than [-1, 1] doesn't really change the constraint. So hopefully I've shown you that this is a very nice result.

Finally, I'll explain a little bit about how you use this to get the Hamiltonian simulation algorithm we were hoping for. Turning back to Hamiltonian simulation: we had our block encoding of H, which we got by taking linear combinations of our unitaries E_a. What I want is e^{-iHt}, and by Euler's formula this is cos(Ht) − i sin(Ht). This is pretty nice, because what we're applying is cos(tx) to H and sin(tx) to H, and these functions are both bounded, they map [-1, 1] to [-1, 1], and they are even and odd respectively. These properties suggest what we could do: find a polynomial c(x) close to cos(tx), and similarly a polynomial s(x) close to sin(tx), where these polynomials also need to be bounded, and even and odd respectively. With this, we can take our block encoding of H and use the QSVT theorem to get block encodings of c^{SV}(H) and s^{SV}(H); because H is Hermitian, these in fact coincide with c(H) and s(H), so this is no issue. Then we can take a linear combination of these two, and we get a block encoding of (1/2)(c(H) − i s(H)). The thing to see is that this is approximately (1/2) e^{-iHt}, with approximation error ε; actually, I think it is precisely ε given ε-bounds on the two polynomial approximations, which you could try to work out. So here I get a block encoding of (1/2) e^{-iHt}. And using the lemma from a while ago, given |ψ⟩, we can use the block encoding we've just constructed to map it to (1/2) e^{-iHt}|ψ⟩ divided by its norm, which is basically a quantum state approximately equal to e^{-iHt}|ψ⟩. The one half comes in through the success probability, which is about one fourth here. Now, if all we wanted to do was produce copies of this state, we wouldn't need to do anything more; and there's also a technique called oblivious amplitude amplification that can boost this one half up to something like 1 − ε with some additional overhead. Right, so that's how you do it.
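The cost, as we'll see in a moment, boils down to the degrees of c and s. Here's a sketch that estimates those degrees numerically with a Chebyshev interpolant; in a real construction you'd enforce the parity and |p(x)| ≤ 1 conditions explicitly (a truncated Jacobi-Anger expansion is the standard choice), so this is just a quick numeric proxy under those caveats.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def min_degree(f, eps, max_deg=200):
    """Smallest degree whose Chebyshev interpolant is uniformly eps-close on [-1, 1]."""
    xs = np.linspace(-1, 1, 2001)
    for d in range(1, max_deg + 1):
        coeffs = C.chebinterpolate(f, d)
        if np.max(np.abs(C.chebval(xs, coeffs) - f(xs))) < eps:
            return d
    return None

t, eps = 10.0, 1e-10
d_cos = min_degree(lambda x: np.cos(t * x), eps)   # the even part
d_sin = min_degree(lambda x: np.sin(t * x), eps)   # the odd part
print(d_cos, d_sin)   # each grows roughly like t + log(1/eps) / log(log(1/eps))
```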
So, if you work out the actual complexity here: I haven't explained to you what the QSVT algorithm is yet, but I'll just tell you that the cost is deg(c) + deg(s) many uses of the block encoding of H, where c and s are the approximating polynomials. And so this complexity just boils down to: how well can you approximate cos(tx) and sin(tx) by polynomials? That's the thing that determines your complexity. The right answer here is roughly t + log(1/ε)/log log(1/ε); it's not exactly this, but it's something like this, and there is some precisely right answer. And that right answer is the optimal complexity you could expect for the Hamiltonian simulation problem, I believe in all of the parameters.

(Question about lower bounds:) that's a great question. I think there's a lower bound of t, because of the no-fast-forwarding theorem, and then there's a log(1/ε)/log log(1/ε) lower bound, which I think comes from a correspondence between discrete and continuous query models, or something like this. I don't precisely remember what the right combination of these things is; I can look into it after. But I think the log-over-log-log lower bound does not go through polynomial approximation, which is somewhat surprising, or at least it doesn't look like polynomial approximation; it's in the paper on exponential improvement in precision for simulating sparse Hamiltonians. And yes, this runtime isn't tight: it's annoying because in different regimes the answer is different things. What I wrote is an upper bound for all of the regimes, but there's a right answer in any particular regime, where t is small, t is large, and so on. I believe in the literature it's called r(t, ε), and it has something to do with the Lambert W function.

Okay, so that's my lecture. Are there any questions? Oh, yes, that's a good clarifying question: by even or odd I mean an even function or an odd function, so p(−x) = ±p(x). If you have a Hermitian matrix, then you can also consider functions that are neither odd nor even, but since we're looking at rectangular things, we only consider those two. Any other questions?