Thanks to the organizers for the invitation; I'm very happy to be here. It's been a very exciting workshop, and it's hard to be on this session: it will be hard to keep up the level of unconditional quantum-classical separations from the previous talk, but I'll try my best. I want to talk about joint work with Amir Kalev, Tongyang Li, Cedric Lin, Krysta Svore, and Xiaodi Wu. What we are exploring is whether quantum computers might give speed-ups for convex optimization problems, in particular for the class of convex optimization problems called semidefinite programs, and I'll explain what they are. So suppose we have a universal quantum computer, and soon we'll have some small-scale quantum computers with no error correction, and we'd like to understand what we can use them for. There are many interesting quantum algorithms, and maybe we can split them into three categories. First, exponential speed-ups: these are really the reason we want to build a quantum computer. The big application, of course, is simulating quantum physics; we heard exciting things about Hamiltonian simulation yesterday, and there is a lot of work in this area. There is also Shor's algorithm, there is HHL for solving linear systems, and so on. Second, for some problems we cannot get exponential speed-ups, but we can still get polynomial speed-ups quantumly: there is Grover's algorithm with its square-root speed-up, and this leads to polynomial speed-ups for a whole bunch of other problems. If one day quantum computers are as cheap as classical computers (a big "if", but it may one day be true), these will be relevant. Third, there is a bunch of quantum algorithms, just as happens classically, that probably work well but that we cannot prove anything about; and we cannot test them either, because we don't have the quantum computer yet.
At the moment we are just hoping that they might give something interesting; we have some evidence, but many times we cannot really know. For combinatorial optimization on low-depth, early quantum computers there is the quantum approximate optimization algorithm, which might give speed-ups; we don't know. There are also a bunch of proposals in quantum machine learning, and so on. What I want to show is that solving semidefinite programs gives examples in each one of these three classes, and we'll get to that. Okay, so what are these problems? They are a class of convex optimization problems, and it is the following. We have an optimization variable X, which is just a matrix: X is an n-by-n matrix. We have a bunch of other matrices C, A_1, ..., A_m, which are the input matrices of the problem, and a bunch of real numbers b_1, ..., b_m, which are also inputs. These matrices and numbers specify the problem, and our goal is to maximize a linear function of the optimization variable, tr(CX), subject to m constraints that are linear as well: tr(A_j X) <= b_j. But there is also this extra constraint, the positive-semidefinite constraint, which says that X has to be Hermitian with all eigenvalues greater than or equal to zero. We can also consider the case where all the matrices are sparse; the sparsity s is the number of nonzero elements per row, for example. As a particular case, if all the matrices are diagonal, we get linear programming; a very simple linear program of this form might have n = 3 and m = 2. There are many applications, and the list of applications of LPs and SDPs really keeps growing, even in areas I found surprising.
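To fix notation, here is the SDP template in a tiny numerical example. All matrices and numbers below are invented for illustration; since they are diagonal, this particular instance is really a linear program, the special case just mentioned.

```python
import numpy as np

# Toy instance of the SDP template above:
#   maximize  tr(C X)  subject to  tr(A_j X) <= b_j  (j = 1..m),  X PSD.
# Diagonal matrices, so this instance reduces to a linear program.

n = 3
C = np.diag([3.0, 1.0, 2.0])
A1 = np.eye(n)
A2 = np.diag([2.0, 0.0, 1.0])
b = np.array([1.0, 1.5])

def is_feasible(X, tol=1e-9):
    """Check the m = 2 linear constraints and positive semidefiniteness of X."""
    psd = np.all(np.linalg.eigvalsh((X + X.T) / 2) >= -tol)
    lin = np.trace(A1 @ X) <= b[0] + tol and np.trace(A2 @ X) <= b[1] + tol
    return bool(psd and lin)

# A feasible candidate that puts all weight on the best diagonal direction.
X = np.diag([0.75, 0.0, 0.0])
print(is_feasible(X), np.trace(C @ X))   # True, objective value 2.25
```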
I have a new colleague at Caltech who does conformal field theory, and his main activity is applying SDPs to conformal field theory; for me that was a surprising application. It really is a very useful class of optimization problems. What are the algorithms we have? SDPs can be solved in time polynomial in n, m, and all the other parameters using interior-point methods; that's the method people actually use in practice. The running time is something of the form m²ns + mn², times a logarithm of R, r, and 1/ε. Capital R and small r are two parameters that I'll define next; they are basically the sizes of the optimal primal and dual solutions, and ε is the accuracy to which you want to solve the problem, say up to additive error ±ε. This is good if you really care about the error; it's the dependence on m and n that we will try to improve. There is a better method, the multiplicative weights method, which improves this to linear in m and ns, but with a big hit in the complexity in terms of the error and these two other parameters; so which is better depends on the application. There is also a lower bound: it is very easy to show that there cannot be anything faster than n + m, even if everything else is constant. So there are clear limitations; linear time is the best one could hope for. And, sorry, this holds even if you just want to compute the optimal value; of course, just writing down the optimal solution already takes time n² if it's the primal matrix, or m if you go to the dual. So the question is really: can quantum help? Can quantum computers help us speed up these algorithms? On one hand, maybe yes. First, LP is a very natural generalization of linear systems of equations (we change equalities into inequalities), and we know there are exponential speed-ups for linear equations. Also, these SDPs are pretty natural in quantum mechanics.
We can always reduce a general SDP to the following problem: we are given Hermitian matrices A_1 to A_m, like observables in quantum mechanics, and a bunch of numbers b_1 to b_m, and we want to find a density matrix ρ for which the expectation value of ρ on each one of these Hermitian operators A_i is smaller than b_i. This is a very natural problem in quantum mechanics, so maybe quantum can help. On the other hand, maybe not. Reducing from Grover search, there is an easy quantum lower bound of order √n + √m, so there will definitely be no general exponential speed-up in the worst case. And even polynomial speed-ups we didn't know how to get before; it's a natural problem to consider, but it hadn't been done, so maybe that's a bad sign. Okay, so what I want to show you now is that we can get some speed-ups, and the way to see that quantum ought to help is to connect this problem to another primitive in quantum computing, which is preparing Gibbs states on a quantum computer. What is the connection between these Gibbs states and these SDPs, and what are they? Let me tell you this first. A Gibbs state is just the thermal state of some Hamiltonian at some temperature.
It has the form of the exponential of the Hamiltonian, ρ ∝ e^(-H/T). Gibbs states are of course very important if you want to understand the properties of a system, given by some Hamiltonian, in thermal equilibrium with its environment at some temperature. But the connection with SDPs comes from an old result in statistical physics due to Jaynes, from 1957, and it says the following. Suppose you have a quantum state ρ and Hermitian operators M_i such that the expectation values of ρ on these M_i are the numbers c_i. Then he proved that there is always a Gibbs state of this form with the same expectation values: a Gibbs state whose Hamiltonian is a linear combination Σ_i λ_i M_i of the operators you care about, with some real numbers λ_i that you have to figure out; they are like Lagrange multipliers. So if all you care about in ρ is the expectation values on these M_i, you can just use this Gibbs state instead of it. (This Gibbs state also maximizes the entropy among all states compatible with those expectation values, but we don't really need that.) Okay, so let's apply this to the SDPs.
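Here is a one-observable illustration of Jaynes' principle. The observable and target value are invented, and brentq root-finding stands in for whatever procedure actually finds the multiplier:

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import brentq

# Jaynes' principle with a single observable M: given a target expectation
# value c, there is a Gibbs state rho(lmbda) = exp(-lmbda*M)/Z reproducing it.

M = np.diag([0.0, 1.0, 2.0])   # a simple Hermitian observable
c_target = 0.6                 # must lie strictly between min and max eigenvalue

def gibbs(lmbda):
    G = expm(-lmbda * M)
    return G / np.trace(G)

def expectation_gap(lmbda):
    return float(np.trace(gibbs(lmbda) @ M).real) - c_target

# tr(rho(lmbda) M) decreases monotonically in lmbda, so bracketing works.
lam = brentq(expectation_gap, -50.0, 50.0)
print(np.trace(gibbs(lam) @ M).real)   # ~0.6, as guaranteed
```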
So here is our SDP again. All we care about in the SDP are the expectation values tr(CX) and tr(A_j X), so we can just assume that the optimal solution is of this exponential form, where the Hamiltonian is a linear combination of the input matrices. By Jaynes' principle there is always a solution of this form: we just apply Jaynes' principle to the normalized version of the optimal X, and this gives the solution. So the problem really boils down to, first, finding what the λ_j are, and second, either writing down this matrix on a classical computer or perhaps preparing a normalized version of this matrix on a quantum computer; this is what we explore. This is really, I guess, the main conceptual idea. Now, using this idea, let me tell you what kind of results we can get, but first let me define two parameters of the SDP which are important. Given an SDP of this form, there is another SDP, called the dual, which is the following. We minimize the inner product b·y, where y is now a vector of real numbers, under the constraint that Σ_{j=1}^m y_j A_j ⪰ C, meaning that the left-hand side minus the right-hand side is a positive semidefinite matrix, and that y is elementwise nonnegative. This is another problem, and what you can show is that almost always, under some mild conditions, the two optimal values are the same; so you can solve whichever of the two is more convenient. And now you can define two parameters which will be very important for the running time of our algorithm; the first is the parameter R.
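A quick numerical sanity check of the primal/dual pair just defined, on a tiny diagonal instance with invented numbers: any dual-feasible y upper-bounds any primal-feasible X, which is the weak half of the duality statement above.

```python
import numpy as np

# Weak duality: for the pair  max tr(CX) s.t. tr(A_j X) <= b_j, X PSD
# versus  min b.y s.t. sum_j y_j A_j >= C (PSD order), y >= 0.

C = np.diag([3.0, 1.0, 2.0])
A = [np.eye(3), np.diag([2.0, 0.0, 1.0])]
b = np.array([1.0, 1.5])

# Dual feasibility: sum_j y_j A_j - C is PSD, and y >= 0 elementwise.
y = np.array([3.0, 0.0])
S = y[0] * A[0] + y[1] * A[1] - C
assert np.all(np.linalg.eigvalsh(S) >= -1e-9) and np.all(y >= 0)

# A primal-feasible X (check: tr(A1 X) = 0.75 <= 1, tr(A2 X) = 1.5 <= 1.5, PSD).
X = np.diag([0.75, 0.0, 0.0])
print(np.trace(C @ X), b @ y)   # primal value 2.25 <= dual value 3.0
```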
Consider any optimal solution of the primal problem; R should be an upper bound on it. So we should be able to guess an upper bound on the size of the primal solution, which is just tr(X), since X is positive semidefinite; it's like its one-norm. The same thing you can do for the dual, but the dual solution is just a vector with nonnegative coefficients, so the dual size parameter, small r, is just an upper bound on the l1-norm of y as a vector. These will be important parameters in our approach. Okay, before that, let me say again what is the best we can achieve in the worst case. Even just writing down the optimal solution of the primal takes order n², since it is an n-by-n Hermitian matrix, and the dual will take time m. But maybe we just want to compute the optimal value and we don't want to write down the optimal solution; we do it implicitly. Classically, you can show this takes time n + m with everything else constant. (Sorry, δ is the error now; the accuracy changed from ε to δ, s is the sparsity, and R and r are the parameters I defined just before.) Quantumly, by an easy reduction from search, you can show that you cannot do better than order √n + √m. Okay, so the first result is that there are some general polynomial speed-ups. There was a first paper with Krysta Svore, which was then improved in work with Krysta and collaborators from Maryland, and what we show is that there is a quantum algorithm for solving SDPs whose running time matches the lower bound in terms of n and m, is quadratic in the sparsity, and is polynomial in all these other parameters, which I'll discuss next. The input is the same as I specified before; there is also a normalization on the matrices, that the operator norm is one. But what is the output?
This we have to specify, because, as I said, writing down all the elements of the matrix takes too long. As output, we output quantum samples from a quantum state: we prepare a quantum state which is the normalized optimal solution. The optimal solution is X, and we output X / tr(X) as a quantum state; we also output the value of tr(X), and the optimal value up to small accuracy. It's not the same thing as having it written down on a piece of paper, but maybe you can use the quantum state for something else; this is kind of similar to what happens in HHL. Okay, so what is the oracle model here? We assume there is an oracle that outputs a chosen nonzero entry of any of the matrices at unit cost: we choose one of the matrices, one of its rows, and an index, and the oracle outputs the corresponding nonzero element of that row in the matrix of our choice. This is the standard model for Hamiltonian simulation, for example. So what is good about this? Well, it gives an unconditional quadratic speed-up in terms of n and m, because we have this linear classical lower bound of n + m; and it is optimal in terms of n + m, so in that respect we cannot improve it. But there is also bad news. There will be no super-polynomial speed-ups, as I told you before, and the dependence on the other parameters is bad. Actually these three parameters, R, r, and δ, are interconnected: you can always decrease one by increasing another, so you should really treat the ratio Rr/δ as a single parameter.
That ratio is what matters, and it enters to the power of eight, so pretty high. This is the final result; but as I mentioned, with Krysta we first proved this kind of bound with a huge power 32 of this parameter, which is pretty terrible. Then there was the paper of van Apeldoorn, Gilyén, Gribling, and de Wolf, which improved this a lot, down to this eighth power; and now we improve the n, m dependence to √n + √m, which is optimal, but I'm sure there is a lot that can still be done on these other parameters. So, applications. Okay, suppose you feel happy with an at most polynomial speed-up; can you apply this to SDPs people care about? There is some bad news: for many interesting SDPs, for example MaxCut via Goemans-Williamson, the size and error parameters Rr/δ are of order n, so we are not getting speed-ups. We would really have to improve the dependence on the other parameters if we want speed-ups for Goemans-Williamson or other approximation algorithms for NP-hard combinatorial optimization problems; they all usually have this kind of scaling. But there are also other SDPs for which we can get some speed-ups, like a bunch of problems in machine learning and compressed sensing, for which this would give something, because there these parameters are of order one. One example is quantum compressed sensing, and I mention it because two of the authors are here in the audience. What we want to do there is tomograph a rank-r dense density matrix ρ of dimension n-by-n, and they showed that you can do that with only roughly r·n measurements, much better than n² if r is small. But what's the way to do it? Well, there are other ways, but naively you can just do it by solving this easy SDP.
We just find a density matrix whose expectation values on the measurements A_i reproduce the expectation values on ρ, which you can measure in the lab. In practice this SDP is not much used, because you want to run a really efficient reconstruction, and solving the SDP takes something like n²·m; that's already too high for this kind of application, where people really care about the order of the polynomial. But if we had a quantum computer, the running time (we can even improve over what I told you before) would be of this order: r·√n·√m, which is just about r·n over ε². So this would beat the best classical methods they have for this kind of compressed-sensing reconstruction. Alright, so these were the polynomial speed-ups. Maybe they are fine, but for now they are definitely not substantial enough to try to implement; quantum computation is very expensive, so it would be good if we could have something more. In general we cannot, but we can actually try heuristics to get bigger speed-ups. So another result is that there is a quantum algorithm which solves SDPs and runs in time √m times T_Gibbs: we don't improve the other dependencies, which are the same as before, but there is no dependence on n; instead of √n there is this T_Gibbs. And what is this? It is the time it takes to prepare, on a quantum computer, Gibbs states of this form: Gibbs states given by linear combinations Σ_i λ_i A_i of the input matrices, where the λ_i are real coefficients that we can compute, that are bounded, and so on. So what this shows is that when we can prepare these states efficiently (maybe we just try it out and see if it works), we can perhaps have bigger speed-ups. We could use, for example, quantum Gibbs sampling for that.
We could use quantum Metropolis as a heuristic, or something else, and we might even have exponential speed-ups if thermalization is quick, i.e., polynomial in the number of qubits, which is polylog(n); but it's hard to prove that there are cases for which this happens. I think this is nice because it gives an application of quantum Gibbs sampling outside the simulation of physical systems. Classical Gibbs sampling is huge in many areas, in machine learning, in convex optimization, and so on; so it's nice to find applications of quantum Gibbs sampling, and this is one. There is also some earlier work, and applications in machine learning in these two papers. Okay, another thing I want to mention; I guess we are all interested in this. It's about the small quantum computers that we have now. It would be nice to find new applications for them; I don't know if this is one, but I'm hopeful that it might be, and let me tell you why. There is a simpler version of the algorithm, which just has a bad scaling in m (it scales linearly in m), but the quantum part is very simple: we just compute expectation values of the A_i on Gibbs states of linear combinations of the A_i. Because the quantum part is so simple, it is something one could try out on these small machines and see what you get. So what would be the idea? I'll tell you more about the algorithm later, but the outline is that we have a quantum computer and a classical computer, and at each time step the quantum computer gets a new Hamiltonian, which is a linear combination of the input matrices.
Given the coefficients, the quantum computer just computes the expectation value of the Gibbs state of this Hamiltonian with each input matrix A_i, which is a number c_i at round t. Then we send these numbers to the classical computer, and there is a very easy procedure, which I'm going to tell you about later, that computes new coefficients λ_i, which are then fed back to the quantum computer. We repeat this roughly log(n) times and then we converge; this is what we prove. So really, for the quantum part, you should be able to design a Hamiltonian of your choice, depending on the input matrices, and let it cool, let it thermalize to a given temperature; I'll tell you what the temperature range is. So this is something we can try to do just by cooling down the system, or by some other methods. And of course we won't get the Gibbs state perfectly; there will be errors and so on, and this I haven't really analyzed, but maybe it's not so bad. That is something for future work. So what is a special case of this, and the connection with cooling and annealing? Suppose you just want to compute the maximum eigenvalue of C. This is a very simple SDP: we just want to maximize tr(CX) where X is a density matrix. This is a well-studied problem; it is really quantum annealing.
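As an aside, the quantum-classical loop just described can be mocked up entirely classically. In this sketch, expm plays the role of the quantum Gibbs sampler; the input matrices, the bounds b = 0, and the simple penalty update are all placeholders chosen for the demo rather than the procedure from the talk:

```python
import numpy as np
from scipy.linalg import expm

# Classical mock-up of the hybrid loop: the "quantum computer" (here expm)
# returns expectation values of the Gibbs state of H_t = sum_i lam_i A_i
# on the inputs A_i; the classical side penalizes violated constraints.
# The common feasible direction is the last basis vector.

A = [np.diag([0.9, 0.5, 0.7, -1.0]),
     np.diag([0.3, 0.8, 0.2, -1.0]),
     np.diag([0.6, 0.1, 0.9, -1.0])]
b = np.zeros(3)
lam = np.zeros(3)

for t in range(10):
    H = sum(l * Ai for l, Ai in zip(lam, A))     # Hamiltonian from coefficients
    G = expm(-H)
    rho = G / np.trace(G)                        # (simulated) Gibbs state
    c = np.array([np.trace(rho @ Ai).real for Ai in A])  # "measured" numbers
    lam += 2.0 * np.maximum(c - b, 0.0)          # penalize violated constraints

print(np.max(c - b))   # negative: all constraints are now satisfied
```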
We just want to compute the ground-state energy of -C, viewing C as a Hamiltonian. If you can prepare the Gibbs state at inverse temperature of order 1/δ, then we can compute the maximum eigenvalue to error δ. Or, if you use the physics normalization, where the norm of the Hamiltonian scales with the volume instead of being constant, then a constant temperature suffices. The point is that our approach with this Gibbs sampler is really a generalization of quantum annealing. So here is a comparison. In quantum annealing the Hamiltonian is usually classical; here we are interested in general quantum Hamiltonians. The temperature in annealing should be zero, at least ideally; here we don't care about zero: to solve the SDP to error δ, a temperature of order δ is enough. So there might be an advantage. And what about the preparation of the Gibbs state? Well, in annealing we can do cooling, just cool down the system; but there is also adiabatic evolution, there is QAOA, there are many alternatives. It's a well-studied problem there; here, not so much, and this is an open question. We can cool down the system, that's one way; we could use quantum Metropolis, but that's very costly for small quantum computers. Are there analogues of those annealing techniques for Gibbs states? That's an interesting question that I don't really know the answer to, but I think it's worth exploring more. Okay, actually I'm fine on time, I have 20 minutes, perfect. So let me go to the final result before I tell you a little bit more about the algorithm, and this is a situation where we can have bigger speed-ups.
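To make the special case concrete, here is a numerical check that the Gibbs-state estimate approaches λ_max(C) from below as the temperature is lowered. C is an arbitrary random Hermitian matrix chosen for the demo; inverse temperature β of order log(n)/δ suffices for additive error δ:

```python
import numpy as np
from scipy.linalg import expm

# Estimate lambda_max(C) from the Gibbs state of H = -C at inverse
# temperature beta; tr(rho_beta C) <= lambda_max always, and the gap
# is at most log(n)/beta.

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
C = (B + B.T) / 2                      # random Hermitian "Hamiltonian"

def gibbs_estimate(beta):
    G = expm(beta * C)                 # Gibbs weights of -C at inverse temp beta
    rho = G / np.trace(G)
    return float(np.trace(rho @ C).real)

exact = float(np.linalg.eigvalsh(C)[-1])
print(exact, gibbs_estimate(2.0), gibbs_estimate(20.0))
# the estimate tightens from below as the temperature is lowered
```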
So there is a quantum algorithm for solving SDPs which now runs in time √m, the same as before in terms of the number of constraints, but polylog in n; that's the improvement. Everything else is the same, but there is also a rank, and this rank is the rank of the input matrices of the problem. But there is a catch: this works only if the data is in what we call quantum form. This is a different oracle model now: we assume there is an oracle that, given i, outputs the eigenvalues of A_i and the eigenvectors as quantum states; or actually, more precisely, in the later version, if you assume for example that A_i is positive semidefinite, we should be able to prepare a purification of the normalized state proportional to A_i. (That was the older version.) And the rank is just the maximum rank over all of them. So if, for some reason, you have this oracle model, then you can get an algorithm that only scales with the rank of the matrices instead of n, and this can sometimes give an advantage, if they are low-rank for example. So what is the idea? I won't tell you much about it, but I just want to tell you that this is an instance where we can actually solve the Gibbs sampling problem and show that it runs faster than the worst-case √n that Grover-based methods give.
So so we just show They can perform the Gibbs sampling easily in the sense that efficiently here in in poly log n ranked time and the proof Which is in this second paper that I mentioned it uses Quantum principal component analysis to actually first prepare roughly like a the maximum mix state on the common subspace of all On all these matrices and then to do the Hamiltonian evolution there and to the phase estimation and so on what I expect But it's very important to have it in quantum form because then we can really prepare initial states Which has a non-negative overlap with all this with a space where all these matrices live So that's that's why this quantum model is is useful Okay, of course is the question how relevant is this model right and I I don't really know But there's one One application that we thought about that I want to mention is also about quantum learning But a little bit different from this compressed sensing because you don't want to do full tomography here So what what are we interested here is that we have a set of measurements AI let's assume they are projectors And we have access to copies of unknown quantum states And I would just want to find we want to again find a description of the quantum states Description of Sigma, but only that reproduced the statistics on on this on this measurements AI Okay, and we don't care about really full tomography. We just want to mimic the expectation value in these AIs This is a STP, right? So this will be the BJ's if you put in the formulation I showed you before where we just search over this density matrix Sigma We have our Oracle for be AI just by measuring AI on raw, right? Just we do the experiment and then this this gives us bi And let's assume that there is also efficient way of measuring this AI, okay in in poly log n time. 
That is, there is an efficient quantum circuit for it. Then, using what I showed you before, we can find the λ_i's and we can find a circuit that constructs the state σ satisfying these conditions in this time: √m, but polynomial in the rank of the measurement matrices. So this is relevant if you care about low-rank measurements, and usually we don't: a local measurement, for example, has very high rank. So that's why this perhaps has limited applicability. But nonetheless, if you do care about low-rank measurements, you could have larger speed-ups over the classical approach, where I don't know how to do anything better than just computing the expectation values and running a classical SDP solver, which takes at least linear time. Okay, so what is the algorithm? First, as I mentioned, the algorithm is based on this idea that the solution of the SDP can always be taken to be a quantum Gibbs state, and actually many people have exploited that classically for classical SDP solvers. I forgot to put the references, unfortunately, but there was first a paper by Arora and Kale, and then a paper by Hazan, and they proposed different ways of exploiting this Gibbs-state structure of the SDP solution to come up with solvers; this is what they call the multiplicative weights method. Multiplicative because when a constraint is violated you apply a multiplicative penalty to the weights, which here just means adding something to the exponent of the Gibbs state: classically you multiply the weights; quantumly, we add something to the exponent of e^(-H). So after many versions of this procedure, looking at the classical solvers and trying to simplify, there is a version that I like because it's pretty simple, and this is the one I want to explain to you. So first let's look at a particular case.
There is a reduction from the general case to it: we just want to find a density matrix ρ for which tr(A_i ρ) ≤ b_i for all i. So now the variable is a quantum state, and how do we want to do that? Well, we want to find the λ_i's in Jaynes' principle that achieve the solution, but we want to do that by an iterative argument. So what we do is start from the maximally mixed state, and then we iterate, 16·log(n)/ε² times, the following procedure. First we do quantum Gibbs sampling and prepare order √m copies of the current iterate ρ_t. (The capital R and small r are constants here, because ρ_t is a quantum state; that's why they don't appear, while in the general reduction they appear again.) Then, with these copies of ρ_t, using Grover search (or some variant of it), we search for an index i such that tr(A_i ρ_t) > b_i + ε/2; so we are looking for a violation of one of the constraints. If all the constraints are satisfied, we are happy; if we find one which is violated beyond this error ε/2, we let i_t be this violated constraint, and then, and this is the multiplicative-weights step, we just add the constraint into the Hamiltonian as a penalty: we have our current Hamiltonian H_t, with ρ_t ∝ e^(-H_t), and we add to it a small multiple, of order ε, of A_{i_t}. So we always find a violated constraint and just add it to the exponent, and repeat. What you can show is that when this converges, in the end we have a solution which satisfies all the constraints up to some small error, and then we can massage this error to satisfy them exactly if you want; but don't worry about this. So what is the complexity of this?
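Here is the iteration simulated classically on a toy instance. Gibbs-state preparation and the Grover search over constraints are both replaced by direct computation; the step size ε/2 and the single-constraint instance are demo choices, not the exact constants of the talk:

```python
import numpy as np
from scipy.linalg import expm

# Feasibility version of the SDP: find a density matrix rho with
# tr(A_i rho) <= b_i + eps for all i, by the iteration sketched above.

def solve_feasibility(A, b, eps):
    n = A[0].shape[0]
    max_iters = int(np.ceil(16 * np.log(n) / eps**2))
    H = np.zeros((n, n))
    for _ in range(max_iters):
        G = expm(-H)
        rho = G / np.trace(G)                    # rho_0 is maximally mixed
        vals = np.array([np.trace(Ai @ rho).real for Ai in A])
        violated = np.flatnonzero(vals > b + eps / 2)   # Grover stand-in
        if violated.size == 0:
            return rho                           # all constraints (approx.) hold
        H = H + (eps / 2) * A[violated[0]]       # penalize the violated constraint
    return None

# Toy instance: push weight away from the first basis direction.
A = [np.diag([1.0, -1.0, -1.0, -1.0])]
b = np.array([-0.9])
rho = solve_feasibility(A, b, eps=0.2)
print(np.trace(A[0] @ rho).real)   # at most b + eps/2 = -0.8
```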
If you use phase estimation and amplitude amplification, following Poulin and Wocjan, we can do the Gibbs sampling in time roughly √n. If you just put this together, it gives complexity of order √m·√n, which is the first bound; that is not the optimal result that we found. I won't explain how to get the optimum, but let me just say what the new idea is to get √n + √m: we have to use the quantum OR bound, which is from the paper by Harrow and Montanaro. It's a nice result, and we improve it, combining it with amplitude amplification to do it faster; that's really the idea that you need. But let me skip that. Okay, so let me now give you the intuition for why this works, why it converges so fast. It's crucial that we are converging in log(n) iterations here: if the number of iterations were polynomial in n or m, we would be doomed. But it is really fast, and this is because of a nice argument. There are many ways to see it; one way that I like uses relative entropy. You exhibit a cost function, which in this case will be the relative entropy, and it really decreases at each iteration; and because the maximum value of this cost function is not too big,
there cannot be too many iterations before you converge. So let ρ be any feasible solution. Then there is the Peierls-Bogoliubov inequality; I won't write what it is, but it's well known, and if you write it in terms of ρ, ρ_t, and ρ_{t-1}, using the definition of ρ_t, you get the following (you'll have to trust me here, but you can check it yourself): the relative entropy decreases from step t-1 to step t as S(ρ‖ρ_t) - S(ρ‖ρ_{t-1}) ≤ (ε/8)·tr[A_{i_t}(ρ - ρ_{t-1})]. Now suppose that at some step Grover search fails. I didn't tell you, but if Grover search fails we are happy, because then we have a ρ_t which satisfies all the constraints, at least approximately. So the bad case is when Grover search never fails, for all the iterations. But if it never fails, then at each time step we have tr[A_{i_t}(ρ - ρ_{t-1})] ≤ -ε/2, since ρ is feasible while ρ_{t-1} violates constraint i_t by more than ε/2. So if you plug this in, we see that each iteration has to decrease this relative entropy by roughly ε²/16. But the maximum value of the relative entropy, because ρ_0 is maximally mixed, is log n. So after 16·log(n)/ε² steps the relative entropy would become negative, which is a contradiction; so we must stop before that. That's the proof: Grover search must fail at some step t' before the last iteration, and this shows that ρ_{t'} is feasible, which is what we wanted. Okay, so that's all for the algorithm. I showed you that quantum computers can give speed-ups for solving SDPs; it's complicated exactly what kind of speed-ups, and we are still figuring out the extent. There are many open questions.
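Written out compactly, the counting argument above is (a sketch; the first inequality is the Peierls-Bogoliubov step, with constants as stated in the talk):

```latex
% rho is a feasible state, rho_t the iterate at step t, and i_t the
% violated constraint found by Grover search at step t.
\begin{align*}
S(\rho \,\|\, \rho_t) - S(\rho \,\|\, \rho_{t-1})
  &\le \frac{\varepsilon}{8}\,
     \operatorname{tr}\!\bigl[A_{i_t}\,(\rho - \rho_{t-1})\bigr]
   \le \frac{\varepsilon}{8}\cdot\Bigl(-\frac{\varepsilon}{2}\Bigr)
   = -\frac{\varepsilon^2}{16},
\end{align*}
using $\operatorname{tr}[A_{i_t}\rho] \le b_{i_t}$ (feasibility of $\rho$) and
$\operatorname{tr}[A_{i_t}\rho_{t-1}] > b_{i_t} + \varepsilon/2$ (the violation).
Summing over $T$ steps and using $S(\rho\,\|\,\rho_T) \ge 0$ together with
$S(\rho\,\|\,\rho_0) = S(\rho\,\|\,I/n) \le \log n$ forces
$T \le 16\,\log(n)/\varepsilon^2$.
```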
Some of them: can we find relevant settings with large speed-ups? I showed one in quantum learning; I'm not sure how relevant it is, but it is one. It would be great to try to find more. Then, this dependence on R and r over δ is prohibitive for many applications; can we improve on it? Actually, if you use interior-point methods classically, they have worse scaling in terms of m or n, but they have a logarithmic scaling in terms of this parameter, which is really great. So if you could get a quantum version of interior-point methods, I think it would be very exciting. You really need new ideas, and I have no idea how to do it, but I think improving this dependence gives a good motivation to explore this question.

What is the robustness to error of the procedure? These Gibbs states, hand-wavingly, you expect they are robust to error, since they are thermal equilibrium states, but of course it would be nice to check whether that's actually the case and to study this problem more. And if there is some robustness to error, then, since the quantum computer is really only used for this Gibbs sampling, at least if you don't want speed-ups in terms of the number of constraints, is this a potential application for small quantum computers? I don't know, but I would like to figure it out. That's all, thank you.

[Audience] You can think of the thing inside the box as a vector indexed by i; then this is a bound on the infinity norm of that vector. And you're trying to learn all of those expectation values up to some bound on the infinity norm?

[Speaker] No, the expectation values you're given, because you can just measure them. Oh, sorry.
[Speaker] Sorry, you want to find σ; the output is a description of σ.

[Audience] Sorry, yeah, you want to find a σ. What I meant to say is: what if I change that to something other than the infinity norm? I'm not saying the one-norm, but even a little bit away from infinity, the log-n norm or something, I don't care, just a little bit off of infinity. Can you say anything about that case? Because if you wanted to try to use this idea for tomography, it's going to be poorly conditioned.

[Speaker] Yeah, if you could do it in the one-norm, then of course you could actually invert this and do tomography.

[Audience] So the question is: is there any meaningful sense in which you can get away from infinity?

[Speaker] I mean, yeah, I think just thinking of this thing as a norm and asking whether you can get away from infinity is a good question. With the approach we're doing right now, I don't see how to improve it. The proof I gave is very simple; you really have this ε-relaxation of the constraints, on the right-hand side of the constraints of the SDP.

[Audience] It's still convex if you do this, right?

[Speaker] I'm not sure. Whether you can get quantum speed-ups for that setting, yeah, it's a good question, but I don't know. Okay, thanks.

[Audience] Thank you. I think I noticed that the dependence on the error was linear in all the cases. No? It's like one over the error to the eighth power, right? To the eighth power. So, there's a paper by Robin Kothari where they got an exponential improvement in the dependence on the error for HHL-type algorithms. Is it possible to use those types of ideas to improve
your algorithm?

[Speaker] Yeah, so that's a good question, and the answer is, I think, no, at least with this approach. They improve the error dependence a lot because they can really improve the error of the Hamiltonian simulation. We do use Hamiltonian simulation, and in those parts you could use their techniques, but here the error is really coming from this kind of reasoning: you really need one over ε² iterations to get an ε-approximation. So you would really have to change these parts, which are just the classical parts. That's why I think, if something like this works, and it would be very nice, you probably have to go to quantum interior-point methods, because interior-point methods are the way people classically achieve a log(1/ε) dependence on the error. And then it's a new research project; they are very different methods, and I don't know how to do it.

[Audience] Okay, thank you.

[Audience] So, if I'm not mistaken, the SDP can be approximated by many linear programs, right? An LP is a special case of an SDP, and the feasible region of the SDP can be approximated by an infinite family of LPs. So probably you can use the HHL algorithm for that?

[Speaker] No, because first you cannot use HHL even for LPs; we don't know how to do it. Actually, these bounds that I showed you, square root of n plus square root of m, hold for LPs already. So already for LPs you cannot have exponential quantum speed-ups in the worst case. Yeah, that's a good point, but just changing from having a system of equations to having a system of inequalities matters a lot, right?
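The inclusion mentioned here, that an LP is the diagonal case of an SDP, can be checked directly: when every input matrix is diagonal, tr(A_i X) collapses to an inner product of the diagonals, and positive semidefiniteness of X collapses to entrywise nonnegativity. A small numpy sanity check:

```python
import numpy as np

# Diagonal input matrices: the SDP constraint tr(A X) <= b with X PSD
# becomes the LP constraint a . x <= b with x >= 0 entrywise,
# where lowercase a, x are the diagonals of A, X.
a = np.array([2.0, 1.0, 3.0])
x = np.array([0.5, 1.0, 0.0])
A, X = np.diag(a), np.diag(x)

assert np.isclose(np.trace(A @ X), a @ x)                         # tr(A X) = a . x
assert np.all(np.linalg.eigvalsh(X) >= -1e-12) == np.all(x >= 0)  # PSD <=> nonnegative
```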
You go from the exponential speed-up to at most polynomial speed-ups in the worst case.

[Audience] These upper bounds that you state, I just wanted to confirm: these time-complexity upper bounds, are they all in the quantum circuit model, or do you use quantum RAM?

[Speaker] The upper bounds, you mean this root n plus root m? Well, the algorithms use quantum RAM in the sense that you need this oracle that, given the input matrices, gives you their elements.

[Audience] No, that's fine. So you have oracle access to the entries of all these matrices, A_1 to A_m, and that's fine. But that's all you need? You don't, in addition, need quantum RAM to store data or anything during the algorithm?

[Speaker] Well, it depends on the version. For the basic version, no, because you really just need the preparation of these Gibbs states, and that's it. There are also these versions where, to get this square root of n plus square root of m, you have to use this quantum OR bound, and there I'm not sure; they might need it. [A co-author interjects: no, you don't need it.] Okay, so you don't need it. I'm not really sure what he said, whether you need to look it up, but I will trust my co-author.
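For concreteness, the oracle access discussed here is usually modeled as sparse access: on query (row j, index k), return the position and value of the k-th nonzero entry of row j. A classical stand-in for that interface (the helper is illustrative, not from the talk):

```python
import numpy as np

def sparse_oracle(A):
    """Classical stand-in for sparse access to a matrix A:
    query(j, k) returns the column index and value of the
    k-th nonzero entry in row j."""
    nz = [np.flatnonzero(row) for row in A]
    def query(j, k):
        col = int(nz[j][k])
        return col, A[j, col]
    return query

A = np.array([[0.0, 2.0, 0.0],
              [1.0, 0.0, 3.0],
              [0.0, 0.0, 4.0]])
query = sparse_oracle(A)
col, val = query(1, 1)   # second nonzero entry of row 1 sits in column 2
```

The quantum algorithm queries this map in superposition; no additional quantum memory for the problem data is needed beyond answering such queries.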