Phase estimation is one of the most important primitives in quantum algorithms, and it shows up in many applications, so from several points of view it is really worth understanding it well. The setup of the problem is the following. We are given a unitary U, but the unitary is a black box: we cannot ask it arbitrary questions, all we can do is apply it, because it is given to us as a quantum circuit that we can run, and since it is a circuit we can also apply it controlled on other qubits and apply its powers. We are also given an eigenstate of this unitary, and the corresponding eigenvalue is a complex phase; the task of phase estimation is to estimate this phase. The plan is that with a control register of n qubits we try to learn roughly n bits of the phase, and the interference pattern created by the controlled powers of U is exactly what will let us read these bits out.
Let me first set up the notation. That is, capital N is, as usual, 2 to the lowercase n. And if we were lucky enough that this unknown phase was indeed an n-bit binary number, then we would just get back exactly these n bits here. And as a bonus, our eigenstate will not get disturbed. Okay, so why is that the case? Let's just walk through this algorithm. We start with the eigenstate psi and n qubits initialized to zero. First we apply Hadamard gates to the control qubits, which creates a uniform superposition over all the values k from 0 to N minus 1. Then we apply the controlled powers of the unitary: controlled on the control register holding the value k, we apply U to the power k to the eigenstate. Since psi is an eigenstate, applying U^k just multiplies it by the phase factor e^{2 pi i k phi}. Now this phase factor looks like it belongs to the target register, but because of the control structure it is attached to the branch of the superposition where the control register holds k, so effectively the phase is kicked back to the control register; this is the famous phase kickback. So the eigenstate remains untouched, and the control register now carries the phases e^{2 pi i k phi} on its amplitudes. And you can recognize that this is exactly a Fourier-type state, so the last step is to apply the inverse Fourier transform and measure.
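The walkthrough above (Hadamards, phase kickback, inverse QFT) can be sanity-checked numerically. This is a minimal sketch, not from the lecture, assuming the sign convention in which `np.fft.fft` plays the role of the inverse QFT:

```python
import numpy as np

def phase_estimation_probs(phi, n):
    """Ideal n-qubit phase estimation: outcome distribution for eigenphase phi.

    After Hadamards and phase kickback the control register holds amplitudes
    e^{2*pi*i*k*phi}/sqrt(N); the (inverse) Fourier transform then maps this
    to the outcome register.
    """
    N = 2 ** n
    k = np.arange(N)
    state = np.exp(2j * np.pi * k * phi) / np.sqrt(N)
    out = np.fft.fft(state) / np.sqrt(N)  # unitary DFT, assumed sign convention
    return np.abs(out) ** 2

# If phi is an exact n-bit binary fraction, those n bits come out with certainty.
p = phase_estimation_probs(3 / 8, 3)
```

With phi = 3/8 = 0.011 in binary, the outcome j = 3 appears with probability 1, matching the "lucky" case described above.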
If the phase really is an n-bit binary number, then there is an outcome j with phi = j/N exactly, and for this outcome all N terms interfere constructively, while every other outcome gets amplitude zero, so we measure the right bits with certainty. But what happens for a general phase? Let us compute the amplitude of each outcome j. After the inverse Fourier transform, the amplitude a_j that I get is 1/N times the sum over k of q^k, where q denotes the quotient e^{2 pi i (phi - j/N)}. So now what I see here is a geometric series, so I can just apply the usual summation formula for it. That just means that I take one higher power of this quotient, minus one, divided by the quotient minus one; this is exactly what I get here. Okay, so this is an exact expression for the amplitude of each outcome, but of course what we actually want to ask about are probabilities.
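The geometric-series formula for the amplitude can be checked against the direct sum. A small sketch (the notation q = e^{2*pi*i*(phi - j/N)} is taken from the derivation above):

```python
import numpy as np

def qpe_amplitude(phi, j, n):
    """Amplitude of outcome j: (1/N) * sum_k q^k with q = e^{2*pi*i*(phi - j/N)},
    evaluated both directly and via the geometric-series formula (q^N - 1)/(q - 1)."""
    N = 2 ** n
    q = np.exp(2j * np.pi * (phi - j / N))
    direct = sum(q ** k for k in range(N)) / N
    closed = 1.0 + 0j if np.isclose(q, 1) else (q ** N - 1) / (q - 1) / N
    return direct, closed

direct, closed = qpe_amplitude(1 / 24, 0, 3)
```

The special case q = 1 (phase exactly on the grid) is handled separately, since there all N terms equal 1 and the amplitude is 1.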
So let us compute the probability of each outcome, which is the absolute value squared of this amplitude. Taking absolute values, the phase factors in front drop out, and if we denote by delta the difference phi minus j/N, then the probability of outcome j works out to be sin squared of N pi delta, divided by N squared times sin squared of pi delta. This is an exact formula, but it is not so easy to see what it means, so let us approximate it. When delta is small, the denominator N sin(pi delta) is close to N pi delta. So what I am doing here is I am replacing the sine function in the denominator by the sinc function, and the sinc function is just sine of x divided by x, and the nice thing is that this one over N factor is just being absorbed if I am doing this: the probability becomes approximately sinc squared of N pi delta. You can see that this sinc function inherits the zeros of the sine function at integer multiples of pi, and it decays like one over the square of its argument, so the probability is concentrated where N pi delta is small, that is, where the outcome j/N is close to the true phase.
Here I plotted sinc squared of N pi delta, for N equals 8, so that would be 3-bit estimation of phases. And so this green curve is just the function itself. But remember that we had some discrete phases, and I assume here that the true phase was 1 over 24, just a choice that kind of illustrates an interesting phenomenon here. My delta is the n-bit binary number j/N minus my phase, which is 1 over 24, so the red dots sit on a mesh of spacing 1 over N, shifted relative to the true phase, and they sample this sinc squared curve at these shifted points. You can also see that the resulting discrete distribution is not symmetric around the true phase; we will come back to this point later. So now let us ask: what is the probability that we obtain the best n-bit estimate, the mesh point closest to the true phase? For the closest mesh point the distance delta is the smallest, and this delta can only be as large as 1 over 2N: my mesh has spacing 1 over N, and the worst case is when my true phase is exactly in between two mesh points. That would be the worst case, and I just compute this probability; this covers the general case. Plugging delta equals 1 over 2N into the exact formula, the numerator becomes sin of pi over 2, which is 1, and the denominator, N times sin of pi over 2N, is at most pi over 2, so we get a small denominator, and the probability is at least 4 over pi squared, which is at least 40%. So the probability that I obtain the best n-bit estimate is not so bad, it is at least 40%, quite good actually!
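The worst-case bound of 4 over pi squared, about 40%, can be verified by plugging delta = 1/(2N) into the exact formula. A quick numerical check (a sketch, not from the lecture):

```python
import numpy as np

def best_outcome_prob(delta, n):
    """P(nearest grid point) = sin^2(N*pi*delta) / (N*sin(pi*delta))^2,
    where delta is the distance from the true phase to that grid point."""
    N = 2 ** n
    if np.isclose(np.sin(np.pi * delta), 0):
        return 1.0  # phase exactly on the grid: constructive interference
    return (np.sin(N * np.pi * delta) / (N * np.sin(np.pi * delta))) ** 2

# Worst case: the true phase sits exactly between two grid points, delta = 1/(2N).
worst = [best_outcome_prob(1 / 2 ** (n + 1), n) for n in range(2, 12)]
```

The worst-case probability decreases with n but always stays above 4/pi^2, approaching it as N grows.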
Now what's the probability that I obtain one of the best two n-bit estimates? Once again, these are the two closest mesh points: one at distance delta on one side, and the other on the other side at distance 1/N minus delta, so this is the two closest points corresponding to your true phase. And you can once again see that actually the worst case happens when your true phase falls exactly in between two mesh points. In that worst case both of the two closest points are at distance 1 over 2N, each contributing at least 4 over pi squared, so the joint probability of hitting one of the two best estimates is at least 8 over pi squared, which is about 81%. So a single run of phase estimation already succeeds with decent probability, but for applications we typically want the failure probability to be very small, and the standard technique for that is boosting: we repeat the phase estimation S times and combine the estimates, for example by taking the median. Why should the median be a good combination rule? A single estimate is epsilon-precise with probability at least 80%, so the bad estimates, the ones farther than epsilon, appear with probability at most 20%, and hence their expected fraction among the S samples is at most 20%. The median can only be bad if at least 50% of the estimates are farther than epsilon, so for the median to fail, the fraction of bad estimates must be much farther away from its expectation value, which is at most 20%. And now I ask: what is the probability of this bad event happening, that I have many, many bad estimates, much more than the expected fraction?
So when you have such a thing, that you have independent samples of something, and you want to estimate the probability of the event that the average lies far away from its expectation value, you can use the Chernoff bound, and that tells you that indeed this event has exponentially small probability in S. So taking only a few more samples ensures that you are very unlikely to be in the unlucky situation where your median is far from the true value. Because when more than 50% of the estimates are epsilon-precise, then the median of all the samples must be within this 50% part, and therefore it will be at least epsilon-precise. So maybe, do we have a... yes, maybe I do a quick drawing about this. So this is the true value that you want to estimate, and this is its epsilon neighborhood. And if here you get more than... okay, I think I should draw bigger, sorry for that. So this is the epsilon neighborhood, and here I get greater than 50%. Well, then the median, which is the middle value, must also lie within this interval, and therefore it will be epsilon-precise. Okay, now what's the problem? We are working on a cycle, yes? Is it good now? Cool, thanks. Yeah, so what's the problem here? The median is something which requires an ordering, right? So here I assumed that we are on the line, and I can order my estimates. But we are on a cycle; in fact, phases wrap around. So that doesn't work. However, you can basically still apply this idea, because we have very nice high probability of getting one of these best two estimates. So we can modify this idea, adapt it to the cycle, by simply outputting the most frequently seen element. And say, if we have a tie, then we choose one of the most frequently seen elements randomly. And this will work for the same reason, by the same Chernoff bound argument, because, well, it's exponentially unlikely that the most frequently seen estimate is not one of the two best estimates, because they have joint probability 80%.
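The median-boosting argument can be illustrated with a toy classical simulation; the numbers below (80% per-sample success, a one-sided distribution for the bad estimates) are illustrative assumptions, not the lecture's:

```python
import random
import statistics

def median_success_rate(S, p_good=0.8, eps=0.01, trials=3000, seed=0):
    """Empirical success rate of the median-of-S rule when each raw estimate
    is eps-precise with probability p_good (bad estimates land far to one side)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        est = [rng.uniform(-eps, eps) if rng.random() < p_good
               else rng.uniform(5 * eps, 50 * eps)  # a "bad", far-off estimate
               for _ in range(S)]
        hits += abs(statistics.median(est)) <= eps
    return hits / trials
```

With a single sample the success rate is about 80%, while with S = 21 samples the median fails only when more than half the samples are bad, which the Chernoff bound makes exponentially unlikely.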
So if we get one of the two best estimates with probability at least two-thirds, then any other element cannot appear in more than one-third of the cases. And therefore, as long as in our S samples more than two-thirds of the estimates are actually one of the best two estimates, then the most frequent element must be one of these two elements. And so the same Chernoff bound argument applies: by repeating this phase estimation experiment a few times, and taking the most frequently seen element, we are exponentially likely to output one of the two best n-bit estimates of our phase. And indeed, boosting in this case means that our output distribution will be exponentially concentrated on these two elements. Now the unfortunate situation is that we cannot ensure that we get a unique estimate with high probability. So we can almost get there, we can get one of two different estimates with high probability, but we cannot make sure that we get a particular one of them. And this is a bit problematic, because what we are doing here is that we are running a lot of phase estimation algorithms basically in parallel: each phase estimation step keeps our eigenstate, so we can reuse it, but we use fresh ancillas again and again and just get new and new estimates. The problem is that these prior estimates, over which we compute the median, or from which we take the most frequent element, are still lying around. Therefore we get a sort of garbage state: not just an estimate, but also a history of the other estimates based on which we computed the ultimate value. So it produces a lot of garbage, and that can be really undesirable for coherent quantum algorithms. We will look into how to solve this in several ways; I will talk about this a little bit later, but now I wanted to talk about symmetric estimation. So the motivation there is that, remember, this was the actual output distribution, these red dots, of the estimates that we got.
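The cyclic variant, taking the most frequent element with random tie-breaking, can be simulated the same way; the grid size, the location of the two best points, and the 80% joint probability below are illustrative assumptions:

```python
import random
from collections import Counter

def mode_on_cycle(samples, rng):
    """Most frequently seen estimate; ties are broken uniformly at random."""
    counts = Counter(samples).most_common()
    top = counts[0][1]
    return rng.choice([v for v, c in counts if c == top])

def success_rate(S, N=8, p_two_best=0.8, trials=3000, seed=0):
    """How often the mode is one of the two best grid points (here 3 and 4),
    when a single run hits one of them with joint probability p_two_best."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        samples = [rng.choice((3, 4)) if rng.random() < p_two_best
                   else rng.randrange(N) for _ in range(S)]
        hits += mode_on_cycle(samples, rng) in (3, 4)
    return hits / trials
```

Unlike the median, the mode needs no ordering, so it is well defined on the cycle; either of the two best points counts as a success, exactly as in the argument above.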
And it's not symmetric, and in particular we wanted to get something which is an unbiased estimator of the true value, and this doesn't look anything like unbiased, and it's probably not. But the underlying function, this sinc function, is nice, and actually we can make the output follow it: we can recover this very nice smooth sinc function as the true distribution of the outputs. And the trick for that is to apply a random shift. So as opposed to doing the phase estimation as before, we do a randomly shifted version of it. So once again we take this input, a superposition with these phases corresponding to this unknown value that we have to estimate, but as opposed to directly doing the Fourier transform on this state, we first pick a uniformly random phase, and this will be a phase between 0 and 2 pi over N, or well, sorry, now I am including the 2 pi in the phase, before I didn't, depending on how you wish: it's a number between 0 and 1 over N, or that multiplied by 2 pi. And so what you do is that you apply this phase gate, and basically add this additional phase to every state that you already have in this superposition. After you did that adjustment by this random phase, you do the Fourier transform as before, and you will get some outcome j. But you don't output the outcome j that you got; you subtract the random phase that you added, because it changed the true phase that was there. And so your estimate will be the estimate that you obtained after phase estimation, adjusted by this random shift that you made.
And you can analyze this, and as a matter of fact it turns out... well, of course there is a technical detail here, that you cannot really take a uniform real number because it has infinitely many bits, but if you were able to sample it with infinite precision, then the estimator would be exactly unbiased, and you can also show that if it has many digits, it will be very close to unbiased. But I don't go into these technical details, and I just think about this as a completely uniform random phase that we added in this interval. And so what we showed with my colleagues is that if you do this random shift trick, then the output distribution will have a density function, which is exactly this sinc function that we have seen before. So this is the function, and now all these red dots that were there, they are gone. Because we added this uniform random variable, which shifted things, it means that now we can get any real number as an estimate, between minus pi and pi if we take that phase notation, and the estimates have a continuous distribution, which is exactly this sinc function, nice and symmetric. So in this sense it is a symmetric distribution around the true value, so we can say that the estimator is unbiased. And that's very desirable in many statistical applications, because unbiased estimators have much nicer statistical properties, and you can combine your different estimates much more nicely. And so this random shift was a way to tackle the issue that our mesh of potential phases can be arbitrarily placed compared to the true phase, which may be an irrational number lying somewhere on the circle.
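Here is a toy simulation of the random-shift trick, assuming an idealized classical sampler for the phase-estimation outcome distribution; the error is wrapped into (-1/2, 1/2] measured in turns rather than the plus-minus pi phase notation:

```python
import numpy as np

def shifted_qpe_error(phi, n, rng):
    """One run of randomly shifted phase estimation: add a uniform shift
    theta in [0, 1/N) to the phase, sample an outcome from the ideal
    distribution, subtract theta again, and return the (wrapped) error."""
    N = 2 ** n
    theta = rng.uniform(0, 1 / N)
    delta = (phi + theta) - np.arange(N) / N
    den = N * np.sin(np.pi * delta)
    amp = np.where(np.abs(den) < 1e-12, 1.0,
                   np.sin(N * np.pi * delta) / np.where(np.abs(den) < 1e-12, 1.0, den))
    p = amp ** 2
    j = rng.choice(N, p=p / p.sum())
    err = (j / N - theta) - phi
    return (err + 0.5) % 1 - 0.5  # wrap the error into (-1/2, 1/2]

rng = np.random.default_rng(0)
errs = [shifted_qpe_error(1 / 24, 4, rng) for _ in range(4000)]
```

The empirical error distribution is symmetric around zero, so the sample mean of the errors is close to zero, in line with the unbiasedness claim.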
But with this random shift, you remove this uncertainty of the placement, because you basically place your mesh randomly, and therefore it doesn't matter what the actual initial phase was: for any value, it gives the same distribution of outputs, which is this nice symmetric distribution, still heavily concentrated around zero error, so giving you good quality estimates with high probability. OK, so this is the new version of phase estimation, and, well, this is really just from last year. While we had some motivation for doing this, actually this was a key ingredient in a tomography algorithm. Quantum tomography is the process where you are given an unknown quantum state, and you want to learn it to some prescribed precision that you determine. It is usually studied in a scenario where your states are given as copies, so you get several copies of the state, and you just do measurements on them. But in many cases, you have a recipe for preparing that state; as a matter of fact, in your lab, when you get these samples, you somehow prepare them. Now if you can prepare your samples within a quantum computer, then you have a recipe for preparing your state, which is a stronger access model than just getting copies, and under this stronger access model you can actually achieve a better precision dependence. So instead of this sort of shot-noise-like 1 over epsilon squared scaling, which comes from having only samples, you can get what physicists call Heisenberg-like scaling, so it's like 1 over epsilon with the precision. And for that, as a key ingredient for achieving this task, we had to ensure that this phase estimation was unbiased, so we don't get biases ruining our subsequent estimators. But it can also be used as part of an algorithm for improved estimation of partition functions. Yasin and Arjan wrote a paper about this. Yasin is also here; you can ask him more about this.
You can also use it for some trade-offs in amplitude estimation, where if you can run the full-depth circuit, then you get this Heisenberg-like 1 over epsilon scaling, but as you decrease the depth, you need to do more work in exchange for less depth. There are some nice trade-offs here, and the original paper which dealt with this was kind of technical, but now you can just use our unbiased phase estimation, and then your parallel estimates will just nicely work together, and you get the same trade-offs, but with a basically simple algorithm. So these improvements in phase estimation can actually give you improved algorithms, mostly related to statistical problems and estimation problems and so on. So it really matters to understand phase estimation well. And here is a nice puzzle that you can think about during the exercise session if you wish. Suppose that we are using this nice randomized phase estimation algorithm, where you get this exact symmetric distribution. Now this still has the issue that it can give relatively far-off estimates with non-negligible probability, so it has a heavy tail. Can you make a boosting argument which keeps the distribution symmetric? So somehow you would need to devise a combination rule which is again symmetric, but still achieves the boosting that you wish. This is a nice puzzle. It can be solved, but it needs a bit of additional ingredients, because the previous boosting assumed a fixed grid, and we just took the most frequently seen element. But now we have a continuous distribution, so with probability one all the estimates will be seen only once, so we need to do something different. It's a nice puzzle. Ok, so now in the second half of this lecture I wanted to connect the discrete and the continuous Fourier transforms, because I was struggling with this when I learned the discrete Fourier transform. I already had some background in continuous Fourier transforms from physics, and I just didn't see the connection.
They are both called Fourier transform, but one acts on vectors, the other on functions, where everything is nice and continuous, while the discrete one seemed messy and basically destroyed my intuition; I didn't know what happens there. And we just recently found a nice way to connect them to each other. This connection seems like it must have been known, but I didn't find it anywhere, so if you have any reference then please let me know. Ok, so first let me concretely introduce the continuous Fourier transform. We have a function on the real line, and it can be a complex-valued function, and we define its Fourier transform f-hat of omega as the value of this integral: I integrate from minus infinity to infinity the phase e^{-i omega x} multiplied by the function. And for normalization I use this 1 over square root of 2 pi, and this is a nice normalization, because if I use it, then the Fourier transform is actually a unitary transformation on the Hilbert space of square-integrable functions. So this nicely resembles the discrete Fourier transform, which is also unitary, but on a finite-dimensional vector space; here the space of functions is an infinite-dimensional vector space, and the Fourier transform is a unitary transformation on it. And to connect the continuous case to the discrete, what we are going to use is wrapping around periodically. If you have some complex function and you pick some period r, then we define its periodic wrapping as a function from [0, r) to C, and for a particular value x, we define it as the summation of the function over x plus integer multiples of r, summing between some bounds and taking the limit as the bounds go to infinity. Well, I could have defined it as a summation from minus infinity to infinity, but defining it via this limit is a more generally applicable definition: in some cases the doubly infinite summation would not exist, but this limit can still exist. This is a kind of principal value evaluation of these infinite sums. So this is the...
So yeah, here we only require that this summation exists in this principal value sense. If you are familiar with analysis, then you have seen these kinds of tricks; it's not super important. The important thing is that we take the function values at all the periodic translates of a given starting point and add them up, so it's really a periodic summation over all the periodic points. And because we want to have something discrete, I also define a discretized version of this wrapping, which produces an N-dimensional vector from the complex function. It's the same as before, but now the j-th coordinate of this vector corresponds to the function value at j divided by N times r. So r is a full period, and within this full period you are at the j-th mesh point, and you are again doing this periodic summation with integer multiples of the period over the whole real line. OK, so now I have defined this periodic wrapping, and here comes the amazing theorem. OK, this first part is just repeating what I said before. So take two periods, t and w; these will be the periods of your periodic summations, and we need to require the Fourier-analytic condition that t times w equals 2 pi N. If this relation holds between the two period lengths, then what happens is the following: if you take the discrete Fourier transform of the t-periodically wrapped function, discretized with N mesh points, then that is the same thing as if you take the continuous Fourier transform of the function f and discretely wrap it with period w. OK, so maybe I should draw an image representing this. We have the continuous function and its Fourier transform, and then we have their wrapped versions: on one side the t-wrapping of f, on the other the w-wrapping of f-hat. So on top we have the continuous Fourier transform, and at the bottom we have the discrete Fourier transform over N elements.
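The theorem can be tested numerically for a Gaussian, whose unitary Fourier transform is itself. In the conventions used here (unitary DFT, f-hat(omega) = (1/sqrt(2 pi)) integral of f(x) e^{-i omega x} dx), a normalization factor sqrt(w/t) shows up, which the paper's definitions may absorb; this sketch just verifies the commutation up to that factor:

```python
import numpy as np

def wrap_discretize(f, period, N, m_range=60):
    """j-th entry: sum over m of f(j*period/N + m*period)  (truncated sum)."""
    x = np.arange(N) * period / N
    return sum(f(x + m * period) for m in range(-m_range, m_range + 1))

N = 64
t = w = np.sqrt(2 * np.pi * N)           # symmetric choice satisfying t * w = 2*pi*N
f = lambda x: np.exp(-x ** 2 / 2)        # Gaussian: its unitary Fourier transform is itself

lhs = np.fft.fft(wrap_discretize(f, t, N)) / np.sqrt(N)   # unitary DFT of wrapped f
rhs = np.sqrt(w / t) * wrap_discretize(f, w, N)           # wrapped Fourier transform of f
```

For the rapidly decaying Gaussian the truncated wrapping sums converge essentially exactly, and the two sides agree to numerical precision, which is the commutative diagram in action.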
So this is a commutative diagram, in the sense that you can either first Fourier transform in the continuous picture and then do the wrapping, or you can first do the wrapping and then the discrete Fourier transform. So if you first wrap and then discrete Fourier transform, you get the same thing as if you first Fourier transform in the continuous picture and then wrap. It's a commutative diagram in the sense that you are always going downwards, because you can only do the wrapping in one direction, but whether you first do this or that, you get the same result. And I think this is a nice connection between the discrete and continuous Fourier transforms, but, as usual, terms and conditions apply; I can't speak as fast as you usually hear in ads, so you should really check the details in our paper. The good news is that it certainly holds for smooth and rapidly decaying functions, and also for quite general functions otherwise. So you can apply this trick in quite general settings, but you still need to check that everything holds; basically the main requirement is that the summations exist, plus some additional conditions. OK. So now we have a connection between the discrete and continuous Fourier transforms. This lets us gain intuition from the continuous case about what sort of things we should do in the discrete Fourier transform picture. And a nice application is how you do high-accuracy phase estimation in a single run. Before, if we wanted high-accuracy phase estimation, we had to do several repetitions, take medians, and do messy things, which also introduced garbage states. You don't want to do that. The idea here is that you should use Gaussian amplitudes. One useful fact is that the Gaussian decays rapidly, so if you wrap around a Gaussian with some period length, it's almost the same as truncating it.
If the width of your Gaussian is reasonably smaller than the period of the wrapping, then wrapping around basically has no effect on the Gaussian: the tail is exponentially decaying, so truncating the Gaussian and wrapping it around are almost the same thing. On the other hand, as you know, in the continuous picture the Fourier transform of a Gaussian is a Gaussian, and this also roughly holds for the wrapped-around version. So the Fourier transform of a wrapped-around Gaussian will be the wrapped-around version of the Gaussian that you would get in the continuous picture. There are some approximation errors between truncated Gaussians and wrapped-around ones, but those are really tiny, exponentially small. And the nice thing is what will happen in the discrete Fourier transform; again, I should draw something. So you have this circle representing the phases, and you start with a Gaussian which is kind of wide; I don't know how to draw it well, but some wide Gaussian which decays exponentially. You wrap it discretely, so in fact it is just given by some points here. Now when you do the discrete Fourier transform, it will be almost the same as the discretization of the Fourier transform of the Gaussian. So you will get a much tighter Gaussian, something like this. If you work through the math, it turns out as follows: you start with this kind of spread-out Gaussian and you apply your phases; you know that in the Fourier picture, phase multiplication before the transform is just shifting after the transform. So you start with Gaussian amplitudes, apply the phases, and that results, after the Fourier transform, in a Gaussian which is shifted to be centered around the value phi that you want to estimate. And this new Gaussian after the Fourier transform will be quite narrow.
That's why you start with a wide Gaussian: so that you end up with a narrow one, and then you have an exponentially decaying probability of getting far-off estimates. And if you choose the parameters appropriately, then basically what you get is an estimator with standard deviation about 1 over N, up to some logarithmic factors that come into the picture, in a single run, and whose deviation from the true value is also roughly a discretized Gaussian. And all this you get in a single run of phase estimation, without creating any garbage ancilla states, because it's a single run. So this is what you gain from understanding this continuous-discrete connection. One extra ingredient: I said that you should start with a discretized Gaussian, but is that efficient to prepare? Previously we just used Hadamards to get the uniform superposition, but thankfully Gaussian states are very nice: you can create these amplitudes with a very efficient circuit. It's slightly more complicated than just applying Hadamard gates, but it still has low depth. And I should mention that this can in some cases be further optimized by choosing the initial weights from some particular distribution that was understood by people in signal processing, because in signal processing people want to get the best possible outcome with as few resources as they have, and so they optimized similar things; the Fourier transform also plays an important role there. It turns out that, compared to Gaussians, you can get a slight constant improvement by using the so-called Kaiser window function, which is exactly optimized for this kind of task, but that only gives you a constant improvement. All right? So now let's apply this trick to Hamiltonian simulation and energy estimation. So in quantum computing, we often... yes? OK, so for this I should go back to the beginning. Look at this circuit.
What happens here is that first you prepare a uniform superposition over these values t. Now you will not do that; instead there will be a preparation circuit that prepares a Gaussian on these integers t. That is, instead of a flat function over all the values, you prepare a spread-out Gaussian. So you need to modify this very beginning.

Yes, yes, that's absolutely correct, and for this reason I wanted to get to Hamiltonian simulation, because it is a natural example where you have some control over what the potential phases are. If you just have an arbitrary unitary, then this can have some wrapping issues, but if someone told you that your phase is definitely between minus a quarter and a quarter, then you can place your Gaussian in the right place. This is a very good comment, and energy estimation, which I'm describing now, is exactly such a situation.

So quantum computing can be used for understanding physical systems; literally, this is what Feynman proposed it for. In particular, we often want to understand the energy levels of systems, like molecules in chemistry and so on, and therefore it's very useful if we can determine the energy of a particular state that we are given. Well, for this we need a Hamiltonian, a Hermitian matrix, and basically one of the most generic forms in which you can assume access to such a matrix in quantum computing is a block encoding: you assume that you have a unitary matrix, implemented by some nice quantum circuit, where the top-left corner of that unitary matrix is just the Hamiltonian you care about. This just means: take the block which corresponds to however many ancilla qubits being zero before and after applying V.
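The "top-left corner" picture can be made concrete with a small matrix example. The sketch below uses a standard one-ancilla construction (with a made-up 2x2 Hamiltonian): for a Hermitian H with spectral norm at most 1, the block matrix V = [[H, sqrt(I - H^2)], [sqrt(I - H^2), -H]] is unitary, and H is exactly its top-left block:

```python
import numpy as np

# A minimal block-encoding sketch: for Hermitian H with ||H|| <= 1, the matrix
#     V = [[H, S], [S, -H]]  with  S = sqrt(I - H^2)
# is unitary (S commutes with H), and H sits in the top-left corner, i.e.
# H = (<0| ⊗ I) V (|0> ⊗ I) with a single ancilla qubit.
H = np.array([[0.5, 0.2],
              [0.2, -0.3]])          # example Hermitian matrix, spectral norm < 1
I = np.eye(2)

w, Q = np.linalg.eigh(I - H @ H)     # I - H^2 is positive semidefinite
S = Q @ np.diag(np.sqrt(w)) @ Q.T    # principal square root sqrt(I - H^2)

V = np.block([[H, S],
              [S, -H]])

assert np.allclose(V @ V.conj().T, np.eye(4))  # V is unitary
assert np.allclose(V[:2, :2], H)               # top-left block is H
```

Real block encodings of interesting Hamiltonians are of course built from structured quantum circuits rather than explicit square roots, but the defining property, H in the corner of a unitary, is exactly this.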
Really, if you would just write down this V as a matrix, then it just means: take the top-left corner of some specific size. The zero with the bar just means some number of ancilla qubits, which I don't specify here; I don't want to confuse you. So if you are given such a block encoding of the Hamiltonian H, then you can use quantum signal processing to implement the exponential e to the iHt using something like order t plus log of 1 over epsilon applications, and this gives an epsilon-precise implementation. And now your unitary in phase estimation will be something like e to the iH over 2 or so, and that will ensure that your phases are indeed between minus a quarter and plus a quarter, so that this Gaussian trick helps you. I don't want to go into more detail on quantum linear algebra, because we'll talk about it next week; stay tuned for the lectures next week.

Okay, so if you remember: to get a roughly n-bit estimate of our phase, which is roughly 1 over N precise with capital N equal to 2 to the n, we had to apply up to the N-th power of the unitary. In other words, if you wanted roughly epsilon precision, you had to apply roughly 1 over epsilon powers of U. In this case it means that you need to simulate your Hamiltonian for a time of length roughly 1 over epsilon. Therefore, using this phase estimation technique, you can get an epsilon-precise energy estimate using roughly 1 over epsilon uses of your block encoding V in particular, and every component of this circuit is efficient. It is kind of like a query complexity statement, because I don't want to go into too many technical details, but really the leading order of the gate complexity will be just these 1 over epsilon uses of the block encoding of the Hamiltonian that you care about. OK, so this was an application to physics.
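The rescaling step can be checked directly in numpy. The sketch below (same made-up 2x2 Hamiltonian as an example) forms U = e^{iH/2} and verifies that for ||H|| <= 1 every eigenphase lands well inside the (-1/4, 1/4) window, so the energy is recovered unambiguously as E = 4 pi times the measured phase:

```python
import numpy as np

# Rescaling sketch: with ||H|| <= 1, the eigenphases of U = e^{iH/2},
# measured in turns, lie within +-1/(4*pi), comfortably inside (-1/4, 1/4),
# so there is no wrapping ambiguity and E = 4*pi*phase.
H = np.array([[0.5, 0.2],
              [0.2, -0.3]])                        # example Hermitian matrix
E, Q = np.linalg.eigh(H)                           # true energies
U = Q @ np.diag(np.exp(1j * E / 2)) @ Q.conj().T   # U = e^{iH/2}

phases = np.angle(np.linalg.eigvals(U)) / (2 * np.pi)  # eigenphases in turns
print(np.sort(phases))             # all inside (-0.25, 0.25)
print(np.sort(4 * np.pi * phases)) # recovers the energies E
```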
And OK, now I want to sketch another generic application. In this form it's maybe a little less known, but I wanted to tell you about it nevertheless, because I think it's interesting. This is a generalization of the Hamiltonian energy estimation problem in the following way. Now you are given a unitary V which block-encodes a potentially rectangular matrix A, which doesn't have to be Hermitian or symmetric; it is just some matrix. Once again, by definition it is the top-left corner of your matrix V, which is given as a quantum circuit. And this corner can be rectangular, so now I explicitly state that the top-left corner corresponds to starting with some number, say b, of zero qubits at the beginning, and ending with a possibly different number, say a, of zero qubits at the end. Once again, if I write down this matrix V, it just means that the top-left corner of the V matrix is the matrix A. OK, so this is just how you are given the input.

Now I need to describe the singular value estimation problem, which is a generalization of phase estimation. For this we should consider the singular value decomposition of the matrix. The singular value decomposition is something which exists for every matrix, even rectangular matrices, and it is basically a decomposition of your matrix as a sum of rank-one matrices. Each outer product of u_j with v_j is just a rank-one matrix, and the sigma_j is the corresponding singular value. The key property of a singular value decomposition is that all the v_j vectors are orthogonal to each other, and likewise the u_j vectors. So these sigma_j numbers are non-negative numbers called singular values, and the left and right singular vectors each form an orthonormal system. They need not span the entire space, because maybe your matrix is singular or rectangular or something, but they are definitely orthonormal to each other.
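The decomposition described above is exactly what a numerical SVD routine returns; the sketch below verifies the rank-one sum and the orthonormality claims on a small rectangular example matrix:

```python
import numpy as np

# The SVD writes any matrix, even a rectangular one, as a sum of rank-one
# terms sigma_j * u_j v_j^dagger, with orthonormal u_j's and orthonormal v_j's.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])                  # 2x3: rectangular, not symmetric
U, s, Vh = np.linalg.svd(A, full_matrices=False)

# Rebuild A as the sum of rank-one matrices sigma_j * outer(u_j, v_j):
rebuilt = sum(s[j] * np.outer(U[:, j], Vh[j, :]) for j in range(len(s)))

assert np.allclose(rebuilt, A)                    # the rank-one sum equals A
assert np.allclose(U.conj().T @ U, np.eye(2))     # left singular vectors orthonormal
assert np.allclose(Vh @ Vh.conj().T, np.eye(2))   # right singular vectors orthonormal
assert np.all(s >= 0)                             # singular values are non-negative
```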
Meaning that the left singular vectors are orthonormal among themselves, and the right singular vectors are orthonormal among themselves. Yes. So now I have defined the singular value decomposition of your matrix A, and the task will be similar to phase estimation, or to energy estimation for Hamiltonian matrices: we wish to estimate the singular value of a given, say, right singular vector v_j. The same thing also works for left singular vectors, but I'm just stating it for right singular vectors for simplicity. So you are given some right singular vector v_j, and you want to learn its singular value in the matrix A.

It turns out that basically the story of phase estimation goes through if you apply the right techniques, and therefore you get performance similar to phase estimation, but there is an annoying technical detail: the sign of these singular values is somehow arbitrary. You can choose it either way; it's just the standard convention to choose them positive. And basically all methods lead to producing not an estimate of the singular value sigma_j itself, but, with 50% probability, an estimate of sigma_j and, with 50% probability, an estimate of minus sigma_j. Of course, by definition these are the same number up to sign, so you can just take the positive one of the two, but that may not be the same as outputting an unbiased estimate itself; that's just a minor technical detail. This problem was first described by Kerenidis and Prakash in their famous quantum recommendation systems paper; it was later improved by Shantanav Chakraborty, Stacey Jeffery and myself, and with the latest techniques by Arjan Cornelissen and Yassine Hamoudi we got basically the nicest version of it, which gives basically the same guarantees as phase estimation. The difficulty in the early attempts was that often you did not fully retain the original singular vector.
In phase estimation it was very important that your eigenstate is not destroyed; it's kept. This was a particular property that was very hard to establish in this case, but by now basically all these things are solved. It might look a bit abstract, but you have probably heard about amplitude estimation, and that is just a special case of this. In amplitude estimation you have some state, U applied to zero, giving you something like... well, OK, I'm not using these slides anymore, so maybe I can just write it on a new board. Hopefully it will be good reflection-wise now. Can you see it now?

OK, so you have some unitary that on the zero state prepares something that marks, say with a zero flag, the good state, with some amplitude a, plus the bad state with the corresponding amplitude, and you want to learn this value, this amplitude. But if you write this down as a matrix, then you can see that the zero bra tensor identity times U, applied to this input state, is basically just a times psi-good. And if you think about this not as a column vector but as a very thin matrix with a single column, then it has a singular value, which is a. So this column vector, viewed as a column matrix, has singular value a, and estimating this singular value is the same as estimating the amplitude. But the generalization leads much further. And with this I have actually reached the end of my slides, so thank you very much, and enjoy your meal.
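The column-vector-as-matrix observation above can be made concrete with small made-up numbers. The sketch below builds a state a|0>|good> + b|1>|bad> (hypothetical a, b, and basis states chosen for illustration), projects the flag qubit onto |0>, and checks that the resulting single-column matrix has the amplitude a as its only singular value:

```python
import numpy as np

# Amplitude estimation as a special case of singular value estimation:
#   U|00> = a |0>|good> + b |1>|bad>
# and the column "matrix" (<0| ⊗ I) U |00> = a |good> has singular value a.
a, b = 0.6, 0.8                      # hypothetical amplitudes, a^2 + b^2 = 1
good = np.array([1.0, 0.0])          # hypothetical normalized good state
bad = np.array([0.0, 1.0])           # hypothetical normalized bad state

target = np.kron([1, 0], a * good) + np.kron([0, 1], b * bad)  # U|00>, flag first

P0 = np.kron(np.array([[1, 0]]), np.eye(2))    # (<0| ⊗ I), projects flag onto |0>
col = (P0 @ target).reshape(-1, 1)             # column matrix, equals a * |good>

print(np.linalg.svd(col, compute_uv=False))    # -> [0.6], i.e. the amplitude a
```

So estimating the singular value of this one-column block encoding is literally estimating the amplitude, exactly as claimed in the lecture.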