I want to tell you a little bit about a development that took me by surprise last year: the connection between time-frequency localization, which we have been talking about, Wilson bases, which I mentioned briefly and which we'll discuss in greater detail today, and gravitational waves. There was this paper on the arXiv — this version, then a later version; the final version appeared somewhere in April, I believe. This is only a partial list of the authors, and this is the paper as it appeared in Physical Review Letters. To remind you, if you need reminding: this was the experiment that, by correlating detectors over a large distance, observed the merging of two enormous masses, and detected — in a picture we have all seen a zillion times by now — this famous chirping waveform, confirmed by two observations, which is the signature of a gravitational wave. Now, if you look in that text, you find somewhere the phrase "for our frequency domain analysis, using the Wilson-Daubechies-Meyer wavelet transform." These are not wavelets at all, to begin with, and I was surprised by this juxtaposition of names, but it is explained by the fact that the reference [34] given there is this paper, which introduces a particular time-frequency transform; the authors say it shares some properties with Gabor frames, it circumvents the Balian-Low theorem, which we saw last time, and it also has a similarity to the Meyer wavelet — that is why the name Meyer got attached to it. So that's the connection with gravitational waves.
The surprising thing to me — and we can cut the projection here for the moment; the rest will be on the blackboard, and we'll come back to the screen at the very end. So the surprising thing — and I probably need to put on the lights, the blackboard lighting, yes, magic — was that these Wilson bases, which Stéphane Jaffard, Jean-Lin Journé and I had formulated in a paper precisely to circumvent the Balian-Low theorem, and which we had advertised to physicists — we hoped physicists would pick them up for atomic and molecular computations — turned out to matter again, and I'll come back to that. As a result of our construction, other constructions took place: these led to the localized cosine, trigonometric bases of Raphy Coifman and Yves Meyer, and after their construction I expected that the original construction would not be used much anymore, and that's indeed the case for many numerical purposes. But then, with the gravitational wave work, the fact that these bases had such simple expressions in terms of Gaussians became relevant again, and that's how they were used.
So I want to tell you not only the construction, how it works and why it was such a fun surprise, but also all the other developments that came from it and how it sits in the bigger space of time-frequency bases, which is not in the paper. As we saw two weeks ago, there exists no function h with good decay in space and good decay in frequency such that, for some choice of τ and ω, the family h_{m,n}(x) = e^{imωx} h(x − nτ) constitutes an orthonormal basis; that is the Balian-Low theorem, and we proved it in two stages. First, what you really need to prove is that an orthonormal basis requires the product τω to equal 2π; we saw that by looking at time-frequency localization operators on a big region of the time-frequency plane and showing that the trace of such an operator is proportional to the area divided by 2π. Then, for τω = 2π, the Zak transform turns out to be an excellent tool: the Zak transform showed that having an orthonormal basis is equivalent to asking that the Zak transform of h, which is a function of two variables, have magnitude one everywhere, and that is impossible, because the Zak transform of a function with good decay must have a zero. So that's how we showed there is no such orthonormal basis — that's Balian-Low. Then orthonormal bases of wavelets came along in the 80s, and they were really surprising, because there, instead of translating and modulating, what one did was translate and dilate: starting from one function with some oscillations, so that in Fourier you think of it as something with a zero at ξ = 0 — this is ψ̂(ξ).
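The theorem and the Zak-transform criterion just recalled can be written out compactly; a sketch in the lecture's normalization (the decay hypothesis is stated here in its usual second-moment form, which the lecture only paraphrases):

```latex
\text{If } h_{m,n}(x) = e^{i m \omega x}\, h(x - n\tau),\quad \tau\omega = 2\pi,
\text{ is an orthonormal basis of } L^{2}(\mathbb{R}), \text{ then}
\Big(\int_{\mathbb{R}} x^{2}\,|h(x)|^{2}\,dx\Big)
\Big(\int_{\mathbb{R}} \xi^{2}\,|\hat h(\xi)|^{2}\,d\xi\Big) = \infty ,
\qquad\text{and at critical density}\qquad
\{h_{m,n}\}\ \text{orthonormal basis} \iff |Zh(s,t)| = 1 \ \text{a.e.},
```

which is impossible for a well-localized h, since its Zak transform is continuous and a continuous Zak transform must vanish somewhere.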
If you take your ψ to be a real, symmetric function, then its absolute value in Fourier is symmetric, so this is |ψ̂(ξ)|. In order to reach different frequency ranges with wavelets, you scale: morally you are living here, and you scale to larger and to smaller frequencies, typically by factors of two, ψ(2^j x). Each ψ itself is typically well localized in x, something like this, so it only covers a certain region in x, and then, within a given frequency range, to cover different places in x you have to translate it. If you expand the function to higher frequencies, making it much narrower in x-space, you translate by small steps; if you make it very wide, you translate by big steps. So you get ψ(2^j x − k), normalized in L2: you look at the family ψ_{j,k}(x) = 2^{j/2} ψ(2^j x − k). As Yves Meyer showed for the first time — though it turned out to be implicit in an earlier construction of Jan-Olov Strömberg — it is possible to find functions ψ that are beautifully localized in time and in frequency and give an orthonormal basis. First of all, you generate things in a completely different way, and dilations do something completely different to you — I hope to come back to that, because there are still some puzzles there that I don't understand. So you aren't really working with a nice lattice. If you think of the windowed Fourier family as sitting in time-frequency space, with a function well localized around time zero and frequency zero when m and n are equal to zero, then different values of n move you around in time, different values of m move you to different locations in frequency, and if you do both, you cover the plane with a lattice; the condition τω = 2π is a condition on the area of
that lattice cell. If you think of the wavelet construction in a similar way, you sit at a positive frequency — let's look at positive frequencies only, since you are zero at zero frequency — and you translate in time by certain amounts; if you go to half the frequency, you translate by twice as much, and if you go to double the frequency, you translate by half the amount. So you have again a kind of tiling. It's a bit harder to link things up, but if for each of these points you draw the rectangle that corresponds to it best — like here, and here, and then here I have one — you see again that they have constant area, so you have again a covering, but in a different way than with the old lattice. The difference is that I very carefully drew only positive frequencies here: the wavelet has these two bumps, so if I like to think of localization in frequency space, these functions really live on both the negative and the positive axis, and that's not quite a fair way of counting. It is true that if you look at the quantity ξ², which is the quantity of interest, you are moving things to higher and higher frequencies, but you can't say you have stuff at ξ = 0, because the function is zero there. So it's a different kind of localization.
Now you could say: before, I was covering the whole frequency axis by moving stuff around, so I could move from positive to negative frequencies; that's not the case here — what lives at positive frequencies remains there, and what lives at negative frequencies remains there. So you could imagine trying to do something on just the inverse Fourier transform of L2(R+), which is a Hardy space, and ask: can I build orthonormal wavelet bases there? It turns out you can't; you have a similar obstruction — it's not the same proof, but Pascal Auscher proved that you can't do that. So it seems that the two-bump situation in the orthonormal wavelet case is essential. It's essential, but it doesn't have to be nicely symmetric: in fact, you can make constructions in which you have a thinning here — these, of course, are complex wavelets, in which the magnitude of the wavelet looks like this. What happens is that the integral of |ψ̂(ξ)|²/|ξ| over the negative axis has to equal the integral of |ψ̂(ξ)|²/ξ over the positive axis — those have to be equal on R+ and R− — but you can still do that by putting the negative-frequency bump very, very close to zero with very little L2 weight there. So you can make orthonormal bases for L2(R) that do this, but they're not really very useful, because you've kind of cheated: you're looking at much tinier frequencies on the negative axis than on the positive axis, coupling things in a weird way.
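The balance condition just described, written out:

```latex
\int_{-\infty}^{0} \frac{|\hat\psi(\xi)|^{2}}{|\xi|}\, d\xi
\;=\;
\int_{0}^{\infty} \frac{|\hat\psi(\xi)|^{2}}{\xi}\, d\xi ,
```

so the negative-frequency bump can carry very little L2 mass as long as it sits close enough to ξ = 0 for its weighted integral to match the positive side.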
Who knows, this might at some point have some applications — I have never seen any — but you can construct these; as soon as you don't want anything at negative frequencies, though, you have this no-go theorem. So that's a little side remark, to which I will come back. In the mid to late 80s, Ken Wilson produced several preprints; as far as I know, none of them have actually been published. I got a preprint when I was visiting Guy Battle in Montreal — "look, this is a very interesting paper" — and I got really intrigued by it, and that's the paper from which we then did our work. It's only much later that I realized it had never been published. And if you look in the Cornell archive of the Ken Wilson papers, you can find a different preprint there, co-authored with several other people, in which he uses a similar construction with some applications, but again that was never published. I don't know whether these were submitted; they weren't very well written, so it may well be that they were submitted, the reviewers said this has to be rewritten, and it wasn't done — that might be the reason. I am trying to get permission to post the original paper from which I worked on the arXiv, because it now has some historical relevance. What Ken Wilson proposed — I don't know whether he was aware of the wavelet work, but he based it on the fact that ξ² is really the quantity you're interested in — was to say: instead of trying to build bases of the form h_{m,n} like we do, let's think of (my notation, not his) a function f̂_m that lives in frequency, with a bump around m and around −m, nicely localized around these two, and then take the inverse Fourier transform of this and translate it around.
So, in a sense — as it was put in the abstract of the article of Necula, Klimenko and Mitselmakher — you are putting something a little bit wavelet-like, Meyer-like, into your time-frequency construction. You're coupling negative and positive frequencies together, and that of course enables you to make the function real if you wish. And then, can we make an orthonormal basis? Wilson didn't provide an explicit construction, but he gave convincing numerical evidence that this might be possible — you want these functions to be an orthonormal basis. In fact, the formula exactly as he proposed it we didn't manage to make work, Stéphane Jaffard, Jean-Lin Journé and I; but what we found — well, Stéphane Jaffard came up with it — was a construction whose formula was really intriguing: a combination of cosines and sines. So let me take the original form he proposed. We're going to build everything from one function φ — that's what we're going to see is possible to do — and we're going to define f̂_1 this way.
f̂_1 is going to be this function φ, a bump function in Fourier space, and then we are going to move this bump onto the positive and negative axes: we take φ(ξ − l) and φ(ξ + l), and since the two bumps will be far apart, normalizing their linear combination means a factor 1/√2. We'll call the result f̂_m(ξ), except that I will write the index as m = 2l + κ, where l ranges over the natural numbers — my natural numbers do not include zero; if your natural numbers don't include zero, the "minus zero" I'll write below is superfluous, and if they do, you have to remove it — and κ is zero or one. And then there is a phase factor, e^{iπκξ}. This seems complicated, but what's happening is that depending on the parity of l and κ, I take either a positive or a negative combination of the two bumps, which really means that when I Fourier transform back I get either a cosine or a sine; so I'm going to mix cosines and sines. As for the phase factor: my Fourier transform is not normalized the way I used it before — I've given in to working with ĝ(ξ) = ∫ e^{2πi x ξ} g(x) dx, with the 2π in the exponent — and with that convention, if κ = 0 the factor doesn't matter, but if κ = 1 it means I'm translating by a half. The result is that I have now defined f̂_m for m in N \ {0}: I start from one, and then I have two, three, four, five, and so on. If I then take those functions and move them by integers, I have a kind of hybrid family in which I'm mixing cosines and sines,
with the choice depending on whether l is even or odd and on κ, and I'm also translating not just by integers but, when κ is not zero, by half-integers as well. So I have something that's a little bit more complicated than that lattice, and the idea is — it turns out — that if you define that family, cosines and sines at integer and half-integer translates, then one can construct φ such that the f_{m,n} constitute an orthonormal basis for L2(R). That was the point of the Wilson basis. But the surprising thing is that you can relate that orthonormal basis — so it's not just a trick of combining positive and negative frequencies — to taking a lattice in the original sense that is too dense, and weeding things out in a good way; we're back to time-frequency, recombining things appropriately. It's very well possible that there are yet other ways of weeding out frames — I'll call them frames, and we'll see what that means in a second — in time-frequency space and making bases out of them. So let's leave this for a moment and go to frames: parenthesis, time-frequency frames. Time-frequency frames started really as a result of the Balian-Low theorem: we wanted good time-frequency localization, we couldn't do it with an orthonormal basis, so let's make things redundant. On the other hand, we had also known for a long time that you have this nice continuous transform, which I also introduced to you: if I define the windowed Fourier transform of a function as (Wf)(t, ν) = ∫_R f(s) e^{−iνs} w(s − t) ds, with some window function w, then we had seen that you can very nicely reconstruct f from that windowed Fourier transform — a resolution of the identity, f = (1/2π) ∬ (Wf)(t, ν) w_{t,ν} dt dν for a normalized window — so there was a reconstruction formula for the decomposition. So it seemed natural to look
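The Wilson basis just described is usually quoted in the following real form, equivalent up to conventions to the f̂_{2l+κ} construction above (the l = 0 layer of pure translates is the companion piece that completes the family):

```latex
\psi_{0,n}(x) = \varphi(x - n), \qquad n \in \mathbb{Z},
\qquad
\psi_{l,n}(x) = \sqrt{2}\,\varphi\!\left(x - \tfrac{n}{2}\right)
\begin{cases}
\cos(2\pi l x), & l + n \ \text{even},\\
\sin(2\pi l x), & l + n \ \text{odd},
\end{cases}
\qquad l \ge 1,\ n \in \mathbb{Z},
```

so that each fixed l mixes the two frequency bumps at ±l into a cosine or a sine, sitting on half-integer translates.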
at the situation where, instead of taking labels that are continuous, you discretize — let me put the continuous version in square brackets here to distinguish it from what I'm going to do now. So w_{m,n}(s) is just the window moved by a multiple of τ, times a phase factor: w_{m,n}(s) = e^{imωs} w(s − nτ). You make this family of functions and ask: can one characterize — well, not just characterize f, but read off properties of f — from properties of the sequence ⟨f, w_{m,n}⟩? And the answer is yes, you can do that very beautifully. In fact, if your window is nicely concentrated in time and frequency, then every ⟨f, w_{m,n}⟩ is like a little crumb, a little nugget of information around that time-frequency location; you can look at where things live in time-frequency, and that's how you build those spectrograms that I showed you in the very first lecture, with the color schemes over the time-frequency plane to give you localization in time and frequency. Okay, now mathematically, what are you doing? You're mapping L2(R) to — it turns out — l2(Z²): you make a square-summable sequence over the two labels, and what you find is that ‖Tf‖², which is just Σ_{m,n} |⟨f, w_{m,n}⟩|², typically satisfies — if you have a good frame — A‖f‖² ≤ ‖Tf‖² ≤ B‖f‖²: you sandwich the original norm both above and below. Whenever we have something like that — in this case the family is labeled by Z², but it doesn't have to be — any family of functions with the property that the inner products with them give you this kind of sandwiching of the original norm is said to constitute a frame. In the case we are considering, where we build our frame by time-frequency translations, that's only possible if ω times τ is
less than or equal to 2π: when the mesh of your lattice is too big, you just don't have enough vectors to cover the full range — that's the same argument we used to show that for an orthonormal basis you need τω to be at most 2π; you need at least that many vectors. Okay, so how do you build a frame? A very simple way — there are many ways — is to be twice as redundant as needed for an orthonormal basis. Incidentally, most frames have redundancy: the frame inequality looks very similar to the kind of thing you would write for an orthonormal basis, but in general you have redundancy. The easiest example to convince yourself of what may be going on is what some people call the Mercedes frame: you work in two dimensions and consider three vectors of equal length making angles of 120 degrees with each other — so it starts looking like the Mercedes logo. What happens if you take these three vectors u_j of length one is that Σ_{j=1}^{3} |⟨v, u_j⟩|² = (3/2)‖v‖²: the two tilted vectors each contribute three quarters of the x-component squared and one quarter of the y-component squared, and the vertical one contributes the whole y-component squared, so it adds up nicely. That's a very easy frame, one where you have equality all through, and it's the simplest example of what's called a tight frame — a tight frame is a frame where A and B are equal. Okay, so do we need redundancy? Let's take a frame that's twice as redundant: we can look at e^{2πins} w(s − m/2), which corresponds to
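The Mercedes-frame identity is easy to check numerically; a minimal sketch (vector names and the random test vector are mine, not from the lecture):

```python
import numpy as np

# Three unit vectors, 120 degrees apart (the "Mercedes frame").
angles = np.array([np.pi / 2, np.pi / 2 + 2 * np.pi / 3, np.pi / 2 + 4 * np.pi / 3])
frame = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # shape (3, 2), rows u_j

rng = np.random.default_rng(0)
v = rng.standard_normal(2)

# Tight-frame identity: sum_j <v, u_j>^2 = (3/2) ||v||^2 for every v.
lhs = np.sum((frame @ v) ** 2)
rhs = 1.5 * np.dot(v, v)
print(np.isclose(lhs, rhs))  # True
```

Because A = B = 3/2 with equality for every v, this is a tight frame with redundancy 3/2.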
an ω of 2π and a τ of one half; the product is π, so we expect a redundancy of two — the mesh is half the size. If you now look at the Zak transform of these, U_Z w_{m,n}: if m is even, m = 2k, we've essentially done that already last time — those are just integer translates and modulations — but let's compute it quickly, with (U_Z f)(s,t) = Σ_l f(s − l) e^{2πilt}. We get (U_Z w_{2k,n})(s,t) = Σ_l e^{2πin(s−l)} w(s − l − k) e^{2πilt}; since n is an integer, the e^{−2πinl} factor drops out, and changing the summation variable to l + k pulls out a phase, so that (U_Z w_{2k,n})(s,t) = e^{2πins} e^{−2πikt} (U_Z w)(s,t). But now we also see what to do for the odd ones: (U_Z w_{2k+1,n})(s,t) = Σ_l e^{2πin(s−l)} w(s − l − k − 1/2) e^{2πilt}, and the same trick — shifting the summation variable so the half shows up in the argument — gives
(U_Z w_{2k+1,n})(s,t) = e^{2πins} e^{−2πikt} (U_Z w)(s − 1/2, t). So, just like before: if I now look at Σ_{m,n} |⟨f, w_{m,n}⟩|², then, because the Zak transform is unitary, this is Σ_{m,n} |⟨U_Z f, U_Z w_{m,n}⟩|², and let me write those immediately as integrals over the unit square. I split the sum over m into even and odd, so the sum over k will give me either the even ones or the odd ones. For the even ones, m = 2k, I get Σ_{n,k} | ∬ U_Z f(s,t) · conj(U_Z w(s,t)) e^{−2πins} e^{2πikt} ds dt |², and for the odd ones the same thing with U_Z w(s − 1/2, t) in place of U_Z w(s,t). In both cases I am looking at all the Fourier coefficients of a function on the unit square, so the sum of squares is just the L2 norm of that function: in the first case I get ∬ |U_Z f(s,t)|² |U_Z w(s,t)|² ds dt, and in the second case the same first factor with |U_Z w(s − 1/2, t)|². So now, what I would like are the frame inequalities — but let's aim straight away for A = B, for a tight frame; one may as well be hanged for a sheep as for a lamb. I want this to equal ‖f‖², and since the Zak transform is unitary, ‖f‖² = ∬ |U_Z f(s,t)|² ds dt, so
what I really need is that |U_Z w(s,t)|² + |U_Z w(s − 1/2, t)|² be identically equal to one. Can I do that? Well, remember what we have for this window: by construction, (U_Z w)(s,t) = Σ_{l∈Z} w(s − l) e^{2πilt} — a minus sign here or there doesn't matter much. I know that (U_Z w)(s, t + 1) = (U_Z w)(s,t), and if I put s + 1 in, I can move a factor out and find (U_Z w)(s + 1, t) = e^{2πit} (U_Z w)(s,t), give or take a sign — so you have a funny kind of quasi-periodicity. These are the conditions this function has to satisfy, and I know that trying to get its absolute value identically equal to one given those conditions is impossible. But that's not what I'm trying: I'm trying to get this sum of two squares equal to one, and that's very easy. In fact, one example is to take for the window just a Gaussian. If you compute the Zak transform of the Gaussian on the unit square, you find that it has a zero in the middle — a single zero; you get a Jacobi theta function, and you can look all of this up. It has that single zero, but if you then look at that function together with its translate by a half, the sum of the squares has no zero anymore: |U_Z g(s,t)|² + |U_Z g(s ± 1/2, t)|² — plus or minus is the same thing because of the quasi-periodicity — is bounded below by a constant and above by another constant. What that already means, and exactly with those constants, is that the family g(x − m/2) e^{2πinx}
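The claims about the Gaussian's Zak transform can be checked numerically; a sketch, assuming a truncated sum and an L2-normalized Gaussian window (the grid, truncation, and names are mine):

```python
import numpy as np

def zak(g, s, t, L=30):
    # Truncated Zak transform  (Z g)(s, t) = sum_l g(s - l) e^{2 pi i l t}.
    ls = np.arange(-L, L + 1)
    return np.sum(g(s - ls) * np.exp(2j * np.pi * ls * t))

gauss = lambda x: 2.0 ** 0.25 * np.exp(-np.pi * x ** 2)  # ||gauss||_2 = 1

ss = np.linspace(0, 1, 101)
ts = np.linspace(0, 1, 101)
Z = np.array([[zak(gauss, s, t) for t in ts] for s in ss])
Zshift = np.array([[zak(gauss, s - 0.5, t) for t in ts] for s in ss])

# Single zero of |Zg| at the center of the unit square...
print(abs(zak(gauss, 0.5, 0.5)))   # ~ 0 (the single zero)
# ...but |Zg(s,t)|^2 + |Zg(s-1/2,t)|^2 stays away from 0:
combo = np.abs(Z) ** 2 + np.abs(Zshift) ** 2
print(combo.min(), combo.max())    # numerical frame bounds A, B
```

The printed minimum and maximum are exactly the frame constants A and B for the half-integer-translate Gaussian frame discussed next.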
constitutes a frame with frame constants exactly A and B, because that's exactly what you get. But you can now do more — and it's a construction you can view either at the level of the frame operator or at the level of the Zak transform. Let me get the boards down and do it at the level of the operators. I have A‖f‖² ≤ ‖Tf‖² ≤ B‖f‖², which says that ⟨T*Tf, f⟩ is bounded below by A‖f‖² and above by B‖f‖², so T*T is an operator sandwiched between A times the identity and B times the identity — in the sense of inequalities between operators, where an operator S is positive if and only if ⟨Sf, f⟩ ≥ 0 for all f, and that defines inequalities between operators. If I subtract the average, then T*T − ((A+B)/2) Id is bounded above by (B−A)/2 times the identity and below by the negative of that, which means ‖T*T − ((A+B)/2) Id‖ ≤ (B−A)/2. Continuing: this means that in a certain sense (2/(A+B)) T*T is close to the identity — ‖Id − (2/(A+B)) T*T‖ ≤ (B−A)/(A+B), a number strictly less than one. In my native Flemish we would say that's a truth as big as a cow. And that means I can invert this operator by the standard Neumann series for operators close to the identity: since Id − (2/(A+B)) T*T is small, (T*T)^{−1} = (2/(A+B)) Σ_{k≥0} (Id − (2/(A+B)) T*T)^k. So T*T is a nicely invertible operator, as we could have expected, and this actually gives a very, very fast algorithm, because you can rewrite the series as a
product of dyadic powers. Writing R = Id − (2/(A+B)) T*T, the partial sums S_L = Σ_{k=0}^{2^L − 1} R^k satisfy S_L = S_{L−1} (Id + R^{2^{L−1}}). So you build it up in dyadic blocks: you take one plus the operator, then multiply by one plus the operator squared, which gives one plus the operator plus its square plus its cube, and so on; by multiplying, the partial sums get bigger and bigger — and in the inversion formula one of course takes the limit as L goes to infinity, which is certainly fine. It's very easy to build a nice iterative scheme that converges very fast: because ‖R‖ ≤ (B−A)/(A+B) < 1, you have a geometric series, hence exponential convergence. Now, if you think of what you're doing in terms of the Zak transform: inverting the operator T*T, which corresponds to multiplication by |U_Z w(s,t)|² + |U_Z w(s − 1/2, t)|², means defining (U_Z((T*T)^{−1} f))(s,t) = (U_Z f)(s,t) / (|U_Z w(s,t)|² + |U_Z w(s − 1/2, t)|²). That is fine, because this denominator is periodic — the funny quasi-periodicity conditions only affect
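The dyadic build-up of the Neumann series can be sketched in finite dimensions, with a random positive-definite matrix standing in for T*T (all names here are mine, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
S = M @ M.T + 5.0 * np.eye(6)        # positive definite, stand-in for T*T

w = np.linalg.eigvalsh(S)
A, B = w.min(), w.max()              # "frame bounds" of S

I = np.eye(6)
R = I - (2.0 / (A + B)) * S          # ||R|| <= (B - A)/(A + B) < 1

# Dyadic build-up: S_L = S_{L-1} (I + R^{2^{L-1}}) sums 2^L Neumann terms.
partial = I + R                       # S_1: two terms
Rpow = R
for _ in range(5):                    # after the loop: 2^6 = 64 terms
    Rpow = Rpow @ Rpow
    partial = partial @ (I + Rpow)

S_inv_approx = (2.0 / (A + B)) * partial
err = np.linalg.norm(S_inv_approx - np.linalg.inv(S))
print(err)                            # tiny: geometric convergence
```

Five matrix multiplications already capture 64 terms of the series, which is the point of the dyadic factorization.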
the phase factor — so the quotient is still periodic in t and still has the same funny quasi-periodicity in s as before, and hence it has the right shape to be in the image of L2 under the Zak transform. So this does define the inverse operator; you have no problem with that. But why am I working so hard on that inverse operator? Well, now a little bit of abstract nonsense. I have that Tf is the sequence ⟨f, w_{m,n}⟩. What is T*T? Well, ⟨T*Tf, g⟩ = ⟨Tf, Tg⟩, which in little l2 is the inner product of the sequence with entries ⟨f, w_{m,n}⟩ with the similar sequence for g, with complex conjugation. So I can write ⟨f, g⟩ = ⟨T*Tf, (T*T)^{−1} g⟩, and if I work that all out I get ⟨f, g⟩ = Σ_{m,n} ⟨f, w_{m,n}⟩ ⟨(T*T)^{−1} w_{m,n}, g⟩. Understood in weak form, this gives me a way of recovering f from its windowed Fourier coefficients: f = Σ_{m,n} ⟨f, w_{m,n}⟩ (T*T)^{−1} w_{m,n}. Now, you can convince yourself that T*T commutes with translating by multiples of a half and modulating by e^{2πin·} — the operations that generate the frame. You can work that out directly, but we can also see it from the Zak transform: remember, in the Zak picture, doing this translation in x and modulating is just multiplication by phase factors, and T*T, we have seen, is just multiplication by a nice periodic function; since that function is periodic, multiplying by the phase factors before or after doesn't matter. So this is just a shortcut, to avoid having to compute that if I first apply the inverse to the window function and then look at
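The abstract-nonsense reconstruction formula f = Σ ⟨f, w_{m,n}⟩ (T*T)^{−1} w_{m,n} can be sketched in finite dimensions, where the dual frame is computed explicitly (the frame here is random and illustrative, not the Gabor frame of the lecture):

```python
import numpy as np

rng = np.random.default_rng(3)
F = rng.standard_normal((7, 4))        # rows w_j: a redundant frame for R^4
S = F.T @ F                            # frame operator, stand-in for T*T
F_dual = F @ np.linalg.inv(S)          # dual frame vectors  w~_j = S^{-1} w_j

f = rng.standard_normal(4)
coeffs = F @ f                         # frame coefficients <f, w_j>
f_rec = F_dual.T @ coeffs              # sum_j <f, w_j> w~_j
print(np.allclose(f_rec, f))           # True: exact recovery despite redundancy
```

Since F_dual.T @ F = S⁻¹ FᵀF = Id, the recovery is exact for every f, which is the finite-dimensional version of the weak-form identity above.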
the translates times e^{2πinx} — that this is the same thing as taking (T*T)^{−1} of each of the functions w_mn. In other words, applying this inverse operator to each of the translated and modulated functions amounts to computing just one of them and then translating and modulating it. So all I need to build here is a single w̃: these are the w̃_mn, the translates and modulates of that one function. And that is exactly what I was showing you I can do: if I take a Gaussian to begin with, I take my Gaussian divided by that sum of squares, |Zg(s,t)|² + |Zg(s+1/2,t)|², and I obtain the corresponding g̃. But I can actually do even a little bit more. I mean — after all, what I have written here — okay, I have lost the thread of what I was going to say, so let me erase a board; I will find my thread again. So, what we have found: a tight frame of the form w(x − m/2) e^{2πinx} is of course equivalent to asking that its Zak transform satisfies this condition — call it (★) — equal to one. For a general w — not satisfying (★) equal to one, but still with (★) squeezed between a and b, with a positive and b finite — we have that f can be written as the sum over m, n of ⟨f, w_mn⟩ w̃_mn, where Z w̃ is Z w divided by that expression (★). All of that is stuff we have now established, so that inverse really is useful. But, as I said, I can do a little bit more. Suppose I have a w for which that expression (★) is bounded between a and b, and suppose that, instead of finding the dual function w̃ — the one I need in order to recover f itself from the image Tf — I do something that goes halfway. So let us consider the following: a w♯ which has the property that its Zak transform is the Zak transform
of w divided by the square root of (★). That function is perfectly well defined, so I can do this, and it is still going to satisfy the quasi-periodic conditions that a Zak transform must satisfy, so I am doing something fine. What it really corresponds to is that w♯ is the square root of the inverse operator applied to w: w♯ = (T*T)^{−1/2} w. And just as I had a fast, exponentially convergent expansion for the operator (T*T)^{−1}, I can have a fast iterative scheme for this square root as well, if I do not want to go via the Zak transform. Okay. So now, if I look at |Zw♯(s,t)|² + |Zw♯(s+1/2,t)|², the numerator and the denominator cancel out, and I find that this equals one: w♯ will generate a tight frame with frame constant one. In practice, both ways of looking at all this are useful. In some cases the window for your transform is given to you — by a setup, or you have computed it, or somebody else gives it to you — and you want to recover f; then you use the inverse operator. In other cases — and that is the case for the Wilson basis construction — you actually get to choose your window, and then it makes sense to start, for instance, with the Gaussian window, for which a and b are fairly close to each other; the ratio (b − a)/(a + b) is fairly small, so you have really fast convergence for your iterative algorithms, and you can build this function. And actually, I am going to show you — can we switch for a moment back to the screen? I have to do the lights and bring this down; let me do that first. Maybe we do not even need the last one. So this is that paper, the one about which I am talking; let me minimize that. If we go near the end — very slowly — I have a figure here somewhere.
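Both objects that appeared here — the expression (★) built from the Zak transform of a window, and the tight window w♯ obtained by dividing by √(★) — can be checked numerically. A minimal sketch, my own rather than the lecture's: I assume the Gaussian e^{−πx²} as a stand-in for the window, and the truncation range and grid sizes are arbitrary choices. Since the Gaussian decays so fast, a short truncated sum gives the Zak transform to machine precision:

```python
import numpy as np

def zak(g, s, t, L=10):
    """Truncated Zak transform: Zg(s,t) = sum_l g(s+l) exp(2*pi*i*l*t)."""
    return sum(g(s + l) * np.exp(2j * np.pi * l * t) for l in range(-L, L + 1))

gauss = lambda x: np.exp(-np.pi * x**2)     # stand-in window (any width works)

# Grid over one fundamental cell of the Zak transform.
s = np.linspace(0.0, 1.0, 64, endpoint=False)[:, None]
t = np.linspace(0.0, 1.0, 64, endpoint=False)[None, :]

Zg       = zak(gauss, s, t)
Zg_shift = zak(gauss, s + 0.5, t)
star = np.abs(Zg)**2 + np.abs(Zg_shift)**2   # the expression "star"

a, b = star.min(), star.max()                # frame bounds: 0 < a <= star <= b
print(a, b)

# The sharp window w# has Zak transform Zg / sqrt(star).  Because star is
# invariant under s -> s + 1/2 (quasi-periodicity of Zg), the same combination
# for w# is identically 1: the tight-frame condition.
Zs       = Zg / np.sqrt(star)
Zs_shift = Zg_shift / np.sqrt(star)
tight = np.abs(Zs)**2 + np.abs(Zs_shift)**2
print(np.max(np.abs(tight - 1)))             # ~ 0
```

The printed a and b are the frame bounds for the Gaussian at this oversampling; their closeness is exactly what makes the iterative algorithms above converge so quickly.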
It is not here... there. Some of the computations I have done here are in the paper; some of them are done differently. Where is the graph? I always believe, when you construct functions, in showing what they look like. So in fact, this particular function, and this is its Fourier transform, do satisfy those conditions — you see they are built from Gaussians, with very fast, exponential decay. Okay, so let's get back to the board. So, we have now talked about these frames, these tight frames; I have convinced you, I hope, that I can construct them, nicely and computationally, and that they have this fast decay. What do they have to do with the Wilson basis? Well, let's go back to the Wilson basis and try to see what it means to have one. What I wanted, as I was telling you, were special ways of constructing functions f_m — done by juggling phase factors and sines and cosines — such that the f_m(x − n) form an orthonormal basis. Let's look at what this means. It certainly means that for any function h you want the sum over m and n of |⟨h, f_mn⟩|² — let's call these functions f_mn, with m going from 1 to infinity and n ranging over ℤ — to be equal to ‖h‖². And here — because I want to be consistent with that paper, so that I can look up a formula if I start messing up — my Fourier transform is now, exceptionally, defined as in the paper. This was actually the last paper in which I still felt like a physicist when writing it: the very last paper I wrote in which the inner product in Hilbert space is linear in the second argument and not in the first. The only way I could win that point with my two mathematician co-authors — later I did not care anymore, but then I still cared — was to take their Fourier transform. So the Fourier transform
here is defined with the 2π placed their way, which means it is no longer unitary, so I have a factor floating around that I am not used to. So I have the sum over m, and then the sum over n in ℤ, of |∫ ĥ(ξ) f̂_m(ξ)* e^{2πinξ} dξ|². And here we do the usual trick. Actually, in the paper it says: use the Poisson formula. But whenever you say "Poisson formula" to a mathematician, they say, oh, you have to verify all kinds of conditions — and in fact, if you work in L², you do not at all, which is why I usually do it the way I am going to explain now. The first thing I do is say: I have the perfect right to write this integral as a sum of integrals over the pieces from l to l + 1. I had this thing squared, and I am writing my integral as a succession of little pieces of integrals; you cannot stop me from doing that. Then — because everything converges, since I am considering functions f that decay very nicely, and so on — for each piece I make a change of variable, to ζ + l; the factor coming from the l I can forget about, it does not matter; and now my integral runs from zero to one. Once I have done that, I pull the summation inside, because everything converges as nicely as you wish. I now have here a function that is periodic, and of which I am taking all the Fourier coefficients; and the sum of the squares of those coefficients, I know, is going to equal the integral from zero to one of that function, squared, dζ. (Why do you put the square inside the sum there?) Ah — I should have done that, thank you. And now I can unwrap that sum. So let me be careful: the sum over m does not concern me for the moment; I have the integral from zero to one, and I am going to
write a sum over l and l′, and I have ĥ(ζ+l) ĥ(ζ+l′)* f̂_m(ζ+l)* f̂_m(ζ+l′), the whole thing dζ — and again, everything converges absolutely, so I can make all the changes I want. I am going to do a number of things here. I write l′ as l plus a difference: in the sum over l′ I redefine the summation variable as k = l′ − l. Then I take the sum over l out, and I change the integration variable: instead of dragging ζ + l around, I call it ξ, so that I am integrating from l to l + 1 in ξ. Now I see that I have no l-dependence left in this integral, so the sum over l of integrals from l to l + 1 is just an integral over all of ℝ, and the sum over k I can take out as well. I get: the sum over k of ∫ ĥ(ξ) ĥ(ξ+k)* Σ_m f̂_m(ξ)* f̂_m(ξ+k) dξ — and I did not have to verify any of the nasty conditions of the Poisson summation formula; everything works out. Okay, so that's that. Now, what did I want? I want this to be — shoot, what did I write here, that should have been ‖h‖², thank you — I want this to be the integral of |ĥ|². So let me work towards that. If k equals zero, then I have the integral over ℝ... is that how I make sure I do not mess up too much? Okay, I should have been a little more careful, but I can get it from polarization — I could have done the same thing from the start. If I had required ⟨h₁, h₂⟩ to equal the sum over m, n of ⟨h₁, f_mn⟩⟨f_mn, h₂⟩, then what I would have seen, working it out, is that this is the sum over m and k of ∫ ĥ₁(ξ) ĥ₂(ξ+k)* f̂_m(ξ)* f̂_m(ξ+k) dξ, with the complex conjugations in the right places. And this I can write as ∫ ĥ₁(ξ) ĥ₂(ξ)* times the sum over m — m was a sum from 1 to
infinity, remember, from a long time ago — of |f̂_m(ξ)|²; and then I have, for k different from zero, the sum of ∫ ĥ₁(ξ) ĥ₂(ξ+k)* times Σ_{m=1}^∞ f̂_m(ξ)* f̂_m(ξ+k) dξ. So this is just stupid, careful computation; but I want this to be a nice basis, and we see that we would like this first sum to be one and that second sum to be zero when k is different from zero. In fact, what we would like is Σ_{m=1}^∞ f̂_m(ξ)* f̂_m(ξ+k) = δ_{k0}. That is all we need — it turns out to be necessary as well, but it is certainly sufficient. Great: we are rid of at least one of the variables; we still have conditions, but fewer than before. Now remember what f_m was — or rather f̂_m. We had f̂₁(ξ) = φ(ξ), and for the other ones we had that very funny combination: a factor 1/√2, then φ(ξ − l) plus a sign (−1)^{l+κ} times φ(ξ + l), times a phase factor e^{πiκξ}, with l running from 1 to infinity and κ in {0, 1} — not {0}, {0, 1} — for m = 2l + κ; that is what I seem to remember. So what does the condition become? Let's plug it in. Let's look first at the part of the sum where κ equals one, the odd m's. There we get one half times the sum over l from 1 to infinity of [φ(ξ − l) − (−1)^l φ(ξ + l)] e^{−πiξ} — yes, my φ is going to be real, but I have to write the phase factor of the conjugated first factor as well — times the corresponding second factor evaluated at ξ + k, for all k in ℤ. I am starting to get confused: I have to take my m = 2l + κ.
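For reference, here is the computation of the last few boards in displayed form — my transcription, with conjugations placed according to the paper's convention of linearity in the second argument:

```latex
% Periodization instead of Poisson: for h_1, h_2 in L^2,
\sum_{m=1}^{\infty}\sum_{n\in\mathbb{Z}}
  \langle h_1, f_{mn}\rangle\,\langle f_{mn}, h_2\rangle
 =\int_{\mathbb{R}} \hat h_1(\xi)\,\overline{\hat h_2(\xi)}
    \sum_{m=1}^{\infty} |\hat f_m(\xi)|^2 \, d\xi
 +\sum_{k\neq 0}\int_{\mathbb{R}} \hat h_1(\xi)\,\overline{\hat h_2(\xi+k)}
    \sum_{m=1}^{\infty} \overline{\hat f_m(\xi)}\,\hat f_m(\xi+k)\, d\xi .
% Hence {f_m(. - n)} is an orthonormal basis precisely when
\sum_{m=1}^{\infty} \overline{\hat f_m(\xi)}\,\hat f_m(\xi+k) \;=\; \delta_{k0}
  \qquad \text{for a.e. } \xi \in \mathbb{R},\ k \in \mathbb{Z} .
```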
And in the second factor I change the argument to ξ + k — so yes, φ(ξ + k − l), and so on — and I have an e^{πi(ξ+k)κ}; but κ is one, so, okay, fine, wonderful: the e^{πiξ} factors fall out and I pick up a (−1)^k from the phase. So now we have to regroup terms. I had hoped this would be less messy, but okay, let's do it; and if I write my κ back in, I am still fully general, aren't I — the phase factor e^{πiκξ} contributes a (−1)^{κk}. So we are going to regroup terms in all directions. I have the diagonal terms, φ(ξ−l)φ(ξ−l+k) plus φ(ξ+l)φ(ξ+l+k), summed over l from 1 to infinity; summing over κ, these are multiplied by the factor [1 + (−1)^k]. (You mean that in the sum over κ we cancel some terms?) Yes — there will be lots of things that cancel, and we just have to figure out which ones and how. There is also complex conjugation, but please — I am going to take φ real, so I will forget the conjugation; that will actually come in handy for me. Let's power through. Then I have the other, cross, terms: φ(ξ+l)φ(ξ−l+k) + φ(ξ−l)φ(ξ+l+k), and these multiply — wait, wait, wait, I have to be more careful about that: if κ is zero I get a (−1)^l, and if κ is not zero I get a (−1)^l times (−1)^k with a negative sign. So the cross terms carry the factor (−1)^l [1 − (−1)^k]. Are we happy? Okay. So I can take the (−1)^l out, and if this all comes out right, I will be okay. It
already starts looking better. First of all, what do we have here? We are summing over l from 1 to infinity, and then the same expression with the sign of l changed; so really we are summing over all the integers except zero. The same thing happens in the cross terms — I have exactly the same expression with l changed in sign — so there too the sum runs over l ∈ ℤ, l ≠ 0, and my expression becomes a little less complicated again. Okay, wonderful. Now: if k is even, say k = 2n, then the cross term drops out, and the diagonal term gives us one half times the sum over l ∈ ℤ, l ≠ 0, of φ(ξ−l)φ(ξ−l+2n); and we wanted the whole thing to be δ_{k0}, so this should equal δ_{n0}. (And the one half cancels with the 1 + (−1)^k.) Exactly, wonderful, thank you. Now if k is odd, say k = 2n+1 — let's do a little side board for that — then the diagonal part cancels out, and the bracket gives a two which cancels the one half; so we get the sum over l ∈ ℤ, l ≠ 0, of (−1)^l φ(ξ+l)φ(ξ−l+2n+1). And now I am going to make a change of variable: I call k′ = 2n+1−l, so that l = 2n+1−k′. That gives me φ(ξ+k′) for the second factor, φ(ξ+2n+1−k′) for the first; the (−1)^l produces a change of sign; and I have a sum over k′ ∈ ℤ — and that is a nuisance, because it is not quite what I expected. (So a lot of terms cancel already — why did you not get zero? It means you can compute the sum up to two terms.) Yes, exactly — but I had not remembered those two terms. So this equals minus the sum over l ∈ ℤ, l ≠ 2n+1, of this same whole expression. Okay — so yes, they are going to cancel; I think a good way is to
write the second sum exactly as the first sum, but with the change of sign, and then add the two and see what they say. There is a term corresponding to l = 0 that was withdrawn, so I get this minus the term for l = 0, which is φ(ξ)φ(ξ+2n+1); and on the other side I still have a minus φ(ξ)φ(ξ+2n+1) to account for, from the excluded index. So what I had here is that this is minus that sum, for l ≠ 2n+1 and so on; and what you get is that the sum over l ∈ ℤ, l ≠ 0, of (−1)^l φ(ξ+l)φ(ξ−l+2n+1) does not obviously cancel yet. (But the second sum is exactly minus the other one, so you get zero.) It seems that you get zero, yes — but let's be careful. Okay. I want to compute the sum for l ∈ ℤ, l ≠ 0. The sum for l ∈ ℤ of the full expression is equal to negative itself, so that full sum is zero. (That is the same sum, after changing l to −l and making a shift?) Yes — and that could be dangerous; that is why I am being careful. I think the best way really is to start from the sum you wrote, over l ∈ ℤ with l ≠ 0, and to write everything in terms of this one. The left-hand side is already this sum; so what about the right-hand side? The right-hand side can be written as minus this one, plus two other terms, and I bet those two terms cancel. So: the sum we computed, over k ∈ ℤ, k ≠ 2n+1, of (−1)^{k+1} φ(ξ−k+2n+1)φ(ξ+k) — and note that I have done a sleight of hand: if I had kept the conjugations, the
conjugations would now be in the wrong places; so it was important for me that φ was real — to get back to your concern from earlier. Okay, so that's true. You can write it down: this is minus the original sum — with k renamed l again — except that I have to adjust the excluded terms. Since I had subtracted the l = 0 term, I have to add it back: that is φ(ξ)φ(ξ+2n+1). And here I did not have the term for l = 2n+1, so in order to bring the sum to the full index set I have to introduce that term — wait, that is minus that — so I can write this as the full sum minus the contribution I would have gotten if l were 2n+1; and then minus, minus, and another minus, and the (−1)^{2n} gives me a negative φ(ξ)φ(ξ+2n+1). Immense relief: even being careful, it still works out. This cancels that, and so the whole odd-k contribution is zero. Wonderful. So we have reduced ourselves to this — let me put that on this board. For our original function, without all these κ's and so on, we want Σ_{l≠0} φ(ξ−l)φ(ξ−l+2n) = δ_{n0}. But what about l = 0 — did we have another condition somewhere? Why did I still have the l ≠ 0? I had another sum, didn't I? (Yes — I think you forgot the sum corresponding to f₁.) Yes, to f₁. That term would not have had the one half, but only one term, and that one term would have given me exactly the l = 0 term that is missing here. So we take that restriction out, and we get: Σ_{l∈ℤ} φ(ξ−l)φ(ξ−l+2n) = δ_{n0}. And now it is easy to see that you can very easily construct a φ that satisfies this — and that was the first construction we thought of; you can even make φ nicely compactly supported.
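The odd-k cancellation that took some care on the board boils down, in the end, to a symmetry argument; here is a compact summary, my write-up of the steps above:

```latex
% Odd k = 2n+1: the cross sum
S_n(\xi) \;=\; \sum_{l\in\mathbb{Z}} (-1)^{l}\,\varphi(\xi+l)\,\varphi(\xi-l+2n+1)
% is antisymmetric under l \mapsto 2n+1-l: the product of the two
% \varphi-factors is unchanged, while (-1)^{l} \mapsto (-1)^{2n+1-l} = -(-1)^{l}.
% Hence S_n = -S_n = 0.  Together with the even terms (k = 2n) and the f_1
% contribution, the orthonormality requirement reduces to
\sum_{l\in\mathbb{Z}} \varphi(\xi-l)\,\varphi(\xi-l+2n) \;=\; \delta_{n0}
  \qquad (n\in\mathbb{Z},\ \text{a.e. } \xi).
```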
Compact support works because what you need is a function that rises like a trigonometric function — rises like sin², then descends like cos² — supported, say, from zero to two. You translate, and so on, and you see that on every interval you only have to be concerned with the immediate neighbour: as soon as you translate by more than two, there is no overlap at all, and where there is overlap you have a sin² and a cos², and things add up to one. So it is very easy to construct such a φ. When you do that, however — remember we are on the Fourier side, and to get back to the Wilson basis we have to take the inverse Fourier transform — you find that you can indeed build these functions so that they have exponential decay, but the decay turns out to be fairly slow if you construct them explicitly this way. So — oh my goodness, the time — what we will conclude is that the construction via the dual frames turns out to give much better decay, and so to be more convenient; plus, as soon as you write things in terms of Gaussians, all your integrals become trivial to compute, because you have explicit Gaussian expressions — you do not have to do quadrature. Okay, so now we are going to conclude quickly, because although it seems we are still very far, we are in fact very close. I am going to build a slightly different Zak transform — I can build it in many ways — which satisfies very similar properties: it builds in a translation by a half rather than by one; what it really is, is a scaled version of the old Zak transform. There is probably a √2 somewhere — yes, the √2 here.
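The compactly supported construction described a moment ago is easy to verify numerically. In the sketch below — my own; the particular sin/cos profile on [0, 2] is one convenient choice, not necessarily the one drawn on the board — the reduced condition Σ_l φ(ξ−l)φ(ξ−l+2n) = δ_{n0} is checked on a grid:

```python
import numpy as np

def phi(x):
    """Window supported on [0, 2]: a sine ramp up on [0, 1] and a cosine ramp
    down on (1, 2], so that phi(x)^2 + phi(x+1)^2 = 1 on the overlap."""
    x = np.asarray(x, dtype=float)
    up   = np.where((x >= 0) & (x <= 1), np.sin(np.pi * x / 2), 0.0)
    down = np.where((x > 1) & (x <= 2), np.cos(np.pi * (x - 1) / 2), 0.0)
    return up + down

xi = np.linspace(0.0, 1.0, 1000)   # one period of xi is enough to check
ls = np.arange(-5, 6)              # support [0, 2] => only a few translates matter

for n in range(-2, 3):
    s = sum(phi(xi - l) * phi(xi - l + 2 * n) for l in ls)
    target = 1.0 if n == 0 else 0.0
    print(n, np.max(np.abs(s - target)))   # ~ 0 in every case
```

For n = 0 the overlapping translates contribute sin² + cos² = 1; for n ≠ 0 the shift by 2n ≥ 2 pushes the supports apart, so the products vanish — exactly the mechanism described in the lecture.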
What you find, if you define this Zak transform and work it out — the computation is given in the paper, and we are out of time — is the following: this function, which has periodicity one half in s and one in t, if you compute its Fourier coefficients with respect to t, gives you exactly this condition. So that was the definition of that Zak transform; and if I do this for a function φ, the result is that the condition for a Wilson basis — for the construction that Jaffard, Journé and I proposed to build the Wilson basis from φ — is exactly the same as the condition for that φ to generate a tight frame. So look at what you build in the tight frame: you combine things from different frequency regimes, and in fact you are combining two frames — you look at the frame that is twice as dense in time, and also at the same frame moved by one half in frequency. So you now have a redundancy of four. But then what you do is always combine positive and negative frequencies, which removes a factor of two; and in some cases you pick the cosine and in some cases the sine, which removes another factor of two. So you perform a very funny reduction of that redundant frame — and the condition for the original frame to be tight is exactly the right condition for that funny reduction to give you an orthonormal basis. You can phrase it in other ways, but in all cases it amounts to saying: I had a very nice tight frame, and I am welding things together in order to remove the redundancy, and out falls an orthonormal basis. I think there is probably some very nice algebraic structure hidden
in this that has not been investigated in great detail. It is also kind of interesting — and I just want to close with this, because it is an interesting remark that I also do not have a full understanding of; I can explain and prove things to you, but... For our frames in time–frequency, we have seen that if you plot the parameters t₀ and ω₀, there is this hyperbola: beyond it you cannot span the space; on this side very nice frames are possible; and on the hyperbola itself you can have orthonormal bases. What we have done is look at the hyperbola, but recombine things in order to get to the right density. In the case of wavelets — I have looked at the dyadic wavelets, but in general you look at a^j x − k b, suitably normalized — you could say: let me look at the (a, b) parameter space. We know that for a = 2 and b = 1, for instance, we have very nice orthonormal bases, and at other places very often too; but it seems there is no such clear demarcation line here. For example, if you take the Meyer wavelet itself and dilate it a little, then the resulting collection of translates and dilates — translated by a slightly bigger amount every time — still forms a frame. You have lost the orthonormal basis, because things no longer fit, but you still have a frame: for the same a and a slightly bigger b, where here we had an orthonormal basis, spanning is still possible. What happens is that even though at a fixed scale you have the impression that if you take ten thousand steps you will have holes, the things from the other scales start helping you. It is mind-boggling to me, because if you remember that hyperbolic lattice I have drawn, it looks as if I am stretching the whole hyperbolic lattice everywhere — I take slightly different functions,
but the lattice is stretched everywhere, and nevertheless I still span. On the other hand, you can take a b that is slightly smaller and still have independent functions: even though I am compressing this whole lattice, again taking slightly different functions, I still do not get anything redundant. So it is a very different situation from the windowed Fourier case, with its very sharp demarcation line. Although I can prove that you can do all this, I do not really feel I understand why it is possible. So I am going to leave you on that note — food for thought. Sorry for having gone over time, and sorry for not having quite finished, but okay — thanks.