In this last installment, although I do have theorems for all of this, the material is very much motivated by applications, and for some of it we do not yet have the strongest theorems that we would want. So the motivation is the following. I am interested in signals that I know will be a superposition of not very many elementary waveforms, each of which looks a little bit periodic:

f(t) = Σ_k a_k(t) s_k(φ_k(t)).

When I write it like this it looks completely foolish, because of course you can write anything you like in this form. So what do I mean? The shape function s_k I assume to be 1-periodic (if you want, you can write a 2π inside the argument). Crucially, φ_k′ is not a constant: if it were constant, I could simply expand s_k in a Fourier series and I would just have a periodic function. Instead I will assume φ_k′ to be positive, and I will assume that |φ_k″| is small compared to φ_k′. So my mental picture is that each component is almost a periodic function with frequency ω_k, except that the frequency still depends on time, and there may be a phase as well; φ_k(t) is morally of the form ω_k t plus a phase. Similarly, a_k is not constant, so a_k′ is not zero, but again I will assume |a_k′| to be small. The reason I have for looking at signals of this nature is that they actually occur in reality, and I will try to open a paper here. This is a paper by my former student Hau-Tieng Wu, and it is something you can find on arXiv. I think I told you about him before: he has a PhD in applied mathematics, but before he even came back he was already a fully trained radiographer trying to do research in radiography, and he came back to applied mathematics because he wanted to know more signal analysis.
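As a concrete instance of this model, here is a small sketch; all the particular functions and constants below are my own illustrative choices (not from the lecture), picked only so that the adiabatic conditions φ′ > 0 and |φ″| ≪ φ′ hold on the interval checked.

```python
import math

# Hypothetical component of the model f(t) = sum_k a_k(t) s_k(phi_k(t)),
# with s(x) = cos(2*pi*x) as the 1-periodic shape function.
def phi(t):        # phase function phi(t)
    return 50.0 * t + 2.0 * math.sin(0.5 * t)

def phi_prime(t):  # phi'(t) > 0: the instantaneous frequency
    return 50.0 + 1.0 * math.cos(0.5 * t)

def phi_second(t): # phi''(t): must stay small relative to phi'(t)
    return -0.5 * math.sin(0.5 * t)

def a(t):          # slowly varying positive amplitude
    return 1.0 + 0.1 * math.sin(0.2 * t)

def component(t):  # a(t) * s(phi(t))
    return a(t) * math.cos(2.0 * math.pi * phi(t))

# Verify the adiabatic conditions phi' > 0 and |phi''| << phi' on [0, 10]:
for k in range(101):
    t = 0.1 * k
    assert phi_prime(t) > 0.0
    assert abs(phi_second(t)) < 0.02 * phi_prime(t)
```

Any number of such components, with frequencies in separated regimes, can then be summed to produce a signal of the kind discussed below.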
And one of his goals, from before he even came back for a PhD in applied math, was to extract from the electrocardiogram of a pregnant woman the electrocardiogram of her fetus; he knew it was a hard problem and an important problem. This is the paper, some fifteen years after he started his PhD, in which he finally achieved that goal. It is a technical paper, but if you look on arXiv for his name together with the word "cepstrum", you will find it. (I should have checked that we were projecting something; sorry that you cannot read it.) Here is a typical example of a signal of the type we look at. It clearly has some periodicity, and it clearly mixes several frequencies, that is, several values of k. In fact, this signal is read from the device they give you in a hospital these days to measure your heart rate: what it really measures is the absorption of light through the hemoglobin in your finger, and that oscillates with both your heart pulse and your breathing, so you see two frequencies in there. That is a way to follow breathing, which is slower than the pulse. But although the signal is clearly periodic to our eye, it is also clearly not really periodic, which means that if you analyze it with Fourier methods you will get content spread all over the line. What you want is to extract the components themselves: the amplitude is not quite constant, the frequencies are not quite constant, and even the shape function s_k, which in our model we keep fixed for the moment, is not quite fixed itself. You want to get hold of all these quantities. On the other hand, you do not want to model the signal too strictly. You could say: I will model all of this with splines and let the knots fit, and so on; but then you will probably do much less well than you want at capturing what is going on. There is too much variability, and you do want to capture that variability; a simple fit with a rigid model will miss it, unless you put in so many parameters that you can fit anything. So that is the goal. If we go lower in this paper... here is more of that same signal, and we see a number of different signals here, but I want to show you the one where he actually did extract the fetal heartbeat, because he was so proud of it. There is a whole lot going on here, and we will come back to some of it. These are simulated signals; but here, in the bottom right panel, the horizontal line is something that comes from the measuring apparatus, the bottom red curve is the changing instantaneous frequency of the heartbeat of the mother, and the descending curve is the instantaneous frequency of the heartbeat of the fetus. So you see that he got to a stage where he could extract it carefully, and this is now being checked for clinical application. What we have to deal with, then (let me minimize this so I might be able to get back to it), is extracting such signals. So let me put up a toy model of this nature, with just two components; in fact I took cosine functions here, so for the moment we need not even deal with the shape function, and we will come back to it. The signal I am showing is a superposition: this is one of the components, and you see its amplitude is varying; here is the other, its amplitude is varying as well, and you see the instantaneous frequencies are not constant either; here are the two instantaneous frequencies. This was a completely simulated example, actually part of a paper with Hau-Tieng, which is
also on arXiv: if you search for his name and my name and the word "ConceFT" (for concentration in frequency and time), you will find this paper. As part of that paper and the software we made available, we built a little machine for generating such superpositions of cosines, with smooth but varying amplitudes and smooth but varying instantaneous frequencies; it is randomized, so every time you get a different realization, and you can pick the time where one component stops and another starts. When we did experiments, we ran them over a whole range of these models; engineers like to see that you simulate a lot. Now, when you look at something like this, which has an instantaneous frequency, with a windowed Fourier transform, then locally, very locally, it looks a little bit like a frequency that is almost constant, and it is nicely localized in time because you have localized it with a window. So we look at

(V_w f)(t, ν) = ∫_R f(s) w(s − t) e^{−2πiν(s−t)} ds,

a windowed Fourier transform localized at time t; the frequency variable I call ν, I believe. Of course it is not precisely localized at t: your window has a certain width, so you are localized around t, but with some spread. And we have seen that (for a real, even window, so that no extra phase factors appear in front) we can also write this as

(V_w f)(t, ν) = ∫_R f̂(ξ) ŵ(ξ − ν) e^{2πitξ} dξ,

so it is also localized in frequency near ν, but again not precisely. What we see next is the absolute value, the magnitude, of this complex quantity. You see that it sits at the right spot, but it is widened in frequency; moreover, because of the sudden onset and stopping of the constituent signals, you have singularities, so you also see a broadening time-wise. A very interesting thing happens when I look not just at the magnitude but at the phase, and visualize that phase. The outcome is a complex number, which I write as a magnitude times a phase factor, and I graph the phase factor only where the magnitude is big; elsewhere I do not even try to define it. I map the phase as white if it lies between −π/2 and π/2, and as black when it is in the other half. And this is what you get. You get (and this is typical; it was exploited by people who like singularities) these phase lines converging to the point where the singularity happened, but I am not interested in that. What I am interested in is the fact that where my curve was, I find these very nice and beautiful zebra patterns. What happens when you write the reconstruction formula? Writing the function as

f(s) = ∫∫_{R²} (V_w f)(t, ν) w(s − t) e^{2πiν(s−t)} dν dt

(valid when ‖w‖₂ = 1), and replacing V_w f by what you get for our very peculiar kind of signal, you can view this as an oscillating integral and try to expand it by stationary phase. If you extract the stationary-phase curves, you find curves that lie in the middle of the stripes, exactly where the slope is vertical in this phase picture, and you can try to reconstruct the function from only that stationary-phase behavior. What I am going to do and discuss here is a little bit different.
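To make the pictures concrete, here is a small numerical sketch; the window width, grids, thresholds, and test tone are all my own choices, not anything from the lecture. It computes a Riemann-sum windowed Fourier transform of a pure tone and applies the white/black phase map just described.

```python
import math, cmath

SIGMA = 0.05  # width of the Gaussian window (an arbitrary choice)

def window(u):
    return math.exp(-u * u / (2.0 * SIGMA * SIGMA))

def wft(f, t, nu, dt=0.001, half_width=0.25):
    """Riemann-sum approximation of
       V(t, nu) = integral f(s) w(s-t) exp(-2*pi*i*nu*(s-t)) ds."""
    n = int(half_width / dt)
    total = 0.0 + 0.0j
    for k in range(-n, n + 1):
        s = t + k * dt
        total += f(s) * window(s - t) * cmath.exp(-2j * math.pi * nu * (s - t)) * dt
    return total

def zebra(value, threshold):
    """White where the phase lies in (-pi/2, pi/2], black in the other
       half-circle; undefined (None) where the magnitude is too small."""
    if abs(value) < threshold:
        return None
    p = cmath.phase(value)
    return "white" if -math.pi / 2 < p <= math.pi / 2 else "black"

# A pure 40 Hz tone: the magnitude of V(t, .) peaks at nu = 40 ...
tone = lambda s: math.cos(2.0 * math.pi * 40.0 * s)
mags = {nu: abs(wft(tone, 1.0, nu)) for nu in range(20, 61)}
peak = max(mags, key=mags.get)

# ... and in t the phase rotates at 40 Hz even at nu = 42, so the
# zebra stripes repeat with period 1/40 across the whole widened band.
stripes = [zebra(wft(tone, 1.0 + (j + 0.5) / 320.0, 42.0), 0.01)
           for j in range(8)]
```

Sampling eight times within one 40 Hz period produces the alternating white/black stripe pattern, even though ν = 42 is off the tone's frequency; this is the vertical zebra zone in the phase plot.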
Instead of trying to find the single curve on which I can best read off the behavior, I observe that I have a whole zone, kind of vertical in the picture, where I get the signature of the fact that there was a component with a well-defined, intuitively meaningful local phase. So instead of singling out the stationary-phase approximation, which gives what is called the ridge of the transform, let me take that whole zone and try to squeeze it back together. That is the motivation for what we call the synchrosqueezed transform; I now regret that we called it that, because "synchrosqueezing" sounds not very serious, but we are stuck with it. The idea is that there is a region in time-frequency space where the content has been smeared out, and you want to squeeze it back together. Of course you cannot really do that; this is not something that is going to be valid for all functions in L². It makes sense because I have a fairly non-parametric, but still restrictive, model for my signal. By the way, people ask: can you prove which signals can be expanded this way, and uniquely? The answer is no; the expansion is not even unique. It is fairly simple to build examples that I can write either as just one amplitude-modulated cosine, or as a sum of components with constant amplitudes at neighboring frequencies whose beating reproduces it. (I want my a's positive, so it is not quite as simple as you might think; with two components I cannot do it, but with three I can.) So there is no uniqueness here. There might be uniqueness if I insisted on very sparse expansions; I do not have such theorems, and it might be interesting to prove them, but we do not really care: we know that we work with signals that are of this form, and we just want to recover those components, so finding those signatures is good for us. We are implicitly assuming that there is some uniqueness for smooth, well-separated components, because otherwise we could not recover them. But okay: the
motivation for the synchrosqueezing transform actually started, both for Hau-Tieng and for me, with trying to understand the work of Norden Huang. Norden Huang is a very distinguished engineer, a member of the National Academy of Engineering in the United States; he worked for NASA for a long time and is now retired, having taken up a position in his native Taiwan. He came up with what he calls the empirical mode decomposition. I saw this about twenty years ago and was very intrigued by what he was doing. He basically wanted exactly to decompose signals that have varying instantaneous frequency and varying amplitude, and he saw them as coming from nonlinear physical phenomena, in which things do change like that. I have slides for this, but maybe I can just tell you what he did. He said: you have something highly oscillating like this, and it is going to have slow modes and fast modes and so on; let me try to tease out the fast modes, and if I can peel them off one by one, without restricting them to strict frequencies, I am done. So: take all the maxima of the signal and link them with a best cubic spline (there are ways of linking points with best cubic splines), and link all the minima with a best cubic spline as well; then take the average of the two envelopes and subtract it from the signal. The difference between the signal and that average of the maxima spline and the minima spline is a function that oscillates, with maxima above and minima below. Actually, because you subtracted something non-constant, sometimes a parasitic extremum forms, an extra extremum on the wrong side; if that happens you have to do it again, and again, and keep doing that. It is all a kind of juggling science of
making it happen. When you have something with all its maxima above the line and all its minima below the line, you say: that is my first empirical component. You subtract it from the original signal; you have now removed the very fine oscillations, and you look at the remainder and do it again, and again. Those are the empirical components. The interesting thing is that when he showed this on signals of interest, he got something really useful out of it. And it is always my mantra: if something works, there must be a reason. He very much wanted a mathematical theory for this. However, I challenged him: how can this be stable under noise? It is built on maxima and minima, for heaven's sake, and when you add noise, things move. But he had a fix for that too: you can add noise in many different realizations and take the average over all of them, and it becomes somehow more stable. People are actually beginning to understand better why averaging over a whole ensemble of perturbations makes a procedure more stable; I do not understand it myself, but probably some of you can see it. What was worse, though, was this: if I add a small, localized, high-frequency perturbation, it will of course influence the high-frequency components, but it will also influence lower-frequency components way down the scale, and that is definitely something you do not want. A localized high-frequency event should not influence your whole decomposition. So I feel that everything we have done on synchrosqueezing is very much inspired by Norden's work, but we decided to make something that had the flavor of what he wanted while being mathematically easier to analyze, with a better chance of being stable, and we started doing that.
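A heavily simplified version of one sifting step can be sketched as follows. Note the caveats: real EMD uses best cubic splines through the extrema, while this sketch substitutes piecewise-linear envelopes, and the test signal is my own made-up example; the point is only to show the structure of the step.

```python
import math

def local_extrema(x):
    """Indices of strict interior maxima and minima of a sampled signal."""
    maxima = [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1]]
    minima = [i for i in range(1, len(x) - 1) if x[i - 1] > x[i] < x[i + 1]]
    return maxima, minima

def envelope(idx, x):
    """Piecewise-linear curve through the points (i, x[i]) for i in idx,
       extended flat to both ends.  (Huang's EMD uses cubic splines here;
       linear interpolation is a stand-in to keep the sketch short.)"""
    n = len(x)
    env = [0.0] * n
    knots = [(0, x[idx[0]])] + [(i, x[i]) for i in idx] + [(n - 1, x[idx[-1]])]
    for (i0, y0), (i1, y1) in zip(knots, knots[1:]):
        for i in range(i0, i1 + 1):
            w = 0.0 if i1 == i0 else (i - i0) / (i1 - i0)
            env[i] = (1.0 - w) * y0 + w * y1
    return env

def sift_once(x):
    """One sifting step: subtract the mean of the upper and lower envelopes."""
    maxima, minima = local_extrema(x)
    upper = envelope(maxima, x)
    lower = envelope(minima, x)
    return [xi - 0.5 * (u + l) for xi, u, l in zip(x, upper, lower)]

# A fast oscillation riding on a slow linear trend: one sift step should
# strip most of the trend and leave the oscillation centered near zero.
x = [math.sin(0.5 * i) + 0.01 * i for i in range(200)]
h = sift_once(x)
mean_before = sum(x) / len(x)
mean_after = sum(h) / len(h)
```

In the full algorithm this step is iterated until all maxima sit above zero and all minima below, the resulting mode is subtracted, and the whole procedure repeats on the remainder.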
Norden was not very happy with us, but I still feel that we were very much inspired by him, and I want to give him that credit. At the same time, I had worked for a number of years at Bell Labs and was inspired by the work of an Israeli scientist, Oded Ghitza, who worked on hearing and on speech compression and coding. (In the meantime we have made tremendous progress there with machine learning, in a very different direction, but let me tell you what he was doing, because it was very interesting.) At that time, most people who were trying to synthesize speech, or to understand why our auditory system is so good at certain tasks, were using what is called linear predictive coding. What linear predictive coding really does is try to fit parameters that mimic the way we produce speech. If I were to say the same utterance twice, really trying to say it the same way, and you took a high-resolution recording and looked at the waveform on your scope, you would not see the same waveform; what our auditory system extracts does not try to characterize the waveform itself, but rather parameters of the waveform that are determined by how we produce the sound. The way we produce sound is that we exhale (well, at least in most languages; people can also do things while inhaling, but let us talk about exhaling), and that produces white noise; that white noise you shape with filters in your trachea and so on, so you produce something that is driven by white noise but has all these correlations in it. What your auditory system does, in some very nonlinear way, is extract those shaping parameters, because those are the ones you have control over; over the white noise itself you have no control. So if I say the same thing twice, I have used the same parameter settings twice, because I have trained my vocal
system accordingly. You extract those parameters twice, and it sounds the same to you, even though the signals are quite different. That is what linear predictive coding wanted to do: extract those parameters, in a feedback implementation, from the sound that was produced. But whenever you talk about things that involve our sensory systems, and especially with sound, which we both produce and listen to (you can say the same about the patterns that people or animals display in their plumage or fur: on the one hand they need to be produced, on the other hand they need to be perceived), the production limits what can be made, so you need not work on things that are not realizable, because we will not observe them; and since perception has been shaped by evolution, you can also try to mimic what the perception is doing, because whatever is in the signal but is not perceived, you will not care about. So Oded Ghitza was inspired by how our auditory system works in trying to compress audio. Now, our auditory system (and this is a very crude way of saying it; people who know about this have much better models) works, in a first instance, like this: the eardrum conveys oscillations to a membrane in the cochlea, and if you unroll that membrane, the basilar membrane, then an incoming signal f gives rise to excitations along the membrane that differ depending on the different frequency components of f. It is as if f were decomposed into frequency regimes: first you window it, taking it locally in time, but then you look at different frequency regimes, so it is like taking a windowed Fourier transform with the frequencies ν grouped into blocks. Low frequencies you find at one end, higher frequencies at the other, and it unrolls in a kind of logarithmic way: the highest half here, then the next half, then the next half, and so on. The result is that you have something like a wavelet transform in there, because of this logarithmic treatment of the frequencies. But if you decompose sound that way, you get a lot of bandwidth, because you have taken it apart into all these pieces; so how is that information transmitted to the brain? Again I am going to be very coarse in my description. The basilar membrane is vibrating, in a kind of resonance, with the different frequency parts of your signal: the signal has come in and been decomposed into things that oscillate fast, things that are slower, and things that are very slow, schematically speaking, and these happen at different places on the basilar membrane. The membrane is in contact with hairs that can bend, and when they bend they may trigger (it is a stochastic process) an impulse to a neuron. These little hairs have different stiffnesses: some are very stiff, some are less stiff. So you can think of it this way: for a given oscillation, some hairs will already give a response when you cross this amplitude, and other, stiffer hairs will respond only when you cross a higher amplitude, and so on, for all the hairs and for all the different frequency regimes that are represented. If you take all those level crossings, that is a huge volume of data. We listen to very high frequencies; I can probably still hear into the low teens of kilohertz, and some of you much higher: 10,000 to 20,000 Hz, oscillations per second, in the sound. That is a lot, and the bandwidth of the auditory nerve is way smaller than that: it is not enough to transmit exactly when all these crossings happen for all these hairs. So what Ghitza said is: let me think of something that uses that information. Instead of hairs, he thought of level crossings, at different levels, of the outputs of the different auditory filters, and he tried to make little local summaries of them in time, summaries with about the bandwidth that he knew could be transmitted to the brain. The summaries he used he called interval statistics: for each level, look at the intervals between crossings; how many intervals of half a millisecond do I have around this time, in these crossings, and in these, and so on? If you think of an interval as the reciprocal of an instantaneous frequency, what he was trying to see was: what oscillation frequency am I noticing by measuring these crossings in the different curves? Whether a crossing came from this filter or from that one, he took them together: he set himself at a given time, looked at the whole band of things coming through, saw how fast they oscillate, and collected together all the ones oscillating at the same frequency. So at each time he had a little distribution with peaks in frequency.
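In the same spirit, here is a toy illustration of reading a local frequency off the intervals between upward level crossings; this is my own construction, not Ghitza's actual model, and the tone and sampling rate are made up.

```python
import math

def upward_crossings(x, level):
    """Sample indices where the signal crosses `level` going upward."""
    return [i for i in range(1, len(x)) if x[i - 1] < level <= x[i]]

def interval_frequencies(x, level, dt):
    """Reciprocals of the inter-crossing intervals: crude local-frequency
       readings, in the spirit of interval statistics."""
    c = upward_crossings(x, level)
    return [1.0 / ((j - i) * dt) for i, j in zip(c, c[1:])]

# A 5 Hz tone sampled at 1 kHz: every crossing interval reads ~5 Hz.
dt = 0.001
x = [math.sin(2.0 * math.pi * 5.0 * k * dt) for k in range(2000)]
freqs = interval_frequencies(x, level=0.0, dt=dt)
```

Histogramming such readings across all levels and all filter outputs, locally in time, gives the kind of frequency-peak summary described above.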
He would take those little distributions at later times as well, and so on; he now had much less information, just distributions of local frequency in time, and he would use those for speech recognition or speaker identification tasks. What he found (and this was early machine learning, if you wish) was that when he trained on this information for certain tasks, he was typically not quite as good on beautiful, clean sound as what linear predictive coding could do (we are talking mid-90s), but he deteriorated much more gracefully under noise. One of the big problems with audio is always what is called the cocktail party problem: if you are in a room with other people, and there is a conversation in a group not very far from you, and you pick up something you might be interested in, it is actually not hard to focus on that conversation and listen in. You become a little more distracted from what people are telling you, but we can do it, which means that we are good at it; even machine learning systems are still not as good at it as we are, but it means it is possible to do, since we can do it, at least for the signals that we produce as speech. So he found that he was much better in this cocktail party situation: of course he could not cope with very bad situations, but he was still doing reasonably well where linear predictive coding had completely broken down. Because I had heard talks about this, I thought: well, of course we had not yet made graphs as nice as these, but what is happening in that graph is exactly what I am describing here. At a certain time you have a whole regime where things are oscillating in sync, and we want to use that whole regime in order to extract the content. So that is the motivation for all of this: on the one hand Norden Huang, on the other hand Oded Ghitza. (I have lost my chalk; fortunately I have another one. There it is.) Okay, now I can try to
put into somewhat better words what I have been doing. Imagine that I excite things with a delta function in frequency; I could do it with a very wide Gaussian and a very narrow frequency content, but let us be physicists for a moment and take f̂ = δ_{ν₀}, in quotes, that is, f(s) = e^{2πiν₀s}. What that means is that my windowed Fourier transform becomes

(V_w f)(t, ν) = ŵ(ν − ν₀) e^{2πiν₀t},

and this exhibits the behavior you see in that bottom picture. If I look at the absolute value, I definitely have a widening in frequency: ŵ does not live only at the origin, so whenever ν is near ν₀ I will have something. But when I look at the phase, the phase is well defined wherever the magnitude is appreciable, and if I look at the behavior of this with respect to time, then even where ν is not equal to ν₀, it oscillates with frequency ν₀; that is what is happening there. So that suggested we do the following. In our model (let us, as I said, look at pure cosines for the moment) we know we will have several components, but once we have decomposed in time-frequency, we expect components living in very different frequency regimes to be reasonably well separated, so that at any one place in the time-frequency plane we expect to have to deal with only one of those components. We look at the transform at (t, ν), and we look at the places where |V_w f(t, ν)| is bigger than some threshold, because elsewhere the phase does not make sense. Where it is bigger than the threshold, we look at the oscillation with respect to time. Even if I had amplitude factors in front, when I take the derivative with respect to time I get 2πiν₀ times the same expression; so I divide the derivative by V_w f(t, ν) itself, which is well defined because it is bounded away from zero above the threshold, take the imaginary part, and divide by 2π. That defines a quantity

ω(t, ν) = (1/2π) Im[ ∂_t V_w f(t, ν) / V_w f(t, ν) ],

which I call the instantaneous-frequency estimate, and by it the transform gets synchrosqueezed: even though I was at a ν close to, but different from, ν₀, the content gets put back; this sends a whole band back to ν₀. Now, I had a reconstruction formula (which I have unfortunately erased; no, it is still there). What I can do is write that reconstruction formula as a sum over the different bands where things contribute, and for each contribution I can see where it came from. So I keep my reconstruction just as I had it before, but my representation has become synchrosqueezed: I have a much better recognition of where things are, and because of that recognition I can find where my signal has content and then reconstruct from only the content region. Let me show you what that does. You saw the magnitude before; if I do just the reassignment, plotting at each reassigned frequency the magnitude I had before, this is what I get. It has become much sharper. It is not perfect: here the frequencies are close to each other, so you get a little bit of interference (and remind me, if I forget, to come back to the fact that here I am nice and sharp while there I am a little bit wider), but still, it is much better than it was. And that is fine, except of course that signals never look like this; this is a beautiful signal, and zillions of methods could do things with it. Think instead of signals that look like this plus noise, for instance this Gaussian noise, or this long-tailed noise; then this is what you get.
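Numerically, the estimate ω(t, ν) can be checked on a pure tone; the window width, grids, and tolerances below are my own choices. Even at a ν a couple of hertz away from the tone, the estimate lands back on the tone's frequency, which is exactly what the squeezing step exploits.

```python
import math, cmath

SIGMA = 0.05  # Gaussian window width (an arbitrary choice)

def window(u):
    return math.exp(-u * u / (2.0 * SIGMA * SIGMA))

def wft(f, t, nu, dt=0.001, half_width=0.25):
    """Riemann-sum windowed Fourier transform
       V(t, nu) = integral f(s) w(s-t) exp(-2*pi*i*nu*(s-t)) ds."""
    n = int(half_width / dt)
    return sum(f(t + k * dt) * window(k * dt)
               * cmath.exp(-2j * math.pi * nu * k * dt)
               for k in range(-n, n + 1)) * dt

def omega(f, t, nu, eps=1e-4):
    """Instantaneous-frequency estimate
       omega(t, nu) = (1/2pi) Im[ d/dt V(t, nu) / V(t, nu) ],
       with the time derivative taken by a centered difference."""
    v = wft(f, t, nu)
    dv = (wft(f, t + eps, nu) - wft(f, t - eps, nu)) / (2.0 * eps)
    return (dv / v).imag / (2.0 * math.pi)

# A pure 40 Hz tone: for nu = 38, 40, 42 the estimate is pulled
# back to ~40, so the whole widened band gets reassigned to 40.
tone = lambda s: math.cos(2.0 * math.pi * 40.0 * s)
estimates = [omega(tone, 0.3, nu) for nu in (38.0, 40.0, 42.0)]
```

Binning the complex values V(t, ν) by their estimate ω(t, ν), instead of by ν, is the synchrosqueezing step itself.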
With goodwill you can still see the instantaneous-frequency curves, but it is kind of hard to extract them, and it is much harder to convince colleagues who are not as emotionally invested in the method as you are that this makes sense. Although, actually, even at this stage, if you threshold carefully you can extract information. Hau-Tieng did work with doctors in Taiwan, and from this method they deduced a clinically applied procedure for deciding when to remove the breathing tube from ventilated patients: people who have had respiratory failure are intubated, the trachea is cut so that air can be pushed into their lungs, and you can then solve the other problems they have before they die. But once they are in better condition and their system wants to start breathing by itself again, you want to remove the tube as quickly as possible, because it can lead to infection; on the other hand, you do not want to remove it too soon, because then they stop breathing. This problem, weaning ventilated patients, is not easy; it requires very experienced nurses, and all attempts to automate some of it had failed, or required half an hour of observation, which they do, because it has to be done. With this method he could already do something in two minutes; so even with this terrible noisy picture, something was possible. But in the last few years we wanted to improve this significantly, and we proposed two things. The first is multitaper synchrosqueezing. I do not know which window was used here, but once you do multitapering it made sense for us to use Gaussian-Hermite windows: instead of using one window, you use several, typically the Hermite functions H_0 up to some H_N, with N maybe 5, or 5 to 10 at most. What I am showing you here is a representation in the (t, ω) plane: at time t, the gray value in the bin between ω and ω + Δω is given by summing the contributions over all ν_j whose instantaneous-frequency estimate falls in that bin,

S(t, ω) = Σ_{ν_j : |ω(t, ν_j) − ω| < Δω/2} V_w f(t, ν_j),

and in the single-window case I plotted the magnitude of this sum; that is why you have these bubbly things. Now we do the same, but averaged over our several windows, and we get this. For this particular example, clearly something parasitic has happened here: your windowed Fourier transform introduces correlations, because even if you have white noise, applying a very redundant transform makes things correlated, and some of those correlations you then see. There was a strong one there, and you still see it here, but to a much lesser extent compared to the rest. So this is what we got with multitapering: it already gave us much better SNR, and it could convince people a little bit better. But then (and actually we found the next idea, as these things happen, through a coding mistake; the procedure is something very nonlinear, because even though we use these different windows and the windowed Fourier transform itself is linear, we do this nonlinear operation on it) we were wondering how many Hermite functions to use, and our initial analysis seemed to suggest that we had to use independent windows; that is why we took Hermite functions, because they are orthonormal windows.
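The orthonormal Hermite windows can be generated from the three-term recurrence H_{n+1}(x) = 2x H_n(x) − 2n H_{n−1}(x); this sketch (the sampling grid and the number of windows are my own choices) builds the first few and checks their orthonormality numerically.

```python
import math

def hermite_windows(num, xs):
    """Samples of the first `num` Hermite functions
       h_n(x) = (2^n n! sqrt(pi))^(-1/2) * H_n(x) * exp(-x^2/2),
       which form an orthonormal family on the real line."""
    rows = []
    for x in xs:
        H = [1.0, 2.0 * x]                     # H_0, H_1
        for n in range(1, num):
            H.append(2.0 * x * H[n] - 2.0 * n * H[n - 1])
        rows.append([H[n] * math.exp(-x * x / 2.0) for n in range(num)])
    out = []
    for n in range(num):
        c = 1.0 / math.sqrt(2.0 ** n * math.factorial(n) * math.sqrt(math.pi))
        out.append([c * rows[i][n] for i in range(len(xs))])
    return out

dx = 0.01
xs = [k * dx for k in range(-800, 801)]        # grid on [-8, 8]
h = hermite_windows(3, xs)

def inner(a, b):
    """Riemann-sum inner product on the sampling grid."""
    return sum(ai * bi for ai, bi in zip(a, b)) * dx
```

Each h_n can then be used as the window w in the transform above, giving one spectrogram per taper to be averaged.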
first made a mistake in summing things, and only later did we investigate it further. But the next step, and that is really the conceptual step, is to say: look, because we are doing something so highly nonlinear, there is no reason to stick to the intuition of linear superpositions and to try to work in independent subspaces. So what we now do is decide on Hermite functions h_n, and typically n is not very large. I can tell you why we cannot take n too large: remember, the Hermite functions are the eigenfunctions of rotations in the time-frequency plane. While the Gaussian is localized on a small disk in the time-frequency plane, if you take the first five Hermite functions, they are localized in a circle that is a little bit bigger. When I do a windowed Fourier transform, so I am now looking at this curve and at that curve, I was looking at the influence of this neighboring region, and I do want to squeeze things back, because I know things will have spread out a bit. It is not so bad to look at something a bit wider, because we saw in the figures that the influence region of the frequency is pretty widespread; but you do not want to go too far, because if I go this far here and that far there, then I am going to start influencing things. So typically I never want to look at circles that reach farther than half the distance to the nearest component, and since we do want to look at components that are not very far from each other, we do not want n very large; that is why we have a limit on n. So what we will do is take n going from 0 to N1 and look at linear combinations of these: a window parametrized by a vector r, namely h_r = Σ_n r_n h_n, where we take r to be a random vector. For those windows we again do the transform, we do the synchrosqueezing, and we take the average of all of that.
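To make this concrete, here is a minimal numpy sketch of that construction, not the actual code behind the figures: the grid, the cutoff N1, and the random seed are invented for the demo. It builds the first Hermite functions by the standard recurrence, checks that they are orthonormal on the grid, and forms one random unit-norm window h_r = Σ_n r_n h_n.

```python
import numpy as np

# First Hermite functions h_0..h_N1 on a grid, via the recurrence
# h_{n+1}(t) = sqrt(2/(n+1)) * t * h_n(t) - sqrt(n/(n+1)) * h_{n-1}(t).
def hermite_functions(t, N1):
    h = [np.pi ** -0.25 * np.exp(-t ** 2 / 2)]      # h_0, unit L2 norm
    if N1 >= 1:
        h.append(np.sqrt(2.0) * t * h[0])           # h_1
    for n in range(1, N1):
        h.append(np.sqrt(2.0 / (n + 1)) * t * h[n]
                 - np.sqrt(n / (n + 1.0)) * h[n - 1])
    return np.array(h)

t = np.linspace(-8, 8, 1024)
dt = t[1] - t[0]
N1 = 5
H = hermite_functions(t, N1)                        # shape (N1+1, len(t))

# Orthonormality on the grid (Riemann sum approximates the L2 inner product).
gram = H @ H.T * dt

# A "random window": a random unit-norm combination of h_0..h_N1,
# to be used as the window of a windowed Fourier transform.
rng = np.random.default_rng(0)
r = rng.normal(size=N1 + 1)
r /= np.linalg.norm(r)
h_r = r @ H                                         # still unit L2 norm
```

Because the h_n are orthonormal, any unit vector r gives a unit-norm window, so each random window is as legitimate a taper as the Hermite functions themselves.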
Although, when we do the computations, the expectation of this is just the average of the curves, and the bounds we can prove on the variance are no better than what we had with the orthonormal windows. But when you do it in practice, this is what you get. Well, this is cheating: this is what I get if I do it on the clean signal. But even on the clean signal you notice that the interferences have disappeared, and this is what I get for the noisy signal. So we have gone from this to that; it is a significant, a spectacular difference, and it works very well. We do not have a complete justification for it, in the sense that I cannot prove bounds that are better for this than for the previous one, but nobody is going to quarrel with this. OK. And we always keep flags in our transform, so the workflow is the following: you take your noisy signal, you do a number of these transforms, you squeeze, and for all the squeezing you keep note of where it came from. So you have these planes where you did the reconstruction, and once I have this it is a little bit like in Alice in Wonderland: you paint things red, and that paints things red in the original planes, and then you reconstruct only from the things that were painted red. If something was squeezed to the right place, you know it contains the right information, and from that right information you reconstruct, and you make things much cleaner. Was there a question? Yes, you use this as denoising: the time-frequency plane is full of stuff because of all your noise; time-frequency localization has already helped you, the synchrosqueezing has helped you more, and the random windows tell you things in tandem, so only when they all vote together do you actually keep something, and from that you reconstruct.
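The "only keep what every transform votes for" step can be illustrated with a toy sketch. This is not the actual algorithm: I stand in for the several synchrosqueezed planes with one sparse ridge plus independent noise per plane, and the threshold and sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for "the same ridge seen in several squeezed planes":
# a sparse true time-frequency support plus independent noise per plane.
true_plane = np.zeros((50, 50))
ridge = (np.arange(50), np.minimum(49, np.arange(50) // 2 + 10))  # a fake ridge
true_plane[ridge] = 5.0

K = 5                                                # number of random-window planes
planes = [true_plane + rng.normal(size=true_plane.shape) for _ in range(K)]

threshold = 2.5
masks = [p > threshold for p in planes]

single_mask = masks[0]                               # one plane voting alone
consensus_mask = np.logical_and.reduce(masks)        # keep only unanimous votes

truth = true_plane > 0
single_fp = int(np.sum(single_mask & ~truth))        # false alarms, one plane
consensus_fp = int(np.sum(consensus_mask & ~truth))  # false alarms, unanimous
consensus_tp = int(np.sum(consensus_mask & truth))   # ridge bins kept
```

The point of the sketch: noise rarely exceeds the threshold in all planes at once, so the unanimous mask keeps the ridge while throwing away almost all false alarms; one would then reconstruct only from the bins "painted red" by the consensus.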
If we go back... oh, here you see that where the instantaneous frequency is not constant but actually has a slope to it, I get a kind of widening. Now why is that? Well, there are two things happening. We are using these multiple windows, and that means we are using windows, Hermite functions, which, apart from the lowest-order one, which is just a Gaussian, have several maxima and minima. So let us look at h₁: h₁ looks something like that, and if you take its absolute value and look at it in frequency, then, since the thing is an eigenfunction of the Fourier transform, it does not matter: the absolute value is going to have two peaks. On average it lives around zero frequency, but, just like the functions we saw this morning, it really lives in two different frequency regimes; it has two peaks. And if you analyze what happens in the computation, it turns out that when you have an instantaneous frequency that is constant in time, like what I did there, then indeed both these peaks get mapped to the right place; but when it is slowly changing, what happens is that you get mapped onto two parallel lines. Since here we are averaging over a whole lot of different functions, I mean, if it were only the Hermite functions h₁ and h₂ we could say, OK, let us find out what we have to do with those peaks; but especially once we see the benefit of using random windows, we do not want to make an analysis, for all those random windows, of where their peaks are, and then adjust things. In that direction lies madness. What we realized, however, is that since the computation is fine when the instantaneous frequency has zero derivative, we can use the rotation operators. Last week I was asked whether we would ever use those rotation operators in an application: this is where we use them.
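That claim about h₁ a moment ago is easy to check numerically; the grid below is invented for the demo. Up to normalization, h₁(t) = √2 π^(−1/4) t e^(−t²/2), so |h₁| has two symmetric bumps, at t = ±1; and since h₁ is an eigenfunction of the Fourier transform, |ĥ₁| has exactly the same two-bump shape in frequency, which is why a constant tone gets reported in two bands.

```python
import numpy as np

t = np.linspace(-6, 6, 1201)
h1 = np.sqrt(2.0) * np.pi ** -0.25 * t * np.exp(-t ** 2 / 2)  # Hermite function h_1

# |h_1| has two symmetric peaks (at t = +1 and t = -1); by the
# eigenfunction property, the frequency profile |h_1^| looks the same.
a = np.abs(h1)
peaks = [i for i in range(1, len(t) - 1)
         if a[i] > a[i - 1] and a[i] > a[i + 1]]      # discrete local maxima
peak_locs = t[peaks]
```

With a chirp, each of the two bumps tracks the ridge with its own offset, which is the origin of the two parallel lines mentioned above.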
You see, we are already computing a windowed Fourier transform with Hermite functions. If you forgive me for introducing the Weyl operators again: we translate by t in time and by ν in frequency, and, oh shoot, sorry, I am going to write L instead of n here, so h_L. So I am already computing these inner products, which, remember, are the eigenfunctions of the rotation operators around that particular point of the time-frequency plane, because I have moved there. So all I need to do, since I have computed these, is to put in a phase factor, to rotate down locally at the point (t, ν_L). I have localized by taking this inner product; locally I rotate down, I do the synchrosqueezing, I extract, and I rotate back, and that gets rid of this widening. Now, you could say: I do not know this angle. For the moment, in test cases, what we are doing is a first pass to determine the angle, and then we redo it. But we will not have to, because what happens is the following: you look at whatever angle you take, and you look at how much you narrow your synchrosqueezing; the direction in which you narrow best tells you how that component was tilted, and you keep track of that for the different h_L. It is when they all conspire to say yes, yes, it was that angle of 26 degrees, that you believe it; otherwise you put a distribution on it, governed by how peaked the estimated angle is, in order to do the reconstruction. So that, we expect, and I am now really talking about ongoing work, will squeeze things there to thin lines. And once we have squeezed it to thin lines, you can do much more, of course, because we were looking at components: we had an instantaneous frequency like that, and another one like this. What I have said is that we put little circles and use those to rotate things locally, to get the component nicely sharp regardless of the angle. But you can also, since scaling just gives you things that instead of circles are ellipses in time-
frequency space, look at ellipses. That way, once you have the angle, you can actually align yourself, and then you know you will capture much more of the local component; unless the frequency changes a lot, you will again have better localization. So you can actually make your time-frequency localization operator adaptive to the signal, and in places where the components come close together you can take fewer functions, so as not to penetrate into the neighbor. You thus define a filtering that depends on the signal itself as you are analyzing it. Now, that is science fiction for the moment; we are not there at all. We first have to get our angle adaptivity in correctly. So that is one thing, and, as I said, to my surprise, this last year I have found that I was again using things I had not looked at for 20 years or so; all of a sudden these things emerged again. OK, so that is the adaptivity. But I have also been talking about only cosines; this whole example I have given you uses only cosines, and in practice that is not what we have. So let me go back again to showing you things in Hau-Tieng's paper. OK, here is a signal, and there are really several things illustrated in this figure. First, this is an electrocardiogram, and you see how in an electrocardiogram things are never really completely periodic. The arrow labeled RR indicates the interval between R peaks, and that is used in practice, in clinical machines all over the world, to determine your breathing rate from the electrocardiogram, because these intervals breathe with your breathing; very little, but enough that you can deduce the speed of breathing from an electrocardiogram. But the reason I was showing the figure is that it is very typical: you have all these harmonics. And although it is very standard for people looking at Fourier series of audio signals, or of other things where
we have a high density of sampling and where things repeat virtually periodically, and I cannot say almost periodically, because that has a technical meaning and it is not what I mean, look at what happens. If you have an S that is periodic, you can write it as a Fourier series, S(u) = Σ_k c_k cos(2π k u). Once I put in a φ(t) that is not constant, I have that same φ(t) in every term, so S(φ(t)) is a linear combination of a whole lot of trigonometric terms, and if I take the instantaneous frequency of the k-th term, it gives me 2π k φ′(t): multiples of the fundamental. And that is what I see here: all these different harmonics, but they are not really different components. I have imposed that they are there, because I wanted to develop this in a trigonometric series; by decomposing my f in this way, I am introducing my trigonometric functions, and that means that if I have such a component in there, I will find pieces of it at ν = φ′, 2φ′, 3φ′, and so on. So it is my tool that imposed them; they are not intrinsically there. I would like to just look at the one component, and that is what we want to get out. So what happened in this paper, and what on top of the concept helped Hau-Tieng get the fetal heartbeat out, was that he also started looking at what is called the cepstrum. Now this is really an engineering thing; you see, the name comes from "spectrum" partly spelled backwards. The cepstrum is something you get by taking the Fourier transform, then taking its logarithm, and then taking the inverse Fourier transform of that. Now, it is a very hand-wavy kind of thing, and I am not going to give you the details.
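The harmonic ladder I just described is easy to reproduce numerically; the shape coefficients and the fundamental below are invented for the demo. One single "component" s(φ(t)), with a non-sinusoidal periodic s and, for a clean FFT, a constant φ′ = f₀, shows up in the spectrum as peaks at f₀, 2f₀, 3f₀, even though there is only one component.

```python
import numpy as np

fs, f0 = 1000, 7                                # sample rate (Hz), fundamental (Hz)
t = np.arange(fs) / fs                          # one second of samples
c = [1.0, 0.6, 0.3]                             # Fourier coefficients of the shape s

# One "component": s(phi(t)) with s(u) = sum_k c_k cos(2 pi k u), phi(t) = f0 t.
phi = f0 * t
signal = sum(ck * np.cos(2 * np.pi * (k + 1) * phi) for k, ck in enumerate(c))

spec = np.abs(np.fft.rfft(signal))
top3 = {int(i) for i in np.argsort(spec)[-3:]}  # the three strongest frequency bins
```

The three strongest bins are the multiples of f₀: the trigonometric tool manufactures the harmonics, exactly as said above.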
It is explained hand-wavily in there, but what happens is that, under reasonable assumptions on the signal, when you compute a cepstrum and you scale things right, you have this: you have these harmonics here, and the cepstrum gives you something that lies here, and then at half the frequency, and at a quarter of the frequency, and so on, so the one place where they coincide is the zeroth order. Now, I do not like this very much, and that is why I did not spend much time on understanding it in detail, because I think what we should do, and this is something I am doing with my students, is an optimization in which we try to extract from the data the shape function itself. So this s here is what I call the shape function; I was making the assumption that things are equal to that. The approach that Hau-Tieng follows in this particular paper, and it is fine because it gives results, but we would like to do better, is to say: well, I also have a decomposition here into cosines of l times that, so let me view the whole thing as a decomposition into these frequencies, and then try to extract, from all of these, which ones are the fundamentals, and that way get back to what I would like. I feel that whenever you extract just one thing, like extracting the ridge out of the windowed Fourier transform, you are potentially giving up so much, because you restrict yourself to just a little bit and there is information elsewhere, and I do not really want to do that. What I want to do is identify that the harmonics are there and push them together, and that means identifying these coefficients and identifying the shape function, and then using the shape function to expand; and that turns out to be something that is again very feasible. So you have an s₁, with a 2π φ₁(t), with some amplitude, and you have another amplitude with an s₂, with a 2π φ₂(t), and you want a reasonable guess for these; not a perfect guess, but a reasonable one.
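For the record, here is what the cepstrum computation looks like in its most naive form; the toy signal and numbers are invented, and real implementations are more careful about windowing and about the log of small values. For a signal whose harmonics are spaced f₀ apart, log|FFT| is itself periodic with period f₀ bins, so its inverse FFT peaks at the pitch period in samples.

```python
import numpy as np

fs, f0 = 1000, 50                                # sample rate (Hz), fundamental (Hz)
period = fs // f0                                # 20 samples per period
t = np.arange(fs) / fs
# A periodic signal with several harmonics (the kind of ladder a shape function gives).
x = sum(np.cos(2 * np.pi * k * f0 * t) for k in range(1, 6))

# Real cepstrum: Fourier transform, log of the magnitude, inverse Fourier transform.
log_mag = np.log(np.abs(np.fft.fft(x)) + 1e-12)  # small floor to avoid log(0)
cepstrum = np.fft.ifft(log_mag).real

# The first peak away from quefrency zero sits at the pitch period (in samples).
q = 5 + int(np.argmax(cepstrum[5:period + 15]))
```

All the harmonics vote for the same quefrency, which is why the cepstrum collapses the ladder to a single location, the property Hau-Tieng exploits in the paper.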
Well, suppose first that you had a perfect guess; then you could use it to warp. You introduce a new variable u = φ₁(t), and you would get a₁ s₁(2π u) plus a₂ s₂(2π φ₂(φ₁⁻¹(u))). If, and let us assume this, both these functions are really periodic, then my samples of the first function will all line up, sample after sample. But the second function, because it does not have the same period, and because its phase is not correlated with the other phase, will sometimes give me something additional above, sometimes something below, and so on. So when I plot all the samples of this, what I get is a sampling of the first function, perturbed by a distribution of samples coming from the other signal, after I have done my re-warping. So I get a band that gives me a first approximation for my signal s₁. Similarly I can do the other resampling; of course, in practice you do not literally re-warp, but because you have that information you can, by grouping the samples together according to one phase function, get a reading on s₁, and according to the other, a reading on s₂. Once you have that, you can extract better, because now you have a way of defining harmonics, so you can bring several of them together and get a better reading for the phase function, and so on; you can actually set up a feedback loop that works reasonably well. There is a paper with one of my students and postdocs, somewhere on arXiv, that shows how you can do this if indeed you have many, many periods, as you do for electrocardiograms. Yes? No, you do not; well, it is not just periodicity, it is the warping, because it is not just that it repeats, it is also that it does not always repeat at the same speed.
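Here is a toy version of that re-warping argument, with two made-up components whose phases I pretend to know exactly: reduce every sample to its position u = φ₁(t) mod 1 inside a period and average the samples landing in each phase bin; the s₂ samples scatter across phases and average out, leaving a first read of s₁. All shapes, phases, and bin counts below are invented for the demo.

```python
import numpy as np

t = np.arange(0.0, 400.0, 0.05)                      # long record: many periods
phi1 = 0.5 * t + 0.02 * np.sin(0.1 * t)              # slowly varying phase, phi1' > 0
phi2 = 0.313 * t                                     # incommensurate second phase

def s1(u):                                           # shape function 1: not a pure cosine
    return np.cos(2 * np.pi * u) + 0.3 * np.cos(4 * np.pi * u)

f = s1(phi1) + 0.7 * np.cos(2 * np.pi * phi2)        # superposition of two components

# "Re-warp": fold every sample onto u = phi1(t) mod 1, then average per phase bin.
nbins = 50
u = phi1 % 1.0
idx = np.minimum((u * nbins).astype(int), nbins - 1)
est = np.array([f[idx == b].mean() for b in range(nbins)])

centers = (np.arange(nbins) + 0.5) / nbins
err = est - s1(centers)                              # how far the band is from true s1
```

The estimate tracks s₁ closely because the φ₂ phases equidistribute within each φ₁ bin; with few periods this averaging degrades, which is exactly where the dynamical-systems ideas mentioned next come in.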
What they prove in that paper assumes they knew things exactly; well, no, not the functions φ₁ and φ₂ themselves: they know φ₁′ and φ₂′. And that is important, because if you know the derivatives and not the functions themselves, you could easily make phase-factor mistakes, and those can accumulate. So: φ₁′ and φ₂′ are known, and you have samples of the superposition, f(t), many regularly spaced samples covering many periods, and they prove stable reconstruction; actually, reconstruction that is more stable than we need, because they wanted to go further: they even take an s that has a sharp discontinuity and they recover it, which we do not need in practice. But in practice we also will not know these things exactly; we will only have a reasonable guess for them, so we need stability, and so on. They do prove some robustness, but we want to go much further, because for some signals you indeed have many periods, as for electrocardiograms, or for sound, where you have zillions, but in other situations you do not. And actually Haizhao Yang and Tingran Gao, a former student of mine, have found that some very nice methods from dynamical systems can be used to produce good estimates even if you do not have very many periods, even if you only have something like five, because your estimate itself gives you a way of resampling, and you keep auto-correcting. It is very beautiful work, not completely written up yet. In any case, we are making a lot of progress on getting these shape functions out. Once I have the shape functions, I think that by using a certain type of dual functions I can try to write the transform in an adapted way, but I do not have very elegant ways of doing that yet. So, applications that we are doing: well, there are a lot of medical applications, of course. I
mean, Hau-Tieng has actually just been hired, last fall, at Duke University, which has a great medical school, and he is really looking forward to working in the medical direction there; he has so many ideas, so many projects. If you know of students who would be interested in this kind of signal analysis, working on real, serious data and using interesting, state-of-the-art methods, and really, whatever methods, we are open to everything, and who are looking for a postdoc, then let them know: tell them to contact Hau-Tieng Wu, or me, at Duke University. At my stage of my career I am not looking ahead to another 20 years, but Hau-Tieng definitely is, and I think he is really at the state of the art, at the peak of progress, in using this very adaptive time-frequency analysis on real data with clinical applications. I was going to say something else... yes: another range of applications is that biologists are more and more interested in analyzing vocalizations, the sounds made by animals, and the ways in which animals learn. For instance, young birds learn to sing from hearing adults sing in the region where they are born; they imitate, and this has been observed in the field, the singing that they hear adults make, and it has been documented that there are regional dialects. If you look at one species of bird and at what it sings here, or 20 kilometers further, and so on, there are slight differences in those songs, and you can map geographically how they change. Biologists are interested because it will tell them a lot of interesting things about how the brain works neurologically, about how these birds learn. Now, in order to do experiments on that, you must be able to produce songs from which the birds could learn, songs that you influence, that you make, and they had better sound like real songs. Well, we know that some birds can make almost any sound, but most birds
cannot. So from the variety of sounds that you register, you need to extract the parameters over which these warping functions, or the shape functions, can change, so that you can generate more, so that you can then test experimentally and control your experiment. So they are very interested in all of this. The only tool they have had for looking at these songs is the spectrogram: they make spectrograms of bird songs, they divide them into syllables, they recognize them, and so on. But the fact that we can do so much more with these signals is something they are very excited about, and we have a student working on those. And I think that is where I want to stop, a bit before time, but that compensates for the fact that I went over time this morning. So thank you.