Okay, so this is lecture 19. In the last class we saw quite a few things, and some of them may have been totally new to you, so maybe you are confused. The only way to get rid of that confusion is to go back and read more about DSP and spectral factorization, so I am going to give a quick glimpse of those ideas once again to get you started. A few ideas on spectral factorization before we proceed. It is actually a very simple thing; it just sounds like a major result. Typically, when you think of a signal, say x[n] — I will do it in discrete time — you take a DTFT and get X(e^{jω}). And you know how to go back from X(e^{jω}) to the unique x[n]: given the magnitude and phase of X(e^{jω}), you recover x[n] uniquely. Complex to complex is unique; there is no problem there. What happens in spectral factorization is that you are not given X(e^{jω}); you are given only the magnitude of X(e^{jω}). The way it is usually described, you are given the Fourier transform of the autocorrelation of x. Suppose I take R_xx[m] and compute its DTFT: that gives |X(e^{jω})|². From there, you cannot go back to a unique x[n]. Is that clear? Clearly I cannot go back to a unique x[n], because only the magnitude of X(e^{jω}) is given; the phase can be arbitrary, and I can pick any phase I want. Well, arbitrary subject to some conditions: if x[n] is real, the phase has to satisfy a conjugate symmetry, and so on.
But if x[n] is complex in general, the phase of X(e^{jω}) can be anything. There are an infinite number of phase responses you can pick, so from |X(e^{jω})|² you can go back to technically an infinite number of x[n]. In some cases you might want something more specific than an infinite number of possibilities, and a good criterion is to select the minimum-phase x[n]. If I want a minimum-phase x[n], it turns out there is a unique answer; that is what I gave you in yesterday's class. The minimum-phase constraint is a big constraint, and under it the answer works out to be unique — you can derive this in many different ways. That is the basic idea of spectral factorization. The way it showed up for us is a little different, so it might not have been obvious that this is what I was doing. This is the basic idea; in practice it gets used in many different ways. Several times you might arrive at the magnitude through some other route, not from a minimum-phase x[n] — which is what happened to us. What did we do? We had h(t): from h(t) we went to the autocorrelation of h, and from there to the PSD. But h(t) may not be minimum phase.
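Since we will lean on this fact repeatedly, here is a minimal numerical sketch of minimum-phase spectral factorization (my own illustration, not something done in the lecture): given only |X(e^{jω})|² on a dense FFT grid, the standard real-cepstrum "folding" trick recovers the unique minimum-phase sequence. The toy sequence and the FFT size are assumptions chosen for the demo.

```python
import numpy as np

def minimum_phase_from_magnitude(mag_sq, n_fft):
    """Recover the minimum-phase sequence whose DTFT magnitude squared is
    mag_sq (sampled on an n_fft-point grid), via real-cepstrum folding."""
    c = np.fft.ifft(0.5 * np.log(mag_sq)).real     # cepstrum of log|X|
    fold = np.zeros(n_fft)
    fold[0] = c[0]                                 # keep the zero quefrency
    fold[1:n_fft // 2] = 2.0 * c[1:n_fft // 2]     # double the causal part
    fold[n_fft // 2] = c[n_fft // 2]
    return np.fft.ifft(np.exp(np.fft.fft(fold))).real

n_fft = 256
x = np.array([1.0, 0.5])      # toy sequence, already minimum phase (zero at -0.5)
mag_sq = np.abs(np.fft.fft(x, n_fft)) ** 2
x_min = minimum_phase_from_magnitude(mag_sq, n_fft)
# same magnitude spectrum, and (since x was minimum phase) the same sequence back
```

If you instead start from the maximum-phase sequence [0.5, 1.0], the very same mag_sq comes out, and the routine still returns [1.0, 0.5]: the magnitude alone fixes only the minimum-phase representative.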
So if you want to go back from just the magnitude response, you might want to go back to the minimum-phase version, and we saw the advantage of having minimum phase. Once you split it up with a minimum-phase factorization, you can do a suitable filtering to reduce the ISI to only the causal part; that was a big advantage. For instance, if it is not minimum phase — if it has zeros or poles outside the unit circle or some such thing — you cannot do that filtering easily; it is not even clear how to design that filter. Once you do the minimum-phase factorization, you can take the maximum-phase part, take its reciprocal, and get a meaningful, stable filter; run that, and you get only the causal part of a minimum-phase filter as the ISI. That is the idea, and with so many new ideas here, hopefully some of it is reasonably clear. If there are lingering questions, let me write down the structure first and then I will ask for questions. So what is happening? You have the symbol sequence — I am thinking of it as a vector of length L from the alphabet, but I can also think of it as a discrete-time signal; there is no problem. It goes through a transmit filter g(t) and then a channel c(t); together these two I am going to call h(t). Then noise gets added, and I get r(t). Then I wanted to see: how can I find s? One criterion seemed to be a distance criterion based on s.
So I defined this J(a), which was some kind of distance between r(t) and the sum over k of a_k h(t − kT). That was my criterion, and in that criterion, filtering by h*(−t) came out naturally: I had to find this y_k, which is basically a filtering by h*(−t) followed by sampling. It came out of this criterion. If you had chosen some other criterion, something else might have happened — I don't know. Since we chose this criterion, the filtering by h*(−t) occurred naturally, and you sample at the symbol rate to get y_k. This y_k, we saw, can be written in terms of s_k — in what fashion? As s_k convolved with ρ_h[k], plus some n′_k. The reason I am writing n′ is that it is not really white; it has some PSD. Now notice how I got the autocorrelation — this is very crucial. The reason I got an autocorrelation is because I did a matched filtering. If I had not done this matched filtering — if instead of h*(−t) this were some other filter, say a low-pass filter — I would not get an autocorrelation. The fact that I know h(t), so that I can use h*(−t), was crucial; if I chose some other arbitrary filter and sampled, I would get a cross-correlation or some other awkward signal instead. What is the advantage of getting an autocorrelation as opposed to any other signal? Then I can do spectral factorization. So you see how spectral factorization flowed into the picture. If you did not have spectral factorization and you only had y_k, and you had to compute this metric for each a, there is no way you could do it with linear computation.
It would become an L² computation. Maybe there are special cases where it works out, but in general it is an L² computation, which is pretty much impractical even for a single fixed a. All these steps are important, and they seem to come naturally and quickly when I do them, but when you go back it should be clear in your head why they come through that way: how I got y_k, why that resulted in the autocorrelation, and why that gives a PSD which can then be spectrally factorized. So what is the big point of spectral factorization? Before I go there: what are the statistics of this noise? It will not be white, but it is Gaussian, with PSD proportional to S_h(e^{j2πfT}) — with the N₀/2 scaling in front, if you will. Is that clear? What is S_h? It is the Fourier transform of ρ_h[k]. How did I get that? It is h*(−t) convolved with h(t) itself, sampled; that works out and you get the PSD. So this n′_k is not white — that is another issue floating around. And computing J(a) with y_k itself turned out to be a more complicated problem, so then you do a substitution. How do you do that substitution? In a slightly strange way: you take the PSD, do a spectral factorization, and write S_h(e^{j2πfT}) as what? It has to factor as the magnitude squared of something. How did I pick that specific M? I picked it so that S_h(e^{j2πfT}) = γ² M(e^{j2πfT}) M*(e^{j2πfT}), and in addition, if you take the inverse DTFT of M(e^{j2πfT}), you get an m[k] which is minimum phase.
So this is, loosely speaking, minimum phase. As I pointed out, given a PSD you can do this factorization in an infinite variety of ways if you do not insist on minimum phase: the magnitude response is fixed, and once you fix the magnitude response you can vary the phase in infinitely many ways and get any factor you want. But that is not what I picked; I took the one which was minimum phase. I might also have written this in the z-transform: S_H(z) = γ² M(z) M*(1/z*). And in the time domain: ρ_h[k] = γ² m[k] convolved with m*[−k]. All of these are equivalent ways of writing the unique minimum-phase spectral factorization. That uniqueness must have surprised you, if it did not surprise you before: once you insist on minimum phase, the factorization becomes unique. Well, it is also a convenience; other fixed phase choices would also make it unique, so that by itself is not a big deal. But minimum phase has a nice property — what is it? Most of the energy is concentrated in the first few taps of m[k]. We will use that later in our decoder when we filter. Once we did this, it turned out you take the y_k — remember how I got y_k: r(t) filtered with h*(−t) and sampled at kT — and then what do you do? You do this filtering by 1/(γ² M*(e^{j2πfT})). It seems a little strange, but you do the filtering and you get z_k. The reason this filter makes sense and works is that I know what sits in the denominator; the whole thing works out.
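Here is a small numerical check of those equivalent statements — ρ_h[k] = γ² (m ∗ m*[−·])[k] in the time domain, and its DTFT equal to the PSD γ²|M(e^{jω})|² — using a toy minimum-phase m and a γ that I am choosing purely for illustration:

```python
import numpy as np

gamma = 2.0
m = np.array([1.0, 0.5, 0.25])    # toy FIR, both zeros at radius 0.5 (minimum phase)
n_fft = 64

# rho_h[k] = gamma^2 (m[k] convolved with m*[-k]); lags run -(len-1) .. +(len-1)
rho = gamma**2 * np.convolve(m, np.conj(m[::-1]))
lag0 = len(m) - 1                  # index of lag 0 inside `rho`

# place the two-sided lag sequence on an FFT grid (negative lags wrap around)
grid = np.zeros(n_fft, dtype=complex)
for k in range(-lag0, lag0 + 1):
    grid[k % n_fft] = rho[lag0 + k]

psd_from_rho = np.fft.fft(grid).real                        # DTFT of rho_h
psd_direct = gamma**2 * np.abs(np.fft.fft(m, n_fft)) ** 2   # gamma^2 |M|^2
```

An autocorrelation is conjugate-symmetric, which is why the DTFT comes out real and non-negative — exactly the property that makes a spectral factorization possible in the first place.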
So that is the basic idea. Is it clear why this works out? Think about it. So you get z_k, and in terms of z_k things become really, really easy. Once you write it in terms of z_k, it turns out this J(a) becomes equivalent to another metric, which I called J′(a). What was it? A norm squared — let me just write down what I mean: summation, k from 0 to infinity, of |z_k − (a_k convolved with m_k)|². More than this filter being implementable, or worrying about its properties — that is not the big deal. The big deal is that, because I did this filtering, my metric J(a) became much simpler than before. So forget about this filter and whether it has any of the properties I mentioned; because I did that special substitution — the filtering with the minimum-phase M(z) — J(a) simplified to this. You can go back and check it; the way I did it, the minimum-phase assumption is crucial there, otherwise it will not work out so nicely. At least, that is what I believe should happen — those are things you can check. So this is very nice to see. And not only that, one more thing happens nicely. What happens to z_k if you write it in terms of s_k? It is going to be s_k convolved with m_k, plus N_k — and this N_k is now white.
That is a big advantage: N_k is zero-mean Gaussian with variance N₀/(2γ²). It is a simple derivation: you see S_H(z) in the numerator, and in the denominator you have M*; but M* times M once again gives you S_H, with an extra γ² floating around — the γ² in the denominator. You might ask why I put the γ² there at all; maybe you don't want it, but those are all constants which do not matter at the end. Put the γ² there and you get this. So several nice things happen when I do this filtering, and you have to go back and ask yourself very closely why the spectral factorization is crucial. Why can't I take, instead of m[k], some other factorization with some other phase, not necessarily minimum phase? Maybe most of these steps you can repeat. But the biggest advantage, which you will never have otherwise, is most of the energy being concentrated in the first few taps. Then you can eventually approximate m with only its first few taps, and you will see the number of taps makes a big difference to your complexity as we go along. So maybe the crucial statement about minimum phase is this: most of the energy is concentrated in the first few taps. Most of the other steps you might be able to repeat with any other factorization, so that may not be a big deal, but this crucial property comes pretty much because of the minimum-phase criterion. Anyway, go back and check those things and make sure you understand where exactly this unique minimum-phase factorization comes in, and what roles it plays.
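A statistical sketch of this whitening step (all the values here — γ = 2, m = [1, 0.5], N₀ = 2, the sample size — are my own toy choices, not from the lecture): generate noise with the matched-filter PSD (N₀/2)γ²|M(e^{jω})|², apply 1/(γ²M*(1/z*)) by time-reversing and running the stable causal all-pole recursion 1/M(z), and check that what comes out is white with variance N₀/(2γ²) = 0.25.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, N0 = 2.0, 2.0
m = np.array([1.0, 0.5])                  # toy minimum-phase factor
N = 200_000

# matched-filter noise n'_k: PSD (N0/2) * gamma^2 |M(e^{jw})|^2
w = rng.standard_normal(N)
n_prime = np.sqrt(N0 / 2.0) * gamma * np.convolve(w, m)[:N]

def anticausal_inverse(x, m, gamma):
    """Apply 1/(gamma^2 M*(1/z*)) for real m: reverse the signal, run the
    stable causal all-pole recursion 1/M(z), reverse back, scale by 1/gamma^2."""
    xr = x[::-1]
    y = np.zeros_like(xr)
    for n in range(len(xr)):
        acc = xr[n]
        for i in range(1, len(m)):
            if n - i >= 0:
                acc -= m[i] * y[n - i]
        y[n] = acc / m[0]
    return y[::-1] / gamma**2

n_k = anticausal_inverse(n_prime, m, gamma)
core = n_k[1000:-1000]                      # drop edge transients
var = np.var(core)                          # expect about N0/(2 gamma^2) = 0.25
lag1 = np.mean(core[1:] * core[:-1]) / var  # expect about 0 (whiteness)
```

The recursion 1/M(z) is stable precisely because m is minimum phase; with a factor having zeros outside the unit circle, this inverse would blow up.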
All right, any questions? Is everyone happy with this picture? Okay, so based on this, I was able to draw a nice picture of the entire receiver, at least for the calculation of this J′(a). What did I do? I took the z_k — remember how I got the z_k: from r(t), filtered with h*(−t), sampled at the symbol rate, and then filtered by 1/(γ² M*(1/z*)). You do that, you get z_k. Then what do you do to compute J′(a)? Let me do it this way — it is quite easy. You send z_k into a difference node: you send your a_k through m[k], or capital M(z), you get a certain sequence, subtract it out, and then sum the squares; that gives you your J′(a). Now, in reality, if you want to implement anything like this, one of the things you have to struggle with is delays. The delay will never be obvious: z_k comes through so many filtering steps, and so many delays get incurred. You might have delayed for causality, you might have delayed your transmit filter for causality, you might even have to delay the 1/(γ² M*(1/z*)) stage — I don't know how you will implement that filter; it might be one more tricky implementation. In the end there will be all kinds of delays, and you have to adjust for them properly if this J′(a) is to make any sense. Otherwise you will be measuring something else, and you have no business doing such things. I am sure the people doing the lab realize how important this delay adjustment is: measure at different points, and you get completely different things.
Without that it does not make any sense, so be very careful about delay if you are ever trying to implement such things in practice. All right, so you do this and you get J′(a). So now I have a nice expression for my decoder: minimize J′(a) over all a. Maybe the computation of J′(a) is doable for any one a, but that is not where my problem ends — that is pretty much where it begins. I have to compute J′(a) for each a, and there are too many of those: the number of such a is |X|^L, and that is one too many. For any reasonable L and reasonable alphabet size, even 2^L grows too fast in L, and you cannot hope to do anything by brute force. So that problem still remains to be solved — we have not solved it at all. It turns out there is a wonderful algorithm called the Viterbi algorithm, which solves that problem and makes the complexity grow only linearly with L, assuming your m[k] does not have too many taps — at least a finite number of taps. We will come to it later. But before that, I want to do a neat and clean orthogonal projection: find an orthonormal basis for our set of signals, and see that we pretty much recover the same structure that we have now. We will see the relationship between the two and then proceed. At that point I will make some general comments on what to do as we go along, and then we will stop. Any questions right now? Okay, so the question is basically: coming at this picture, why don't you first filter with c⁻¹(t)? I have to make a few comments here — I thought I would make them after showing the orthogonal projection, but even now is a good time.
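To make the complexity point concrete, here is a small sketch (a toy alphabet {−1, +1}, a hypothetical two-tap m, and noise I am adding myself — none of this is from the lecture) comparing the brute-force minimization of J′(a) = Σ_k |z_k − (a∗m)_k|² over all |X|^L sequences against a Viterbi search over a trellis whose state is the last few symbols. Both return the same minimizer, but the Viterbi work grows linearly in L:

```python
import numpy as np
from itertools import product

def brute_force(z, m, alphabet, L):
    best_cost, best_a = np.inf, None
    for a in product(alphabet, repeat=L):          # |X|^L candidates
        cost = np.sum(np.abs(z - np.convolve(a, m)) ** 2)
        if cost < best_cost:
            best_cost, best_a = cost, list(a)
    return best_cost, best_a

def viterbi(z, m, alphabet, L):
    M = len(m)
    surv = {tuple([0.0] * (M - 1)): (0.0, [])}     # state = last M-1 symbols
    for k in range(L + M - 1):                     # number of steps linear in L
        choices = alphabet if k < L else [0.0]     # zero-pad past the block end
        new = {}
        for state, (cost, path) in surv.items():
            for a in choices:
                recent = state + (a,)              # (a[k-M+1], ..., a[k])
                pred = sum(m[i] * recent[M - 1 - i] for i in range(M))
                c = cost + abs(z[k] - pred) ** 2
                nxt = recent[1:]
                if nxt not in new or c < new[nxt][0]:
                    new[nxt] = (c, path + [a] if k < L else path)
        surv = new
    return min(surv.values(), key=lambda t: t[0])

rng = np.random.default_rng(1)
m = np.array([1.0, 0.5])
L, alphabet = 6, [-1.0, 1.0]
a_true = rng.choice(alphabet, L)
z = np.convolve(a_true, m) + 0.1 * rng.standard_normal(L + len(m) - 1)
cb, ab = brute_force(z, m, alphabet, L)
cv, av = viterbi(z, m, alphabet, L)
```

The trellis has |X|^(M−1) states per step and L + M − 1 steps, so the work is linear in L for a fixed m, versus |X|^L for the brute force.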
Okay, so for any filtering you do at the receiver, it would be nice if only the signal part went through it and the noise part did not. But you cannot control that; the noise goes through the same filter. And c⁻¹(t) may not be a good filter: it might boost your noise variance to a great level, which is something you may not want. What you really want to do, optimally, is an orthogonal projection. If you do the orthogonal projection, you know that is optimal: go to the signal space, find an orthonormal basis, do an orthogonal projection — that is the best thing to do, and that is what I am going to do now. Anything else, like c⁻¹(t), is a heuristic — including this minimum-distance criterion, which is also a heuristic right now; it may or may not give you the best probability of error. But filtering by c⁻¹(t) is definitely doable, and it is done — a lot of people do it; it is a linear equalization type of receiver. You apply c⁻¹(t), get rid of the channel, and then continue with your g(t). It can be done in practice, and maybe it is a good choice, but it need not be the ideal thing. Any other questions? In practice, this kind of structure — where you do two stages of equalization, one stage to get rid of the anti-causal ISI and another to get rid of the causal ISI — can be shown to be very good; even theoretically, one can show it can approach capacity. Those kinds of constructions are usually preferred for that reason. But you can also do the c⁻¹(t); it is okay. All right, so the next thing I want to do is this orthonormal basis projection. Let me try to do this, make some comments, and then maybe we will see how we are doing.
So what is our set of signals, and what is an orthonormal basis for those signals? What you will see is very interesting. What is our set of signals? The set of signals at the transmitter — does it even make sense to talk about a set of signals at the transmitter? I should not do the orthogonal projection on that. What should I do? I have to do the orthogonal projection on the signal component at the receiver; always remember that. The set of signals at the receiver, in r(t): these are summation, k = 0 to L−1, of a_k h(t − kT), for a in X^L. How many signals do I have? |X|^L — I have a lot of them. Now I want to find an orthonormal basis for this set of signals. Once I find it, I know the optimal thing to do is to project onto that orthonormal basis and then do minimum distance. And if I do the orthonormal basis projection, I also know that my noise is going to be just N₀/2 per dimension, and all of that — I know all those things, so it is very nice to do. It is optimal in various ways, and this will be the optimal receiver. So here is a claim — an interesting claim, and you will see where it comes from. Take this h(t), and let me write down a few properties before I come to the claim, slowly enough. Let its Fourier transform be H(f). I also want to do a few things before I can define the orthonormal basis. Take h(t) convolved with h*(−t): if I do a Fourier transform, what will I get? |H(f)|². Then, if I sample h(t) convolved with h*(−t) at kT and take a DTFT, what will I get? It will alias — I get a 1/T-spaced aliasing.
So: (1/T) summation, m from minus infinity to infinity, of |H(f − m/T)|². This I will call S_h(e^{j2πfT}); that is my aliased spectrum. I take this S_h(e^{j2πfT}) and do a spectral factorization on it — I know it is real and non-negative, so the spectral factorization gives me a unique minimum-phase M(e^{j2πfT}), same as before; there is nothing new here: S_h(e^{j2πfT}) = γ² M(e^{j2πfT}) M*(e^{j2πfT}). That is the spectral factorization. All right, so now I have all my notation and tools to define the orthonormal basis. The first thing I do is define a φ(t), which is the inverse Fourier transform of Φ(f), which is defined to be H(f) divided by γ M(e^{j2πfT}). You might wonder whether there should be an M* or an M there; let me just check that I am not contradicting myself somewhere — it is only M. This operation is clearly well defined: I take H(f) and divide by γ M(e^{j2πfT}). So there is all kinds of interesting interplay between discrete time and continuous time here. You can think of the denominator as a periodic Fourier transform, so it is a properly well-defined division; there is nothing confusing there. But in the numerator you now do not have S_h(e^{j2πfT}) — if you did, you would be left with only M*(e^{j2πfT}) — you have simply H(f) itself, which is an ordinary Fourier transform, not a discrete-time quantity. Still, I can adjust my frequency axis so that both of these match, and then I can divide. It works out. Make sure you get a feel for the intricacies here.
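Before going on, here is a numerical sanity check of the identity that will drive the orthonormality proof below: because S_h(e^{j2πfT}) is periodic in f with period 1/T, the fold (1/T) Σ_m |Φ(f − m/T)|² equals S_h/S_h = 1, which is exactly the Nyquist criterion for |Φ|². The toy |H(f)|² below (supported on |f| < 1/T) is my own choice, not from the lecture; only magnitudes enter this check, so the phase of M never appears.

```python
import numpy as np

T = 1.0
f = np.linspace(-0.5 / T, 0.5 / T, 512, endpoint=False)   # one period in f

def Hmag2(freq):
    """Toy |H(f)|^2, supported on |f| < 1/T, deliberately NOT Nyquist."""
    return np.where(np.abs(freq) < 1.0 / T,
                    np.cos(np.pi * freq * T / 2.0) ** 4, 0.0)

def Sh(freq):
    """Folded spectrum S_h(e^{j 2 pi f T}) = (1/T) sum_m |H(f - m/T)|^2."""
    return sum(Hmag2(freq - mm / T) for mm in range(-2, 3)) / T

def Phimag2(freq):
    """|Phi(f)|^2 = |H(f)|^2 / (gamma^2 |M|^2) = |H(f)|^2 / S_h."""
    return Hmag2(freq) / Sh(freq)   # Sh > 0 everywhere we evaluate it here

sh = Sh(f)                          # not constant: h(t - kT) itself is NOT orthonormal
folded = sum(Phimag2(f - mm / T) for mm in range(-2, 3)) / T   # identically 1
```

Since the fold of |Φ|² comes out identically 1, the sampled autocorrelation of φ is a Kronecker delta — the computation about to be done on the board. Note that sh is not constant here, which is precisely the statement that the shifts h(t − kT) themselves fail to be orthonormal.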
Okay, so here is the claim. Suppose I define a linear space as the span of h(t − kT). What do I mean by the span of h(t − kT)? All the signals of the form summation s_k h(t − kT), over all k — so all the signals I had belong to this space; it is the span of the shifted versions of h(t). Now, I am not guaranteed that h(t − kT) is an orthonormal basis. What I am guaranteed, on the other hand, is that φ(t − kT) is an orthonormal basis for this space. That is where the interesting result comes through: h(t − kT) need not be orthonormal, but if you define φ this way, then φ(t − kT) becomes an orthonormal basis. Any questions? Is everyone happy? How many of you can prove this? Think about proving it. What is the way to prove it — how do you show a set of vectors, or entities, is orthonormal? Yes: you have to compute the inner product and show that it goes to zero whenever there is a non-zero shift, and becomes one when there is no shift. And also, yes, that it spans — that is also important: I should show that these vectors span the entire space spanned by h(t − kT). Once I prove those two steps, I am done. So I will prove the first step, that the inner product goes to zero. How do I show that? Basically, I have to write down the inner product: the integral from minus infinity to infinity of φ(t) times
φ*(t − kT). I have to evaluate this inner product, and once again I will use the same trick for evaluating inner products: this is actually φ(t) convolved with φ*(−t), sampled at kT. That allows me to use my knowledge of Fourier transforms. So if I do a DTFT on this sampled sequence, what will I get? (1/T) summation, m from minus infinity to infinity, of |Φ(f − m/T)|². Is that a question? Okay, so here is how the proof goes: I am not showing it is zero for each k separately. I look at the entire sequence over all k, and I am going to show that the entire sequence is equal to a delta. If the entire sequence is a delta, what should happen to its DTFT? It should become one. So basically I have to show that this DTFT is one; once I show that, I know the sequence is a delta for all k, which is what I want to show. That is the way the proof goes, and showing that this thing is one is very easy — it is not a big deal. Just go back and look at that formula: you get (1/T) summation, m from minus infinity to infinity, of |H(f − m/T)|², and in the denominator you have γ² times |M(e^{j2π(f − m/T)T})|². Do you get that? Now the denominator conveniently becomes independent of m. Why is that? Multiply through the exponent: you get e^{j2πfT} times e^{−j2πm}, and the second factor is one, so I can ignore it — the m drops out, and I can pull the denominator out of the sum.
So what do I get? 1/(γ² |M(e^{j2πfT})|²) times (1/T) summation, m from minus infinity to infinity, of |H(f − m/T)|². Now, the numerator and denominator are both exactly ways of writing S_h(e^{j2πfT}) — do you notice that? Here I have written a mod square, but what is the other one? It is M times M*, which is also just the mod square — it is the same thing. So I can write this as S_h(e^{j2πfT}) divided by S_h(e^{j2πfT}); both of them cancel and you get 1. So what I have shown is that the DTFT of my inner-product sequence is 1, which means the inner-product sequence itself is a delta, and φ(t − kT) is orthonormal. There is no problem here. Now, I want you to go back and carefully check at what point I used the fact that M is minimum phase. Go back, look at the derivation, and tell me whether I can repeat the same derivation with any other M which is not minimum phase — any other way of factoring S_h(e^{j2πfT}). Would I still get an orthonormal basis? At what point did I use minimum phase? No, the γ² is just a convenience; forget about γ². I can simply absorb γ into the factor and write S_h(e^{j2πfT}) as some A(e^{j2πfT}) times A*(e^{j2πfT}), and then define φ(t) as the inverse Fourier transform of Φ(f) = H(f) / A(e^{j2πfT}). If I did that, would I still get φ(t − kT) to be an orthonormal set under translation? It looks like I should — I am not using minimum phase anywhere there.
There is nothing holy about the minimum phase here. So keep asking that question over and over again: just because I write something down, do not assume that is the only way of doing things. In fact, what if I chose the naive factorization H(f) times H*(f)? What would happen — will it work? Then it may not work; you have to pay attention to what you are doing. If I did that, I would get Φ(f) = 1 for all f, which would mean φ(t) became δ(t). Well, you can think of δ(t − kT) as an orthonormal basis if you want, but it will be of no use to you — for one thing, the spanning step below fails, since no linear combination of impulses equals our continuous-time signals. So there will be problems later on. The second step is equally crucial. What is the second step? The first step seems fine: there were several choices, we picked one, and we showed it was orthonormal; no problem there. The second step is the spanning step: I have to show that I can obtain any signal I had before by a linear combination of φ(t − kT). My signal space is all linear combinations of h(t − kT); I now have to show that, by taking linear combinations of φ(t − kT), I can get any linear combination of h(t − kT). How do I show that? Think about it for a few minutes; see if you can come up with some equations which tell you it is always possible. So what is the problem? Suppose I have an s(t) which is summation s_k h(t − kT). How do I write this s(t) as summation b_k φ(t − kT)? Can I write it? If I can, then I know it spans; but can I — that is the question. How do you go about doing it? No, I do not want just the energy — I want to be able to write it exactly.
I have to express s(t), at each t, as a linear combination of φ(t − kT). No — you mean you will give me the b_k? Let me state it properly: I have to show that there exist b_k such that s(t) equals summation b_k φ(t − kT). In fact, you can explicitly construct the b_k. What is that b_k? The confusing thing is the interplay between continuous time and discrete time — how the sampling works, and that you have both a discrete-time filter and a continuous-time filter; it is not easy to deal with quickly. One way is to draw a picture, just to see what the complication really is. You have s_k, which you are thinking of as a discrete-time signal, and then I suddenly apply h(t) — and what do I get? A continuous-time signal. To keep everything consistent, let me make everything continuous time. In that case, what does s_k become? Summation s_k δ(t − kT). If I think of it that way, then out of the filter h(t) I get s(t). That is fine; that is the picture. So now, what is H(f)? It is Φ(f) times γ M(e^{j2πfT}) — sorry, Φ was on the left side before; here I have just solved for H(f). Is that fine? That is how I defined it. Now I have to be careful when taking inverse transforms. Can I take an inverse Fourier transform of M(e^{j2πfT})? I can. What will I get? An impulse train. Why will I get an impulse train? Because it is a periodic Fourier transform. What will that impulse train be? An impulse train with certain weights — it will be an impulse train with weights m_k. So that is the way to go back to continuous time: in the time picture, h(t) equals φ(t) convolved with an impulse train defined by —
Don't forget the gamma mk delta t minus kt Okay, well it's getting ugly. So let me rewrite it Okay, so one needs to be careful about these impulse trains not to get too carried away and start differentiating or anything But as long as we write the equations like this, it's okay. It will be valid. Okay Do you see that that's a periodic Fourier transform and it will convert convert back an inverse Fourier transform to an impulse train That impulse train is given by the dtft if you freeze it down to one location It's all the same the Fourier series and all these things that they exactly match here. There's no problem Okay, so once I do this I'm done Right, am I done? Yeah, I'm really done. So all I have to do is take sk And Filter it in discrete time if I want even with mk Which is what is happening in the first step and then do what well multiply by gamma Okay, so multiplication by gamma is always there. So maybe I filter with gamma mk. Okay, and then I do phi t here And I would get st again Okay, so if I call this guy here bk I can write my st as summation bk phi of t minus kt and bk would turn out to be Gamma times sk convolved with Is that fine? So I've shown my Spanning property which worked out in a very simple way. Okay, so now the next question. What is the question? I'm going to ask At what point in this Proof did I use the fact that mk is minimum phase? It might be a trick question. I may not have used also Where did I use it? If I replace the denominator there with any other a of v pa j 2 pi of t as long as it has a proper inverse Discrete Fourier transform should it work? Is it fine? Do you see anything that is disturbing you there? Okay, it should work, right? That's what it seems like. Maybe there's maybe there is something between the lines, which is not very clear to me but But it seems to work. I mean, there's no reason why this the denominator should be minimum phase. Okay, maybe if it's minimum phase Is there a point? 
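The explicit construction is easy to sanity-check numerically. Below is a rough sketch, not from the lecture: continuous time is approximated by a fine grid with N samples per symbol period T, and the pulse φ, the taps m_k, and γ are all made-up stand-ins rather than an actual spectral factorization. The only point is that computing b_k = γ (s_k ∗ m_k) and then pulsing with φ reproduces Σ_k s_k h(t − kT).

```python
import numpy as np

N = 8            # grid samples per symbol period T (hypothetical)
gamma = 2.0      # stand-in for the constant gamma
m = np.array([1.0, 0.5, 0.25])   # stand-in discrete-time factor m_k
phi = np.hanning(4 * N)          # made-up pulse phi(t) on the fine grid

# h(t) = phi(t) * (gamma * sum_k m_k delta(t - kT)): upsample m by N, convolve
m_train = np.zeros(len(m) * N)
m_train[::N] = gamma * m
h = np.convolve(phi, m_train)

s = np.array([1.0, -1.0, 1.0, 1.0, -1.0])   # symbols s_k

def pulse_train(coeffs, pulse, n_up, length):
    """sum_k coeffs[k] * pulse(t - k*T), evaluated on the fine grid."""
    train = np.zeros(length)
    train[: len(coeffs) * n_up : n_up] = coeffs
    return np.convolve(train, pulse)[:length]

L = len(s) * N + len(h)
s_t = pulse_train(s, h, N, L)        # s(t) = sum_k s_k h(t - kT)
b = gamma * np.convolve(s, m)        # b_k = gamma * (s_k conv m_k)
s_t2 = pulse_train(b, phi, N, L)     # sum_k b_k phi(t - kT)

print(np.allclose(s_t, s_t2))        # → True
```

By linearity the identity is exact, so the check passes up to floating-point error; and notice the script never uses the fact that m_k is minimum phase, which is exactly the observation the question above is driving at.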
Yeah, but the integrals and everything will still work out. So, loosely, minimum phase is not the issue for spanning. But in the definition of Φ(f), if you want Φ(f) to be stable, maybe you want the zeros of the denominator to lie inside the unit circle; and being minimum phase is stricter than that, since you also constrain the poles. So at least there is some justification for it in terms of stability. This whole minimum-phase business is more about stability and those kinds of properties than about the requirement of orthonormality itself. That's something to keep in mind. Minimum phase is also good because, later on, if you want to approximate the response by only the first few taps, minimum phase helps: it's the best choice in that sense, so you might as well pick it. The utility of minimum phase lies in that different sense, not in giving the basic orthonormality; the orthonormality comes pretty much for free whatever you take, but if you want stability for your filters, then you need to do this.

All right, so this seems fairly clean. So now I want to show some kind of equivalence between the distance criterion we had before and the optimality criterion. If you want to run an optimal receiver, what should you do to r(t)? You filter with φ*(−t). You do that, you get your optimal filter, and then you sample at the symbol rate, at kT. But you're not done with the optimal processing. For instance, will you have ISI now? You will have ISI; don't imagine there will be none. φ(t) need not satisfy the Nyquist criterion. It would be great if it did; well, not really φ(t) by itself, you probably need some weird combination of things, and I don't know how it's going to work out.
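The stability point can be illustrated with a small numerical sketch, not from the lecture: given a made-up mixed-phase FIR factor, the real-cepstrum ("homomorphic") method produces a filter with the same magnitude response but with every zero pulled inside the unit circle, i.e. a minimum-phase equivalent. The taps of c below are invented for illustration.

```python
import numpy as np

# A made-up mixed-phase FIR response: zeros at z = -2 (outside) and z = -0.5
c = np.array([1.0, 2.5, 1.0])

# Minimum-phase spectral factor via the real cepstrum: keep |C(e^jw)|
# but fold the cepstrum so all zeros get reflected inside the circle.
nfft = 4096
logmag = np.log(np.abs(np.fft.fft(c, nfft)))
cep = np.fft.ifft(logmag).real            # real cepstrum of the log magnitude
fold = np.zeros(nfft)
fold[0] = cep[0]
fold[1:nfft // 2] = 2 * cep[1:nfft // 2]  # double the positive quefrencies
fold[nfft // 2] = cep[nfft // 2]
c_min = np.fft.ifft(np.exp(np.fft.fft(fold))).real[: len(c)]

# Same magnitude response, but all zeros now inside the unit circle
same_mag = np.allclose(np.abs(np.fft.fft(c, nfft)),
                       np.abs(np.fft.fft(c_min, nfft)), atol=1e-3)
print(same_mag, np.max(np.abs(np.roots(c_min))) < 1)   # → True True
```

Since the minimum-phase factor has all its zeros (and, in the IIR case, poles) strictly inside the unit circle, dividing by it, as in the definition of Φ(f), yields a stable filter, which is the justification discussed above.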
Okay, maybe there is some Nyquist-type criterion possible here so that you could avoid ISI, but in general there will be ISI, so your detection problem is not yet solved. You'll still be optimal, though: you're not losing any information in the orthonormal projection. So what do you get here? I don't want to call it z_k, because there is a constant missing; I'll call it z′_k, which is γ times (s_k convolved with m_k) plus n_k, and the noise n_k will be normal with mean 0 and variance N₀/2; it will also be white Gaussian. This is completely compatible with the previous picture, because there we divided everything by γ; it's the same thing, not in any way different, just that the γ has moved elsewhere, so don't suddenly be disturbed by that. But clearly you have ISI: z′_k doesn't depend on exactly one s_k, it depends on several of them, through s_k convolved with m_k, which was your minimum-phase spectral factor, or for that matter any other factor related to the spectral factorization of S_h. So that's how it worked out.

All right, so now you have the most important task of running this through the detector, some detector which we don't yet know how to build efficiently. But we know how to build it brute force. How will you build it brute force? You run through all the possibilities and find the minimum-distance one. So remember, your transmit constellation was very nice. But what's happening to your receive constellation if you just look at z′_k? All kinds of things will happen: you're filtering arbitrary symbol sequences with m_k, so your receive constellation will be very strange.
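A quick simulation (my own sketch, with an invented 3-tap m_k) shows how the receive "constellation" expands: with BPSK symbols, each steady-state z′_k depends on the last three symbols, so the noiseless values take 2³ = 8 distinct levels instead of 2.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 1.0
m = np.array([1.0, 0.6, 0.2])            # invented spectral factor m_k
s = rng.choice([-1.0, 1.0], size=2000)   # BPSK symbols s_k
z_clean = gamma * np.convolve(s, m)      # noiseless gamma * (s_k conv m_k)
z = z_clean + rng.normal(0.0, 0.1, size=len(z_clean))   # z'_k with AWGN

# Drop the edge transients: each steady-state sample depends on len(m) = 3
# symbols, so the noiseless receive "constellation" has 2**3 = 8 levels.
core = z_clean[len(m) - 1 : len(s)]
print(len(np.unique(np.round(core, 6))))   # → 8
```

With longer m_k or a bigger alphabet the level count grows exponentially, which is why symbol-by-symbol slicing of z′_k breaks down and sequence detection is needed.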
Okay, one doesn't know how to do detection there directly. But if you do the exhaustive computation and then apply the minimum-distance criterion, you should get the answer, because after all you're in white Gaussian noise; there's nothing more complicated than that. So in theory the detector is already given: do minimum-distance detection and you'll get it. But in practice we have to do something smarter, and that's where the Viterbi algorithm comes in.

Before we close this up, I want to take one quick minute to show you the equivalence between this orthogonal projection and the earlier receiver: h*(−t), sampled, followed by the discrete-time filter 1/(γ² M*(1/z*)). It's very easy to see, actually; it's not a big deal. What is the Fourier transform of φ*(−t)? You get H*(f) divided by γ M*(e^{j2πfT}). That's enough; I can pretty much stop there. What did I do in the previous case? I took r(t), first filtered with H*(f), whose impulse response is h*(−t), but then sampled immediately, and then filtered with 1/(γ² M*(1/z*)). But that's okay: I can sample either before or after, because the denominator corresponds to a discrete-time filter, so I can move the sampler across it without losing any optimality. So that's the quick way to see the equivalence between this picture and the other one.

So what's the final selling point here? By orthogonal projection, you can reduce systems with ISI to a discrete-time model, provided you know h(t). If you know h(t), you can do the minimum-phase factorization and, in a certain sense, get the least possible ISI into your system, right?
Because almost all your ISI is concentrated in the first few taps: only the first few symbols will affect you, and everything else will affect you very little. But still, unless some special Nyquist criterion is satisfied by a combination of things, by sheer luck, you will not be able to completely eliminate ISI. Later on we'll see some fancy multi-carrier modulation methods where people do eliminate ISI, but there again the Nyquist criterion is satisfied in some way. So it's possible to do that, but then you'll have to do multi-carrier stuff, which is more complicated; with a single carrier it's clearly not possible in the general case. But it is possible to minimize the ISI to the least possible effect, and then you do something else, like the Viterbi algorithm. Okay, let's close here for today; on Monday I'll make more comments on this, and on how practical this h*(−t) receiver and all of this really is.
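The brute-force minimum-distance detector described above can be sketched in a few lines (my own illustration, with an invented 3-tap m_k and a short block so that enumerating all 2^K sequences stays cheap):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
gamma = 1.0
m = np.array([1.0, 0.6, 0.2])              # invented spectral factor m_k
K = 8                                      # short block: 2**8 = 256 hypotheses
s = rng.choice([-1.0, 1.0], size=K)        # transmitted BPSK symbols
z = gamma * np.convolve(s, m) + rng.normal(0, 0.1, size=K + len(m) - 1)

# Brute force: enumerate every symbol sequence, keep the minimum-distance one
best, best_d = None, np.inf
for cand in product([-1.0, 1.0], repeat=K):
    d = np.sum((z - gamma * np.convolve(cand, m)) ** 2)
    if d < best_d:
        best, best_d = np.array(cand), d

print(np.array_equal(best, s))             # → True
```

The cost is exponential in the block length, which is exactly why the Viterbi algorithm, exploiting the finite memory of m_k, is the practical replacement for this exhaustive search.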