Okay, so this is lecture 26. Let me quickly summarize where we are as far as LDPC codes are concerned. We began by looking at (wc, wr)-regular LDPC codes, and mostly we have been looking at all of these over binary symmetric channels so far. The decoder we saw was the Gallager A decoder; we saw density evolution for Gallager A decoding, first for the regular case and then for the (λ(x), ρ(x)) irregular LDPC ensemble.

So what are the various things we saw? Construction is one: the socket construction is the one I talked about, but there are many other issues when you actually want to construct an LDPC matrix, and hopefully the people doing their project in that area will work on them. By the way, I don't know if I made this very clear: the presentation has to be done by everybody. I probably didn't put that down in the file properly. The presentation will also be done by the people doing the programming assignment. You won't be graded too heavily on it, but you will have to give a presentation to everybody describing what you did; you can't escape that.

So, to remind you: construction is one thing we saw, and then we saw message passing decoding. The Gallager A decoder is just one example of message passing decoding. There are message passing decoders that are slightly more complicated; as was pointed out, the bit-to-check processing can be changed, and those changes give you other message passing decoders. For all of these message passing decoders we saw a way of doing analysis which I call density evolution, and density evolution gave us the threshold property, which was very interesting.

Now, just to get a high-level understanding of what these things require. To construct an LDPC code, what do you need to know? You need ρ(x) and λ(x), and you also need the block length; you need all three to construct an instance of an LDPC code, since from the ensemble you construct a random instance. To implement the message passing decoder, you need not only the block length, λ(x), and ρ(x); you also need the exact parity check matrix that was chosen, or equivalently the exact Tanner graph. What about density evolution, what do you need there? Somebody said p. Well, p is different; it is a channel parameter, so let's assume that's known. I'm asking about the code. For the code, you need only ρ and λ. You don't need the block length, and you don't need the actual matrix. But you did need the i.i.d. assumption, and the i.i.d. assumption means the block length has to be very large. So the prediction holds only for very large block lengths, and only for as many iterations as allowed by the minimum-length cycle in your graph. All those are practical realities, but density evolution as an algorithm can be run knowing only λ(x) and ρ(x), so the threshold is purely a function of λ(x) and ρ(x). Those are the various things to roughly keep in mind.
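To make that concrete, here is a minimal density evolution sketch in Python. This is my own illustration, not code from the course; the names gallager_a_de and poly_eval are just labels I chose. It implements what I believe is the standard Gallager A recursion for a (λ, ρ) ensemble on a BSC with crossover probability p0, and notice it uses nothing but λ(x), ρ(x), and p0: no block length, no matrix.

def poly_eval(coeffs, x):
    # coeffs[i] multiplies x**i, so for the edge-perspective polynomial
    # lambda(x) = sum_d lam_d x**(d-1), a degree-3 node contributes at index 2
    return sum(c * x**i for i, c in enumerate(coeffs))

def gallager_a_de(lam, rho, p0, iters=500):
    # lam, rho: coefficient lists of the edge-perspective degree polynomials
    # p0: BSC crossover probability; returns message error rate after `iters` rounds
    p = p0
    for _ in range(iters):
        # check node: probability an outgoing check message is in error
        q = (1 - poly_eval(rho, 1 - 2 * p)) / 2
        # bit node: send the channel value unless all incoming messages disagree
        p = p0 * (1 - poly_eval(lam, 1 - q)) + (1 - p0) * poly_eval(lam, q)
    return p

# (3,6)-regular ensemble: lambda(x) = x^2, rho(x) = x^5
lam = [0, 0, 1.0]
rho = [0, 0, 0, 0, 0, 1.0]
print(gallager_a_de(lam, rho, 0.030))   # below threshold: collapses to (essentially) 0
print(gallager_a_de(lam, rho, 0.045))   # above threshold: stuck at a nonzero fixed point

Below the threshold the message error probability goes to zero; above it, the recursion gets stuck at a nonzero fixed point, which is exactly the threshold behavior discussed above.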
Then I briefly talked about the optimization, which is an important part, though I didn't spend too much time on it. Basically, you can find good ρ(x) and λ(x) purely in terms of threshold: for a particular rate, find the ρ(x) and λ(x) which give you the best threshold, or ask the other question, for a particular threshold, find the highest rate possible. Both of those are ways of optimizing (a bisection sketch for locating the threshold numerically appears below, after this recap). Today, even without writing all the optimization programs yourself, there are enough web resources available which will give you optimized degree distributions. Somebody has done a whole bunch of this optimization, collected all the good degree distributions, and stored them; you can Google for a while and you'll quickly find the website. It is called LdpcOpt, hosted at EPFL by Rüdiger Urbanke's group. They have a collection of good degree distributions. You can go there and say, I'm working on a BSC, and then pick your decoding; I think even picking Gallager A decoding is possible. There are some constraints, for instance you might have to fix the maximum left degree, and then it will give you a good λ(x) and ρ(x). So today you can work with LDPC codes without doing the optimization yourself; somebody else will give you the degree distributions. But the construction and the decoder implementation you have to do: construct a good enough random instance. That's where the practical issues come in, and for all the other questions you might have to do simulations.

Is there any part here that disturbed you, any question you want to ask? On how the concentration result is proved: it is purely a martingale argument; there is really not much more I can say here. You must have heard of the Chebyshev inequality, or the Markov inequality. In the Chebyshev inequality you bound the probability that (X - E[X])² exceeds some multiple of the variance, something like that. One can imagine the martingale argument as a major extension of a bound of that kind; you derive a more general bound along those lines. That is the rough idea of how the concentration result is proved, but do look it up; you can find enough resources to understand it.
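Coming back to the optimization-by-threshold idea for a moment: here is a hedged sketch of how one could locate the threshold numerically by bisection, reusing the gallager_a_de routine from the earlier sketch. The threshold is the largest p0 for which density evolution drives the message error rate to zero; near the threshold convergence is slow, so with a finite iteration count the estimate comes out slightly conservative.

def threshold_bisect(lam, rho, lo=0.0, hi=0.5, tol=1e-5):
    # largest p0 for which density evolution (numerically) converges to zero
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if gallager_a_de(lam, rho, mid) < 1e-9:
            lo = mid      # converged: the threshold is at least mid
        else:
            hi = mid      # stuck at a nonzero fixed point: threshold is below mid
    return lo

print(threshold_bisect([0, 0, 1.0], [0, 0, 0, 0, 0, 1.0]))  # about 0.039 for (3,6)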
So now we are going to move on: we are going to move out of the BSC and go to the AWGN channel, BPSK over AWGN, and figure out whether there are good message passing decoders one can implement and analyze, whether one can talk about a threshold, and whether one can design good λ(x) and ρ(x) for BPSK over AWGN. As I said, that is the main selling point for LDPC codes: the fact that they have efficient message passing soft decision decoders, not just hard decision but soft decision decoders. That is what we are going to see next. A quick reminder of BPSK over AWGN. One thing I have avoided so far is the encoding; I'll talk about it briefly later, but for now we'll assume the encoder is out there somewhere.

So how does BPSK over AWGN work? Given a codeword, first you do the modulation, which is the BPSK step, and then you have AWGN: a noise vector gets added, giving the received vector r. This is what the decoder has to work with; you'll have a decoder here which produces, say, ĉ. And note: no hard decisions. If you make a hard decision on each received value independently, you go back to the BSC, so I'm not going to do that. I'm going to work with the entire received vector r, which is now a real-valued vector, r ∈ ℝⁿ. So it is very different from what we had before.

We saw a couple of optimal soft decoders earlier. Which two? The bitwise MAP decoder and the maximum likelihood decoder. What did the maximum likelihood decoder do? That is all abstract description, but in practice, for BPSK over AWGN, you correlate and pick the symbol vector which gives you the maximum correlation; if you want a more general description, it picks the codeword at minimum Euclidean distance. The bitwise MAP decoder is not as easy to describe; it actually computes the probability that a particular bit equals 0 given the entire received vector r. We found one thing that was very useful in writing down expressions for those probabilities: the likelihood ratio. We could write the overall expression in terms of the likelihood ratio, and we had some simplifying formulas for it.

The message passing decoders will try to compute approximately that probability. The theme behind soft decision message passing decoders is that they are approximations of bitwise MAP decoders, and they are approximations in the sense that you don't even know theoretically how good the approximation is; but in practice, in simulations, it is a reasonably good approximation in many cases. The one sense in which this is definitely valid is that the message passing decoders try to approximately compute the probability that a particular bit equals 0 or 1 given the entire received vector r. We saw this is the same as trying to compute the APP likelihood ratio; that's how we wrote down the expression. If you go back and look at what we wrote for the bitwise MAP decoder, we did not try to compute the probability directly; we actually tried to compute this capital L_i, which is

L_i = P(c_i = 0 | r) / P(c_i = 1 | r).

One could also compute the log of this quantity; in that case you get the log-likelihood ratio, the a posteriori (APP) LLR, log L_i. If you want a base for your logarithm, think of it as log base e.
So this is what we will try to compute, but the way the decoder works will be very similar to the Gallager A decoder. Remember the decoder operations in Gallager A: there were iterations, and there were two steps in each iteration. In the first step something was sent from the bit nodes to the check nodes, and in the second step something was sent back from the check nodes to the bit nodes. Now we will send soft values from bit nodes to check nodes: instead of sending one bit, we will be sending real numbers, from bit nodes to check nodes and back again. That will be the major change, but other than that the principle is exactly the same: you send something from bit node to check node, and from check node to bit node. Let me work out how that goes.

Before that, let's look at one more likelihood ratio, which I think I called small l_i: the intrinsic, or channel, likelihood ratio for a particular bit. The channel likelihood ratio is

l_i = P(c_i = 0 | r_i) / P(c_i = 1 | r_i).

Assuming 0 and 1 are equally likely, you can use Bayes' rule, cancel the common factors, and go to the PDFs:

l_i = f(r_i | c_i = 0) / f(r_i | c_i = 1),

and then you can plug in the Gaussian densities, e to the power of minus whatever, cancel everything, and finally you get

l_i = e^(2 r_i / σ²).

That is the intrinsic, or channel, likelihood ratio. What is the channel log-likelihood ratio? The log of this. We need a notation for it; I'll call it y_i. So the channel LLR is

y_i = log l_i = (2/σ²) r_i.

Since the capital L_i is a function purely of the small l_i's, I can say capital L_i is purely a function of the y_i's; there is nothing wrong in saying that. So instead of my decoder working with r_i, I might as well work with y_i: the y_i contain all the information that is out there, and eventually everything works out to depend purely on the l_i's, equivalently the y_i's. You saw the expression the way I wrote it down; the ratio is what's important, so everything will work out fine. So I can imagine the input to my decoder being y_i, and it is very usual to imagine that the input to the decoder is y_i and not r_i. What is the only difference between r_i and y_i? A simple scaling. So in practice it's not a big deal, and we will find y_i easier to deal with in terms of probability, because r_i is not directly a probability or a probability ratio, but y_i is the log of the ratio of two probabilities. We will see that this is useful as we go along, so we'll imagine y_i being the input to the decoder.
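As a quick illustration of this front end, here is a minimal sketch of my own, assuming the usual convention that BPSK maps 0 to +1 and 1 to -1 (any fixed sign convention works as long as you keep it consistent). It shows that the decoder input y_i is just a scaled version of r_i:

import numpy as np

def bpsk_awgn_llrs(code_bits, sigma, rng=np.random.default_rng(0)):
    s = 1.0 - 2.0 * np.asarray(code_bits)         # BPSK: 0 -> +1, 1 -> -1
    r = s + sigma * rng.standard_normal(s.shape)  # AWGN: noise vector added
    return 2.0 * r / sigma**2                     # channel LLRs y_i = 2 r_i / sigma^2

y = bpsk_awgn_llrs([0, 1, 1, 0], sigma=0.8)
# sign(y_i) alone would be the hard decision (back to a BSC);
# the soft decoder keeps the whole real value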
Hopefully you remember the way I developed the decoder for the Gallager A part; we did it using the Tanner graph. Where did I assign the input to the decoder? Next to the bit nodes: to bit node i I assigned some input. In that case it was r_i; now I'll assign y_i, the channel log-likelihood ratio corresponding to that particular bit. That is the natural way of assigning the input, and that is the input stage. So I'll just write that down: hopefully you can visualize a Tanner graph in your head, bit nodes on one side, check nodes on the other, with y_i associated to each bit node. That is the Tanner graph I want you to think about: the channel LLR y_i is assigned to the i-th bit node. That is my input to the decoder. Now the decoder has to process this input and try to produce ĉ, basically.

Like before, we will do bitwise detection: we won't try to produce the overall ĉ, we will try to produce ĉ_i, an output for each bit. That is the notion of the bitwise MAP decoder. The drawback, one can argue, is that if you only produce the ĉ_i and put them all together, you may or may not get a codeword. But that's fine; we'll live with it. If you have a systematic code, it's enough to decode just the message bits and be happy with it: as far as the message bits are concerned, every sequence is possible, so I don't have to worry about ending up with a valid message sequence. One can do that in practice, so it's not a big deal, and that's what we will do once again here.

Another principle you might remember from our message passing decoder for Gallager A: in the first iteration, the received value r_i was sent to all the neighboring check nodes, and one can imagine doing the same thing here. But what was the reason for doing that? What does the bit node try to do in each iteration? Exactly: the logic was that each bit node sends its best estimate of what its bit is to its neighbors. In the soft case you change that a little: each bit node will send its best estimate of the LLR of its bit to its neighboring check nodes. Since you're dealing with soft messages, you send probabilities, not the bit itself; in the hard case you could send the bit itself, in the soft case each node sends its best estimate of the LLR to its neighbors. That is the difference between the hard and the soft decoders. Which LLR, you might ask. If a bit node is sending something, it sends the best estimate of its own bit's LLR. What about when a check node is sending; which LLR does the check node send? A check node doesn't have an LLR of its own. What happened in the Gallager A case, which bit did the check node send? Its estimate of the recipient's bit. The same thing happens here: the check node sends its estimate of the LLR of whichever bit is receiving the message. That logic we will use once again.

Another principle we carry over from Gallager A is that we will not resend information.
If you receive some information from a particular bit node, you will not try to send it back to the same bit node; you only send new information. That is also what is needed for maintaining independence, so you avoid resending information. All three of these principles we will use, and as I said, the decoder will be iterative, with two steps in each iteration. With all of this, let's try to develop the message passing decoder.

Iteration one, step A, is the easiest. Suppose you have bit node i, say of degree d; well, the degree is not quite relevant yet. What do you send in step A of iteration one? What is the only thing the bit node knows? It only knows y_i, and its best estimate of the LLR for its bit is just y_i; there is nothing more it can do. So it sends y_i to all its neighboring check nodes.

Step B immediately gets interesting. Suppose you have a check node of degree e (I'll use e rather than d to avoid clashing notation). What would have happened in step A? This degree-e check node would have received messages from its e neighbors. I'll write them as y_{i1}, y_{i2}, ..., y_{ie}, indexed by the neighboring bit nodes, since y_1, y_2, and so on already mean something else. (Yes, you're making me redo the notation, but it's worth it; good to see all of you keeping pace.)

A question from the class: isn't 2r_i/σ² already the LLR? No: 2r_i/σ² is just the channel LLR, what comes out of the channel; there is no computation to do for the intrinsic one. What I want to compute is the a posteriori LLR, the probability that a bit is 0 given the entire vector r divided by the probability that it is 1 given the entire vector r. I'm trying to estimate that.

So now, what should this check node do? Let's look at the message it will send to bit i1. What does it know? It has to compute something, and it has received y_{i2} through y_{ie} in step A, the previous step of the same iteration. Remember the principle: it should find the best estimate of the LLR for bit i1. Can it use y_{i1}? No: it knows y_{i1} came from bit i1 itself, so it won't use it; it has to use only the remaining messages. What else does the check node know; what does the check node represent? Remember what we used in Gallager A: the fact that all those bits have to XOR to zero, the parity has to be zero. So we'll have to use that information, together with the LLRs of the other bits, to figure out an LLR for bit i1. That is the problem.
What exactly does the check node know? It knows

c_{i1} = c_{i2} + c_{i3} + ... + c_{ie},

where the addition is modulo 2; it's all XOR, I'm just writing plus. The check node knows this has to be true. And what is y_{i2}? It is the LLR of c_{i2}; likewise for each of the others, up to y_{ie}. Also remember what a log-likelihood ratio is: the log of a ratio of probabilities. Given the LLR, can you go back to the probabilities? Suppose I give you the ratio P(c = 0)/P(c = 1); can you find each of those values? There seem to be two unknowns and only one equation, but you have another equation: the two probabilities add to 1. So the LLR is as good as giving the probabilities. Therefore you know the probability that each of the bits on the right-hand side is 0 or 1, and you have to find the probability that their XOR is 0 or 1. You find those two probabilities, take their ratio, take the log, and that is what you pass back to bit i1. Is that clear?

Once more, think about what happened in the Gallager A case: all of these were bits, I happily took the XOR of them, and said that was my best estimate of what the i1 bit should be. Now I am given the probability that the i2 bit is 0 or 1, the probability that the i3 bit is 0 or 1, and so on, and I am also told that the i1 bit is the XOR of all of them. Can I compute the probability that the i1 bit is 0, and the probability that it is 1, based on just this information? If I can, I compute, I divide, I take the log, and I send that back. That's all I have to do. It is a very simple prescription once you have settled comfortably on the principles of message passing.

Let's not worry about LLRs for a moment; we'll come back to LLRs later, and for now just say the probabilities are known. Before posing a simple version of this problem, let me write it down in words so that it is recorded. The calculation at a check node is: given the LLRs of bits i2 through ie, compute the LLR for bit i1, using the fact that all of them XOR to 0; in other words, bit i1 is the XOR of the other bits. I have concentrated on i1, but if I solve this computation, can I also do the message to i2? Yes, it's the same thing: I pull i2 to one side and keep everything else on the other side. So this one computation is enough. And I don't want this computation to be expensive; I want to keep it as simple as possible. You'll see there are a lot of algebraic tricks used to make it as efficient as possible for implementation. The reason is that it needs to be done in every iteration, at every check node, for every message being sent, so it is done many times, and it is worth investing some time to simplify it as much as possible.
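Before the simplification, here is a brute-force sketch of exactly this check-node calculation (my own illustration; the function names are just labels). It converts the incoming LLRs to probabilities, enumerates all 2^(e-1) assignments of the other bits, and reads off the LLR of their XOR. The exponential cost is precisely what the algebra below is designed to avoid.

import itertools
import numpy as np

def llr_to_p0(y):
    # y = log(P(c=0)/P(c=1)) together with P0 + P1 = 1 gives P0 = e^y / (1 + e^y)
    return np.exp(y) / (1.0 + np.exp(y))

def check_llr_bruteforce(y_others):
    p0 = llr_to_p0(np.asarray(y_others, dtype=float))
    prob = [0.0, 0.0]
    for bits in itertools.product([0, 1], repeat=len(p0)):
        w = np.prod([p if b == 0 else 1 - p for b, p in zip(bits, p0)])
        prob[sum(bits) % 2] += w      # parity of the others = value of bit i1
    return np.log(prob[0] / prob[1])

print(check_llr_bruteforce([1.2, -0.4, 2.0]))   # the LLR sent back to bit i1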
So let's start with a toy version of this problem, a very simple version, so that we get comfortable with it; then you'll see the toy version extends beautifully to the general case. Here is the toy example. I'm going to say x = y + z, where all of them are binary random variables and the addition is XOR, so x is y XORed with z. Let's say P(y = 0) = p_y0 and P(z = 0) = p_z0, and y and z take their values independently; they are independent random variables. Can you compute P(x = 0)? It is as simple as it looks: either both have to be zero or both have to be one, so

p_x0 = p_y0 p_z0 + (1 - p_y0)(1 - p_z0).

A very elementary computation; the toy problem seems really clear. But how will I extend this? Suppose instead of y + z I have

x = y_1 + y_2 + y_3 + ... + y_100,

taking 100 for fun, with P(y_i = 0) = p_i0. How will you compute p_x0 now? If I do it brute force, what is the computation required? How many possibilities do I have to sum over? In the toy problem there were just two cases; for 100 bits there are 2^99 cases, and each case is a product of 100 terms. And there seems to be no reasonable way of even writing down what this expression would be. So there is a problem here, and maybe I need more efficiency: maybe p_x0 = p_y0 p_z0 + (1 - p_y0)(1 - p_z0) is not written in its most efficient form. If it were already the most efficient form, I wouldn't get any simplification.

The key is the following trick; one can develop it more formally, but let me just show it. Go back to the toy problem and try to compute p_x0 - p_x1 instead of p_x0. What does that work out to? (If you add the two you get 1; the difference is where something better happens.) Someone says (1 - 2 p_y0) times something; I don't like that form too much, so using 1 - p_y0 = p_y1 I'll write it as

p_x0 - p_x1 = (p_y0 - p_y1)(p_z0 - p_z1).

Is it p_y1 - p_y0 or p_y0 - p_y1? Either way the product comes out the same, since the two sign flips cancel. Work it out a little and show that this expression is true.
Did enough of you get this? It is a very simple derivation to show that the expression is true; it works out because you can factor the difference very easily, and it factors in a nice way. Check your calculations if you did not find it to be true; it will work out. What is the advantage of this form? The reason is that in this form I can add one more variable without worrying about anything: it extends to as many summands as I want. If I have y + z and then add a w, what do I do? I just multiply once again; it extends wonderfully, in a very obvious way. So revisiting the 100-variable problem, you can happily write

p_x0 - p_x1 = Π_{i=1}^{100} (p_i0 - p_i1).

This is way, way better than doing a whole bunch of additions over 2^99 terms. It is quite wonderful to see that such a nasty expression simplifies when you view it as a difference.
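A quick numerical sanity check of this identity (again a sketch of my own): compare the brute-force parity computation with the product of differences on a few bits. The two agree, and the cost drops from 2^(n-1) terms to an n-term product.

import itertools
import numpy as np

def parity_diff_bruteforce(p0):
    # P(x=0) - P(x=1) for x = XOR of independent bits with P(bit_i = 0) = p0[i]
    prob = [0.0, 0.0]
    for bits in itertools.product([0, 1], repeat=len(p0)):
        w = np.prod([p if b == 0 else 1 - p for b, p in zip(bits, p0)])
        prob[sum(bits) % 2] += w
    return prob[0] - prob[1]

p0 = np.array([0.9, 0.3, 0.65, 0.8])
print(parity_diff_bruteforce(p0))   # brute force over 2^4 assignments
print(np.prod(p0 - (1 - p0)))       # product of (p_i0 - p_i1): same value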
From here on you have to do some algebra to convert this into LLR language; this is purely in probability language, just p_x0 - p_x1, but what do I want? I want p_x0 / p_x1, in fact the log of that. The conversion is not too difficult, but one uses some further simplifications, so let me show you how it is typically done; you may or may not like the way it's typically done. In LLRs it will turn out that multiplication becomes addition; you can imagine that when you take logs of products you get sums, and addition is easier for us. But there is one more complication: it doesn't become simple in a completely direct way, you have to massage it a little. Let me do that now; it is a pure algebraic exercise, and hopefully I won't get confused on the way.

The first trick is to divide by p_x0 + p_x1. Can I divide the left-hand side by p_x0 + p_x1? That is just 1, so I'm not doing anything different; I'm just writing 1 differently. On the right-hand side I divide each factor by p_i0 + p_i1, again just 1. Why do I want to do this? Because now each expression can be comfortably converted into likelihood ratios: on the left-hand side I divide numerator and denominator by p_x1, and in each factor on the right-hand side by p_i1. If I do that, what do I get?

(L_x - 1)/(L_x + 1) = Π_{i=1}^{100} (l_i - 1)/(l_i + 1).

Notice that once I divide, the denominator is not 1 anymore; it becomes something else, and you can't throw it out. Be careful. So we have gone from probabilities to likelihood ratios, but we want log-likelihood ratios. How will you convert? What is L_x in terms of the log-likelihood ratio? e^{y_x}; and here l_i = e^{y_i}. Let's substitute; it's just a series of simple steps:

(e^{y_x} - 1)/(e^{y_x} + 1) = Π_{i=1}^{100} (e^{y_i} - 1)/(e^{y_i} + 1),

and remember the 100 can be replaced by any other number. It is also common to write these expressions in a hyperbolic tangent form. If you're happy with the form above, that's fine, there's no problem, but it is usual to convert it. What do you do? Multiply numerator and denominator by e^{-y/2} on each side, and you get

tanh(y_x/2) = Π_{i=1}^{100} tanh(y_i/2).

So all you have to do at the receiver is be able to take hyperbolic tangents. How do you figure out y_x from the y_i? You divide each by two, take tanh, do a lot of multiplications, and then take the inverse tanh; that gives you y_x/2, and you double it.
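As a direct transcription of this tanh rule into code (a sketch; numpy's arctanh is the inverse hyperbolic tangent), which agrees with the brute-force check-node sketch from before:

import numpy as np

def check_llr_tanh(y_others):
    t = np.prod(np.tanh(np.asarray(y_others, dtype=float) / 2.0))
    return 2.0 * np.arctanh(t)   # y_x = 2 * atanh( prod_i tanh(y_i / 2) )

print(check_llr_tanh([1.2, -0.4, 2.0]))   # same value as check_llr_bruteforce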
But there is a way to speed this up even further. The tanh can be both positive and negative: for an argument less than zero it is negative, for greater than zero it is positive. So you can't directly take logarithms of the product; when you have positive and negative factors, you have to separate out the signs first and work with the magnitudes. Do you see what I'm saying? Each tanh has a sign and a magnitude. If you take magnitudes on both sides, you can turn the multiplication into a sum of logarithms; but in taking magnitudes you lose the sign, so you have to keep track of the signs separately. So from here people usually go to sign and magnitude, once again to simplify the calculation.

What will the sign equation be? Note that tanh is odd and sign-preserving. Careful here: (e^x - 1)/(e^x + 1) is tanh(x/2), not tanh(x); but in any case, if x is positive the tanh is positive, and if x is negative the tanh is negative, so the sign of tanh(something) is the sign of that something itself. It works out very nicely. So if s_i is the sign of y_i and s_x is the sign of y_x, the sign equation is

s_x = Π_{i=1}^{100} s_i.

Do you agree? That one equation comes out just from the signs. What about the magnitude? Since tanh is odd, I can take the magnitude inside the tanh argument; if you're not comfortable with that, keep it outside, it doesn't matter, but it's good to take it inside because there is a final simplification. Taking logarithms,

log tanh(|y_x|/2) = Σ_{i=1}^{100} log tanh(|y_i|/2).

This is mere algebra, there is nothing more to it, but pay attention to what it buys. Define a function

f(x) = -log tanh(x/2), for x > 0

(the minus sign just makes it positive; negating both sides of the sum equation above changes nothing). Now I want you to invert this: evaluate x in terms of f(x) and tell me what the inverse looks like. Don't simply say "inverse tanh" and stop; write it out and simplify, and be careful with the x versus x/2 business, remembering that (e^x - 1)/(e^x + 1) is tanh(x/2), not tanh(x). What you should find is that f is its own inverse: f(f(x)) = x. (We went back and forth in class on the factors of 2 here; work it out carefully and it comes out.)

So what does that mean? If you did not know this fact, given the y_i you would have taken |y_i|/2, applied log tanh, added up the whole thing, and then applied an inverse log-tanh at the end. Now it turns out you don't have to do that inverse separately: it's enough to write

|y_x| = f( Σ_{i=1}^{100} f(|y_i|) ),

because the function is its own inverse: apply f to both sides of the sum equation, and on the left-hand side you get |y_x|. You can imagine this being useful from an implementation point of view. Ultimately f is a nonlinear function; you're not going to evaluate it with some Taylor series in practice. What will you do when you implement this? You'll have a lookup table, and you don't need two different lookup tables, one for the function and one for its inverse: one lookup table for this single f does the whole implementation. Beyond that lookup table, what is the only other operation you have to do? Addition, which is not too bad in today's technology. So this is the simplification. Go back, stare at it for a while, and I'll come back next class, put it together with the remaining part of this soft decoder, and we'll get a complete algorithm.
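Putting the sign and magnitude pieces together in code (a sketch under the same assumptions as the earlier ones; in hardware the f here would be the single lookup table just mentioned):

import numpy as np

def f(x):
    # f(x) = -log(tanh(x/2)) for x > 0; its own inverse, so one table serves both ways
    return -np.log(np.tanh(x / 2.0))

def check_llr_signmag(y_others):
    y = np.asarray(y_others, dtype=float)
    sign = np.prod(np.sign(y))        # the sign equation: s_x = prod_i s_i
    mag = f(np.sum(f(np.abs(y))))     # |y_x| = f( sum_i f(|y_i|) )
    return sign * mag

print(check_llr_signmag([1.2, -0.4, 2.0]))   # matches the tanh-rule sketch above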