Hello everyone, welcome to this lecture on channel capacity. At the end of this session, students will be able to calculate channel capacity and channel efficiency.

Before starting the actual session, pause the video and think about what is meant by mutual information. In the previous video session we saw what is meant by average information and what is meant by mutual information. Mutual information is nothing but the difference between two entropies: the average mutual information equals the initial entropy minus the final entropy. Taking the initial entropy as the source entropy gives I(X;Y) = H(X) − H(X|Y); taking it as the destination entropy gives I(X;Y) = H(Y) − H(Y|X). Here H(X|Y) and H(Y|X) are the two conditional entropies.

First, consider a noise-free channel. Figure 1 shows a noise-free channel, which has a one-to-one correspondence between input and output: x1 is connected to y1, x2 is connected to y2, and so on up to xm connected to ym. From this noise-free channel you get two probability matrices. One is the joint probability matrix, which has a diagonal form. The other is the conditional probability matrix, P(X|Y) or P(Y|X), which has a unity diagonal form: each diagonal element, such as P(x1|y1) or P(y1|x1), is 1 and every off-diagonal element is 0, which is why it is called a unity diagonal matrix.

From these matrices we can write the entropy equations. The joint entropy H(X,Y), whose equation we already had in the previous videos, is

H(X,Y) = −Σ_{j=1}^{m} Σ_{k=1}^{n} P(x_j, y_k) log P(x_j, y_k),

with two different indices j and k because in general the source and the destination have different symbol counts. In this case the source and destination have the same number of symbols, and the diagonal joint probability matrix has P(x_j, y_k) = 0 whenever j ≠ k, so only one summation remains:

H(X,Y) = −Σ_{j=1}^{m} P(x_j, y_j) log P(x_j, y_j).

Because only the diagonal joint probabilities are present, each one equals the corresponding source probability (and destination probability). From this we can write that the joint entropy equals the source entropy and the destination entropy: H(X,Y) = H(X) = H(Y).

From the conditional probability matrix we can evaluate H(X|Y) or H(Y|X). There are m diagonal terms, one per source symbol, each contributing 1 · log 1, and all off-diagonal terms are 0. Since log 1 = 0, every one of the m diagonal terms vanishes as well, so the conditional entropy is 0: H(X|Y) = H(Y|X) = 0.
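To make the noise-free case concrete, here is a minimal Python sketch. The three-symbol source distribution is an assumption for illustration, not from the lecture; any distribution works. It builds the diagonal joint probability matrix and confirms numerically that H(X,Y) = H(X) = H(Y) while the conditional entropy is 0:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits; zero probabilities contribute nothing."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return -np.sum(nz * np.log2(nz))

# Assumed 3-symbol source distribution (illustrative only).
px = np.array([0.5, 0.3, 0.2])

# Noise-free channel: the joint matrix is diagonal, P(x_j, y_j) = P(x_j).
P_joint = np.diag(px)

H_xy = entropy(P_joint.flatten())    # joint entropy H(X,Y)
H_x  = entropy(P_joint.sum(axis=1))  # source entropy H(X) from row sums
H_y  = entropy(P_joint.sum(axis=0))  # destination entropy H(Y) from column sums
H_y_given_x = H_xy - H_x             # conditional entropy H(Y|X)

print(H_xy, H_x, H_y, H_y_given_x)   # three equal entropies, conditional entropy 0
```

Because the joint matrix is diagonal, its row sums and column sums both reproduce the source probabilities, which is exactly why the three entropies coincide.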
Thus, for a noise-free channel the mutual information equation simplifies: the conditional entropy term is 0, so

I(X;Y) = H(X) = H(Y) = H(X,Y).

In words: for a noise-free channel the conditional entropy is 0, and the joint entropy equals the source entropy, which equals the destination entropy.

Now, what is meant by a symmetric channel? It has two conditions. The first condition is that the conditional entropy H(Y|X = x_j), the uncertainty in obtaining Y by observing source symbol x_j, is independent of j; that is, the entropy corresponding to each row of the conditional probability matrix has the same value. The second condition is that the sum of all the elements of each column is the same.

Let us see the example. Two conditional probability matrices are given, each with 3 rows and 3 columns. As per the first condition, the entropy corresponding to each row must be the same; if you find the entropy of each row, you get the same value for every row in both matrices. Now check the second condition: the sums of all the columns are supposed to be equal. If you follow that in both examples, you find that the first conditional probability matrix gives the same value when you add each column, while for the second it is not the same. That means only the first one is a symmetric matrix; the second one is not.

Now, for that symmetric channel we have to find the mutual information equation. We already have I(X;Y) = H(Y) − H(Y|X), where H(Y) is the destination entropy and H(Y|X) is the conditional entropy. Since the row entropy is independent of j, write it as a constant A; being independent of j, it can be taken outside the summation:

H(Y|X) = Σ_{j=1}^{m} P(x_j) · A = A · Σ_{j=1}^{m} P(x_j) = A,

because the remaining sum of the source probabilities is 1. So the final equation for the mutual information of a symmetric channel is I(X;Y) = H(Y) − A.

Now let us see what is meant by channel capacity. Channel capacity is nothing but the maximum of the mutual information; that is why, in the previous slides, we derived the mutual information equations for the noise-free channel and the symmetric channel. It is denoted C = max I(X;Y). For every different channel the value of I(X;Y) is different, and it is the difference of entropies: I(X;Y) = H(X) − H(X|Y), or equivalently I(X;Y) = H(Y) − H(Y|X); there are two different equations. As mutual information is a difference of entropies, its unit is bits per message, the same unit we derived for entropy in the previous sessions.

The transmission efficiency, or channel efficiency, is given by η = actual information / maximum information. The actual information is the mutual information I(X;Y), and the maximum information is max I(X;Y), which is C, so η = I(X;Y) / C. The redundancy is given by R = 1 − η.
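The lecture's example matrices are on the slide rather than in the transcript, so the sketch below uses a hypothetical 3×3 conditional probability matrix whose rows are permutations of the same probabilities, along with an assumed input distribution. It checks both symmetry conditions and then computes I(X;Y) = H(Y) − A, the capacity C = log n − A, the efficiency η, and the redundancy R:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits; zero probabilities contribute nothing."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return -np.sum(nz * np.log2(nz))

# Hypothetical symmetric channel: rows are permutations of (0.5, 0.3, 0.2).
P_y_given_x = np.array([[0.5, 0.3, 0.2],
                        [0.2, 0.5, 0.3],
                        [0.3, 0.2, 0.5]])

row_entropies = [entropy(row) for row in P_y_given_x]
col_sums = P_y_given_x.sum(axis=0)
print(np.allclose(row_entropies, row_entropies[0]))  # condition 1: equal row entropies
print(np.allclose(col_sums, col_sums[0]))            # condition 2: equal column sums

# For a symmetric channel H(Y|X) = A, the common row entropy, so
# I(X;Y) = H(Y) - A for any input distribution.
px = np.array([0.6, 0.3, 0.1])  # assumed (non-uniform) input distribution
py = px @ P_y_given_x           # output distribution P(y_k) = sum_j P(x_j) P(y_k|x_j)
A  = row_entropies[0]

I_xy = entropy(py) - A          # mutual information
C    = np.log2(3) - A           # capacity C = log n - A with n = 3
eta  = I_xy / C                 # transmission efficiency
R    = 1 - eta                  # redundancy
print(I_xy, C, eta, R)
```

With a uniform input the output would also be uniform, H(Y) would reach log 3, and the efficiency would be 1; the non-uniform input chosen here keeps η below 1 so the redundancy is visible.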
Now, what are the channel capacity equations for the noise-free channel and the symmetric channel? For the noise-free channel we already derived I(X;Y) = H(X) = H(Y) = H(X,Y), where H(X,Y) is the joint entropy, so C = max I(X;Y) = max H(X). We already derived that max H(X) = log m, so the channel capacity of a noise-free channel is C = log m bits per message. Similarly, for the symmetric channel we have I(X;Y) = H(Y) − A, so C = max I(X;Y) = max H(Y) − A = log n − A. This is the equation for the symmetric channel.

Now let us take an example: find the channel capacity for p = 0.9, p = 0.8, p = 0.7, p = 0.6, and p = 0.5. To find the channel capacity for each value of p individually, we need the channel capacity equation. We have the symmetric channel equation C = max H(Y) − A = log n − A, where n is the number of output symbols; there are two here, so the first term is log 2. The term A is the conditional row entropy H(Y|X = x_j), which is independent of j:

A = −Σ_{k=1}^{2} P(y_k|x_j) log P(y_k|x_j).

From the channel diagram the conditional probabilities are p and 1 − p, so the two minus signs cancel and

C = log 2 + p log p + (1 − p) log(1 − p).

This is the final equation for the channel capacity, and now we have to find C for every value of p. When p = 0.9: C = 1 + 0.9 log 0.9 + 0.1 log 0.1. The log 2 term has value 1 because the logarithm base is 2 and log2 of 2 is 1, and 1 − p = 1 − 0.9 = 0.1. That gives the answer C = 0.531 bits per message. For p = 0.8 it is the same equation with 0.8 in place of p, and 1 − 0.8 gives 0.2, so the last term is 0.2 log 0.2; you can verify the answer. Similarly, you have to find it for 0.7, 0.6, and 0.5. For p = 0.5 you will get the answer 0, because the probability of the transmitted symbol reaching destination 1 is 0.5 and reaching destination 2 is also 0.5; with 0.5 and 0.5, solving gives 0. A short verification sketch is given below. Thank you.
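As a quick check of the worked example, here is a short Python sketch that evaluates C = 1 + p log2 p + (1 − p) log2(1 − p) for each listed value of p:

```python
import math

def bsc_capacity(p):
    """Capacity of the binary symmetric channel: C = log2(2) - A, in bits per message."""
    A = 0.0
    for q in (p, 1 - p):
        if q > 0:              # 0 * log 0 is treated as 0
            A -= q * math.log2(q)
    return 1 - A

for p in (0.9, 0.8, 0.7, 0.6, 0.5):
    print(f"p = {p}: C = {bsc_capacity(p):.3f} bits/message")
```

Running it gives approximately 0.531, 0.278, 0.119, 0.029, and 0 bits per message, matching the p = 0.9 and p = 0.5 values worked out in the lecture.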