Welcome to the 31st lecture on the subject of digital signal processing and its applications. We are now going to embark on a very important theme related to discrete systems, after having completed a fairly extensive discussion on the design of finite impulse response filters using the window-based design method. We now need to work out how we can realize systems. In fact, we have only hinted at the approach to realization in the past; now we need to treat that subject with some depth and detail. So let us put before ourselves the specific problem that we need to discuss. We have before us, for realization, a causal rational system, and the system function takes the following form: H(z) equals the summation, m going from 0 to capital M, of Bm z raised to the power minus m, divided by 1 minus the summation, l going from 1 to capital N, of Al z raised to the power minus l. So we have a numerator polynomial and a denominator polynomial, and our objective is to realize this system. Now, the approach to realization is essentially through what are called signal flow graphs, and we first need to define a signal flow graph a little more formally than we have done before. A signal flow graph is a collection of nodes and directed edges between nodes. We have different kinds of nodes: source nodes, intermediate nodes, and sink nodes. Nodes from which all the edges go outward, that is, nodes which only provide to other nodes, are called source nodes. Nodes which have edges coming in as well as going out are called intermediate nodes; they are neither source nor sink. They have some edges which bring material to the node and some edges which carry material away from the node. Finally, we have what are called sink nodes: nodes to which edges come but from which no edges leave. In fact, the words source and sink are fairly suggestive.
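To make the form of H(z) concrete, here is a minimal Python sketch that evaluates this rational system function at a single complex point z. The function name and the coefficient values in the example are hypothetical, chosen purely for illustration; the convention followed is that B[m] is the numerator coefficient of z^(-m) and A[l-1] is the feedback coefficient Al of the lecture's formula.

```python
# Evaluate the causal rational system function of the lecture:
#   H(z) = (sum_{m=0}^{M} B[m] z^-m) / (1 - sum_{l=1}^{N} A[l] z^-l)
# B = [B0, B1, ..., BM], A = [A1, A2, ..., AN] (note A starts at index 1
# in the formula, so A[l-1] in the list holds the coefficient Al).

def H(z, B, A):
    """Evaluate H(z) for one (possibly complex) value of z."""
    num = sum(B[m] * z ** (-m) for m in range(len(B)))
    den = 1 - sum(A[l - 1] * z ** (-l) for l in range(1, len(A) + 1))
    return num / den

# Illustrative first-order example: H(z) = (1 + 0.5 z^-1) / (1 - 0.9 z^-1).
# At z = 1 this is 1.5 / 0.1, i.e. approximately 15.
print(H(1.0, [1.0, 0.5], [0.9]))
```

Evaluating H on the unit circle, z = exp(j*omega), would give the frequency response of the same system.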
Source means that which gives; sink means that which consumes; intermediate is neither source nor sink. Now, it is best to understand this by taking an example, so let us take a signal flow graph in which we have all three kinds of nodes: source nodes, intermediate nodes and sink nodes. In the signal flow graph we have constructed here, there are 1, 2, 3, 4, 5, 6, 7 nodes and several edges, and for each edge we have written a multiplier on the edge; as we remember, an edge carries a multiplier on it. A good way to understand nodes and edges is this: nodes are like stations at which material is deposited or from which material is taken, and edges are like trucks which carry material from one station to another, with some machinery in the truck that processes the material being carried. Now, you must think of every truck that carries material away from a node as carrying the same material. It is not as if, when there are more trucks, the material is divided among the trucks; there is no conservation law here. Whatever material is available at a station is carried, in full, by every truck that leaves that station. As far as material arriving at a station is concerned, however, all the incoming edges deposit their respective material, all the trucks that come in to a station deposit their loads, and the material seen at the station is the sum, the linear combination, of all the material so deposited. So the rules are different for deposit and for take-away. For deposit, all the material that is brought in is added to form the material at that node. For take-away, the same material is taken away by every truck that leaves. Let me repeat once again: there is no conservation law here.
Anyway, if you look at this signal flow graph: N1 and N2 are examples of source nodes; N3, N4 and N5 are examples of intermediate nodes; and N6 and N7 are examples of sink nodes. To take an example of deposit and take-away, the edges with multipliers A1, A2 and A3 all carry away the same material from the node N1. That is, whatever material is provided by node N1 is carried equally well by these three edges: this edge multiplies the material by A1, this one multiplies it by A2, and this one by A3. At N3, you then have A1 times this material being deposited. At N6, several edges come together, each depositing its own multiplier times the material it carries, and all of these are added and deposited; so N6 is an example of a sink node, with no edges going outward. N3, N4 and N5, as I said, have edges going both inward and outward. If you look at N4, for example, the material at N4 is A3 times the material at N1 plus A4 times the material at N2, added together and deposited to form the material at the station N4. So much, then, for signal flow graphs as a mechanism for representing the transmission of sequences, signals or data. You must remember that a node in a signal flow graph can hold either a single number, or a z-transform, or any other entity on which the operation the edge performs is possible. In this context we have only single numbers, and we will later use signal flow graphs with single-number data to realize discrete Fourier transforms efficiently. But we will put z-transforms on the nodes, at the stations, to realize discrete systems, as we will do over the next few lectures.
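The two rules just stated, every outgoing edge carries the node's full value, and a node's value is the sum of gain times source value over its incoming edges, can be captured in a few lines of Python. This is a hedged sketch only: the graph below is hypothetical (loosely modeled on the lecture's seven-node example, not a faithful copy of it), and this simple evaluator handles only feedforward graphs; feedback loops are exactly what the delay elements will take care of later in the lecture.

```python
# Sketch of the signal flow graph rules: edges do NOT split a node's
# material (no conservation law); incoming deposits are summed.

def evaluate_sfg(sources, edges, order):
    """sources: {node: value} for the source nodes.
    edges: list of (from_node, to_node, gain) triples.
    order: a topological order of the non-source nodes."""
    values = dict(sources)
    for node in order:
        # deposit rule: sum of gain * source-value over incoming edges
        values[node] = sum(g * values[u] for (u, v, g) in edges if v == node)
    return values

# Hypothetical graph: two sources feeding an intermediate and a sink node.
vals = evaluate_sfg(
    sources={"N1": 2.0, "N2": 3.0},
    edges=[("N1", "N3", 0.5), ("N2", "N3", 2.0),    # both deposit into N3
           ("N3", "N6", 1.0), ("N1", "N6", -1.0)],  # N1's value is reused, not divided
    order=["N3", "N6"],
)
print(vals["N3"], vals["N6"])  # N3 = 0.5*2 + 2*3 = 7.0, N6 = 7 - 2 = 5.0
```

Notice that N1 feeds two edges and both carry its full value of 2.0; that is the take-away rule in action.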
So let us then come to the specific realization that we want to carry out, namely of the rational, causal, though not necessarily stable, system that we wrote down a few minutes ago. Let us first write down a linear constant-coefficient difference equation that realizes that rational causal system. There, the output y[n] is related to the input x[n] and its past samples, and also to the past samples of the output. So y[n] is a linear combination of the input samples from the most recent one to the sample capital M steps away, a combination which we have represented by script B here, plus a linear combination of the past capital N outputs starting from the output one sample away, represented by script A. So there are M + 1 terms in the input combination and capital N terms in the output combination. This is the difference equation that describes the rational causal system we had at the very beginning of this lecture. Now we want to draw a realization of that system using adders, multipliers and delays. What we will do is realize the feed-forward part first. We will take the weighted sum of the input samples and put that into a structure, and it is very easy to see what that structure is: take B0 times the input; delay the input by one sample and take B1 times that; and so on, with capital M such delays in cascade, so that you get x[n] here, x[n-1] here, and x[n-M] here. Take B0 times this, plus B1 times this, and so on, up to B(M-1) times the penultimate one and finally BM times x[n-M], and add them two at a time. We have agreed that we would like to use two-input adders so as to be uniform in our structure. Nothing, of course, stops us from using adders with more than two inputs, but it is always desirable to use the same kind of unit, repeated again and again, in a realization.
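The difference equation just written can be turned directly into a sample-by-sample computation with two separate delay lines, one for past inputs and one for past outputs, which is exactly the structure being built here. Below is a hedged Python sketch of this kind of realization; the function name and the coefficient values in the example are hypothetical, and the sign convention matches the lecture's equation (y[n] = sum of Bm x[n-m] PLUS sum of Al y[n-l]).

```python
# Direct-form-1-style realization of
#   y[n] = sum_{m=0}^{M} B[m] x[n-m] + sum_{l=1}^{N} A[l] y[n-l]
# with B = [B0..BM] and A = [A1..AN] (so A[l-1] holds Al).

def direct_form_1(x, B, A):
    """Filter the sequence x sample by sample; returns the output list."""
    M, N = len(B) - 1, len(A)
    x_delay = [0.0] * (M + 1)   # x[n], x[n-1], ..., x[n-M]
    y_delay = [0.0] * N         # y[n-1], ..., y[n-N]
    y = []
    for xn in x:
        x_delay = [xn] + x_delay[:-1]                       # shift input line
        feed_forward = sum(B[m] * x_delay[m] for m in range(M + 1))
        feedback = sum(A[l] * y_delay[l] for l in range(N))
        yn = feed_forward + feedback
        y_delay = ([yn] + y_delay[:-1]) if N > 0 else y_delay  # shift output line
        y.append(yn)
    return y

# Impulse response of y[n] = x[n] + 0.5 y[n-1]: 1, 0.5, 0.25, 0.125, ...
print(direct_form_1([1, 0, 0, 0], B=[1.0], A=[0.5]))
```

With A empty the feedback sum vanishes and the same code realizes a purely feed-forward (FIR) system.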
So we prefer to use two-input adders rather than adders with an arbitrary number of inputs. With that convention, we add two at a time: BM times x[n-M] plus B(M-1) times x[n-(M-1)] are added at the first node, and we continue adding one term at a time until we reach B0 times x[n]. Together, this gives us the feed-forward part of the sum, Bm x[n-m] summed for small m going from 0 to capital M, at this node. Now, for the other part of the output, let us assume that the output y[n] is generated somewhere here. We operate capital N delays on that output to give us y[n-1], y[n-2], all the way up to y[n-N]. As the equation suggests, we multiply y[n-1] by A1, y[n-2] by A2, and so on, until y[n-N] is multiplied by A capital N. We then sum these products two at a time, continuing until the term A1 times y[n-1] is included, and what is finally generated here is the expression script A. So we have script B generated there and script A generated here, and adding the two regenerates y[n]. We are in good shape: we have generated y[n] using what is called a feed-forward section and a feedback section.

Now, how many elements of each kind have we used in this realization? We have used N + M delay elements, N + M + 1 multipliers, and, as for the adders, N - 1 in the feedback path, M in the feed-forward path, and one more overall to add the feedback and feed-forward combinations, giving (N - 1) + M + 1 two-input adders in all. Let us quickly convince ourselves of this. You have capital M delays here and capital N delays there, so N + M delays. You have capital M two-input adders here, because the coefficients run all the way from B0 to B capital M, you have capital N minus 1 two-input adders there, and you have one more here. As far as multipliers go, you have M + 1 there and capital N here. That explains the numbers we have put down.

Now let us make an observation which will help us reduce the amount of hardware or software used in realizing the system. Notice that we have decomposed the system function into two parts, a so-called feed-forward part and a so-called feedback part, and we have operated the feed-forward part first and then the feedback part, because we have written H(z) in the following cascade decomposition: the numerator, the summation of Bm z raised to the power minus m for m going from 0 to capital M, has been put first, and the denominator has been put next. We may treat H(z) as the product of the numerator system function and the denominator system function. This is a product of two functions of z, and multiplication in the z-domain is of course commutative, so we could equally well put the feedback structure first and the feed-forward structure next. Let us indeed do that in the architecture. If you had only the feedback structure, realizing it in direct form 1, you would first put capital N delays, multiply the output of the first delay by A1 and so on up to A capital N, add two at a time, and finally complete the sum with the input to produce the output up to here. This realizes the denominator: it realizes HA(z), which, if you recall, has a numerator of one and a denominator given by 1 minus the summation. Together with that, you have HB(z) also being realized, in this way: you simply put a cascade of capital M delays, multiply the taps by B0, B1, up to B capital M, and add two at a time to produce the output, and this is indeed the final output here.

Now, when we put the system function in this form, something becomes evident immediately. Let us use the rules of signal flow graphs. You have a station here whose content is essentially the content of this station carried with a multiplier of one, so what is located at this station is the same as what is located at that station; there is no difference, because these are only arrows going outward. Therefore, what we have here is just this delayed by one sample, and what we have there is that delayed by one sample, so these stations must carry the same material: this material must be the same, and this material must be the same. Going down one delay at a time, each corresponding pair of stations in the two delay chains must carry the same material. We are, in fact, wasting stations here, and delays as well, because each station has the capability of providing as many outgoing edges as you desire; there is really no need to keep two stations carrying the same material all the time. So all that we need to do is fuse this train of delays and that train of delays. We do that by looking at which of capital N and capital M is larger, putting a single stream of delays with the maximum of N and M delays in it, and tapping off the outputs of the delays one after the other. That leads to a structure like this. Note carefully what is happening here: the middle node, as before, is placed here; you have delayed it one sample at a time along a stream of max(N, M) delays; you have taken A1 times what comes out of the first delay and fed it backward, and B1 times what comes out of the first delay and fed it forward; and you keep doing this, A1 and B1 here, A2 and B2 next, and so on. At the top branch, you need to multiply the middle node itself by B0. The rest of it is, of course, the same; there is no change.

Now, obviously, we have kept the number of multipliers the same, and there is no real change in the number of adders either. What has changed, and quite a bit, is the number of delays: from N + M delays down to the maximum of N and M delays. This architecture is therefore definitely more economical in terms of hardware or software requirements. This realization is called direct form 2, as against the earlier realization, which we call direct form 1. Just to flash both before you: this is direct form 1, with all the additional delays, and this is direct form 2, where we have economized on the number of delays.

Let us take an example: put capital M equal to 2 and capital N equal to 3, and draw the direct form 2 architecture. You can see very clearly that you have a middle node here. If capital M is 2 and capital N is 3, their maximum is 3, and therefore you need 3 delays in cascade in the stream. You tap off A1 times this, A2 times this and A3 times this, add them two at a time, and finally add the input to produce the intermediate node. Subsequently, we multiply the intermediate node by B0, this tap by B1 and this tap by B2, and add them, again two at a time. So here you have y[n] being generated by the feed-forward path; the feed-forward path is here and the feedback path is here. Finally, let us give names to each of these nodes in turn: V1, V2, V3, V4, V0 here; then, beginning from this node, V5, V6, V7, V8, V9 and finally V10; and then we have the input node X and the output node Y.
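The direct form 2 structure just described, with its single shared delay line of length max(N, M) holding the intermediate (middle-node) signal, can be sketched in Python as follows. As with the earlier sketch, the function name and the coefficient values for the M = 2, N = 3 example are hypothetical; only the structure, feedback taps computing the middle node w[n] and feed-forward taps computing y[n] from the same delay line, is taken from the lecture.

```python
# Direct form 2:
#   w[n] = x[n] + sum_{l=1}^{N} A[l] w[n-l]     (feedback taps)
#   y[n] = sum_{m=0}^{M} B[m] w[n-m]            (feed-forward taps)
# One delay line of length max(N, M) serves both sums.

def direct_form_2(x, B, A):
    """Filter x with B = [B0..BM], A = [A1..AN]; returns the output list."""
    M, N = len(B) - 1, len(A)
    w = [0.0] * max(M, N)       # shared delay line: w[n-1], ..., w[n-max(M,N)]
    y = []
    for xn in x:
        wn = xn + sum(A[l] * w[l] for l in range(N))                    # middle node
        yn = B[0] * wn + sum(B[m] * w[m - 1] for m in range(1, M + 1))  # output
        w = [wn] + w[:-1]       # one shift of the single delay line
        y.append(yn)
    return y

# The lecture's example sizes, M = 2 and N = 3: max(N, M) = 3 delays suffice,
# where direct form 1 would have needed N + M = 5. Coefficients are made up.
print(direct_form_2([1, 0, 0, 0, 0], B=[1.0, 0.5, 0.25], A=[0.3, 0.0, 0.0]))
```

For the same B and A, this produces exactly the same output sequence as the direct form 1 computation; only the internal storage differs.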