on a signal flow graph, the next task is to realize that signal flow graph in the form of an algorithm, or in the form of what is called pseudo assembly language or pseudo code. We need to be able to program structures; we cannot be satisfied simply with drawing them. So to program the structure we need to evolve what is called a generic loop. Remember, you are going to have input samples coming in all the time and output samples going out all the time, cycle by cycle. For every sampling instant you have an input sample coming in, some processing being done with that input sample together with a few past samples of the output and a few past samples of the input, and an output sample being generated; and all the work for that output sample has to be done within the time available to you between two successive input samples. So this is the generic loop that you would require to realize the signal flow graph: for each input sample, that is, in the nth sampling cycle, input the next sample, call it X_n, carry out the processing, and be ready for the next input sample. So there is a loop operating for every input sample here. After you input the next sample, you have to make sure that whatever is required for the next sample to be processed is in place before that sample comes in; that is why we say: be ready for the next sample. So let us again take the same signal flow graph and evolve a process to write down a pseudo code for it. Let us flash back to that signal flow graph. Now the rule is very simple. What really are delays? Delays, as you see here, are mechanisms for preparing yourself for the next sample. So the idea is very simple. How do you decompose the signal flow graph into an algorithm or into pseudo code? First, think of the delays as having been removed.
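The generic loop just described can be sketched in a few lines; this is a minimal sketch, in which `read_sample`, `process` and `write_sample` are hypothetical callables standing in for the input device, the signal flow graph computation, and the output device, and `None` is assumed to mark the end of the input stream.

```python
def run(read_sample, write_sample, process):
    """Generic per-sample loop: for each input sample, read, process, emit."""
    while True:
        x_n = read_sample()    # input the next sample X_n
        if x_n is None:        # end of stream (an assumption of this sketch)
            break
        y_n = process(x_n)     # all work must finish before the next sample arrives
        write_sample(y_n)      # output Y_n; we are now ready for the next sample
```

In a real-time setting the `while True` is driven by the sampling clock, and the body of `process` is exactly the pseudo code developed below.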
So take a pair of scissors, notionally, and cut off the edges bearing the delays. That would take you to a signal flow graph that looks like this. Compare the two: we have retained V0, V5, V6, V7, in fact all the nodes, but we have cut off all the edges carrying delays, and the multipliers are retained as they are. Now let us talk about being able to realize a signal flow graph. Once you have done this and generated a reduced signal flow graph without the delays, the signal flow graph is realizable only if it has no loops left. Loops are a succession of edges which begin from one node and end at the same node. If you look at the original signal flow graph here and at this succession of edges, this one, this one, this one and this one, they form a loop: you come from here, traverse these edges and reach back there. There is another loop here: you traverse this edge, then this one, then this one, and this, and that forms another loop. In fact, there is a third loop, which involves A3. However, when we cut the delays off, we have no loops left at all. You can verify that there is no loop in the resultant signal flow graph. And that gives us the secret of being able to realize a signal flow graph: there must be no loops which are devoid of delays. If you have a loop which does not contain a delay, there is a problem; if a loop contains a delay, it will get cut when you cut the delay. Now let us translate this reduced signal flow graph into a sequence of steps. What we need to do is to identify what are called the first level nodes, that is, nodes which can be calculated first, then nodes which can be calculated next, and so on, step by step. So once we have a signal flow graph like this, we have to identify the effective source nodes in the signal flow graph. You begin from the sources and go to the sinks.
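The "no delay-free loops" condition can be checked mechanically: cut every delay edge, then test the remaining graph for cycles. The sketch below assumes edges are given as `(src, dst, is_delay)` tuples and uses a standard depth-first cycle check; the representation is illustrative, not a fixed convention.

```python
def realizable(nodes, edges):
    """A signal flow graph is realizable iff, after cutting every delay edge,
    no loop (cycle) remains among the delay-free edges."""
    # Build the reduced graph: keep only the delay-free edges.
    adj = {n: [] for n in nodes}
    for src, dst, is_delay in edges:
        if not is_delay:
            adj[src].append(dst)
    # Standard three-color DFS cycle detection on the reduced graph.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in nodes}
    def dfs(u):
        color[u] = GRAY
        for v in adj[u]:
            if color[v] == GRAY or (color[v] == WHITE and dfs(v)):
                return True       # found a delay-free loop
        color[u] = BLACK
        return False
    return not any(color[n] == WHITE and dfs(n) for n in nodes)
```

A loop that passes through a delay edge disappears in the reduced graph, so it does not block realization; a loop made entirely of delay-free edges survives the cut and makes the graph uncomputable.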
That is a universal rule: if you have no loops in a signal flow graph, begin from the sources, and there could be multiple sources, and go to the sinks, step by step. As you move from the sources, you move one step and get the first level nodes; you move one more step and get the second level nodes; and you keep doing this until you reach the sinks. Now, different sinks can be at different levels. Some sinks might be reached in two levels; some sinks may require five levels to be traversed before they are reached. Whatever it be, the level is clear: from the sources, once you start propagating the material, you encounter nodes of increasing levels, step after step. For example, if you look at this reduced signal flow graph, V5, V6 and V7 are all examples of source nodes, they have nothing coming into them, and so is X. So let us take the material on V5 and write it down. In fact, it is easiest to begin with V4. Let us take V4 and put on it A3 times V7 first. So let us start ticking things off; we have to take care of the edges one after the other. So V4 is A3 times V7. The next step is to compute V3: V3 is whatever is the material on V4 plus A2 times V6. We have taken care of these two now, so let us put a double tick on them. The next step is to generate V2: V2 is V3 plus A1 times V5, following which we can generate V1. V1 is essentially V2 plus X; so V2 plus X generates V1. And V0, as you can see, is simply the same as V1: V0 takes what V1 has on it. We have taken care of these edges now. Now we can take care of the edges which go forward. We should begin with the lowest edge here, because we have additions to be performed on the way; so we begin with V9. V9 is essentially B2 times what is there on V6. And then we have V8, which is whatever is on V9 plus B1 times what is there on V5.
Subsequently we have V10 being generated. V10 is essentially whatever is on V0 multiplied by B0, plus what has been generated on V8; so V10 is V8 plus B0 times whatever is on V0. And finally, Y simply takes whatever is on V10 and accepts it as it is. So we have taken care of this edge as well. Now you must check that you have taken care of every one of the edges, and we have done that here. And this is where we end in terms of the program or the pseudo code; this is the last step. Once we have done this, we have completed the computation. But that is not all: we need to prepare for the next sample. You see, you have cut the delays. What do you mean by cutting the delays? You did away with the effect of the delays. What is the effect of the delays? The effect of a delay is to keep with you what its input was one sample earlier. And when you have a string of delays, you need to operate the last one in the string first, because the oldest sample needs to go away, and a one-step-newer sample needs to come to the oldest location. So you need to operate a string of delays with the last one, that is, the one holding the oldest sample, being operated first. That leads us to the step of preparation for the next input. How do we prepare for the next input? Look at the string of delays. The string of delays takes you from V0 to V5, V5 to V6, and V6 to V7; and in this string, this one is the last delay. Therefore you need to put at V7 what you had at V6; this is operated first. Then you put at V6 what you have at V5, and you put at V5 what you have at V0. This is the preparation that you need to do for the next sample. In a sense, preparation for the next input means putting the delays back in the string where they were; that is what we are saying here: put the delays back in the string. With this we have actually completed our job, because we have written the pseudo code.
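Putting the computation steps and the delay update together gives one pass of the per-sample loop. Below is a sketch under the lecture's graph: A1, A2, A3 are the feedback multipliers and B0, B1, B2 the forward multipliers (the lowercase names and the zero initial state are assumptions of this sketch); note that the delays are updated oldest-first, exactly as described.

```python
def make_filter(a1, a2, a3, b0, b1, b2):
    """One realization of the lecture's signal flow graph as a per-sample step."""
    state = {"v5": 0.0, "v6": 0.0, "v7": 0.0}   # delayed values; v7 is the oldest

    def step(x):
        v5, v6, v7 = state["v5"], state["v6"], state["v7"]
        # Feedback side, computed from sources toward the sink:
        v4 = a3 * v7
        v3 = v4 + a2 * v6
        v2 = v3 + a1 * v5
        v1 = v2 + x
        v0 = v1
        # Forward side, lowest edge first:
        v9 = b2 * v6
        v8 = v9 + b1 * v5
        v10 = v8 + b0 * v0
        y = v10
        # Prepare for the next input: operate the last delay first.
        state["v7"] = v6
        state["v6"] = v5
        state["v5"] = v0
        return y

    return step
```

With all feedback coefficients zero and b0 = 1 this step passes the input straight through, and with only b1 = 1 it delays the input by one sample, which is a quick sanity check on both the computation order and the delay update.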
We now have a pseudo assembly level language program to realize the signal flow graph, and we can convert that into whatever language we desire. There are different languages that can be used for digital signal processing, and different processors each have their own variants of instructions. But once we understand this basic signal flow graph and the manner in which it is converted into an assembly level program, we can, without too much difficulty, move from one language to another. You see, it is very clear that just as a signal flow graph implies a hardware realization, it also implies a software realization, and both of them are connected with resources. In the hardware realization, resources means specifically elements like delay elements, two-input adders, multipliers and so on. In software, resources means computations or operations. Each time you make an assignment, you are doing an operation. Delays are like assignments: one thing is assigned to another, and the delays operate at the end of each loop. Two-input adders are indeed operations, both of assignment and of summation. So in fact, in the way we wrote the program here, we did a multiplication, and then we combined a multiply with an add. So you have multipliers and adders being taken together in statements. If you remember, let me put back example statements for you. These are examples of statements we had: a multiply and then an add, that kind of thing. So multiply and accumulate is a typical kind of operation that you need in discrete time signal processing: multiply accumulate, MAC, multiply and accumulate. Adders fed by edges that carry multiplications on them require multiply-add kinds of operations. Delays essentially require assignments. Of course, multiply and accumulate also involves an assignment; you cannot do without an assignment there.
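The MAC operation discussed above is simply the fused pattern acc = acc + coeff * sample, with the assignment built in. A minimal sketch (the function name `mac` and the example coefficients are illustrative):

```python
def mac(acc, coeff, sample):
    # One MAC: multiply, add, and assign in a single operation.
    return acc + coeff * sample

# A chain of MACs, as in the forward path of the pseudo code:
acc = 0.0
for coeff, sample in [(0.5, 2.0), (0.25, 4.0)]:
    acc = mac(acc, coeff, sample)   # acc accumulates 0.5*2.0 + 0.25*4.0
```

On DSP hardware this pattern maps onto a single multiply-accumulate instruction, which is why statements in the pseudo code were deliberately written as "multiply, then add into a running value".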
With this, then, we have got a reasonable insight into the kind of processes that we need to convert a signal flow graph into a realization, either in hardware or in software. In the subsequent lectures we will actually look at different forms of realization. We have looked at direct form 1 and direct form 2, and we have also convinced ourselves of how we can write down a signal flow graph for direct form 1 and direct form 2. But now we are going to have other, alternate structures which can also realize the same system function. Specifically, we will look at cascade structures, parallel structures, and then the lattice structure. We shall do this beginning from the next lecture onwards. Thank you.