Hello everyone. This is Vishwanath Chavan, assistant professor, Department of Computer Science and Engineering, Walton Institute of Technology, Singapore. In this session I will explain reservation and latency analysis. By the end of this session, you will be able to analyze the reservation table and the latency of a non-linear pipeline processor. We will focus on the following terms: non-linear pipeline processors, dynamic pipelines, analysis of reservation tables and latency, and the three kinds of inter-stage connections, namely streamline, feed-forward, and feedback connections. We will see all of these terms in this topic.

Now let us look at the given diagram, which shows a three-stage pipeline and explains its working. Since there are three stages in total, S1, S2, and S3, it is called a three-stage pipeline. Stage S1 accepts the input. Next, observe the connections between the different stages. These connections are of three types: streamline, feed-forward, and feedback.

A streamline connection connects a stage to its immediate next stage: S1 to S2, S2 to S3, and in a longer pipeline S3 to S4, and so on. A feed-forward connection skips over one or more stages: here, if we observe, S1 is connected directly to S3, skipping S2, which is why it is called a feed-forward connection. A feedback connection carries the output of a later stage back to an earlier stage as input: here the output of S3 is fed back to S2, and similarly S3 to S1 is another feedback connection. So the pipeline is made up of these three types of connections. Finally, note the outputs in the diagram: the output of function X is taken at the end of stage S1, and the output of function Y at stage S3.
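The classification of the three connection types can be captured in a small sketch. This is not part of the lecture; it simply encodes the rule just described (immediate successor is streamline, skipping ahead is feed-forward, going backward is feedback) for the edges of the three-stage pipeline in the diagram.

```python
# A sketch of the three-stage pipeline's connections. An edge (a, b) means
# the output of stage Sa feeds the input of stage Sb.
edges = [(1, 2), (2, 3),   # streamline: a stage to its immediate next stage
         (1, 3),           # feed-forward: S1 to S3, skipping S2
         (3, 2), (3, 1)]   # feedback: S3's output fed back to S2 and to S1

def connection_type(a, b):
    """Classify an edge (a, b) by the rule given in the lecture."""
    if b == a + 1:
        return "streamline"      # immediate successor
    if b > a + 1:
        return "feed-forward"    # skips at least one stage
    return "feedback"            # goes back to an earlier stage

for a, b in edges:
    print(f"S{a} -> S{b}: {connection_type(a, b)}")
```

Running this lists each connection of the diagram with its type, which is exactly the classification used in the rest of the session.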
Now let us see how functions X and Y pass through the different stages and how their final outputs are produced. The reservation table here describes function X from the diagram: the rows are the three stages S1, S2, S3, and the columns are the time slots t1 through t8. Initially X moves through S1, S2, and S3, which are streamline connections. From S3 it comes back to S2 through the feedback connection, then goes to S3 once again, then comes back to S1 through the feedback connection from S3 to S1. From S1 it goes to S3 through the feed-forward connection, and finally comes back to S1 through the feedback connection, and hence the final output X is delivered at S1. So the stage sequence for X is S1, S2, S3, S2, S3, S1, S3, S1, and this reservation table records exactly this flow of data between the different stages.

Similarly, the second table is for function Y. It has the same three stages S1, S2, S3, and the sequence is S1 to S3 (feed-forward), S3 to S2 (feedback), S2 to S3, S3 to S1 (feedback), and then S1 to S3 again, so it uses a mixture of feed-forward and feedback connections. The final output Y is delivered at stage S3, since the last mark for Y is there.

Now answer this question: what is a non-linear pipeline? Pause the video and write your answer. A non-linear pipeline, also called a dynamic pipeline, can be configured to perform variable functions at different times, and it has feed-forward and feedback connections in addition to streamline connections.

Now we will focus on latency analysis. This table shows the collisions that occur with scheduling latency 2. It has the three stages S1, S2, S3 and time slots t1 onwards. If we observe the table, function X1 flows as before through S1, S2, S3, then S2, then S3, then S1, then S3, then S1. The next initiation, X2, starts at stage S1 at time t3 and follows the same sequence of stages.
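The reservation tables can also be analyzed programmatically. The sketch below (an illustration, not from the lecture) reconstructs the stage sequences of X and Y as described above and derives the forbidden latencies: whenever the same stage is used twice, the distance in clock cycles between those two uses is a latency at which a second initiation would collide with the first.

```python
from itertools import combinations

# Stage sequences reconstructed from the reservation tables described above.
X = ["S1", "S2", "S3", "S2", "S3", "S1", "S3", "S1"]  # function X, 8 cycles
Y = ["S1", "S3", "S2", "S3", "S1", "S3"]              # function Y, 6 cycles

def forbidden_latencies(sequence):
    """Distances between any two uses of the same stage are forbidden:
    initiating a new task at such a latency makes both tasks request
    that stage in the same clock cycle, i.e., a collision."""
    by_stage = {}
    for t, stage in enumerate(sequence, start=1):
        by_stage.setdefault(stage, []).append(t)
    forbidden = set()
    for times in by_stage.values():
        for t1, t2 in combinations(times, 2):
            forbidden.add(t2 - t1)
    return sorted(forbidden)

print(forbidden_latencies(X))  # -> [2, 4, 5, 7]
```

Note that both 2 and 5 are forbidden latencies for X, which is why the two scheduling examples that follow, with latencies 2 and 5, each produce collisions.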
The flow of X2 is the same as that of X1, but it is delayed by 2 clock cycles. The latency is the number of clock cycles separating two initiations; here there are 2 clock cycles between the initiations of X1 and X2, and hence we call it latency 2.

The term collision is seen if you observe time slot t4, stage S2: both functions X1 and X2 are requesting the same resource, stage S2, in the same clock cycle. Hence we call this a collision, and this table shows the collisions with scheduling latency 2. There are further collisions: between X2 and X3 at stage S2 during time slot t6, and during time slot t7 at stage S3, where all three functions X1, X2, and X3 request the same resource, S3, at the same time. Like this, a number of collisions occur at latency 2.

If you observe the second example, a collision occurs with latency 5. X1 has the same flow as before, and X2 starts from t6. The flow of X2 is the same, but it is delayed by 5 clock cycles from X1, and hence the latency is 5; the collision occurs there at t6, where both initiations request stage S1.

These are the references I have used. Thank you.
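The collision tables above can be replayed with a short simulation. This sketch (an illustration, not from the lecture) starts each initiation of X a fixed latency apart, records which task occupies which stage in each clock cycle, and reports every slot requested by more than one task.

```python
# Stage sequence of function X, as in the reservation table above.
X = ["S1", "S2", "S3", "S2", "S3", "S1", "S3", "S1"]

def collisions(sequence, latency, initiations):
    """Return {(clock, stage): [tasks]} for every slot where two or more
    initiations request the same stage in the same clock cycle."""
    usage = {}
    for k in range(initiations):
        start = 1 + k * latency          # X1 starts at t1, X2 at t1+latency, ...
        for offset, stage in enumerate(sequence):
            usage.setdefault((start + offset, stage), []).append(f"X{k + 1}")
    return {slot: tasks for slot, tasks in usage.items() if len(tasks) > 1}

# Latency 2 with three initiations: reproduces the collisions described above,
# including X1/X2 at (t4, S2) and the triple collision X1/X2/X3 at (t7, S3).
for (clock, stage), tasks in sorted(collisions(X, 2, 3).items()):
    print(f"t{clock} {stage}: {' and '.join(tasks)}")

# Latency 5 with two initiations: X2 starts at t6 and collides with X1 at S1.
print(collisions(X, 5, 2))
```

Changing `latency` to any value outside the forbidden set (for example 1, 3, or 6 with this table) makes the function return an empty dictionary, i.e., a collision-free schedule.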