We are going to consider a very simplistic kind of model: say an ant or a bug or an insect which decides how to make its next move based on where its friends are, because the friends have some hint about where the sugar cube is. They send this information over the network, and the ants make their moves based on it. So what is the basic model going to be? If I write p_i = (x_i, y_i) for the position of the i-th ant, then the dynamics is going to look something like this: ṗ_i = Σ_{j ~ i} (p_j − p_i), where the notation j ~ i means that i is listening to j; in other words, j is a neighbor of i. In graph-theoretic language, there is an edge connecting the i-th node to the j-th node. If I am not even connected to you, if I do not know what you think, how am I going to be influenced by your opinion? If I am always broadcasting my opinions throughout the day on radio and TV, you get to hear my opinion, but I never get any feedback from you. Unlike that, in this case "j and i are neighbors" means both of them listen to each other's opinions, because it is an undirected graph that we have in mind. Is it clear why this is a logical way to formulate such dynamics? No, it is not some new variable; this is just notation: j ~ i means j is a neighbor of i, that is, there is an edge connecting j with i. It is graph-theoretic notation, something new, so do not blame yourself if you have not seen it; we never explained this in linear algebra, and I am introducing it now. Now just think about what this means. I can decouple the dynamics along the x and the y directions and write ẋ_i = Σ_{j ~ i} (x_j − x_i) and ẏ_i = Σ_{j ~ i} (y_j − y_i).
So these dynamics are completely decoupled; I might as well look at a single differential equation in terms of the x's or the y's, it does not matter. So what does this eventually lead to? I put it to you that if I stack up all these p_i's together as p = (p_1, p_2, …, p_n), the stacked-up position of all the n agents, a vector of size 2n, then what is ṗ? Is it not equal to −L p, minus the Laplacian times p? Because this is exactly what the Laplacian encodes; that is why we were so invested in the Laplacian. Look at the i-th component of the stacked equation: it is (p_1 − p_i) + (p_2 − p_i) + (p_3 − p_i) + …, but not every one of those terms will appear. Suppose node 3 is not connected to node i; then that term will not appear, because of the restriction that a term appears only if j is a neighbor of i. So the stacked dynamics is exactly ṗ = −L p. Now if I solve this differential equation subject to the initial condition p(0) = p_0 ∈ R^{2n}, what do you think the solution is going to look like? Working coordinate-wise (the x and y parts evolve under the same n × n Laplacian), it is x(t) = Σ_{i=1}^{n} c_i e^{λ_i t} v_i, where the λ_i are the eigenvalues of −L and the v_i are the corresponding eigenvectors. This is just the theory of linear differential equations. I am not going to go very deep into this, because if you have done a course on control theory, or are about to, you will learn this anyway; it is ordinary differential equations at the end of the day. If you like, you can also write the solution as e^{At} p_0 with A = −L. That is another representation.
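The claim that the neighbor-sum update is exactly minus the Laplacian times the state can be sanity-checked numerically. Below is a minimal sketch; the 3-node path graph and the particular initial vector are hypothetical examples chosen for illustration, not from the lecture.

```python
# Hypothetical example: 3-node path graph 0 - 1 - 2.
# Check that the neighbor-sum dynamics x_i' = sum_{j ~ i} (x_j - x_i)
# coincides with -L x, where L = D - A is the graph Laplacian.

edges = [(0, 1), (1, 2)]          # undirected edges of the path graph
n = 3
neighbors = {i: [] for i in range(n)}
for a, b in edges:
    neighbors[a].append(b)
    neighbors[b].append(a)

# Laplacian L = D - A: degree on the diagonal, -1 for each edge
L = [[0] * n for _ in range(n)]
for i in range(n):
    L[i][i] = len(neighbors[i])
    for j in neighbors[i]:
        L[i][j] = -1

def xdot_neighbor_sum(x):
    """x_i' as the sum of (x_j - x_i) over neighbors j of i."""
    return [sum(x[j] - x[i] for j in neighbors[i]) for i in range(n)]

def xdot_laplacian(x):
    """x' = -L x, computed entry by entry."""
    return [-sum(L[i][j] * x[j] for j in range(n)) for i in range(n)]

x = [3.0, -1.0, 5.0]
print(xdot_neighbor_sum(x))   # [-4.0, 10.0, -6.0]
print(xdot_laplacian(x))      # the same vector: the two forms agree
```

The two functions return identical vectors for any state, which is exactly the statement that stacking the neighbor sums gives ṗ = −L p.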
We have seen the 2 by 2 case, by the way, just before we started our venture into eigenvalues and eigenvectors. The same thing holds in the n by n case: it is a system of n linear first-order differential equations. So this is the solution; how do you solve for these c's? These c's are obtained essentially from the initial conditions. See, how do you solve differential equations? You get a solution with some constants. How do you evaluate the constants? You know the initial condition, so you equate at the initial condition by putting t = 0. But there is something more interesting here. We are going to assume that the graph behind this Laplacian is connected. After all, if there are multiple disjoint groups which are not even communicating with each other, then there is no point in modeling the motion of ants or vehicles across such groups. So we will assume that G is a connected graph. If G is a connected graph, what do we know about the eigenvalues? The eigenvalues of L are non-negative, and exactly one of them is 0, no more, because the graph is connected. So I might as well write the solution out: λ_1 is just 0, so e^{λ_1 t} is just a constant, giving the term c_1 v_1. And what is v_1? Is it not the all-ones vector? We have just seen that the kernel of the Laplacian is spanned by the all-ones vector. So the solution is c_1 1 + c_2 e^{λ_2 t} v_2 + … + c_n e^{λ_n t} v_n. I do not care whether the λ's are distinct or not; this matrix L is always going to be diagonalizable. If it is diagonalizable, then no terms such as t or t² appear here, because the Jordan form is just the diagonal form. We will see that in greater detail also.
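These spectral facts, that the all-ones vector spans the kernel and the remaining eigenvalues are strictly positive, can be verified directly on a small connected graph. A sketch, again using the hypothetical 3-node path graph, whose Laplacian happens to have the known eigenvalues 0, 1, and 3:

```python
# Laplacian of the 3-node path graph 0 - 1 - 2 (hypothetical example)
L = [[1, -1, 0],
     [-1, 2, -1],
     [0, -1, 1]]

def matvec(M, v):
    """Plain matrix-vector product."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# The all-ones vector spans the kernel: L @ 1 = 0,
# because each row of L sums to zero (degree minus the number of neighbors).
print(matvec(L, [1, 1, 1]))   # [0, 0, 0]

# The remaining eigenvalues are strictly positive: L v = lambda v
for lam, v in [(1, [1, 0, -1]), (3, [1, -2, 1])]:
    print(matvec(L, v) == [lam * vi for vi in v])   # True, True
```

For any graph, L 1 = 0 holds because every row of L = D − A sums to zero; connectedness is what guarantees no other independent kernel vector exists.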
So I will just do a couple of steps and you will be there at the result. There are multiple ways of getting at this, and I am never sure which is the most convenient way to explain, but let us hold on to that thought for a moment. From the solution, can I not write it in matrix form? Actually, you know what, I will just work with x and drop the y's, because otherwise everything is of size 2n. The dynamics along the x direction mimics the dynamics along the y direction, no? Because p is just the 2-tuple of x and y. The y-direction dynamics is exactly the same, except that the initial conditions are different; if I know how to solve the x direction, I know how to solve the y direction, since they are decoupled. So in the x variables, with the first eigenvector being the all-ones vector (let us not pretend we do not know that already), I can write x(t) = [1 v_2 … v_n] diag(1, e^{λ_2 t}, …, e^{λ_n t}) (c_1, c_2, …, c_n)^T. I am just writing the same sum in a different fashion; make sure the first diagonal entry is 1. Now, what are these c's? How do we evaluate them? We know that at t = 0 the exponentials do not matter, so x_0 = c_1 1 + c_2 v_2 + … + c_n v_n, which is nothing but [1 v_2 … v_n] (c_1, c_2, …, c_n)^T. This means (c_1, c_2, …, c_n)^T = [1 v_2 … v_n]^{-1} x_0. What I am doing here is very standard; I am not using any property of the Laplacian yet. This works for any diagonalizable matrix; here I just happen to know that the first column is the all-ones vector, and you could just as well replace it with a general v_1. The result is very generic: the coefficients are the inverse of the eigenvector matrix times the initial vector x_0.
So I might as well substitute this back: [1 v_2 … v_n], then diag(1, e^{λ_2 t}, …, e^{λ_n t}), and then the c's replaced by the inverse times x_0. Now what do you know about this matrix? It is the matrix of eigenvectors of the Laplacian, is it not? So it is an orthogonal matrix. What do you know about its inverse? It is just the transpose of this fellow. So I might as well write the coefficients as (1^T x_0, v_2^T x_0, …, v_n^T x_0)^T. Is this clear so far? The inverse is just the transpose; that is the special case. Up until this point there was nothing special that I harnessed, remember: for any matrix the solution comes to this form, and that is just the diagonalized form of e^{At}. This is true of any general diagonalizable matrix. Here I just happen to know, in addition, that one of the eigenvalues is 0, its corresponding eigenvector is the all-ones vector, and the other eigenvectors form an orthogonal basis. That is all the knowledge I have used. So let us write it out now: x(t) = [1 v_2 … v_n] diag(1, e^{λ_2 t}, …, e^{λ_n t}) (1^T x_0, v_2^T x_0, …, v_n^T x_0)^T. The λ's here are the eigenvalues of −L, remember, not of L, because our dynamics was governed by minus the Laplacian. So what happens as t → ∞, the asymptotic behavior of the system? What happens to each of these exponentials? We know the Laplacian is positive semi-definite, with only one eigenvalue 0 and all the others positive. So what do we know about the eigenvalues of minus the Laplacian? Exactly one is 0, and all the others are negative. So these λ's are all negative, and as t → ∞ each of these exponentials decays to 0.
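The eigen-expansion of the solution can be checked against a direct numerical integration of ẋ = −L x. A minimal sketch, on the hypothetical two-node graph with a single edge, where L = [[1, −1], [−1, 1]] has eigenvalues 0 and 2 with orthonormal eigenvectors v_1 = (1, 1)/√2 and v_2 = (1, −1)/√2, so the exact solution is x(t) = c_1 v_1 + c_2 e^{−2t} v_2 with c_i = v_i^T x_0:

```python
import math

# Hypothetical 2-node example: one edge, L = [[1, -1], [-1, 1]],
# eigenvalues of -L are 0 and -2, orthonormal eigenvectors below.
s = 1 / math.sqrt(2)
v1, v2 = (s, s), (s, -s)
x0 = (4.0, 0.0)

# coefficients c_i = v_i . x0 (transpose of the eigenvector matrix times x0)
c1 = v1[0] * x0[0] + v1[1] * x0[1]
c2 = v2[0] * x0[0] + v2[1] * x0[1]

def x_exact(t):
    """Eigen-expansion: x(t) = c1*v1 + c2*exp(-2t)*v2."""
    decay = math.exp(-2 * t)
    return (c1 * v1[0] + c2 * decay * v2[0],
            c1 * v1[1] + c2 * decay * v2[1])

def x_euler(t, steps=100000):
    """Forward-Euler integration of x' = -L x, as an independent check."""
    h = t / steps
    a, b = x0
    for _ in range(steps):
        a, b = a + h * (b - a), b + h * (a - b)
    return (a, b)

print(x_exact(1.0))   # approx (2.2707, 1.7293), i.e. (2 + 2e^-2, 2 - 2e^-2)
print(x_euler(1.0))   # approximately the same vector
```

Note that the sum of the two components stays at 4 for all t: the c_1 term contributes equally to both entries and the decaying term cancels, which foreshadows the convergence-to-average result.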
So we are essentially left with [1 v_2 … v_n] times diag(1, 0, …, 0) times (1^T x_0, v_2^T x_0, …, v_n^T x_0)^T; there is this huge vacuum where the decayed terms used to be. What do you think this does? When I hit the coefficient vector with this diagonal matrix, it does not matter what the other entries are: all that survives is the first one, and that gets operated on by the first column. So what is 1^T x_0, the inner product of the all-ones vector with x_0? It is just the sum of the initial conditions of all the fellows, Σ_i x_i(0). Now, I should actually have normalized the all-ones vector; sorry about that, wherever I have missed it, read (1/√n) 1 in place of 1, otherwise the eigenvector matrix is not an orthonormal matrix and its inverse is not its transpose, and the answer does not match the average that I have in mind. That is how I detected something was amiss: only unit-norm eigenvectors give the identity, so we need an orthonormal basis. What happens then? The surviving term is (1/√n) 1 times (1/√n) 1^T x_0, which is (1/n) Σ_i x_i(0) times the all-ones vector: a scalar, the average, times the all-ones vector. That is your final solution. As t → ∞, the state converges to the average of the initial conditions of all the fellows. So if you have a connected graph of people listening to each other, exchanging information without malice or biases, everyone will eventually come to agreement at the average. Now whether you like it or not is up to you. You might say, oh, the good people are getting more mediocre, but also the weaker people are getting stronger. Good or bad, I leave it to you, but that is it. That is where consensus or agreement will happen: at the average of the initial conditions, when the graph is connected.
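The whole conclusion, that a connected undirected graph drives every agent to the average of the initial conditions, can be watched happening in a short simulation. A sketch, again on the hypothetical 3-node path graph, integrating ẋ = −L x by forward Euler:

```python
# Hypothetical example: 3-node path graph 0 - 1 - 2, consensus dynamics
# x_i' = sum over neighbors j of (x_j - x_i), integrated by forward Euler.
neighbors = {0: [1], 1: [0, 2], 2: [1]}
x = [9.0, 0.0, 3.0]
avg = sum(x) / len(x)          # 4.0, the average of the initial conditions

h = 0.01                       # step size (h * lambda_max = 0.03 << 2, stable)
for _ in range(5000):          # integrate out to t = 50
    x = [xi + h * sum(x[j] - xi for j in neighbors[i])
         for i, xi in enumerate(x)]

print(x)    # every entry is (numerically) 4.0: consensus at the average
```

The sum of the states is conserved at every Euler step, because each edge contributes equal and opposite terms to its two endpoints; that conservation, plus convergence to agreement, is exactly why the limit must be the average.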
You can tweak the final value if you have a directed graph, but with an undirected graph it is always going to be the average. So we will end this lecture here today, and in the next and final lecture we shall try to see another application of this, perhaps in the domain of economics. Thank you.