So, it turns out that the question we are asking is this: Lx is equal to some v, and if you want to solve for x, from our knowledge of linear algebra we already know that we need L to have a trivial kernel; L has to be non-singular. Now, does this L turn out to be non-singular or singular, which is it? Can this L ever be non-singular? As we shall show, through arguments and through the description of another matrix related to a graph, this L must necessarily be a singular matrix. You cannot help it: for a simple graph it must be singular, and therefore this problem will not have a unique solution unless you decide to earth one of those node potentials and treat it as the base datum; otherwise there will always be something sitting in the kernel. So this will happen to have solutions, but multiple solutions, because this, after all, obeys the laws of physics, you see. That is where your abstraction takes you: a solution to this problem will always exist, but for a given right-hand side there will be multiple possibilities. Earlier I gave you the node potentials already and told you these are the edge quantities you have to determine; now the node potentials are the unknowns, and if you want to fix them uniquely, you will have to invert this matrix somehow. We will next show that this matrix is never going to be invertible for a simple graph.

For that we need another description of the Laplacian. Up until now we have seen that the Laplacian is given by D minus A, degree minus adjacency. Now we will define another matrix. So let us not take a regular or a complete graph this time; let us just look at this graph, to make things a little more interesting, and define a matrix called the incidence matrix. The incidence matrix has one row per vertex, so the number of rows equals the number of vertices, and it is generally not a square matrix: the number of columns equals the cardinality of the edge set. We say it is n cross m, where n is the number of nodes and m is the number of edges. And how is it described? Again, I will not define it formally; its entries take three values: 0, 1 and minus 1. So what does incidence mean here? Every column of the incidence matrix corresponds to an edge and every row corresponds to a vertex. How many rows should this one have? 4, right? So let me label the vertices 1, 2, 3 and 4: the first row is for vertex 1, the second for vertex 2, the third for vertex 3 and the fourth for vertex 4. And how many edges are there? 1, 2, 3, 4, 5. So let us label these too, without the circles: 1, 2, 3, 4 and 5.

Now, when I fill the column for edge 1, I need to do something first. It turns out that for a given undirected graph, the incidence matrix that you write and the incidence matrix that your friend writes need not be the same. Why? Where does that stem from? From the fact that, in order to write the incidence matrix, what I assign to every edge is not a direction but an orientation, and this orientation can be completely arbitrary; which is why I said that I might assign an orientation like this to this edge, and you might just flip it around. So let me just go ahead and assign an arbitrary orientation anyway.
Once I have given an orientation, the incidence matrix can be fixed up. Look at edge 1, the first column. The first edge starts from vertex 1, so in the first position I write 1; it terminates at vertex 4, so in the fourth position I write minus 1, and the rest are 0s. Similarly, the second edge starts at 4 and ends at 3: 1 at position 4, minus 1 at position 3, the rest 0s. The third edge starts at 3 and ends at 2, the rest 0s; the fourth edge goes from 2 to 1, the rest 0s; and the fifth goes from 4 to 2. So, that is my incidence matrix.

Now, you could have given any arbitrary orientations and these signs would have flipped. So, lots of possibilities there. In fact, every one of these edges can have two orientations, so you can have 2 to the cardinality of the edge set many different combinations. But it matters not; I do not even care about the fact that this is not unique. Why? Because just like the degree and the adjacency, the incidence matrix paves the way for a construction of, again, the graph Laplacian. Again, I will not prove this very rigorously, but I will show you why it has to be true, particularly through this example. I put it to you that the Laplacian we have so far defined in terms of the degree and the adjacency is nothing but E E transposed.

We already know that the diagonal entries of the Laplacian are the degrees, so let us just check the diagonal entries of this object; it should not be too hard. How do you get the diagonal entries of E E transposed? You take a particular row of E and take its inner product with itself. It is always going to be 1 times 1 and minus 1 times minus 1: whether your friend chooses a minus 1 here and a 1 here does not matter, because in any case the 1 gets multiplied by 1 and the minus 1 by minus 1. So what can we conclude? For the Laplacian, the orientation matters not. Why should this number equal the degree? Because what matters at the end of the day is the number of non-zero entries; the sign does not. When you take the inner product of any row with itself, every non-zero entry contributes a 1 to the sum. So the diagonal entries only count the number of non-zero entries in each row. But what is the number of non-zero entries in a row of the incidence matrix? Whether the orientation points into the vertex or out of it, it is exactly the number of edges incident on that vertex; in other words, it is just the degree. So at least the diagonal entries of this fellow definitely match the entries of the degree matrix.

All that you now need to check are the off-diagonal entries. How do you get those? You take one row and some other row and take their inner product. What kind of numbers can you get from that? For instance, if I want the (1, 3) entry of the Laplacian, I take the inner product of the first row with the third row. The zeros do not matter, so only those positions where both row vectors are non-zero can contribute anything.
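To make the claim L = E E transposed concrete, here is a minimal numpy sketch of the example on the board, with the five edge orientations exactly as read out above (1 to 4, 4 to 3, 3 to 2, 2 to 1, 4 to 2); the adjacency matrix is that of the undirected graph underneath those orientations.

```python
import numpy as np

# Incidence matrix E (4 vertices x 5 edges) for the lecture's example.
# Columns are the oriented edges: 1->4, 4->3, 3->2, 2->1, 4->2.
E = np.array([[ 1,  0,  0, -1,  0],   # vertex 1
              [ 0,  0, -1,  1, -1],   # vertex 2
              [ 0, -1,  1,  0,  0],   # vertex 3
              [-1,  1,  0,  0,  1]])  # vertex 4

# Degree and adjacency matrices of the same undirected graph.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0]])
D = np.diag(A.sum(axis=1))

# The claim: E E^T equals the Laplacian D - A, whichever orientation you chose.
print(np.array_equal(E @ E.T, D - A))   # True
```

Flip the signs of any column (your friend's orientation) and the product E E transposed is unchanged, which is exactly the point being made.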
If one row has a 0 where the other has a 1 or a minus 1, that position contributes nothing to the inner product. The only way a position contributes something is when both rows have non-zero entries at that same position. But by the very construction, when is that possible? When there is an edge connecting those two vertices. And when two rows of the incidence matrix are both non-zero in the same position, the entries must be a 1 and a minus 1, opposite signs, so the contribution is always a minus 1. But an edge connecting two vertices is precisely the definition of adjacency, right? So the off-diagonal entry is minus 1 exactly when the two vertices are adjacent.

So we have indeed verified every entry of this E E transposed; in fact, I have given you a formal proof in words. If you write this down, this is exactly what the proof looks like. With this example it is easier to visualize, but all you have to check is what the diagonal entries of E E transposed turn out to be and what the off-diagonal entries turn out to be. The off-diagonal entries turn out to be exactly the entries of the negative of the adjacency, and the diagonal entries turn out to be just the degrees of the vertices. So, just like you have D minus A, you also have E E transposed as the Laplacian, right?

Now, why should this Laplacian necessarily be singular? Sorry? Of the Laplacian? Yes, right. Yes, the determinant must be 0, but we are going to go into something even deeper than that. You are right; that is a very straightforward way of seeing why the Laplacian must be singular: its row sum is 0. And by the same token, its column sum is also 0, because it is symmetric in the case of an undirected graph. I urge you to verify that and convince yourself it is true. That is a legitimate proof. But the point is we also want to see something else, because we have learned some fancy terms in this course, like kernels and such. Let us try to characterize the kernel of the Laplacian through the kernel of something else, because that will help us go a step forward. I am going to claim that the kernel of the Laplacian is the same as the kernel of E transposed. This is probably the one proof I will do a little rigorously, because it harkens back to the formal linear algebra we have been doing until the last class.

Why is this going to be true? By the way, if I tell you that the Laplacian is positive semi-definite, would you agree? Just look at the Laplacian and take any x transposed L x. That is x transposed E E transposed x, which is nothing but (E transposed x) transposed times (E transposed x), which is the norm of E transposed x, whole squared, which is greater than or equal to 0. So the Laplacian is definitely positive semi-definite. Also, the Laplacian is symmetric, is it not? Of course; otherwise we could not even have spoken of positive semi-definiteness the way we did. So the Laplacian is also symmetric. What do we know about symmetric matrices? Real eigenvalues, always diagonalizable, eigenvectors all orthogonal.
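A quick numerical sanity check of the semi-definiteness argument; the vector x here is just an arbitrary random test point, and the matrix is the same example E as before.

```python
import numpy as np

E = np.array([[ 1,  0,  0, -1,  0],
              [ 0,  0, -1,  1, -1],
              [ 0, -1,  1,  0,  0],
              [-1,  1,  0,  0,  1]])
L = E @ E.T

rng = np.random.default_rng(0)
x = rng.standard_normal(4)

# x^T L x = (E^T x)^T (E^T x) = ||E^T x||^2, hence never negative.
print(np.isclose(x @ L @ x, np.linalg.norm(E.T @ x) ** 2))  # True
print(np.linalg.eigvalsh(L) >= -1e-12)                      # all True
```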
They readily provide you with an orthogonal basis for the entire vector space; all those beautiful properties that we have learned. But I digress a bit; back to this. First we will show one inclusion. Suppose v belongs to the kernel of E transposed. Then E transposed v equals 0, which implies E E transposed v equals 0, which implies L v equals 0, which implies v belongs to the kernel of L. One side of the inclusion is done. For the second part, suppose v belongs to the kernel of L; and actually the proof is here, if I may point out. It implies E E transposed v equals 0. It also implies v transposed E E transposed v equals 0, which implies the norm of E transposed v, squared, equals 0. But a norm can only be 0 if its argument is the zero vector, by the very definition of a norm; all of this we have learned. Which means v belongs to the kernel of E transposed.

So, if I want to characterize the kernel of the Laplacian and show that it is non-trivial, it suffices to convince you that the kernel of E transposed is non-trivial. Whatever is sitting in the kernel of E transposed is sitting inside the kernel of L, the Laplacian. So if I convince you that E transposed must always have a non-trivial kernel, I am done showing that L must always be singular.

So observe, from the couple of examples I have just shown you: what is always going to be sitting in the kernel of E transposed? What is the sum of every column of E? Every column of E corresponds to an edge. An edge has some starting vertex, where its entry is 1, and some terminating vertex, where its entry is minus 1, and nothing else. So the column sum of E, or the row sum of E transposed, is always 0. So the all-ones vector, which I denote 1_n, belonging to R^n, is in the kernel of E transposed. Just like your friend said a while back: take the row sum or the column sum, either way it is 0, so that vector is in the kernel. So it always has a non-trivial kernel; you cannot help it.

But we do not just stop there. There is this question of a connected graph versus a graph that is not connected, and it is in fact the connected graphs we are interested in. What are graphs used to model? Interactions. If you have a bunch of people who speak only French and a bunch of people who speak only Hindi or English or any other language you might think of, can anyone from the first group interact with anyone from the second group? No, right? There is no interaction; they are like two completely disjoint groups. So if you want to model interactions, you might as well think of them as two distinct groups and capture their dynamics or interactions separately. If I model them as an interaction graph, is there going to be any edge from the purely English speakers to any member of the set of purely French speakers? No, right? This person will say no, that person will say non. So, nothing. Such a graph will always be called a disconnected graph.
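And here is the all-ones vector doing its job, again on the same example: each column of E has exactly one +1 and one -1, so the column sums vanish.

```python
import numpy as np

E = np.array([[ 1,  0,  0, -1,  0],
              [ 0,  0, -1,  1, -1],
              [ 0, -1,  1,  0,  0],
              [-1,  1,  0,  0,  1]])
L = E @ E.T

ones = np.ones(4)          # the vector 1_n
print(E.T @ ones)          # zero vector: 1_n is in ker(E^T)
print(L @ ones)            # zero vector: hence 1_n is in ker(L) too
```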
It turns out that there is a way of characterizing whether a graph is connected or disconnected by looking closely at the Laplacian, particularly at the kernel of the Laplacian. One thing we have seen is that, whether a graph is connected or not, this all-ones vector is always sitting in the kernel of the Laplacian; it just cannot help it. But observe this line of reasoning very closely; it relies heavily on what we know about symmetric matrices.

If there is exactly one vector sitting in the kernel of E transposed, and by one vector I mean the span of that one vector, then there is exactly one such vector sitting inside the kernel of the Laplacian; it is a one-dimensional vector space. What does that tell you? That the nullity is 1, which means one 0 eigenvalue of the Laplacian is guaranteed. If the Laplacian were to have more than one 0 eigenvalue, that is, if the algebraic multiplicity of the 0 eigenvalue happened to be more than one, what could you say about its geometric multiplicity? The Laplacian is symmetric. What do we know about symmetric matrices? They are all diagonalizable. What do we know about diagonalizable matrices? Algebraic and geometric multiplicities must be the same. So if you check the geometric multiplicity of an eigenvalue, you have checked the algebraic multiplicity, and the other way around; it matters not, they are always the same, because symmetric matrices like the Laplacian are always diagonalizable.

Now, if you want the multiplicity of the 0 eigenvalue of the Laplacian, you have to find the dimension of the kernel of the Laplacian, because algebraic and geometric multiplicities match. But if you want the kernel of the Laplacian, you might as well look at the kernel of E transposed. So you see immediately, and this is just the logic we are using: the dimension of the kernel of E transposed is the geometric multiplicity of 0, which equals the algebraic multiplicity of 0. In what? In the Laplacian. This immediately follows from our reasoning, just what I have explained so far.

So, if I want to find out whether a connected graph can have a Laplacian whose 0 eigenvalue is repeated, I just have to look at what happens to the kernel of E transposed for a connected graph. The point I am going to make is that for a connected graph, E transposed cannot have a kernel of dimension more than 1, which means that for a connected graph the Laplacian has exactly one 0 eigenvalue. So let me formulate the problem in this manner: because the Laplacian is symmetric, it has real eigenvalues, so I search along the real line; because it is positive semi-definite, I search along the non-negative real axis. The smallest eigenvalue of the Laplacian is going to be 0. Now look at the second smallest eigenvalue. If that is also 0, then the graph is disconnected; but if that is non-zero, then the graph has to be connected.

So let us try to rule out the possibility of the kernel of E transposed having dimension more than 1 for a connected graph. Why must it be so? If it were not, there would be another vector, apart from the all-ones, which E transposed sends to 0. But how is that possible?
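The resulting connectivity test is easy to sketch numerically; the 1e-9 tolerance below is an arbitrary threshold for deciding what counts as a numerical zero, not part of the theory.

```python
import numpy as np

E = np.array([[ 1,  0,  0, -1,  0],
              [ 0,  0, -1,  1, -1],
              [ 0, -1,  1,  0,  0],
              [-1,  1,  0,  0,  1]])
L = E @ E.T

eigvals = np.linalg.eigvalsh(L)   # real and sorted ascending: L is symmetric
print(eigvals)                    # smallest is ~0, as it must be
print(eigvals[1] > 1e-9)          # second smallest non-zero => connected
```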
So, I will not go into a rigorous proof of this, but I will convince you through an example. Suppose you have a graph like this. Visually, at least, you understand that this is disconnected: a group of two Englishmen and three Frenchmen, or flip it around if you like, whichever majority suits you. Disconnected groups. Let us label the vertices 1, 2, 3, 4 and 5, and let me first give each edge an arrow head, an arbitrary orientation, as we have seen. What about the incidence matrix? The first edge goes from 1 to 2: its column is 1, minus 1, 0, 0, 0. The second edge starts at 3 and ends at 4. The third edge starts at 4 and ends at 5. That is all there is. You immediately see that the disconnected components split the matrix into blocks such as these.

Why is this important? If you just flip it around a bit, E transposed would look like this: the rows are 1, minus 1, 0, 0, 0; then 0, 0, 1, minus 1, 0; and 0, 0, 0, 1, minus 1. What is so special about this partitioning of E transposed? You see, you can get a vector 1, 1, 0, 0, 0 and another vector 0, 0, 1, 1, 1, both in the kernel. Are these linearly independent? Yes. For every connected component on a certain number of vertices, through a proper ordering, you can always take an all-ones vector of the appropriate size and pad it with 0s elsewhere. So here the dimension of the kernel of E transposed is more than 1. Again, this is not a formal proof, by the way; I am just illustrating through an example, but you can generalize it, and the argument is similar. Since this is not a course on algebraic graph theory, I am not proving it.

The point is that the dimension of the kernel of E transposed is always at least 1, as we have convinced ourselves, and it increases beyond 1 only if the graph has disconnected components; in fact, the dimension is exactly equal to the total number of components, as we say. When the graph is connected, there is exactly one component. So you will often see this claim, and now you will be able to recognize what it says behind all the fancy parlance: the rank of the Laplacian matrix equals n minus c, where n is the total number of vertices and c is the total number of components. We have actually demonstrated that here: n is the dimension of the ambient space and c is the nullity, so n minus c is the rank; it is the rank-nullity theorem again. The nullity is exactly the dimension of the kernel of the Laplacian, which is the dimension of the kernel of E transposed, from which it is very easy to see why this should be the case: for every component, you can cook up an all-ones vector of the appropriate size and stitch them together, and they are all guaranteed to be linearly independent, because wherever one of them has ones, the other fellow has zeros. So at least in a semi-formal manner, I hope you understand this result.
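A small sketch of the disconnected example just described, with the same three oriented edges; it exhibits the two independent kernel vectors and checks the rank formula.

```python
import numpy as np

# E^T for the disconnected graph: components {1,2} and {3,4,5},
# oriented edges 1->2, 3->4, 4->5 (rows are edges, columns are vertices).
Et = np.array([[1, -1, 0,  0,  0],
               [0,  0, 1, -1,  0],
               [0,  0, 0,  1, -1]])
L = Et.T @ Et

# One padded all-ones vector per component, linearly independent by design.
v1 = np.array([1, 1, 0, 0, 0])
v2 = np.array([0, 0, 1, 1, 1])
print(Et @ v1, Et @ v2)                    # both zero vectors

n, c = 5, 2                                # vertices, components
print(np.linalg.matrix_rank(L) == n - c)   # True: rank(L) = n - c = 3
```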
In fact, this problem can be translated algebraically into the following. Given a characteristic polynomial, when can it have 0 as a root? When the constant term of the polynomial is 0. And when can 0 be a repeated root? When the coefficient of s is also 0, along with the constant term. So if you are given the characteristic polynomial of a Laplacian, you essentially have to check the coefficient of s. If the coefficient of s is non-zero, then that Laplacian represents a connected graph, because the multiplicity of its 0 eigenvalue, algebraic or geometric, it is immaterial which, since they are equal here, is exactly one and no more. So this is the beautiful connection.

And in fact, if you dig a little deeper into how the coefficient of s is obtained, that is where you will find the application of the one result I hinted at some time back, the Cauchy-Binet theorem. The coefficient of s turns out to be, up to sign, n times the number of spanning trees in the graph. So what is a spanning tree, then? In a graph of n vertices, a spanning tree is a set of n minus 1 edges such that from every vertex you can walk along these edges and reach any other vertex. You see, in the disconnected case, you cannot have one. I have said a lot; forget about the Cauchy-Binet theorem and the proof that the coefficient of s indeed counts spanning trees, but it all fits together. What I am essentially trying to say is that algebraic graph theory relies enormously on matrices to represent objects such as these; if you understand matrices and linear algebra well, you will find your way around the proofs and claims of this domain.

So, before we conclude, I will try to give you an example of the so-called consensus protocol, which is how agents, humans, anyone who can interact with each other, arrive at an agreement or consensus. If they do not have any prior prejudices or biases, it turns out that they will always agree on some common ground, and the common ground is exactly the average of their initial positions or standings. So if I was at one extreme of the spectrum and you were at the other extreme, we would converge on the halfway mark; if there are five people, it will be the average of the five people's initial positions. That is the result we are going to try and prove next.
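Before we move to that proof, here is a hedged numerical check of the characteristic-polynomial discussion, run on the connected example from earlier; note the spanning-tree count is stated with the factor of n made explicit, as in Kirchhoff's matrix-tree theorem.

```python
import numpy as np

E = np.array([[ 1,  0,  0, -1,  0],
              [ 0,  0, -1,  1, -1],
              [ 0, -1,  1,  0,  0],
              [-1,  1,  0,  0,  1]])
L = E @ E.T
n = L.shape[0]

coeffs = np.poly(L)            # coefficients of det(sI - L), highest power first
print(np.round(coeffs))        # constant term 0: a zero eigenvalue is guaranteed
print(abs(coeffs[-2]) > 1e-9)  # coefficient of s non-zero => connected

# |coefficient of s| = n * (number of spanning trees), so:
print(round(abs(coeffs[-2]) / n))   # 8 spanning trees for this graph
```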
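And a minimal simulation sketch of the consensus claim itself, assuming the standard protocol x-dot equals minus L x, run with a small Euler step; the initial opinions are made-up numbers, and the step size and iteration count are arbitrary choices that keep the discretization stable.

```python
import numpy as np

E = np.array([[ 1,  0,  0, -1,  0],
              [ 0,  0, -1,  1, -1],
              [ 0, -1,  1,  0,  0],
              [-1,  1,  0,  0,  1]])
L = E @ E.T

x = np.array([4.0, -1.0, 7.0, 2.0])   # initial positions of the four agents
avg = x.mean()                        # the predicted common ground

dt = 0.01                             # Euler step for x_dot = -L x
for _ in range(5000):
    x = x - dt * (L @ x)

print(x)                              # every agent is now at ~3.0
print(np.allclose(x, avg))            # consensus on the initial average
```

Run it with any initial vector on a connected graph: the limit is always the mean of the starting positions, which is precisely the result to be proved.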