First of all, it's an honor to be invited to this workshop here in France. I will talk about the simplex geometry of graphs. This work was done together with Karel Devriendt, who is doing his PhD in Oxford. I will first give some background on why we started this kind of research: electrical networks, inspired by power grids. I come to that immediately, and then I turn to what we found as, I think, an interesting artifact: the geometry of a graph.

So let us start with the background, with a fast repetition of the symbols I am going to use. Many of you already know the adjacency matrix. If you have a simple graph with N = 6 nodes and L = 9 links, you can represent it by an adjacency matrix A, which is a zero-one matrix. Important to note: I only talk about adjacency matrices which are symmetric, so the graph is undirected. The adjacency matrix only gives you the existence of links in a graph; the diagonal elements are zero, so self-loops are excluded. An important characteristic of any node is the local number of neighbors, the degree, which is the row sum of the adjacency matrix for that node.

The second matrix is the one that gives you the directions; I only use the incidence matrix for directions. The incidence matrix B is not a square matrix: it has the nodes in the rows, so here six rows, and there are nine columns because there are nine links. Each column represents a link, and you choose your own numbering of the links. What is important is that a link starts in one node and ends in another. We use the convention that where the link starts we put a +1, and where it ends a -1. The nice property of this matrix is that all column sums are zero: if you multiply by the all-one vector u, you get u^T B = 0, and that is the characteristic property of the incidence matrix. So, to repeat: the incidence matrix I use only to specify the directions, not the adjacency matrix; the adjacency matrix is always symmetric.

Then the matrix that will play the most important role in my story is the Laplacian matrix Q. The Laplacian is defined in two ways. One way is as the incidence matrix multiplied by its transpose, Q = B B^T. If you work out that product, you find Q = Δ - A, where Δ is the diagonal matrix, denoted by a capital Greek delta, with the degree vector on its diagonal, and off the diagonal you have the adjacency matrix with a minus sign. That is the structure of the Laplacian matrix, and immediately from the definition Q = B B^T you see that it is a symmetric matrix. So in my story both the adjacency matrix and the Laplacian matrix only work for undirected graphs; the matrix B, again, gives you the directions. The basic property, a very important one that I will use and come back to later because it has a whole series of consequences, is the eigenvalue equation Q u = 0: the Laplacian has the all-one vector u as an eigenvector belonging to eigenvalue 0, although it is a very obvious property. I indicate shortly where it comes from: it follows from the incidence matrix, because Q u = B (B^T u) = 0.
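To make these symbols concrete, here is a minimal sketch in Python/NumPy, for a small four-node example graph of my own choosing (not from the talk), that builds A, B and Q and checks the properties just mentioned:

```python
import numpy as np

# A small undirected example graph given as a list of links (node pairs, 0-indexed).
links = [(0, 1), (0, 2), (1, 2), (2, 3)]
N = 4                                    # number of nodes
L = len(links)                           # number of links

# Adjacency matrix: symmetric 0-1 matrix with zero diagonal (no self-loops).
A = np.zeros((N, N))
for i, j in links:
    A[i, j] = A[j, i] = 1

# Incidence matrix B: one column per link, +1 where the link starts, -1 where it ends.
B = np.zeros((N, L))
for k, (i, j) in enumerate(links):
    B[i, k], B[j, k] = 1, -1

# Degree vector and Laplacian, in both definitions.
d = A.sum(axis=1)
Q = np.diag(d) - A

u = np.ones(N)                           # the all-one vector
assert np.allclose(B.T @ u, 0)           # every column of B sums to zero
assert np.allclose(B @ B.T, Q)           # Q = B B^T equals Delta - A
assert np.allclose(Q @ u, 0)             # u is an eigenvector belonging to eigenvalue 0
```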
I stress this property because I will use it again and again and it will play a very important role. It also means that the plain inverse of the Laplacian matrix does not exist.

So now I come to network science, and in network science I look at networks in two ways. A network always consists of an underlying topology, which is the graph, and on top of the graph we have a service. A network is not only the graph; a network is, by definition, the combination of a topology, a graph, the hardware if you wish, and the function for which we are going to use that graph. The function in most cases is a service, and it reflects, in most cases, a transport from a node A to a node B over the physical infrastructure, the hardware. Most of the study in network science, which is roughly 20 years old, has concentrated on the topology. A lot of results are known on the topology, and only quite few results are known on the service and the function. It is fair to say that the service and the function are much harder to handle, and to get nice results for, than the topology and the graph, also because the topology is much more precisely specified, while the services and functions on the graph are far more complex.

In any network you can basically make two separations. You can have transport which is a flow; think of something very simple, just water flowing through a network. Or you can have packetized transport in discrete units, for instance an IP packet, a postal letter, a car, or a container. The flow equations are governed, in most cases, by the laws of nature; protocols play no role there. If you think of water, it flows from a greater height to a lower height through the tubes, and that is governed by nature: we humans have nothing to define, because physics dictates how the transport goes from A to B. I am going to focus on this kind of equation, because again it is much simpler than the other kind, the protocol equations. In protocol equations we have a human standardization: we say, OK, we want to transport an IP packet from A to B, and we have a set of rules for how we are going to do it. It is not always the shortest path; it may deviate from that path according to what we humans have agreed with each other. You have autonomous domains, and it can become quite complicated. Another thing is that protocols ignore the underlying topology, and that is a very remarkable thing, because 20 years ago, or even longer, we designed protocols and rules precisely to be ignorant of the underlying network, the graph, the hardware. In other words, we designed protocols that work over all possible networks or infrastructures. This is actually a quite deep engineering choice, and remember, there are two major companies that each followed one side of the coupling between the service and the graph: one is, I would say, Apple, and the other is Microsoft. Microsoft basically focused only on the services and defined an operating system which is, to some extent, ignorant of the underlying transistor network.
They designed an operating system that should run, in principle, over all possible hardware infrastructures or transistor networks, while Apple chose more the other route: we would like a performant infrastructure where the function and the underlying topology are coupled. And more and more people think that if you go to optimization, you should take the two layers into account, also inspired by nature: our brain couples the two layers very strongly to each other, even to the extent that nature has an adaptive network which changes over time in order to perform the functions better.

OK, so this was the introduction. I am now going to focus on the simplest thing that I can do, which is transport given by a flow equation that is even linear, or proportional as I call it, in the underlying graph. Linear dynamics on networks are very basic and are actually taught in high school in most cases. What I mean by a linear dynamic that is proportional to the graph is the following. You have two nodes i and j in the graph, and there is a flow, whatever the flow is: it can be an electrical current, it can be water. The flow is injected at node i and leaves at node j; it flows over a link in my network, and the magnitude of the flow is proportional to the voltage difference between the two points. That is well known; everybody knows this. Examples are displacements in a spring, heat, and I will phrase things in the language of electrical current-voltage relations.

So what we have is a proportionality: we have an injected current vector x, because at each node you can inject a current, and the vector whose components are the injected nodal currents is proportional to the voltage vector v, with the Laplacian as the proportionality matrix: x = Q v. You can easily derive this from the conservation of flow, Kirchhoff's law, and Ohm's law. We only look at resistances, although you can also extend this to impedances, inductors and capacitors; that is a bit more complex, but it is still linear and you can do it. Here, for this setting, I just assume that each link has its own positive resistance, which acts as a link weight. If the nodal potentials are known, and the structure of the underlying graph and all the resistances are known, then you can compute the nodal currents. That is our starting point: the current-voltage relation x = Q v.
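Before turning to the inversion, a minimal sketch of this forward relation (same illustrative four-node graph as above, with arbitrary potentials); note that the total injected current vanishes automatically, because u^T Q = 0:

```python
import numpy as np

# Laplacian of the same four-node example graph.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
Q = np.diag(A.sum(axis=1)) - A

v = np.array([1.0, 0.5, 0.0, -0.5])   # arbitrary nodal potentials
x = Q @ v                             # injected nodal current vector
assert np.isclose(x.sum(), 0.0)       # conservation law: u^T x = 0
```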
An interesting thing is that, with this linear equation, if you have the voltage vector you can compute the injected currents, but we also want the inverse. Yes, I come to that, very good question. The basic point is that you can perfectly invert the system, but you need one condition more. The inversion is not as easy as just taking the inverse of the matrix, because the inverse of the Laplacian does not exist. You can, however, use the pseudo-inverse, which is denoted by Q with a dagger, Q†. I will say immediately how I define it, but if you use the pseudo-inverse, you get a nicely symmetric pair of equations, x = Q v and v = Q† x, precisely as if the inverse existed. There are of course constraints. The first constraint is that the total current which I inject into my network is zero: u^T x = 0, a conservation law, where u^T x is short notation for the sum over all components. Second, you need to specify something about the voltage, because only voltage differences matter, and since Q u = 0 you lack one equation: the rank of your system is one less than N, which is just another way of saying that the determinant is zero. So you add one equation and require that the average potential is zero, u^T v = 0. If you do that in a consistent way, you can invert these equations.

And so we come to the spectral decomposition. You can write any weighted Laplacian (here I used an old notation for a weighted Laplacian, although this example is unweighted) as Q = Σ_{k=1}^{N-1} μ_k z_k z_k^T. It is a positive semidefinite matrix, and the smallest eigenvalue, μ_N, is zero, so it does not appear in the sum: I sum over the N-1 nonzero eigenvalues, with the orthogonal projections z_k z_k^T, the outer products of the eigenvectors z_k belonging to μ_k. The pseudo-inverse, which is the same as the Moore-Penrose inverse, is then just the same expression with the eigenvalues inverted: Q† = Σ_{k=1}^{N-1} (1/μ_k) z_k z_k^T. If you use that, you are fine: you can compute everything again. And by the way, it goes very fast: Matlab, for instance, has a built-in function for these computations, and everything works, so you can invert these kinds of equations.

Of course, if you look at power grids or flow networks, there is the notion of the effective resistance. It means that if you inject a unit current at some node and it leaves the network at another node, you want to know the effective resistance, because the current actually flows over all possible paths from A to B. You can collect all these effective resistances in a matrix Ω with a somewhat complicated structure: ω_ij = (Q†)_ii + (Q†)_jj - 2(Q†)_ij, or in matrix form Ω = ζ u^T + u ζ^T - 2 Q†, where ζ is the vector of diagonal elements of the pseudo-inverse. These diagonal elements of the pseudo-inverse have a very deep, fundamental meaning, which plays a role comparable to the degree vector of a graph. What also often occurs, in order to compare graphs from the flow point of view, is that we compare them by the effective graph resistance, the sum of the effective resistances over all node pairs.
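A sketch of these formulas on the same example graph: the pseudo-inverse built from the spectral decomposition agrees with the library's Moore-Penrose routine, and the effective resistance matrix follows from its diagonal elements:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
Q = np.diag(A.sum(axis=1)) - A
N = Q.shape[0]

mu, Z = np.linalg.eigh(Q)             # ascending order, so the zero eigenvalue is mu[0]
Qdag = sum((1.0 / mu[k]) * np.outer(Z[:, k], Z[:, k]) for k in range(1, N))
assert np.allclose(Qdag, np.linalg.pinv(Q))   # same as the Moore-Penrose pseudo-inverse

# Effective resistance matrix: omega_ij = (Qdag)_ii + (Qdag)_jj - 2 (Qdag)_ij.
zeta = np.diag(Qdag)
Omega = zeta[:, None] + zeta[None, :] - 2 * Qdag
R_G = Omega.sum() / 2                 # effective graph resistance: sum over node pairs
```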
The higher the effective graph resistance is for a graph, the worse the graph is at transporting items or flows from A to B; so the lower, the better. With all unit resistances, the complete graph has the lowest effective graph resistance, equal to N - 1. OK, this was the introduction, the setting, and the motivation for why we started with the Laplacian and the pseudo-inverse of the Laplacian.

Let me give one more motivation, because a lot of people here are more interested in social networks, Facebook and so on, than in electrical currents, and here you see a nice application of electrical theory in social networks. Suppose you inject a current at a node i, and from all the other nodes an equal part of the flow leaves the network. So I choose a node i into which I inject a unit current, for instance, and from each of the other nodes the current flows out equally. In an electrical setting that is not an interesting problem; in a social network it is a much nicer problem: just replace current by information, and you are looking for the person in the network who is the best spreader, because you want to inject information into the person who spreads it in the most effective way to everyone in his social network. And that is what we are going to solve with our elegant, simple equations.

Again you have this pair of inverse relations, and the only thing I need to stress again is that we choose the voltage reference such that the average potential is zero. The injected current vector is x = e_i - u/N, where e_i is the basic vector whose elements are all zero except component i, which is one: I inject in node i, and at each of the other nodes 1/N of the flow leaves, with u again the all-one vector. The nice thing is, if you substitute this into our voltage equation v = Q† x, then, since u is an eigenvector of this matrix belonging to eigenvalue zero, the u/N term disappears, and you get a very simple expression: the voltage at the injection node, which is what we are actually looking for, is nothing else than the diagonal element (Q†)_ii of the pseudo-inverse of the Laplacian. The meaning is quite simple: if you want to inject a current, you search for the node with the lowest potential, because potential is related to energy. Your effort to distribute information over the network is minimized in this setting, in which I assumed that each of the other nodes receives an equal amount of current.

This is a simplified version, by the way. We have used this as a kind of graph metric to attack networks: you attack a network in the sense of taking out nodes one by one, trying to break the network down as fast as possible, and in a comparison over more than 50 real-world infrastructures we found that node betweenness is a little better, and in second place comes this metric, above closeness and all the other metrics we tried. So this is the motivation for why I believe that this metric, the diagonal elements of the pseudo-inverse, has an interesting graph-theoretical meaning, and I will now make this more precise when we go to the geometry of a graph.
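The best-spreader rule, as a sketch under the stated setting (unit current into node i, 1/N out of every node, average-zero voltage reference), again on the illustrative graph:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
Q = np.diag(A.sum(axis=1)) - A
N = Q.shape[0]
Qdag = np.linalg.pinv(Q)

i = 0
x = np.zeros(N); x[i] = 1.0               # unit current injected at node i ...
x -= np.ones(N) / N                       # ... while 1/N leaves at every node
v = Qdag @ x                              # potentials with average-zero reference
assert np.isclose(v[i], Qdag[i, i])       # v_i equals the diagonal element of Q-dagger

best_spreader = int(np.argmin(np.diag(Qdag)))   # node with the lowest injection potential
```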
OK, so here ends my first part. It was probably quite fast, but this was the background of why we were interested in studying electrical equations. Now, by studying these electrical equations, we discovered that you can represent any undirected graph in three ways. The most used way is the topology domain: more than 90% of the papers on graphs are in this domain, the node-link domain, where the representation can be the adjacency matrix, a link list, or whatever. This is the representation in which most of the research has been done. The other representation is the spectral domain: again assuming that the adjacency matrix is symmetric, you have a nice eigenvalue decomposition A = X Λ X^T, where X is the orthogonal eigenvector matrix with the eigenvectors in its columns, and capital Λ is the diagonal matrix with the eigenvalues of the adjacency matrix on its diagonal. These are the two most used ways.

And then we also discovered that any graph can be represented by a simplex in the (N-1)-dimensional space: I have N nodes, and one dimension less suffices. This is quite new work which is not yet published, but we put it on arXiv. I use the word simplex here in a different way than Sergei did with simplicial complexes; I will define in a moment what I mean by a simplex. Now, it is fair to say that around February we discovered a book by Miroslav Fiedler, an absolutely great person who devoted more than 50 years to algebra. He wrote a book, Matrices and Graphs in Geometry, which only appeared in 2011; last year we found it, and in that book we saw many of the insights that we had discovered independently of Fiedler. It is fair to say he was first, and I am going to use some of his terminology to introduce the setting; later I will say a little about where we extended Fiedler's work. First of all: what is a simplex?
Well, a simplex is a very simple thing: it is basically the generalization of a triangle to any dimension, and that is important. Most of the figures I will show are a triangle in 2D; one dimension higher gives a tetrahedron, and beyond that I cannot draw, which immediately shows you the weakness of our representation: I am limited by dimensionality. Normally a graph has N nodes and I need to represent it in the (N-1)-dimensional space, and I don't know how to visualize that, but the mathematics will help us a bit. The strong point is that Euclidean geometry is the oldest mathematical discipline, and we are going to use its principles to characterize, or say something about, a graph.

Again we start with our Laplacian, because that is how we started and discovered this. You begin with the spectral decomposition of the Laplacian, Q = Z M Z^T: the eigenvectors are again in the columns of Z, and the diagonal matrix M holds the eigenvalues, which I write as μ_1 ≥ μ_2 ≥ ... ≥ μ_N = 0. The smallest one, μ_N, is zero, and that is the crucial thing, which will come back over and over again: we have here an eigenvalue that we know, with an eigenvector that we completely know, because the eigenvalue equation Q u = 0 holds.

Now, the orthogonality of Z can be written in two ways. The way most people know: for a symmetric matrix all the eigenvectors are orthogonal, so if you take the scalar product of any eigenvector with itself it is one, and with all the others it is zero; that is the well-known relation Z^T Z = I. But there is a deeper one, which I find remarkable: the double orthogonality Z Z^T = I. Write the matrix Z with, in its columns, the components z_1, z_2, and so on of the eigenvectors; then the scalar product between two different columns is zero, and of a column with itself it is one. The nicest thing is that the same holds for the rows: take a row vector and the scalar product with a different row, and it is also zero. What that physically means is a little harder: the columns are related to the eigenfrequencies, the eigenvalues, while the rows tell you something about the nodes. Looking at a row, say row two, node two is specified by a property over all the eigenfrequencies, and the orthogonality condition then holds over the nodes themselves, across all frequencies.

We also know the eigenvector belonging to the smallest eigenvalue: it is the all-one vector u, normalized to u/√N. So any, even weighted, Laplacian obeys these general equations: all eigenvalues are real and non-negative, the smallest one is zero, and the eigenvector belonging to it is u/√N. From the matrix equation you can then write the more vectorial form Q = Σ_{k=1}^{N-1} μ_k z_k z_k^T, a sum over the eigenvalues with the outer products of the eigenvectors.
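These spectral properties are easy to check numerically; a minimal sketch on the same example graph:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
Q = np.diag(A.sum(axis=1)) - A
N = Q.shape[0]

mu, Z = np.linalg.eigh(Q)                        # eigenvalues ascending: mu[0] = 0
assert np.allclose(Z.T @ Z, np.eye(N))           # columns orthonormal: the usual relation
assert np.allclose(Z @ Z.T, np.eye(N))           # rows orthonormal: the "double" orthogonality
assert np.allclose(np.abs(Z[:, 0]), 1/np.sqrt(N))  # eigenvector of the zero eigenvalue is u/sqrt(N)

# Vectorial form: Q as a sum of outer products over the N-1 nonzero eigenvalues.
Q_sum = sum(mu[k] * np.outer(Z[:, k], Z[:, k]) for k in range(1, N))
assert np.allclose(Q_sum, Q)
```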
Now, only for a positive semidefinite matrix can we make an additional factorization, because all the eigenvalues on the diagonal are non-negative, so you can take their square roots; the square root of a negative eigenvalue is not a real number. By the way, in the adjacency matrix there must always be a negative eigenvalue, because the trace is zero, all the diagonal elements are zero, so you cannot reuse this trick there. But here you can separate the matrix into two parts: if you define the S matrix through this factorization, you see that any weighted Laplacian can be written in the form Q = S^T S, and that is a Gram matrix, as it is known in linear algebra. And we know, because the smallest eigenvalue is zero, that the rank of this matrix is N-1.

So if you now write out the S matrix, S = M^(1/2) Z^T, you multiply the eigenvectors by the square roots of the eigenvalues; this is not a difficult exercise. The columns correspond to node 1, node 2, up to node N: column i can be written as a vector, the one I wrote here in green, and that vector you can always represent as a point in the (N-1)-dimensional space. So you start drawing all these vectors; here on the slide the space is only 2-dimensional, and if you connect these N points you see a kind of triangle, which we call a simplex. From now on, as I move to the geometry, I use the fact that the row corresponding to the zero eigenvalue is all zeros, so I lose one dimension: I scratch out the zero row and concentrate only on the reduced S matrix, which has one row fewer, and that is the matrix which defines the points of my simplex. We could prove that the connection of these points is indeed a simplex; however, it is fair to say that Fiedler was before us and discovered this already. In essence it means that any undirected, possibly weighted, graph can be represented by a simplex.

The simplex is described by vertices and edges, and now I really use the geometric vocabulary: in a graph I talk about nodes and links, and in the geometric representation I talk about vertices and edges. Of course the vertices are one-to-one coupled to the nodes; the edges are not coupled to the links. You can also talk about a face of this tetrahedron, for instance one of its triangles, and you can denote a face by the set of vertices on which it is defined.
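A sketch of this construction: build S from the spectral decomposition, drop the row belonging to the zero eigenvalue, and the N columns are the vertices of the simplex, with Gram matrix Q:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
Q = np.diag(A.sum(axis=1)) - A

mu, Z = np.linalg.eigh(Q)             # ascending: mu[0] = 0
mu = np.clip(mu, 0.0, None)           # guard against tiny negative round-off
S = np.sqrt(np.diag(mu)) @ Z.T        # S = M^(1/2) Z^T, so that Q = S^T S
assert np.allclose(S.T @ S, Q)

S = S[1:, :]                          # scratch out the zero row (the mu[0] = 0 direction)
assert np.allclose(S.T @ S, Q)        # the Gram matrix is unchanged
# Column i of S is the position of vertex i of the simplex in (N-1)-space;
# its squared length is diag(Q)_ii, the degree of node i.
```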
Then, and here we follow Fiedler, we will use barycentric coordinates; "barys" in Greek means heavy. These are coordinates relative to the centre of mass of the simplex. A face F_V is the collection of points in the (N-1)-dimensional space generated by the S matrix, which depends on our Laplacian, with a barycentric coordinate for each vertex: a real number, which is non-negative exactly when node i belongs to the set V, the set that defines the face F_V. It is somewhat involved, but this is what we will use to represent points, and I give a couple of examples.

First of all, the centroid. The centroid of a face is defined by barycentric coordinates which are either 1 or 0, divided by the number of vertices in the face, because the condition is that the point must lie on the face of our simplex. This gives a vector which we call the centroid vector of a face; this terminology is borrowed from Fiedler, it is not ours. What, then, is the centroid of the whole simplex? If you take the all-one vector for the barycentric coordinates, you see that it gives the zero vector: the centroid of our simplex is just the origin. That is nice, so we take this as the origin, and all the other vectors, the s vectors, point from the origin to the vertices of the simplex.

Another thing which I find cool, though rather straightforward, concerns complementary faces. A face is governed by the set V of vertices that belong to it, and the complement is V with a bar; that is just the easy definition of a complement. But if you fill this into the centroid formula, again remembering that the all-one vector is annihilated by the S matrix, so it disappears, then you see that the centroid vector of a face points exactly in the opposite direction of the centroid vector of the complementary face. In the triangle you see it like this: if the centroid is here, the vector c_2 points in this direction and the centroid of the complement points in the other direction. This is nothing else than the well-known property that the medians of a triangle all cross in one point, and that point is our centroid. So this is another way of looking at a geometric property of triangles, which you can extend to arbitrary faces of the simplex.

OK, let me now come to the geometric representation of a graph, because this was an introduction to centroids and barycentric coordinates, which is a bit complex the first time, I know. I am going to use the spectral decomposition of the Laplacian again, written in the vectors s_i, and the scalar product of s_i with s_j, as you can work out, is nothing other than the element q_ij of my Laplacian. Knowing the scalar products, I can also find the norms; you can compute them, but the figure makes it easier. You see again the triangle with the centroid as origin, and each vertex is the endpoint of a vector s_1, s_2, and so on. Then the length of s_1, squared to be precise, is nothing else than the degree of node 1; that is nice, because it is a physical property of the graph. You can also ask what the scalar product, the angle between those vectors, is: s_1^T s_2 is nothing else than the element q_12 of my Laplacian, which is the negative of the corresponding element of my adjacency matrix. In other words, if there is a link between node 1 and node 2, the scalar product is -1, so the cosine is negative and the angle is larger than 90 degrees; if there is no link, the vectors are orthogonal. I find that cool: in the simplex representation you can see just from the angle whether there is a link between node 1 and node 2; no link means orthogonal vectors. And then you can also compute the distance between two vertices, which is again defined through the decomposition of our Laplacian.
The matrix with all those squared distances is the distance matrix: all its elements are non-negative, the diagonal elements are zero because the distance to yourself is zero, and, more importantly, each element obeys the triangle inequality. That is by no means a trivial thing, but it is guaranteed here by the construction of the representation of the graph in my simplex. So, to summarize: I choose the origin to represent my vectors, the vertices are the endpoints of s_1, s_2, and so on, the angle between s_1 and s_2 is 90 degrees exactly when there is no link between node 1 and node 2 and larger than 90 degrees when there is a link, and the distance from the origin to a vertex, squared to be rigorous, is just the degree.

Now I go to the dual representation, because I started by saying that I like the Laplacian, but equally much the pseudo-inverse of the Laplacian. You can apply the same principle and the same kind of computation: you start from the spectral decomposition of the pseudo-Laplacian, which has the inverted eigenvalues 1/μ_k on its diagonal. The important thing is that they are all positive, so I can again take the square root, and the square root has a meaning: I define the vectors s† with a dagger, because they refer to the pseudo-inverse, and the pseudo-inverse of the Laplacian can again be written as a Gram matrix. In this Gram matrix each column again represents a vector, which I can treat, just as in the normal case, as a point in the (N-1)-dimensional space. And now the nice thing is that the squared distance between those vectors is nothing else than the effective resistance: ||s†_i - s†_j||² = ω_ij. That is the reason I stressed the effective graph resistance in the beginning: the effective resistance also has a geometric meaning.

Now, what is the relation between the two? Because I call this the dual representation of a graph. We use the pseudo-inverse relation: the Laplacian multiplied by its pseudo-inverse is the identity matrix minus the all-one matrix divided by N, Q Q† = I - (1/N) u u^T. In element notation: s_i^T s†_j equals -1/N if i and j are not the same, and 1 - 1/N if they are the same. Why do I need that equation? Because if I take two different vertices k and j, both different from i, then (s_k - s_j)^T s†_i = 0, and this is the construction I am going to use to build the dual simplex of my original graph. The original graph is represented here by its simplex in black, with vertex 1, vertex 2, vertex 3, and the dual simplex I construct in red. I use this definition: if I take the difference between the vectors s_2 and s_3, which point here and here, then the new vector s†_1 of the dual representation is orthogonal to that difference vector, and if you do the calculation carefully, it points in the opposite direction. So for each edge here, or each face in general, you construct the vector through the origin perpendicular to that face, and that vector gives me the corresponding vertex of my dual representation.
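A sketch of the dual construction on the example graph, checking the two Gram matrices, the orthogonality relation between the two vertex sets, and the effective resistances as squared dual distances:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
Q = np.diag(A.sum(axis=1)) - A
N = Q.shape[0]
Qdag = np.linalg.pinv(Q)

mu, Z = np.linalg.eigh(Q)                             # mu[0] = 0
S    = np.sqrt(mu[1:])[:, None] * Z[:, 1:].T          # original simplex vertices (columns)
Sdag = (1.0 / np.sqrt(mu[1:]))[:, None] * Z[:, 1:].T  # dual simplex vertices (columns)

u = np.ones(N)
assert np.allclose(S.T @ S, Q)                        # Gram matrix of S is Q
assert np.allclose(Sdag.T @ Sdag, Qdag)               # Gram matrix of S-dagger is Q-dagger
assert np.allclose(S.T @ Sdag, np.eye(N) - np.outer(u, u) / N)

# Squared distances between dual vertices are the effective resistances omega_ij.
zeta = np.diag(Qdag)
Omega = zeta[:, None] + zeta[None, :] - 2 * Qdag
D2 = ((Sdag[:, :, None] - Sdag[:, None, :]) ** 2).sum(axis=0)
assert np.allclose(D2, Omega)
```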
You do the same with the other edges, and then you find that this red triangle is an equivalent, dual representation of my graph.

What is nice: for a simplex you can compute the volume, and we did. You can show, after a lengthy calculation, that the volume of the simplex is proportional to the square root of ξ, together with some factors that depend on N, and the interesting thing is that ξ is nothing else than the weighted number of spanning trees. So the more spanning trees your graph has, the larger the volume, and the simplex with the largest volume belongs to the complete graph. You can also compute the volume of the dual simplex, and in that expression the square root of ξ appears in the denominator. If you take the relation between both, you see that the product of the volume of the simplex and that of its dual representation involves nothing else than the product over all the nonzero eigenvalues of the Laplacian, which is N times the number of spanning trees. So the previous figure I should really have drawn to scale, much, much smaller, because it is a very small graph: if this is the volume of the simplex, the area of the dual triangle should obey this inverse relation. I will skip the Steiner ellipsoids: if you have a triangle, you can always find an ellipsoid going through the vertices of our simplex, and you can compute its volume and see that it, too, is proportional to the product of the eigenvalues of the Laplacian.
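Behind these volume formulas sits the matrix-tree theorem; a quick numerical check on the example graph, where the product of the N-1 nonzero Laplacian eigenvalues equals N times the number of spanning trees:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
Q = np.diag(A.sum(axis=1)) - A
N = Q.shape[0]

mu = np.linalg.eigvalsh(Q)            # ascending; mu[0] = 0
xi = np.prod(mu[1:]) / N              # number of spanning trees (matrix-tree theorem)
print(xi)                             # -> 3.0: this graph has 3 spanning trees
```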
Now the altitudes in the simplex. If you start with the inverse simplex, there is a very nice result due to Fiedler, which says that if you take the altitude from vertex 2 to the base, which is the complementary face and orthogonal to the altitude, then this altitude is given by one over the degree. That is a result we should credit to Fiedler. We are going to generalize it, but note that Fiedler did not define the inverse representation; he was just focusing on simplices, while we always take the dual approach: you have the original simplex and the dual representation. If you now go to the dual, so starting from the inverse simplex and concentrating on the original simplex, you can again define the altitude, and the altitude of a vertex is now one over the dual of the degree, which is the diagonal element of the pseudo-inverse. So in the normal simplex the altitude is one over the element of our pseudo-inverse, and recall that this was nothing else than our best spreader. It means that, from a geometric point of view, that element of the pseudo-inverse, which I introduced simply as the best spreader, has a more fundamental, deeper meaning: in a graph it is nothing else than the reciprocal of the altitude from a vertex to the complementary face.

This slide might be a little more complicated, but here we extended Fiedler's idea of the altitude to a much more general setting, and that setting is complementary sets. Complementary sets are very important in spreading processes: if you have an epidemic process and the red nodes are infected, then those nodes can only infect the white nodes through the links that connect them. So the number of links with a red node on one side and a white node on the other is related, if not directly proportional, to the strength of the spreading of an epidemic; that was actually one of the reasons why we looked at this.

The cut size, the number of links between the infected and the non-infected set, can again be written in terms of the Laplacian, and we use again the decomposition of the Laplacian as a Gram matrix. If you do the whole computation, you just fill it in, you find that the number of links in the cut is proportional to the squared norm of the centroid vector, with a factor related to the number of vertices defining the face. To be a little more concrete: if you have a tetrahedron and take the centroid determined by two vertices, 3 and 4, with complementary set 1 and 2, then the number of links between these two sets is given by the squared centroid via this equation. And this setting, in the normal simplex, you can also extend to the dual representation: you substitute the Laplacian by the pseudo-Laplacian, you switch from centroids to altitudes, and then you obtain the result that the altitude in the inverse simplex is related to one over the cut size in the pseudo-Laplacian. This one is a bit more difficult, I agree: the altitude here is the vector orthogonal to this face and that face, so orthogonal both to the edge between 3 and 4 and to the edge between 1 and 2. This is the generalization we found of Fiedler's property that the altitude is one over the degree.
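The cut size itself is a simple quadratic form of the Laplacian: with y the 0-1 indicator vector of the vertex set V, each cut link contributes (y_i - y_j)² = 1, so the cut size is y^T Q y. A minimal check on the example graph:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
Q = np.diag(A.sum(axis=1)) - A

V = [2, 3]                          # the "infected" set (0-indexed nodes)
y = np.zeros(Q.shape[0]); y[V] = 1  # indicator vector of V
cut = y @ Q @ y                     # number of links between V and its complement
print(cut)                          # -> 2.0: links (0,2) and (1,2) cross the cut
```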
Then, just to finalize, and this is continuing work, you can define metrics on an underlying graph. I showed two of them already: if you take the Euclidean distance between vertices in the inverse simplex, then it is the square root of the effective resistance. This is just a property of the embedding by S†: all the vertices of a graph, embedded according to S†, obey this square-root-of-effective-resistance distance, which is again a metric and obeys the triangle inequality. You can also show that the square, the effective resistance itself, is a metric; proving that is a little more complex. And an inverse simplex of a graph is only possible if the weights on all the links are positive, and the angles of such a simplex are hyperacute; so that is an equivalent characterization.

My final slide is about a generalization of metrics. I showed that the pseudo-inverse of the Laplacian can always be written as a Gram matrix, and so it determines a metric. You can generalize the whole thing: if you define a function of the Laplacian, and that function of the Laplacian is itself a Laplacian, because that is the basic requirement, it should be a Laplacian, then you can write this kind of equation saying that there is a metric between node i and node j. As an example, I give you one that we found just out of curiosity, from statistical physics: you can actually use statistical physics to define metrics on a graph. We spent some time showing that the function of the Laplacian we chose here, built using the Laplacian of the complete graph, is itself a Laplacian, and if it is a Laplacian, then it defines a metric. If you write it out, you get a representation very reminiscent of statistical physics: if you set g = -1, you have the Bose-Einstein form, and so on. OK, this is just for fun; I don't know what it means, but you can deduce these things and define in this way a metric on the underlying graph.

OK, summary. I started with a very, very simple process which gave linearity between the injected currents and the potentials, and by doing the inversion we came to the pseudo-Laplacian. Then, if you take the spectral decomposition of the weighted Laplacian, or of its pseudo-inverse, you can represent any undirected graph by a simplex and its dual, which have nice, I believe, properties in Euclidean space. And it is a kind of geometry which, by the way, is exact: it is not an embedding. In an embedding you always lose information, but this is exact. One of the focal questions is still: given the three representations of a graph, the topology domain, the spectral domain, and now the geometric domain, which problems can I solve most elegantly in the geometric domain? Because that is really why you make these kinds of representations. OK, this ends my talk. Thank you.

[Question] In your representation using this spectral decomposition, the s vectors provide a basis for all functions that you can define on the network, right? So the question is: when you restrict to a face, does the representation that you get provide a complete representation of the functions that you can define on that face?

[Answer] Only on the face, yes, because you can take out one of those vectors. What is nice: if you take one vertex out of the simplex, you again have a simplex, in one dimension lower, without that node.

[Question] And that provides a complete representation for everything?

[Answer] Yes, but you have to recompute this a bit, because your orthogonality conditions are not the same with one vertex less. You need to recalculate something; it is not just a matter of taking it out.